CVE-2026-22778
CRITICAL 9.8
Description
vLLM is an inference and serving engine for large language models (LLMs). From version 0.8.3 up to (but not including) 0.14.1, when an invalid image is sent to vLLM's multimodal endpoint, PIL throws an error. vLLM returns this error to the client, leaking a heap address. With this leak, an attacker reduces the ASLR search space from roughly 4 billion guesses to about 8. The vulnerability can be chained with a heap overflow in the JPEG2000 decoder in OpenCV/FFmpeg to achieve remote code execution. This vulnerability is fixed in 0.14.1.
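The leak works because a Python exception message can embed an object's default repr, which in CPython includes the object's heap address. A minimal sketch of the attacker's side, assuming a hypothetical error string that echoes such a repr (the `Dummy` class and `err_msg` format are illustrative, not vLLM's actual output):

```python
import re

class Dummy:
    pass

obj = Dummy()
# CPython's default repr embeds the heap address,
# e.g. "<__main__.Dummy object at 0x7f3a2c1b5e80>".
err_msg = f"cannot identify image file {obj!r}"  # hypothetical echoed error

# Parse the leaked pointer out of the error text.
match = re.search(r"0x([0-9a-f]+)", err_msg)
leaked = int(match.group(1), 16)

# Masking the low bits yields a page-aligned address; from one leaked
# pointer the attacker can estimate allocation bases, collapsing ASLR's
# effective entropy to a handful of candidate layouts.
page_base = leaked & ~0xFFF  # clear the 4 KiB page offset
print(hex(leaked), hex(page_base))
```

In CPython the address shown in the default repr is the same value returned by `id()`, which is why a single echoed repr is enough to anchor further heap guesses.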
CVE details
CVSS v3.1 score: 9.8
Severity: CRITICAL
CVSS vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
Attack vector: NETWORK
Complexity: LOW
Privileges required: NONE
User interaction: NONE
Published: 2/2/2026
Last modified: 2/23/2026
Source: nvd
Honeypot observations: 0
Affected products
vllm:vllm
Weaknesses (CWE)
CWE-532
References
https://github.com/vllm-project/vllm/pull/31987 (security-advisories@github.com)
https://github.com/vllm-project/vllm/pull/32319 (security-advisories@github.com)
https://github.com/vllm-project/vllm/releases/tag/v0.14.1 (security-advisories@github.com)
https://github.com/vllm-project/vllm/security/advisories/GHSA-4r2x-xpjr-7cvv (security-advisories@github.com)
IOC correlations
No correlations recorded
This product uses data from the NVD API but is not endorsed or certified by the NVD.