CVE-2026-22778
CRITICAL 9.8
Description
vLLM is an inference and serving engine for large language models (LLMs). From version 0.8.3 up to (but not including) 0.14.1, when an invalid image is sent to vLLM's multimodal endpoint, PIL throws an error. vLLM returns this error to the client, leaking a heap address. With this leak, ASLR is reduced from roughly 4 billion guesses to ~8 guesses. This vulnerability can be chained with a heap overflow in the JPEG2000 decoder in OpenCV/FFmpeg to achieve remote code execution. This vulnerability is fixed in 0.14.1.
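The leak pattern described above can be sketched in Python. This is a minimal illustration, not vLLM's actual code: the `DecodedFrame` class and both handler functions are hypothetical, and the raised exception stands in for the real PIL decoding error. The point is that a default object `repr` embedded in an exception message contains a live heap address, and echoing that message to the client leaks it; the fixed variant returns a generic message instead.

```python
import re

class DecodedFrame:
    """Hypothetical stand-in for an internal decoder object."""
    pass

def handle_upload(data: bytes) -> dict:
    # Vulnerable pattern: the raw exception text, which embeds a default
    # repr like "<__main__.DecodedFrame object at 0x7f3a9c2d4e80>", is
    # echoed back to the client. The hex value is a live heap address.
    try:
        raise ValueError(f"cannot decode {DecodedFrame()!r}")  # simulated PIL failure
    except ValueError as exc:
        return {"error": str(exc)}  # leaks the address

def handle_upload_fixed(data: bytes) -> dict:
    # Sanitized pattern: keep the detailed error server-side and return
    # only a generic message to the client.
    try:
        raise ValueError(f"cannot decode {DecodedFrame()!r}")
    except ValueError:
        return {"error": "invalid image"}

leak = handle_upload(b"\x00")["error"]
assert re.search(r"0x[0-9a-f]+", leak)                      # heap address present
assert "0x" not in handle_upload_fixed(b"\x00")["error"]    # no address leaked
```

Knowing even part of a heap address collapses the attacker's search space, which is why the description cites a drop from billions of ASLR guesses to a handful.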
CVE Details
CVSS v3.1 Score: 9.8
Severity: CRITICAL
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
Attack Vector: NETWORK
Complexity: LOW
Privileges Required: NONE
User Interaction: NONE
Published: 2/2/2026
Last Modified: 2/23/2026
Source: nvd
Honeypot sightings: 0
Affected Products
vllm:vllm
Weaknesses (CWE)
CWE-532 (Insertion of Sensitive Information into Log File)
References
https://github.com/vllm-project/vllm/pull/31987 (security-advisories@github.com)
https://github.com/vllm-project/vllm/pull/32319 (security-advisories@github.com)
https://github.com/vllm-project/vllm/releases/tag/v0.14.1 (security-advisories@github.com)
https://github.com/vllm-project/vllm/security/advisories/GHSA-4r2x-xpjr-7cvv (security-advisories@github.com)
IOC Correlations
No correlations recorded
This product uses data from the NVD API but is not endorsed or certified by the NVD.