CVE-2026-22807
HIGH 8.8
Description
vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.14.0, vLLM loads Hugging Face `auto_map` dynamic modules during model resolution without gating on `trust_remote_code`, allowing attacker-controlled Python code in a model repo/path to execute at server startup. An attacker who can influence the model repo/path (local directory or remote Hugging Face repo) can achieve arbitrary code execution on the vLLM host during model load. This happens before any request handling and does not require API access. Version 0.14.0 fixes the issue.
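The flaw is that repo-supplied `auto_map` entries in `config.json` point at Python files inside the model repo, and importing them runs that code on the host. A minimal sketch of the kind of guard the fix introduces is shown below; the function name `safe_load_config` and its exact behavior are hypothetical, illustrating gating on `trust_remote_code` rather than vLLM's actual patched code path.

```python
import json
from pathlib import Path


def safe_load_config(model_path: str, trust_remote_code: bool = False) -> dict:
    """Hypothetical guard: refuse to resolve a model whose config.json
    declares `auto_map` dynamic modules unless the operator explicitly
    opted in with trust_remote_code."""
    config = json.loads(Path(model_path, "config.json").read_text())
    if "auto_map" in config and not trust_remote_code:
        # auto_map maps class names to Python modules shipped inside the
        # repo; importing them executes repo-controlled code at load time.
        raise RuntimeError(
            "config declares auto_map dynamic modules; "
            "pass trust_remote_code=True to allow loading them"
        )
    return config
```

Under this sketch, a server started against an untrusted repo fails fast at model resolution instead of silently importing attacker-controlled modules.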
CVE Details
CVSS v3.1 score: 8.8
Severity: HIGH
CVSS vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
Attack vector: NETWORK
Complexity: LOW
Privileges required: NONE
User interaction: REQUIRED
Published: 1/21/2026
Last modified: 1/30/2026
Source: NVD
Honeypot sightings: 0
Affected products
vllm:vllm
Weaknesses (CWE)
CWE-94 (Improper Control of Generation of Code, 'Code Injection')
References
https://github.com/vllm-project/vllm/commit/78d13ea9de4b1ce5e4d8a5af9738fea71fb024e5 (security-advisories@github.com)
https://github.com/vllm-project/vllm/pull/32194 (security-advisories@github.com)
https://github.com/vllm-project/vllm/releases/tag/v0.14.0 (security-advisories@github.com)
https://github.com/vllm-project/vllm/security/advisories/GHSA-2pc9-4j83-qjmr (security-advisories@github.com)
IOC correlations
No correlations recorded
This product uses data from the NVD API but is not endorsed or certified by the NVD.