CVE-2026-22807
HIGH · 8.8
Description
vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.14.0, vLLM loads Hugging Face `auto_map` dynamic modules during model resolution without gating on `trust_remote_code`, allowing attacker-controlled Python code in a model repo/path to execute at server startup. An attacker who can influence the model repo/path (local directory or remote Hugging Face repo) can achieve arbitrary code execution on the vLLM host during model load. This happens before any request handling and does not require API access. Version 0.14.0 fixes the issue.
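The safe pattern the description implies can be sketched as follows. This is an illustrative example, not vLLM's actual code: the function name `resolve_config` and the config dictionaries are hypothetical, standing in for the model-resolution step that reads a Hugging Face-style `config.json`. The point is that any `auto_map` entry names Python code shipped inside the model repo, so importing it must be gated on an explicit `trust_remote_code` opt-in; versions 0.10.1 through 0.13.x skipped this gate.

```python
# Illustrative sketch (not vLLM's actual implementation): gate
# Hugging Face-style `auto_map` dynamic modules on `trust_remote_code`.
# `auto_map` maps auto classes to code files inside the model repo,
# so importing them executes attacker-controllable Python.

def resolve_config(config: dict, trust_remote_code: bool = False) -> dict:
    """Return the model config, refusing to import repo-shipped code
    unless the operator explicitly opted in."""
    if "auto_map" in config and not trust_remote_code:
        raise ValueError(
            "config declares auto_map dynamic modules; refusing to "
            "import repo code without trust_remote_code=True"
        )
    return config

# A config with no repo-shipped code loads fine by default.
safe = {"model_type": "llama"}
assert resolve_config(safe) == safe

# A config pointing at code inside the repo is rejected by default
# and only accepted when the operator opts in.
malicious = {
    "model_type": "llama",
    "auto_map": {"AutoModel": "modeling_evil.EvilModel"},
}
try:
    resolve_config(malicious)
except ValueError:
    pass  # blocked, as intended
assert resolve_config(malicious, trust_remote_code=True) == malicious
```

Because the vulnerable load happens at server startup, before any request handling, this check must run during model resolution; validating requests at the API layer would be too late.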
CVE Details
CVSS v3.1 Score: 8.8
Severity: HIGH
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
Attack Vector: NETWORK
Complexity: LOW
Privileges Required: NONE
User Interaction: REQUIRED
Published: 1/21/2026
Last Modified: 1/30/2026
Source: nvd
Honeypot Sightings: 0
Affected Products
vllm:vllm
Weaknesses (CWE)
CWE-94 (Improper Control of Generation of Code, 'Code Injection')
References
https://github.com/vllm-project/vllm/commit/78d13ea9de4b1ce5e4d8a5af9738fea71fb024e5 (security-advisories@github.com)
https://github.com/vllm-project/vllm/pull/32194 (security-advisories@github.com)
https://github.com/vllm-project/vllm/releases/tag/v0.14.0 (security-advisories@github.com)
https://github.com/vllm-project/vllm/security/advisories/GHSA-2pc9-4j83-qjmr (security-advisories@github.com)
IOC Correlations
No correlations recorded
This product uses data from the NVD API but is not endorsed or certified by the NVD.