CVE-2025-48944
MEDIUM 6.5
Description
vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to but excluding 0.9.0, the vLLM backend used with the /v1/chat/completions OpenAPI endpoint fails to validate unexpected or malformed input in the "pattern" and "type" fields when the tools functionality is invoked. These inputs are not validated before being compiled or parsed, so a single request can crash the inference worker. The worker remains down until it is restarted. Version 0.9.0 fixes the issue.
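The failure mode described above can be sketched in isolation: if a backend compiles an attacker-controlled "pattern" field without validation, a malformed regular expression raises an uncaught exception. The tool payload below is a hypothetical illustration (the field names follow the OpenAI tools schema; exactly which fields vLLM forwards to its guided-decoding layer is an assumption here), showing how wrapping the compile step catches the bad input instead of crashing.

```python
import re

# Hypothetical malicious tool definition: the "pattern" field carries an
# invalid regular expression (unterminated character class).
malicious_tool = {
    "type": "function",
    "function": {
        "name": "lookup",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "pattern": "[unclosed"},
            },
        },
    },
}

pattern = malicious_tool["function"]["parameters"]["properties"]["query"]["pattern"]

# Compiling the attacker-controlled pattern without a guard raises re.error;
# in the affected versions the equivalent uncaught failure took down the
# inference worker. Validating first turns it into a rejectable request.
try:
    re.compile(pattern)
    rejected = False
except re.error:
    rejected = True

print(rejected)  # True: the malformed pattern is caught, not crashed on
```

In practice the fix in 0.9.0 amounts to validating these fields before they reach the compile/parse step, so a malformed request yields an error response rather than a worker crash.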
CVE Details
CVSS v3.1 Score: 6.5
Severity: MEDIUM
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
Attack Vector: NETWORK
Attack Complexity: LOW
Privileges Required: LOW
User Interaction: NONE
Published: 5/30/2025
Last Modified: 7/1/2025
Source: nvd
Honeypot Sightings: 0
Affected Products
vllm:vllm
Weaknesses (CWE)
CWE-20 (Improper Input Validation)
References
https://github.com/vllm-project/vllm/pull/17623 (security-advisories@github.com)
https://github.com/vllm-project/vllm/security/advisories/GHSA-vrq3-r879-7m65 (security-advisories@github.com)
IOC Correlations
No correlations recorded
This product uses data from the NVD API but is not endorsed or certified by the NVD.