CVE-2025-46570
Severity: LOW · CVSS 2.6
Description
vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt is processed, if the PageAttention mechanism finds a matching prefix chunk, the prefill process speeds up, which is reflected in the TTFT (Time to First Token). These timing differences caused by matching chunks are significant enough to be recognized and exploited. This issue has been patched in version 0.9.0.
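The description above outlines a timing side channel: prefix-cache hits shorten the prefill phase, so TTFT reveals whether a guessed prefix was already processed. The following toy model (not vLLM code; block size and costs are illustrative assumptions) sketches why a matching prefix is distinguishable by timing alone:

```python
# Toy model of the CVE-2025-46570 timing side channel (NOT vLLM code).
# Prefill cost is charged only for prefix chunks absent from the cache,
# so a prompt sharing a cached prefix finishes prefill faster (lower TTFT).
BLOCK = 16            # assumed tokens per cached chunk
COST_PER_BLOCK = 1.0  # arbitrary time units to prefill one uncached chunk

cache: set[tuple[str, ...]] = set()

def prefill_time(tokens: list[str]) -> float:
    """Simulated TTFT: chunks whose full prefix is cached are skipped."""
    t = 0.0
    for i in range(0, len(tokens), BLOCK):
        chunk = tuple(tokens[: i + BLOCK])  # chunk identified by its prefix
        if chunk not in cache:
            t += COST_PER_BLOCK
            cache.add(chunk)
    return t

# A victim submits a secret prompt; its prefix chunks are now cached.
secret = [f"tok{i}" for i in range(64)]
prefill_time(secret)

# An attacker probes with a guess sharing the first 32 tokens,
# and with an unrelated prompt of the same length.
guess_hit = secret[:32] + [f"x{i}" for i in range(32)]
guess_miss = [f"y{i}" for i in range(64)]
t_hit = prefill_time(guess_hit)    # 2 uncached chunks -> 2.0
t_miss = prefill_time(guess_miss)  # 4 uncached chunks -> 4.0
# t_hit < t_miss: the timing gap leaks that the guessed prefix
# matched previously processed content.
```

In the real attack the attacker compares measured TTFT across probes rather than simulated costs, but the distinguishing signal is the same: cached-prefix requests return their first token measurably sooner.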
CVE Details
CVSS v3.1 Score: 2.6
Severity: LOW
CVSS Vector: CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:L/I:N/A:N
Attack Vector: NETWORK
Attack Complexity: HIGH
Privileges Required: LOW
User Interaction: REQUIRED
Published: 5/29/2025
Last Modified: 6/24/2025
Source: nvd
Honeypot Sightings: 0
Affected Products
vllm:vllm
Weaknesses (CWE)
CWE-208, CWE-203
References
https://github.com/vllm-project/vllm/commit/77073c77bc2006eb80ea6d5128f076f5e6c6f54f (security-advisories@github.com)
https://github.com/vllm-project/vllm/pull/17045 (security-advisories@github.com)
https://github.com/vllm-project/vllm/security/advisories/GHSA-4qjh-9fv9-r85r (security-advisories@github.com)
IOC Correlations
No correlations recorded
This product uses data from the NVD API but is not endorsed or certified by the NVD.