CVE-2026-22773
MEDIUM (6.5)
Description
vLLM is an inference and serving engine for large language models (LLMs). In versions 0.6.4 up to (but not including) 0.12.0, users can crash a vLLM engine serving multimodal models that use the Idefics3 vision model implementation by sending a specially crafted 1x1-pixel image. The image triggers a tensor dimension mismatch that raises an unhandled runtime error, terminating the server entirely. This issue has been patched in version 0.12.0.
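The failure mode described above is common to patch-based vision encoders: an image smaller than the encoder's patch size yields zero patches, and the resulting zero-length tensor dimension surfaces later as an unhandled shape error. The sketch below illustrates the idea and a defensive guard; the function names, the patch size, and the validation placement are illustrative assumptions, not vLLM's actual Idefics3 code.

```python
# Hypothetical sketch of why a 1x1 image can break patch-based vision
# preprocessing. PATCH_SIZE and all names here are illustrative; they do
# not mirror vLLM's real Idefics3 implementation.

PATCH_SIZE = 14  # typical ViT-style patch size, assumed for illustration


def num_patches(width: int, height: int, patch: int = PATCH_SIZE) -> int:
    """Number of non-overlapping patches a patch-based encoder extracts.

    A 1x1 image yields (1 // 14) * (1 // 14) == 0 patches, producing a
    zero-length sequence dimension downstream.
    """
    return (width // patch) * (height // patch)


def validate_image(width: int, height: int, patch: int = PATCH_SIZE) -> None:
    """Reject images too small to produce at least one patch.

    Failing fast on the single request keeps a malformed input from
    propagating into the model as a mismatched tensor shape, where an
    unhandled runtime error would take down the whole serving engine.
    """
    if num_patches(width, height, patch) == 0:
        raise ValueError(
            f"image {width}x{height} is smaller than patch size {patch}; "
            "rejecting request instead of crashing the engine"
        )
```

Validating dimensions at request-admission time converts a server-wide availability failure (CVSS A:H) into a per-request 4xx-style rejection, which is the usual shape of the fix for this class of bug.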
CVE Details
CVSS v3.1 Score: 6.5
Severity: MEDIUM
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
Attack Vector: NETWORK
Complexity: LOW
Privileges Required: LOW
User Interaction: NONE
Published: 1/10/2026
Last Modified: 1/27/2026
Source: nvd
Honeypot Sightings: 0
Affected Products
vllm:vllm
Weaknesses (CWE)
CWE-770 (Allocation of Resources Without Limits or Throttling)
References
https://github.com/vllm-project/vllm/security/advisories/GHSA-grg2-63fw-f2qr (security-advisories@github.com)
IOC Correlations
No correlations recorded
This product uses data from the NVD API but is not endorsed or certified by the NVD.