CVE-2026-27893
HIGH 8.8
Description
vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.
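The flaw class can be sketched in plain Python. This is an illustrative sketch only: the function and parameter names below are hypothetical and are not vLLM's actual internals; the point is that a sub-component loader which hardcodes `trust_remote_code=True` silently discards the user's explicit `--trust-remote-code=False` opt-out.

```python
# Hypothetical sketch of the bug class described in this CVE.
# Names (load_subcomponent_*, user_trust_remote_code) are illustrative,
# not vLLM's real API.

def load_subcomponent_vulnerable(repo_id: str, user_trust_remote_code: bool) -> dict:
    # BUG: hardcoded True bypasses the user's --trust-remote-code=False opt-out,
    # so code shipped in a malicious model repository would still execute on load.
    return {"repo": repo_id, "trust_remote_code": True}

def load_subcomponent_patched(repo_id: str, user_trust_remote_code: bool) -> dict:
    # FIX: propagate the user's setting down to the sub-component loader.
    return {"repo": repo_id, "trust_remote_code": user_trust_remote_code}

if __name__ == "__main__":
    # Even with trust explicitly disabled, the vulnerable path keeps it enabled:
    vuln = load_subcomponent_vulnerable("attacker/model", user_trust_remote_code=False)
    fixed = load_subcomponent_patched("attacker/model", user_trust_remote_code=False)
    print(vuln["trust_remote_code"])   # True  (user setting ignored)
    print(fixed["trust_remote_code"])  # False (user setting respected)
```

The patch in version 0.18.0 follows the second pattern: the user-supplied flag is threaded through to the sub-component loading calls instead of being overwritten.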
CVE Details
CVSS v3.1 Score: 8.8
Severity: HIGH
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
Attack Vector: NETWORK
Complexity: LOW
Privileges Required: NONE
User Interaction: REQUIRED
Published: 3/27/2026
Last Modified: 3/30/2026
Source: nvd
Honeypot Observations: 0
Affected Products
vllm:vllm
Weaknesses (CWE)
CWE-693 (Protection Mechanism Failure)
References
https://github.com/vllm-project/vllm/commit/00bd08edeee5dd4d4c13277c0114a464011acf72 (security-advisories@github.com)
https://github.com/vllm-project/vllm/pull/36192 (security-advisories@github.com)
https://github.com/vllm-project/vllm/security/advisories/GHSA-7972-pg2x-xr59 (security-advisories@github.com)
IOC Correlations
No correlations recorded
This product uses data from the NVD API but is not endorsed or certified by the NVD.