← Back to CVEs
CVE-2026-21869
HIGH · 8.8
Description
llama.cpp provides inference of several LLM models in C/C++. In commit 55d4206c8 and prior, the n_discard parameter is parsed directly from JSON input in the llama.cpp server's completion endpoints without validation that it is non-negative. When a negative value is supplied and the context fills up, llama_memory_seq_rm/add receives a reversed range and a negative offset, causing out-of-bounds memory writes in the token evaluation loop. This deterministic memory corruption can crash the process or enable remote code execution (RCE). No fix was available at the time of publication.
CVE Details
CVSS v3.1 Score: 8.8
Severity: HIGH
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
Attack Vector: NETWORK
Complexity: LOW
Privileges Required: NONE
User Interaction: REQUIRED
Published: 1/8/2026
Last Modified: 2/2/2026
Source: nvd
Honeypot Observations: 0
Affected Products
ggml:llama.cpp
Weaknesses (CWE)
CWE-787
References
https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-8947-pfff-2f3c
IOC Correlations
No correlations recorded
This product uses data from the NVD API but is not endorsed or certified by the NVD.