CVE-2025-53630
Description
llama.cpp is a C/C++ inference engine for several LLM models. An integer overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to a heap out-of-bounds read/write. This vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579.
CVE Details
CVSS v3.1 Rating: N/A
Published: 7/10/2025
Last modified: 7/15/2025
Source: nvd
Honeypot sightings: 0
Weaknesses (CWE)
CWE-122, CWE-680
References
https://github.com/ggml-org/llama.cpp/commit/26a48ad699d50b6268900062661bd22f3e792579 (security-advisories@github.com)
https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-vgg9-87g3-85w8 (security-advisories@github.com)
IOC Correlations
No correlations recorded
This product uses data from the NVD API but is not endorsed or certified by the NVD.