CVE-2026-34159 in llama.cpp
Summary
llama.cpp enables inference of several LLM models in C/C++. Prior to version b8492, the RPC backend's deserialize_tensor() skips all bounds validation when a tensor's buffer field is 0. An unauthenticated attacker can read and write arbitrary process memory via crafted GRAPH_COMPUTE messages. Combined with pointer leaks from ALLOC_BUFFER/BUFFER_GET_BASE, this yields a full ASLR bypass and remote code execution. No authentication is required, only TCP access to the RPC server port. This issue has been patched in version b8492.
Responsible
GitHub_M
Reserved
March 25, 2026
Published
April 1, 2026
Status
Confirmed
Entry
VulDB provides additional information and datapoints for this CVE:
| Identifier | Vulnerability | CWE | Exploit | Countermeasure | CVE |
|---|---|---|---|---|---|
| 354740 | ggml-org llama.cpp GRAPH_COMPUTE Message deserialize_tensor memory corruption | 119 | Undefined | Official fix | CVE-2026-34159 |