Submit #801297: vllm-project vLLM 0.19.0 Use of Uninitialized Resource

Title: vllm-project vLLM 0.19.0 Use of Uninitialized Resource
Description: vLLM's block allocator returns GPU KV cache blocks to the free pool upon request completion or cancellation without zeroing their contents. When a subsequent request is allocated one of these dirty blocks, it decodes from stale activation data belonging to a previous request rather than from its own context. In a multi-tenant deployment, this means one user's conversation data can influence, or appear verbatim in, another user's response. The bug is confirmed reproducible on vLLM 0.19.0 with 10/10 run consistency across multiple independent traces. It does not require speculative decoding, prefix caching, or any special server configuration, only concurrent requests under normal load. Affected requests produce completely different output sequences across runs at temperature=0, where outputs should be fully deterministic.
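To illustrate the class of bug being reported, the following is a minimal hypothetical sketch (not vLLM's actual code; all names here are invented) of a free-list block allocator that recycles blocks without zeroing them, so a later request can observe a prior request's data:

```python
# Hypothetical sketch of the dirty-block reuse pattern described above.
# A freed block keeps its old contents, and the next allocation of the
# same block exposes them to a different request.

class BlockAllocator:
    def __init__(self, num_blocks: int, block_size: int):
        self.free = list(range(num_blocks))
        self.memory = {i: [0.0] * block_size for i in range(num_blocks)}

    def allocate(self) -> int:
        return self.free.pop()

    def free_block(self, block_id: int, zero: bool = False) -> None:
        # The flaw: contents are NOT cleared on free unless zero=True.
        if zero:
            self.memory[block_id] = [0.0] * len(self.memory[block_id])
        self.free.append(block_id)

alloc = BlockAllocator(num_blocks=1, block_size=4)

# Request A writes its KV data into a block, then completes.
blk_a = alloc.allocate()
alloc.memory[blk_a] = [1.1, 2.2, 3.3, 4.4]  # request A's activations
alloc.free_block(blk_a)                      # block returned dirty

# Request B is handed the same block and can read stale data
# before writing its own context.
blk_b = alloc.allocate()
stale = alloc.memory[blk_b]
print(stale)  # prints request A's data: [1.1, 2.2, 3.3, 4.4]
```

The mitigation implied by the report is to scrub block contents on free (or on allocation), e.g. passing `zero=True` in this sketch, so a recycled block never carries a previous request's activations.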
Source: https://github.com/vllm-project/vllm/issues/39146
User: Zyz3366 (UID 97230)
Submission: 04/09/2026 21:44 (19 days ago)
Moderation: 04/26/2026 21:38 (17 days later)
Status: Accepted
VulDB entry: 359740 [vllm up to 0.19.0 KV Block kv_cache_interface.py has_mamba_layers uninitialized resource]
Points: 20
