제출 #779122: PromtEngineer localGPT Latest (commit 4d41c7d) LLM Prompt Injection via Unsanitized User Input정보

제목PromtEngineer localGPT Latest (commit 4d41c7d) LLM Prompt Injection via Unsanitized User Input
설명A critical prompt injection vulnerability exists in the AI routing mechanism that allows attackers to manipulate the Large Language Model's behavior by injecting malicious instructions through user queries. The application directly embeds unsanitized user input into system prompts without any input validation, escaping, or output verification. This enables attackers to extract sensitive information, manipulate AI responses, bypass routing logic, and potentially chain with other vulnerabilities to achieve complete system compromise.
원천⚠️ https://github.com/August829/CVEP/issues/9
사용자
 Yu_Bao (UID 89348)
제출2026. 03. 13. AM 02:21 (25 날 ago)
모더레이션2026. 03. 27. PM 02:49 (15 days later)
상태수락
VulDB 항목353889 [PromtEngineer localGPT 까지 4d41c7d1713b16b216d8e062e51a5dd88b20b054 LLM Prompt backend/server.py _route_using_overviews 권한 상승]
포인트들20

Want to know what is going to be exploited?

We predict KEV entries!