Submit #779122: PromtEngineer localGPT Latest (commit 4d41c7d) LLM Prompt Injection via Unsanitized User Inputinfo

TitlePromtEngineer localGPT Latest (commit 4d41c7d) LLM Prompt Injection via Unsanitized User Input
DescriptionA critical prompt injection vulnerability exists in the AI routing mechanism that allows attackers to manipulate the Large Language Model's behavior by injecting malicious instructions through user queries. The application directly embeds unsanitized user input into system prompts without any input validation, escaping, or output verification. This enables attackers to extract sensitive information, manipulate AI responses, bypass routing logic, and potentially chain with other vulnerabilities to achieve complete system compromise.
Source⚠️ https://github.com/August829/CVEP/issues/9
User
 Yu_Bao (UID 89348)
Submission03/13/2026 02:21 (16 days ago)
Moderation03/27/2026 14:49 (15 days later)
StatusAccepted
VulDB entry353889 [PromtEngineer localGPT up to 4d41c7d1713b16b216d8e062e51a5dd88b20b054 LLM Prompt backend/server.py _route_using_overviews injection]
Points20

Do you know our Splunk app?

Download it now for free!