Submeter #779122: PromtEngineer localGPT Latest (commit 4d41c7d) LLM Prompt Injection via Unsanitized User Inputinformação

TítuloPromtEngineer localGPT Latest (commit 4d41c7d) LLM Prompt Injection via Unsanitized User Input
DescriçãoA critical prompt injection vulnerability exists in the AI routing mechanism that allows attackers to manipulate the Large Language Model's behavior by injecting malicious instructions through user queries. The application directly embeds unsanitized user input into system prompts without any input validation, escaping, or output verification. This enables attackers to extract sensitive information, manipulate AI responses, bypass routing logic, and potentially chain with other vulnerabilities to achieve complete system compromise.
Fonte⚠️ https://github.com/August829/CVEP/issues/9
Utilizador
 Yu_Bao (UID 89348)
Submissão13/03/2026 02h21 (há 29 dias)
Moderação27/03/2026 14h49 (15 days later)
EstadoAceite
Entrada VulDB353889 [PromtEngineer localGPT até 4d41c7d1713b16b216d8e062e51a5dd88b20b054 LLM Prompt backend/server.py _route_using_overviews Elevação de Privilégios]
Pontos20

Might our Artificial Intelligence support you?

Check our Alexa App!