Отправить #779122: PromtEngineer localGPT Latest (commit 4d41c7d) LLM Prompt Injection via Unsanitized User InputИнформация

НазваниеPromtEngineer localGPT Latest (commit 4d41c7d) LLM Prompt Injection via Unsanitized User Input
ОписаниеA critical prompt injection vulnerability exists in the AI routing mechanism that allows attackers to manipulate the Large Language Model's behavior by injecting malicious instructions through user queries. The application directly embeds unsanitized user input into system prompts without any input validation, escaping, or output verification. This enables attackers to extract sensitive information, manipulate AI responses, bypass routing logic, and potentially chain with other vulnerabilities to achieve complete system compromise.
Источник⚠️ https://github.com/August829/CVEP/issues/9
Пользователь
 Yu_Bao (UID 89348)
Представление13.03.2026 02:21 (26 дни назад)
Модерация27.03.2026 14:49 (15 days later)
Статуспринято
Запись VulDB353889 [PromtEngineer localGPT до 4d41c7d1713b16b216d8e062e51a5dd88b20b054 LLM Prompt backend/server.py _route_using_overviews эскалация привилегий]
Баллы20

Are you interested in using VulDB?

Download the whitepaper to learn more about our service!