提交 #779122: PromtEngineer localGPT Latest (commit 4d41c7d) LLM Prompt Injection via Unsanitized User Input信息

标题PromtEngineer localGPT Latest (commit 4d41c7d) LLM Prompt Injection via Unsanitized User Input
描述A critical prompt injection vulnerability exists in the AI routing mechanism that allows attackers to manipulate the Large Language Model's behavior by injecting malicious instructions through user queries. The application directly embeds unsanitized user input into system prompts without any input validation, escaping, or output verification. This enables attackers to extract sensitive information, manipulate AI responses, bypass routing logic, and potentially chain with other vulnerabilities to achieve complete system compromise.
来源⚠️ https://github.com/August829/CVEP/issues/9
用户
 Yu_Bao (UID 89348)
提交2026-03-13 02時21分 (25 日前)
管理2026-03-27 14時49分 (15 days later)
状态已接受
VulDB条目353889 [PromtEngineer localGPT 直到 4d41c7d1713b16b216d8e062e51a5dd88b20b054 LLM Prompt backend/server.py _route_using_overviews 权限提升]
积分20

Want to know what is going to be exploited?

We predict KEV entries!