| Title | FoundationAgents MetaGPT 0.8.1 Eval Injection (CWE-95) |
|---|
| Description | # Technical Details
A Code Injection (Eval Injection) vulnerability exists in the `xml_fill` method of `metagpt/actions/action_node.py` in MetaGPT.
The application uses the unsafe `eval()` function to parse strings from LLM responses into Python objects for `list` and `dict` field types. An attacker who can influence the LLM's output (e.g., via prompt injection) can inject arbitrary Python code, which is then executed on the server.
# Vulnerable Code
File: metagpt/actions/action_node.py
Method: ActionNode.xml_fill
Why: When extracting fields of type `list` or `dict`, the extracted regex match `raw_value` is passed directly to `eval(raw_value)` without validation, instead of a safe literal parser such as `ast.literal_eval`.
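A minimal sketch of the vulnerable pattern and its fix (the regex and function shape are illustrative simplifications, not the exact MetaGPT code):

```python
import ast
import re


def xml_fill_unsafe(context: str, field: str):
    """Simplified sketch of the vulnerable pattern: the tag body is
    handed straight to eval(), so any Python expression it contains runs."""
    match = re.search(rf"<{field}>(.*?)</{field}>", context, re.DOTALL)
    raw_value = match.group(1)
    return eval(raw_value)  # CWE-95: evaluates attacker-influenced text


def xml_fill_safe(context: str, field: str):
    """Safe variant: ast.literal_eval accepts only Python literals
    (lists, dicts, strings, numbers) and raises on anything else."""
    match = re.search(rf"<{field}>(.*?)</{field}>", context, re.DOTALL)
    raw_value = match.group(1)
    return ast.literal_eval(raw_value)
```

With the safe variant, a payload like `__import__('os').system(...)` is rejected with a `ValueError` instead of being executed, while legitimate list/dict output still parses.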
# Reproduction
1. In MetaGPT, initialize an ActionNode with `expected_type=list` or `dict`.
2. Construct a prompt injection payload that forces the LLM to output a malicious XML tag, e.g.:
`<FileExtraction>__import__('os').system('touch /tmp/verify_rce') or ['pwned']</FileExtraction>`
3. Process the context using `node.xml_fill(context)`.
4. The `eval()` function executes the payload. Verify `/tmp/verify_rce` exists on the host.
# Impact
- Remote Code Execution (RCE): An attacker capable of performing Prompt Injection can force the LLM to output a malicious XML tag. When processed, this executes arbitrary commands, allowing the attacker to access sensitive files or compromise the host system. |
|---|
| Source | https://github.com/FoundationAgents/MetaGPT/issues/1928 |
|---|
| User | Eric-c (UID 96848) |
|---|
| Submission | 03/28/2026 03:58 |
|---|
| Moderation | 04/09/2026 14:04 (12 days later) |
|---|
| Status | Accepted |
|---|
| VulDB entry | 356525 [FoundationAgents MetaGPT up to 0.8.1 XML action_node.py ActionNode.xml_fill eval injection] |
|---|
| Points | 20 |
|---|