CVE-2026-34070 in langchain-ai langchain

Summary

LangChain is a framework for building agents and LLM-powered applications. Prior to version 1.2.22, multiple functions in langchain_core.prompts.loading read files from paths embedded in deserialized config dicts without validating against directory traversal or absolute path injection. When an application passes user-influenced prompt configurations to load_prompt() or load_prompt_from_config(), an attacker can read arbitrary files on the host filesystem, constrained only by file-extension checks (.txt for templates, .json/.yaml for examples). This issue has been patched in version 1.2.22.
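The flaw described above follows a common pattern: a file path taken from a deserialized config dict is read directly, gated only by an extension check. The sketch below is illustrative only, not LangChain's actual code; the function names (`unsafe_load_template`, `safe_load_template`) and the `template_path` key are hypothetical. It contrasts the vulnerable pattern with a hardened variant that resolves the path and rejects anything escaping a trusted base directory.

```python
from pathlib import Path


def unsafe_load_template(config: dict) -> str:
    # Vulnerable pattern: the path comes straight from an
    # attacker-influenced config dict, constrained only by a
    # file-extension check, so "../../etc/secrets.txt" passes.
    template_path = Path(config["template_path"])
    if template_path.suffix != ".txt":
        raise ValueError("only .txt templates allowed")
    return template_path.read_text()


def safe_load_template(config: dict, base_dir: str) -> str:
    # Hardened variant: anchor the path to a trusted base
    # directory, resolve symlinks and ".." components, and
    # require the result to stay inside that directory.
    base = Path(base_dir).resolve()
    candidate = (base / config["template_path"]).resolve()
    if candidate.suffix != ".txt":
        raise ValueError("only .txt templates allowed")
    if not candidate.is_relative_to(base):
        raise ValueError("path escapes the template directory")
    return candidate.read_text()
```

`Path.is_relative_to` requires Python 3.9+; on older versions the same check can be done by comparing `candidate` against `base` after `resolve()`. The key design point is validating the *resolved* path, since `..` segments are only eliminated at resolution time.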

Assigner

GitHub_M

Reserved

2026-03-25

Disclosed

2026-03-31

Entry

VulDB provides additional information and datapoints for this CVE.
