CVE-2026-34070: LangChain Core has Path Traversal vulnerabilities in legacy `load_prompt` functions
Multiple functions in `langchain_core.prompts.loading` read files from paths embedded in deserialized config dicts without validating against directory traversal or absolute path injection. When an application passes user-influenced prompt configurations to `load_prompt()` or `load_prompt_from_config()`, an attacker can read arbitrary files on the host filesystem, constrained only by file-extension checks (`.txt` for templates, `.json`/`.yaml` for examples).
Note: The affected functions (`load_prompt`, `load_prompt_from_config`, and the `.save()` method on prompt classes) are undocumented legacy APIs. They are superseded by the `dumpd`/`dumps`/`load`/`loads` serialization APIs in `langchain_core.load`, which do not perform filesystem reads and use an allowlist-based security model. As part of this fix, the legacy APIs have been formally deprecated and will be removed in 2.0.0.
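A pre-load guard for applications that still hand user-influenced configs to the legacy loaders, shown as an added example that is not LangChain's code or its fix. It assumes file references sit in keys ending in `_path` or in a string-valued `examples` key, and that all prompt files live under one directory; `base_dir` and every function name here are hypothetical.

```python
from pathlib import Path

# Assumption (not stated in the advisory): file references appear in keys
# ending in "_path", or as a string value under the "examples" key.
ALLOWED_SUFFIXES = {".txt", ".json", ".yaml"}


class UnsafePromptPath(ValueError):
    """Raised when a prompt config references a file outside the base directory."""


def _check_path(value, base_dir):
    base = Path(base_dir).resolve()
    candidate = Path(value)
    if candidate.is_absolute():
        raise UnsafePromptPath(f"absolute path not allowed: {value!r}")
    # resolve() collapses ".." segments and follows symlinks before comparing
    resolved = (base / candidate).resolve()
    if not resolved.is_relative_to(base):
        raise UnsafePromptPath(f"path escapes base directory: {value!r}")
    if resolved.suffix not in ALLOWED_SUFFIXES:
        raise UnsafePromptPath(f"unexpected extension: {value!r}")
    return resolved


def validate_prompt_config(config, base_dir):
    """Recursively check every file reference in an untrusted config.

    Returns the resolved paths, or raises UnsafePromptPath on the first
    reference that is absolute, leaves base_dir, or has another extension.
    """
    found = []
    if isinstance(config, dict):
        for key, value in config.items():
            is_ref = isinstance(value, str) and (
                str(key).endswith("_path") or key == "examples"
            )
            if is_ref:
                found.append(_check_path(value, base_dir))
            else:
                # Nested prompt configs and inline example lists are walked too
                found.extend(validate_prompt_config(value, base_dir))
    elif isinstance(config, list):
        for item in config:
            found.extend(validate_prompt_config(item, base_dir))
    return found
```

Run the guard on the parsed config before it reaches the loader, and reject the request when it raises.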
References
- github.com/advisories/GHSA-qh6h-p6c9-ff54
- github.com/langchain-ai/langchain
- github.com/langchain-ai/langchain/commit/27add913474e01e33bededf4096151130ba0d47c
- github.com/langchain-ai/langchain/releases/tag/langchain-core==1.2.22
- github.com/langchain-ai/langchain/security/advisories/GHSA-qh6h-p6c9-ff54
- nvd.nist.gov/vuln/detail/CVE-2026-34070