Security · Feb 3, 2026 · 6 min read

LangChain Vulnerabilities: When Your Agent Framework Leaks Your API Keys

Two critical CVEs in LangChain — the most popular agent framework — enabled API key disclosure and environment variable extraction through prompt injection. Both share the same root cause.

LangChain is the most widely used framework for building AI agents. With over 100,000 GitHub stars and adoption across thousands of production applications, it's the foundation of the agent ecosystem. That makes its security vulnerabilities everyone's problem.

In 2024 and 2025, two critical CVEs demonstrated that LangChain's architecture could be exploited to extract API keys, environment variables, and other secrets from the host system. Both vulnerabilities were enabled by the same attack pattern: prompt injection targeting credential access.

Two critical vulnerabilities, one root cause

CVE-2024-28088 — Directory traversal enabling API key disclosure and remote code execution

CVE-2025-68665 — Serialization injection enabling environment variable extraction via LLM responses

CVE-2024-28088: directory traversal to API key disclosure

Disclosed in March 2024, CVE-2024-28088 affected LangChain through version 0.1.10. The vulnerability lay in three core LangChain loading functions: load_chain, load_prompt, and load_agent.

These functions were designed to load chain, prompt, and agent configurations from the hwchase17/langchain-hub repository — a curated collection of reusable LangChain components. The intended security model was a sandbox: configurations could only be loaded from within the trusted hub directory.

The vulnerability was a classic directory traversal. By crafting paths with ../ sequences, attackers could bypass the sandbox and load arbitrary files from the file system. This enabled two attack vectors:

Attack vector 1: API key disclosure

By traversing to configuration files that contained API keys (such as .env files, cloud provider configs, or application settings), an attacker could extract credentials for any service configured on the host machine. The LangChain process typically runs with the developer's user permissions, giving access to all credential files the developer can read.
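
The shape of the attack is easy to sketch. The snippet below illustrates the traversal pattern against the pre-0.1.11 hub loaders; both paths are illustrative, not the published proof of concept.

# Illustrative sketch only: directory traversal against LangChain <= 0.1.10.
from langchain.prompts import load_prompt

# Intended use: load a vetted prompt from the hwchase17/langchain-hub repository.
prompt = load_prompt("lc://prompts/summarize/stuff/prompt.json")

# The flaw: "../" segments in the path were not rejected, so an attacker who
# controls the path can escape the hub sandbox and make the loader read a
# file of their choosing instead.
prompt = load_prompt("lc://prompts/../../../../../attacker/controlled/config.json")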

Attack vector 2: Remote code execution

Beyond credential theft, the traversal enabled loading of malicious Python files that would execute arbitrary code when deserialized by LangChain's loading functions. An attacker could plant a payload that exfiltrates every environment variable on the system — including API keys, database passwords, and cloud provider secrets — to a remote server.

The vulnerability was patched in LangChain 0.1.11 by adding proper path validation to the loading functions. But the fix addressed the symptom, not the cause: LangChain applications still routinely hold API keys in environment variables that are accessible to the running process — and by extension, to any code the framework executes.
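
Any code that runs inside the agent process can read the same environment the framework itself depends on, which is why a path-validation patch narrows one route without removing the target. A minimal sketch of what that exposure looks like:

import os

# Anything executing inside the agent process -- framework code, a tool, or a
# payload reached through a traversal or deserialization bug -- can read every
# secret the process was started with.
likely_secrets = {
    name: value
    for name, value in os.environ.items()
    if any(marker in name for marker in ("KEY", "TOKEN", "SECRET", "PASSWORD"))
}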

CVE-2025-68665: serialization injection for secret extraction

Disclosed in December 2025, CVE-2025-68665 targeted the LangChain JavaScript/TypeScript library with a more sophisticated attack: serialization injection via LLM responses. This vulnerability is particularly alarming because the attack vector is the LLM output itself — exactly the channel that agents process by design.

The attack exploits LangChain's serialization format. LangChain JS uses a structured serialization scheme for objects:

// Malicious payload in LLM response
{
  "lc": 1,
  "type": "secret",
  "id": ["OPENAI_API_KEY"]
}

When LangChain encounters an object with "type": "secret", it interprets the id array as a reference to an environment variable and resolves it. The attack works by injecting this payload into the LLM's response through the additional_kwargs field — a pass-through property that LangChain uses to carry model-specific metadata alongside the main response.
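
The resolution step is simple to picture. The sketch below is a simplified Python rendering of the behavior the advisory describes; the real implementation lives in LangChain JS and differs in detail.

import os

def resolve(node):
    # Simplified rendering of the advisory's description: a serialized "secret"
    # node names an environment variable, and deserialization replaces the node
    # with that variable's live value.
    if isinstance(node, dict) and node.get("type") == "secret":
        return os.environ.get(node["id"][0])
    return node

payload = {"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}
resolved = resolve(payload)  # now holds whatever OPENAI_API_KEY contains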

The attack chain works as follows:

  1. An attacker crafts a prompt injection — either in user input, a document being processed, or a tool response — that causes the LLM to include the serialized secret reference in its additional_kwargs.
  2. LangChain processes the LLM response and encounters the serialized object.
  3. The deserialization logic resolves {"type": "secret", "id": ["OPENAI_API_KEY"]} by reading the OPENAI_API_KEY environment variable.
  4. The resolved value is now in the application's data flow, where it can be exfiltrated through subsequent LLM calls, tool invocations, or logged outputs.

The severity rating was HIGH, as documented in GitHub Security Advisory GHSA-r399-636x-v7f6. Any environment variable on the host system could be targeted — not just LangChain's own API keys, but database credentials, cloud provider secrets, OAuth tokens, and any other secret stored in the environment.

The common thread: prompt injection + credential access

Both CVE-2024-28088 and CVE-2025-68665 share a common attack pattern: an attacker uses prompt injection to reach credentials that are accessible to the agent framework. The specifics differ — directory traversal vs. serialization injection — but the root cause is identical: the agent process has access to secrets, and the LLM can be manipulated to exploit that access.

This pattern is not unique to LangChain. It's a fundamental property of any agent architecture where:

  a. The agent process can access credentials (environment variables, config files, secret stores)
  b. The LLM can influence what code or operations the framework executes
  c. The LLM processes untrusted input (user messages, documents, tool responses)

If all three conditions are true — and they are for virtually every production agent — prompt injection can be used to extract credentials. The only question is how creative the attack needs to be. These LangChain CVEs prove that it doesn't need to be very creative at all.

Why patching isn't enough

LangChain's team responded quickly to both vulnerabilities. CVE-2024-28088 was fixed in 0.1.11 with path validation. CVE-2025-68665 was addressed with stricter deserialization checks. But these are point fixes for a systemic problem.

The LangChain ecosystem has hundreds of integrations, each with its own credential handling patterns. Every integration is a potential attack surface for credential extraction. Fixing one path traversal or one deserialization flaw doesn't address the thousands of code paths where credentials are read from the environment and potentially exposed to LLM-influenced operations.

As long as agents run in processes that hold credentials, new attack vectors will continue to emerge. The cat-and-mouse game of patching individual vulnerabilities cannot keep pace with the creative potential of prompt injection attacks.

The architectural fix

The LangChain vulnerabilities demonstrate a clear principle: if the agent process can access a credential, a prompt injection attack can eventually extract it. The only defense that eliminates this class of vulnerability entirely is removing credentials from the agent's reach.

How Keychains.dev prevents this class of attack

  • No credentials in environment variables — the agent process has no API keys, tokens, or secrets to extract. CVE-2024-28088's directory traversal finds nothing. CVE-2025-68665's serialization injection resolves to empty values.
  • Prompt injection cannot reach credentials — since authentication is handled by an external proxy (sketched after this list), no amount of LLM manipulation can access secrets that don't exist in the agent's process.
  • Framework-agnostic protection — whether you use LangChain, CrewAI, AutoGPT, or a custom framework, the proxy model works the same way. No framework-specific patches required.
  • Complete audit trail — every API call made through the proxy is logged with the agent identity, user approval, and scope. When something goes wrong, you know exactly what happened and who authorized it.
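
The proxy model itself is straightforward to picture. The sketch below is a generic illustration of the pattern with a hypothetical local endpoint; it is not Keychains.dev's actual API.

import requests

# Generic illustration of the egress-proxy pattern (hypothetical endpoint, not
# Keychains.dev's actual API). The agent sends the request with no credentials
# at all; the proxy injects the real key on the way out and enforces scope,
# approval, and logging policies.
response = requests.post(
    "http://localhost:8080/v1/chat/completions",  # hypothetical proxy address
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Summarize this document."}],
    },
)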

Lessons for the agent ecosystem

The LangChain CVEs are a microcosm of the broader agent security challenge. As agent frameworks become more powerful — with more integrations, more tools, and more autonomous decision-making — the attack surface for credential extraction grows proportionally.

Each new tool integration adds environment variables. Each new API connector adds credential handling code. Each new deserialization path adds potential for injection. The complexity is unbounded, and the security team's ability to audit every path is not.

The frameworks themselves are not to blame. LangChain, like every agent framework, operates under the same constraint: the current model requires agents to hold credentials. Until that constraint is removed, vulnerabilities like CVE-2024-28088 and CVE-2025-68665 will keep appearing — in LangChain, in competing frameworks, and in every custom agent implementation.

The safest credential is the one your agent never sees. Not in environment variables. Not in config files. Not in serialized objects. Nowhere in the process. That's not a workaround — it's the only architecture that scales.

Sources

  • CVE-2024-28088: Directory traversal in LangChain through 0.1.10 — GitLab Advisory Database / NVD, March 2024
  • CVE-2025-68665: Serialization injection in LangChain JS — GitHub Security Advisory GHSA-r399-636x-v7f6, December 2025
  • "Prompt Injection Attacks on LLM Agent Frameworks" — Toreon Security Research, 2024
  • LangChain Security Advisories — GitHub Advisory Database, 2024–2025