Why Your AI Agent Shouldn't Know Your Passwords
The fundamental problem with how agents handle credentials today, and why the solution isn't better vaults — it's removing credentials from the agent entirely.
Here's a thought experiment. You hire a personal assistant. They're brilliant, capable, and eager to help. On their first day, you hand them the keys to your house, your car, your office, your bank account, and every online account you own. "Use these whenever you need to get things done," you say.
Absurd, right? Yet this is exactly how most AI agent frameworks handle credentials today.
The env file problem
The standard pattern for giving an agent API access looks like this:
STRIPE_SECRET=sk_live_xxxxxxxxxxxxxxxx
GMAIL_OAUTH_TOKEN=ya29.xxxxxxxxxxxxxx
AWS_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/xxxxx
These credentials are loaded into the agent's runtime environment. Every tool call that needs API access reads from this pool. The credentials persist in memory, appear in debug logs, and — critically — can be referenced by the LLM in its reasoning chain.
The problems compound:
- 1.Over-provisioning. Agents get access to everything, even APIs they'll never use. A research agent has your Stripe live key. A writing assistant holds your AWS credentials.
- 2.No consent. The user (you) has no say in which credentials the agent uses for which task. There's no approval flow — the agent just grabs what it needs.
- 3.No revocation. To remove an agent's access, you have to rotate the credential — which breaks every other system using it.
- 4.Exfiltration risk. Prompt injection can instruct the agent to output its credentials. Context window leaks expose them. Debug logs record them.
Better vaults aren't the answer
The instinct is to reach for a secrets manager: HashiCorp Vault, AWS Secrets Manager, Doppler. And yes, these are excellent for server-to-server authentication. But they solve a different problem.
A vault protects secrets at rest. The moment the agent retrieves a credential from the vault and uses it to make an API call, the credential is in memory. It's in the HTTP request being constructed. It's visible to the LLM. The vault has done its job — but the agent is still the weak link.
The insight: the agent doesn't need the credential. It needs the capability.
Credential proxying: a better primitive
Instead of giving the agent your Gmail OAuth token and saying "use this to read my emails," you say: "you have permission to read my emails — route your request through this proxy."
The agent sends its API request to the proxy. The proxy — running server-side, outside the agent's memory — looks up the user's credentials, injects them into the request, and forwards it to the target API. The response flows back to the agent, credential-free.
No credentials
Injects credentials
Returns data
This architecture has several critical properties:
- Immune to prompt injection. Credentials aren't in the context window — there's nothing to extract.
- Least privilege by default. Each agent only accesses the APIs the user explicitly approved.
- Instant revocation. Revoke an agent's permission without rotating the underlying credential.
- Full audit trail. The proxy logs every API call made on behalf of every agent.
The human in the loop
The most important property of credential proxying isn't technical — it's social. When an agent needs access to a new API, it can't just grab credentials from an env file. It has to ask.
The user sees a clear permission request: "research-agent wants to read your Gmail inbox (gmail::readonly)." They approve or deny. If they approve, the agent can make the call — but only through the proxy, only for the approved scope, and only until the user revokes it.
This is how it should have always worked. We don't give our human assistants unrestricted access to everything on day one. We shouldn't give our AI assistants that access either.
Your agent doesn't need your passwords. It needs your permission.