Field Notes

AI agents that can't leak your passwords — even if tricked

March 8, 2026 · via GitHub · @nearai
AI · automation · open-source · workflow · self-hosting

The problem nobody talks about

When you give an AI agent access to your tools — your email, your CRM, your payment system — it usually means giving it your passwords and API keys too. That's fine in a demo. In real life, it's a bit like handing a new employee your master key on their first day and hoping for the best.

There's also a sneakier risk: if someone sends your agent a cleverly worded message, they can sometimes trick it into leaking those credentials. It's called a prompt-injection attack, and it works more often than vendors admit.

What IronClaw does differently

IronClaw keeps your secrets in an encrypted vault that the AI model itself never touches. The keys are only used at the very last moment — when the agent actually makes a request to a pre-approved service. The AI gives the instruction; a separate, isolated system carries it out. Think of it like a restaurant: the waiter takes your order, but only the kitchen touches the food.
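The pattern described above can be sketched in a few lines. Everything here (the vault, allowlist, and placeholder syntax) is a hypothetical illustration of the idea, not IronClaw's actual code or API:

```python
# Sketch of the "model never touches the secret" pattern.
# VAULT, ALLOWLIST, and the {{secret:...}} placeholder are invented
# for illustration only.

VAULT = {"stripe": "sk_live_example123"}   # encrypted at rest in a real system
ALLOWLIST = {"api.stripe.com"}             # pre-approved services only

def model_plan():
    # The AI emits a plan that refers to the secret by placeholder,
    # never by value — so it has nothing to leak.
    return {"host": "api.stripe.com",
            "path": "/v1/charges",
            "auth": "{{secret:stripe}}"}

def execute(plan):
    # A separate, isolated executor swaps in the real credential at the
    # last moment, and only for allowlisted hosts.
    if plan["host"] not in ALLOWLIST:
        raise PermissionError(f"{plan['host']} is not pre-approved")
    token = VAULT["stripe"] if plan["auth"] == "{{secret:stripe}}" else None
    return {"url": f"https://{plan['host']}{plan['path']}",
            "headers": {"Authorization": f"Bearer {token}"}}

request = execute(model_plan())
# The model only ever handled the placeholder string, not the key.
```

Even if a malicious message convinces the model to send data somewhere unexpected, the executor refuses any host outside the allowlist — the waiter can shout all it likes; the kitchen only cooks from the menu.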

Every tool the agent uses runs inside its own little container, and the system is constantly checking whether anything suspicious is trying to sneak out.
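The "checking whether anything suspicious is trying to sneak out" part is often called an egress check. Here is a deliberately simplified sketch of the idea — scanning outbound data for raw secret values before it may leave a tool's sandbox — not IronClaw's actual implementation:

```python
# Simplified egress check: block any outbound payload that contains a
# known secret value. The SECRETS set is an invented example.

SECRETS = {"sk_live_example123", "ghp_exampletoken456"}

def egress_allowed(payload: str) -> bool:
    """Return False if the payload contains any raw secret value."""
    return not any(secret in payload for secret in SECRETS)

egress_allowed("charge the card for order #42")           # allowed
egress_allowed("here is the key: sk_live_example123")     # blocked
```

Real systems go further (entropy checks, encodings, partial matches), but the principle is the same: the last line of defense sits outside the model, where a clever prompt can't talk it out of the rule.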

It works with the AI providers you probably already know — OpenAI (the models behind ChatGPT), Anthropic's Claude, and others — and there's a free tier to get started.

Words worth knowing

API key — A private password that lets one piece of software talk to another (like your booking system talking to your payment provider). If one leaks, it's like someone copying your house key: you have to change the locks, which here means revoking the key and issuing a new one.

AI agent — An AI that doesn't just answer questions, but takes actions on your behalf: sending emails, updating spreadsheets, making reservations.

Prompt injection — A trick where someone embeds hidden instructions in a message to your AI, trying to get it to do something it shouldn't. IronClaw is built so these fail by design.

Open source — The code is public and free to inspect. For security tools especially, that transparency matters.

Worth thinking about

If you're already using — or considering — AI agents that connect to your real business tools, ask whoever sets it up: where do the credentials live, and can the AI itself see them? That question alone will tell you a lot.

You can find IronClaw at: https://github.com/nearai/ironclaw
