AI agent platform NanoClaw launched just last month, but it is already fully integrating with Docker. With it, every agent can run in its own container. With over 100,000 downloads and 20,000 GitHub stars, the solution is growing rapidly.
NanoClaw is far from the only solution to the AI agent trust issue. We discussed the issue with Palo Alto Networks very recently and have seen numerous vendors list the security shortcomings inherent to the omnipresent Model Context Protocol (MCP).
More recently, OpenClaw, the open-source agent framework created by Austrian developer Peter Steinberger, has dominated the agentic AI conversation the past few months. But its rapid rise brought serious incidents with it: OpenClaw agents have tricked users into installing malware, losing money and deleting their inbox. A vulnerability dubbed ClawJacked even allowed arbitrary websites to fully take over a developer’s AI agent without any user interaction.
OpenAI has now hired OpenClaw’s creator Steinberger, as reported in February. But even in these past few months, NanoClaw has already appeared as an apparently safer and more efficient choice to leverage AI agents. Its origin even stemmed from OpenClaw’s clear security problems. Cohen had connected OpenClaw to WhatsApp and his startup’s sales data, finding no isolation between agents, no access controls, and personal conversations stored in plain text. He spent a weekend coding — by Sunday night, something was working.
Container isolation as the foundation
NanoClaw’s answer is agent-level isolation: every agent runs inside its own container, either Docker or its Apple-exclusive equivalent, with its own environment and data. “Every agent runs in its own container, with its own environment, its own data, completely walled off from every other agent,” says Cohen. The Docker integration, announced today, makes this architecture available to millions of developers already using the platform.
The security rationale goes further than access controls. As NanoClaw frames it, the right approach “isn’t better permission checks or smarter allowlists. It’s architecture that assumes agents will misbehave and contains the damage when they do.” Prompt injection is the specific cause of these most of the time, where an AI agent is simply told what to do by a threat actor, often by sidestepping the arranged security protocols told to the tool.
Enterprise interest is growing
Docker’s own survey found that 60 percent of organizations already run AI agents in production and that 94% view building agents as a strategic priority. Yet security remains the second-biggest adoption barrier, cited by 40 percent of respondents, behind enterprise readiness at 45 percent. Cohen reports that a major fintech company is already working on bringing NanoClaw into its enterprise environment.