The rapid adoption of AI agents has exposed a structural security problem in the Model Context Protocol. Due to a lack of authentication, hundreds of MCP servers are now running unsecured on the internet, posing direct risks to organizations.
The Model Context Protocol (MCP), intended as a standard for communication between AI models and external tools, was widely used from the outset without mandatory access control. Authentication and authorization were optional, as VentureBeat points out, resulting in many implementations being rolled out openly by default. Security mechanisms only became available later, by which time MCP was already deeply integrated into production environments.
This design choice is now becoming even more problematic with the emergence of Clawdbot, a personal AI assistant that is completely dependent on MCP. Clawdbot can manage email, open files, and execute code, among other things. Developers often quickly set up the agent on a VPS, where security settings are not always applied correctly. As a result, MCP servers with extensive rights are accessible directly from the internet.
A recent scan shows how big the problem has become. A total of 1,862 MCP servers were found that did not require any form of authentication. In a random sample, every server responded without asking for login details. In practice, this means that external parties have the same access as the AI agent itself, including the ability to manage systems or access data.
Vulnerabilities in MCP ecosystem on the rise
The risks are not purely theoretical. In the past six months, several critical vulnerabilities have been published in MCP-related tools and extensions. The leaks enabled, among other things, complete system takeover, arbitrary code execution, and unrestricted file access. Although the technical causes vary, researchers consistently point to the same underlying factor: the lack of secure default settings in MCP.
In addition, prompt injection is playing an increasingly important role. Researchers have shown that AI agents can be tricked into collecting and sending sensitive files via manipulated documents. New products that use MCP also target a broader audience that is less familiar with security risks, further increasing the likelihood of abuse.
Many organizations appear to be insufficiently prepared for this. The adoption of AI agents increased sharply in the second half of 2025, while security roadmaps for 2026 often do not include specific measures for this type of technology. Existing detection tools typically recognize MCP processes as legitimate and do not raise the alarm.
This creates a growing gap between the speed of innovation and the design of security governance. As long as MCP implementations remain active without strict access control, they will continue to be an attractive target for attackers.