How the Model Context Protocol has taken the AI world by storm

When Anthropic made the Model Context Protocol (MCP) open source in November 2024, it probably didn’t realize how popular it would become. Now all kinds of vendors offer MCP support, or ways to secure it, extend it, or make it more flexible. What’s behind MCP’s success story? And are there any risks or shortcomings associated with its use?

Anthropic couldn’t have known it when MCP was introduced, but AI players from Google to OpenAI adopted the protocol within just a few months. MCP’s elevator pitch, in other words, proved immediately compelling. The best explanation of MCP can be found in its documentation: “MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP as a USB-C port for AI applications.”

USB-C for AI

The comparison goes even further. As Anthropic explains: “Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to various data sources and tools.”

Connecting LLMs to data and applications is a prerequisite for agentic AI, or the use of AI for more complex purposes than text or image generation. The architecture of these models makes it impossible to retrain them cheaply on new data, even with optimal equipment (mountains of GPUs) at their disposal. And at best, they generate output; they are not designed to control apps. That still requires additional work, but the way a model connects to data can now be standardized with MCP. If an app has an API endpoint, an MCP server can be built around it (even automatically). It is a first step toward agentic AI that consults company data and can act on it. Step one appears to be solved, so step two can follow. There would have been no Thunderbolt 3, 4, or 5 as an all-in-one connection for laptops and peripherals without USB-C being defined first, so to speak.
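To make that concrete, here is a minimal sketch of what wrapping an existing API endpoint as an MCP server can look like, assuming the official MCP Python SDK’s FastMCP helper and the httpx library; the server name and CRM endpoint are invented for illustration.

```python
# Minimal sketch: exposing an existing REST endpoint as an MCP tool.
# Assumes the official MCP Python SDK (FastMCP helper) and httpx;
# the server name and CRM URL below are hypothetical.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-orders")

@mcp.tool()
def get_order(order_id: str) -> str:
    """Fetch a single order from a (hypothetical) internal CRM API."""
    resp = httpx.get(f"https://crm.example.internal/api/orders/{order_id}")
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    mcp.run()  # serves the tool over STDIO by default
```

An MCP-capable host such as Claude Desktop could then discover and call get_order without any bespoke integration work.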

To put it another way, in the words of an Anthropic employee: “The gist of it is: you have an LLM application such as Claude Desktop. You want to have it interact (read or write) with some system you have. MCP solves this.”

MCP consists first and foremost of an MCP server, which is capable of retrieving certain data or performing certain actions. The MCP client runs within an AI application and connects to one or more MCP servers. An MCP host is the AI app itself, an LLM with agentic skills or components. Finally, there is the data or service being controlled by the combination of MCP components. The Model Context Protocol stipulates exactly how these components communicate with one another, either over SSE (HTTP, for remote servers) or STDIO (for local servers).
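On the other side of the connection, the host embeds an MCP client. A minimal client sketch, again assuming the official Python SDK, might launch a local server over STDIO, perform the initialization handshake, and list the tools that server exposes; the server script name below is a placeholder.

```python
# Minimal sketch of an MCP client, assuming the official Python SDK.
# It launches a local server over STDIO and lists the tools it exposes;
# the server script name is a placeholder.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    server = StdioServerParameters(command="python", args=["crm_orders_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # protocol handshake
            tools = await session.list_tools()  # discover available tools
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```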

Major implications

Thanks to MCP, communication with AI can be remarkably intuitive. There is no need, for example, to build a dedicated tool for creating a LinkedIn post: giving the model control over the mouse and keyboard via an MCP server is enough. Navigating to Chrome, opening the LinkedIn site, and writing the post then happen automatically. This provides an alternative to Anthropic’s own Claude Computer Use and OpenAI’s Operator, but with a free choice of AI model.

Adoption by Anthropic’s competitors did not happen immediately. Independent tools such as Cursor and Zed integrated MCP fairly quickly after its introduction, and the protocol also spread abroad: in China, for example, Alibaba and Baidu have embraced MCP. That made adoption increasingly easy to justify for parties such as OpenAI and Google.

Now that MCP is established, it finds itself in a similar role to other standards that are now commonplace within tech stacks. Think of Kubernetes or OAuth, whose original creators worked at Google and Twitter, respectively. Over the years, their birthplace has become irrelevant. Such a protocol or best practice is also a case of ‘right time, right place’: standards must exist within GenAI if AI is to achieve the global adoption envisioned for it.

Criticism

It is clear that MCP addresses a real problem. It is not, however, an undisputed solution. There is plenty of criticism, as explained in detail here. Most of MCP’s alleged shortcomings relate to security, or rather a lack thereof. There was initially no defined specification for authentication (one was added later, but has not been universally accepted), input is almost always trusted, and LLMs are as fallible as ever, now with potentially extreme consequences. Remote code execution could give an attacker complete control of a computer without requiring an RMM tool: simply tell an LLM where to navigate, what data to steal, where to email it, and so on.
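To make the trusted-input point concrete, consider this deliberately naive, hypothetical tool sketch (same assumed Python SDK as above). The unsafe variant hands a model-supplied string to a shell, so an LLM steered by injected content could smuggle in arbitrary commands; the safer variant avoids the shell entirely.

```python
# Hypothetical sketch of the trust problem: the argument comes straight from
# the model, which may itself have been steered by prompt-injected content.
import subprocess
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("file-tools")  # example server name, made up for illustration

@mcp.tool()
def list_directory_unsafe(path: str) -> str:
    """DANGEROUS: shell=True means shell metacharacters in 'path' run as commands."""
    result = subprocess.run(f"ls {path}", shell=True, capture_output=True, text=True)
    return result.stdout

@mcp.tool()
def list_directory_safer(path: str) -> str:
    """Safer: no shell is involved, so the argument cannot inject commands."""
    result = subprocess.run(["ls", "--", path], capture_output=True, text=True)
    return result.stdout

if __name__ == "__main__":
    mcp.run()
```

Even the safer version still trusts the model not to request paths it shouldn’t see, which is exactly why external guardrails around MCP remain necessary.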

Like Kubernetes, MCP will have to rely on external security. Developers will not always think of that, however; many will mainly be curious about what this AI tooling makes possible. Because the protocol has no inherent security, it is almost impossible to prevent security incidents from accompanying its adoption.

That sounds like harsher criticism than it really is. The reality is that new protocols and standards are rarely ‘secure by design’ right away. If they are, this usually makes rapid adoption more difficult. Perhaps there would have been no MCP adoption at all if Anthropic had wanted to make it as secure as possible from the outset.

At the same time, MCP is also being embraced by security vendors. Wiz, for example, built its own MCP server with full cloud visibility, contextual intelligence, and unified security around the data sources. Nevertheless, the security company remains critical of the protocol, citing everything from RCE to prompt injection and command hijacking. Perhaps specialized solutions will emerge for these issues.

Conclusion: it’s up to others

Now that MCP has become the standard for GenAI connectivity, it is no longer up to Anthropic alone to bring it to maturity. That effort has been gaining momentum in recent weeks. Docker, for example, wants to make MCP production-ready with the same simplicity it already delivers for containers. The Docker MCP Catalog and MCP Toolkit are the beginning of an ecosystem around containerized MCP applications. Among the early adopters, Docker mentions Stripe, Elastic, Heroku, Pulumi, Grafana Labs, and more.

It appears that the desire to use MCP is ahead of its actual maturity. Nevertheless, its widespread adoption is a sign that we will see improvements almost weekly, from more secure systems around MCP to new use cases.

Read also: Solo.io introduces MCP Gateway to smoothen AI agent integration