
Multi-agent systems set to dominate IT environments in 2026

The image of slow or hesitant AI adoption among enterprises is persistent. Setting up AI systems requires a large-scale approach, meaning that many isolated pilots do not survive the transition to production. But in the background, an agent breakthrough has already taken place, as figures from new Databricks research show. After GenAI and agentic AI in previous years, the buzzword for 2026 could well be ‘multi-agentic’ – and for good reason.

Linking AI tools has proven to be a critical step in the technology’s maturation. The Model Context Protocol already seems years old when you look at its almost universal adoption. Complaints about its security aside, the state-of-the-art has already moved beyond such fundamental protocols and standards. Instead, IT vendors are focusing on the management and oversight of AI agents, which are increasingly performing tasks in a group effort.

From chatbot to workflow

Databricks sees in the use of its own Data Intelligence Platform that organizations are rapidly changing their approach. Although the State of AI Agents report covers most of 2025, the company concedes that developments from later in the year were not even detectable in November 2024, when the research period began.

IT environments that are currently being reorganized will no longer rely on a few scattered chatbots for complex tasks. Multi-agent workflows, in which multiple AI tools jointly automate tasks, grew 327 percent on the Databricks platform. To illustrate what that means, the researchers offer an example. Take a financial organization, which may use multiple agents to determine the intent behind a message or phone call, retrieve relevant documents, and perform compliance checks. These agents operate as a coordinated group rather than being called separately. The output may ultimately still be a chatbot response, but there is no reason it has to be. Within Databricks, the Supervisor Agent is the most popular, directing and monitoring the actions of other agents, whatever those agents may be doing. In second place is the Information Extraction agent, which extracts relevant context from company data.
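To make the supervisor pattern concrete, here is a minimal, hypothetical sketch of the financial-organization example above: one supervisor routes a request through intent, retrieval, and compliance sub-agents and aggregates their results. All function names and the toy logic are illustrative assumptions, not Databricks APIs; in practice each sub-agent would wrap an LLM call or a data service.

```python
def intent_agent(message: str) -> str:
    """Classify the intent behind an incoming message (toy heuristic)."""
    return "loan_inquiry" if "loan" in message.lower() else "general"

def retrieval_agent(intent: str) -> list[str]:
    """Fetch documents relevant to the detected intent (stubbed lookup)."""
    docs = {"loan_inquiry": ["loan_policy.pdf", "rates_2026.pdf"]}
    return docs.get(intent, [])

def compliance_agent(docs: list[str]) -> bool:
    """Check that every retrieved document passes a compliance rule (stubbed)."""
    return all(doc.endswith(".pdf") for doc in docs)

def supervisor(message: str) -> dict:
    """Direct the sub-agents in sequence and monitor their combined result."""
    intent = intent_agent(message)
    docs = retrieval_agent(intent)
    approved = compliance_agent(docs)
    return {"intent": intent, "documents": docs, "compliant": approved}

print(supervisor("What is the rate on a home loan?"))
```

The key design point the report highlights is visible even in this toy version: the sub-agents never call each other directly, so the supervisor remains the single place where the workflow can be monitored, logged, or overridden.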

Databases conquered

We may not even have gotten to the most important point yet: the conquest of databases by these (multi-)agent systems. Today, 80 percent of all databases are built by AI agents, while 97 percent (!) of all testing in development environments is no longer done by humans.

This raises the question of whether databases and testing procedures still use the right architecture. After all, the humans for whom these systems were designed have largely disappeared from these environments. Databricks already has an answer in the form of the ‘lakebase’. This is a Postgres-based database that integrates with Databricks’ lakehouse architecture and is specifically designed for agents. The concept is a creation of Neon, which was acquired by Databricks in the middle of last year.

Remarkably varied

The use cases for AI are expanding just as much as the AI systems themselves. If we can identify a trend, it is that the tasks taken over by AI are primarily routine or require large amounts of data. Humans are either not interested in these tasks or do not have the time for them. Examples include market research, predictive maintenance, and ranking customer service cases, which happen to be the top three from the survey. This variety already shows that AI adoption is not sector-specific, even though it relies heavily on the processes that characterize the retail, logistics, and financial industries, among others. You could argue that we should not really look at specific sectors, but rather at specific tasks. Every organization has a target group, whether they are customers, citizens, or patients. Databricks users evidently see responding to that group's demands as a task well suited to AI.

Databricks offers a platform that naturally provides tools for AI use, but adoption of the technology is far from mandatory. We mention this because it is clearer why organizations want to use AI than how much adoption will ultimately yield. Cost savings are a motivating factor, but there is still no evidence that end users are satisfied with the shift. It is simply too early to determine how multi-agent systems will change human work, beyond the fact that many tasks can be automated. What cannot be automated? How should IT environments change to optimize multi-agent workflows? These questions may be answered in future reports.

Beyond LLMs

One clear conclusion has already been reached regarding the technology that started this AI wave: the Large Language Model. From May to July 2025, more organizations (39 percent) used only one model than three or more (36 percent), but the shift between those figures and the ones covering August to October 2025 was enormous. During the latter period, only 22 percent of all organizations used a single model, compared to a majority (59 percent) that already had three or more LLMs in production.

This suggests that the days of simple LLM-based solutions are numbered. Delivering enterprise-ready AI requires some complexity. This is due to the simple fact that LLMs are not infinitely scalable; after the growth spurt from GPT-3 to GPT-4 in 2022-23, we have only seen gradual improvement. The real power behind today's (still much better than GPT-4) AI models from Google, Anthropic, OpenAI, and others lies in the connections these models make with other systems. Incidentally, the models themselves, as we find them in APIs and chatbot windows, are often much more than just an LLM. We now call this combination multi-agent systems, and these in turn need to be managed. Taming this proliferation of AI is the next challenge, but the days of countless separate AI tools are over.

Read also: Airrived introduces the ‘Agentic OS’. What does that mean?