AI agents are currently attracting a great deal of attention. They promise to handle many repetitive tasks autonomously and without errors, which sounds attractive to many companies. But what exactly is the benefit, and how realistic is it? What should you look for in this wave of artificial intelligence innovation? We discuss this in a roundtable with experts from Cloudera, Pega, Salesforce, and ServiceNow.
AI has been all the rage lately, primarily due to the enormous progress generative AI has made in recent years. At the end of last year, agentic AI was added to the mix: autonomous systems that react to their environment, perform tasks independently, process information, and make decisions based on data.
Everything we have seen of agentic AI so far balances between tangible innovation and future promise. It is anything but a vague promise for the future, and progress is so rapid that it is almost impossible to predict how far these autonomous systems will have come in a year.
What we do know is that AI agents offer significant added value in applications where customer data can be used securely and efficiently. They have the potential to drastically accelerate business processes, provided the preconditions are right. During the roundtable discussion, a number of ingredients for a successful project emerged: a robust data foundation, clear governance, and an understanding of what you want to achieve with AI.
In search of a definition
At the table, Peter van der Putten, Director of the AI Lab at Pega, notes that we can only have a broad discussion about agentic AI once we have cleared up the confusion surrounding the subject. In his view, things still go wrong too often. “We need to move beyond the Babylonian confusion surrounding agents. Some people talk about conversational assistants or chatbots, which have been around for years and often work based on simple rules or workflows. Others refer to real agentic AI with multi-agent systems that can perceive the environment, use tools, make plans, and execute them,” says Van der Putten.
As Head of Solution Consulting at ServiceNow, Nick Botter visits many companies to look for AI use cases. Van der Putten’s explanation prompts him to elaborate on the definition of agentic AI. “For me, an autonomous AI agent is an agent that works independently without human intervention, understands the context, and can then take action on its own,” explains Botter. “The difference with conventional systems is that it’s not just about conversation and answers, but that the agent actually does the work itself.”
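To make that definition a little more concrete, here is a minimal, hypothetical sketch of the loop Van der Putten and Botter describe: the agent perceives its context, makes a plan, and executes it with tools, without a human driving each step. Every name in it (the Tool class, plan, agent_loop, the toy tools) is an illustrative assumption, not any vendor’s API.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    name: str
    run: Callable[[str], str]  # takes the current observation, returns a result


def plan(goal: str, observation: str) -> list[str]:
    """Toy planner: turn a goal and the current observation into tool steps."""
    if "refund" in observation.lower():
        return ["lookup_order", "issue_refund"]
    return ["lookup_order"]


def agent_loop(goal: str, observe: Callable[[], str], tools: dict[str, Tool]) -> list[str]:
    """One pass of perceive -> plan -> act, returning what the agent did."""
    observation = observe()            # perceive the environment
    steps = plan(goal, observation)    # make a plan
    results = []
    for step in steps:                 # execute the plan with tools
        results.append(tools[step].run(observation))
    return results


tools = {
    "lookup_order": Tool("lookup_order", lambda obs: f"order looked up for: {obs}"),
    "issue_refund": Tool("issue_refund", lambda obs: "refund issued"),
}
print(agent_loop("resolve the ticket", lambda: "Customer asks for a refund on order 123", tools))
```

In a real system, the planner would typically be a language model and the tools would call business systems, but the structure of the loop stays the same.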
This clear outline of the latest AI automatically raises an additional question: AI agents are here, but to what extent can you currently deploy one within your company? What is realistic? Looking at companies today, you see a lot of experimentation going on. Defining an initial use case can be valuable; ask yourself whether deploying agentic AI in that specific domain is realistic. Jan Verbrugghe, Senior Director of Solution Engineering at Salesforce, often sees projects that relate to customer service. An agent can be useful there because people work in a structured way and according to rules. “An agent can handle such conditions perfectly,” Verbrugghe explains. At the same time, the reality is that using AI agents to automate a large financial process involving highly sensitive information may be better left for the future.
The organizations Van der Putten, Botter, and Verbrugghe work for are now making tools available to enable agentic AI on their platforms. At the same time, they are also looking at how they can use it internally. Van der Putten shares, for example, that Pega has developed Iris internally. This agentic AI system functions as a colleague by processing emails from employees. It also has orchestration capabilities, which means it can delegate work: an email can be handed to a specialized agent that knows exactly what steps are needed in that domain. This ties in with Van der Putten’s definition at the outset.
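The orchestration pattern described here can be sketched in a few lines. The snippet below is an illustration of the pattern only, not Pega’s actual Iris implementation: an orchestrator classifies an incoming email and delegates it to a specialized agent for that domain. The classifier, the domains, and the specialist functions are all invented for the example.

```python
def classify(email: str) -> str:
    """Toy classifier: decide which domain an incoming email belongs to."""
    text = email.lower()
    if "invoice" in text or "payment" in text:
        return "finance"
    if "laptop" in text or "password" in text:
        return "it_support"
    return "general"


# Specialized agents: each one knows the steps needed in its own domain.
SPECIALISTS = {
    "finance": lambda email: f"[finance agent] verified the invoice mentioned in: {email!r}",
    "it_support": lambda email: f"[IT agent] opened a ticket for: {email!r}",
    "general": lambda email: f"[general agent] drafted a reply to: {email!r}",
}


def orchestrate(email: str) -> str:
    """The orchestrator routes the email to the right specialist and returns its result."""
    domain = classify(email)
    return SPECIALISTS[domain](email)


print(orchestrate("My laptop will not start after this morning's update"))
```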
Agents stand or fall on data
Such an application shows that there are now concrete examples of what we can do with agentic AI. However, before you can get such a system up and running, it is important to look at the basics, or the ingredients, as we mentioned earlier. Against this backdrop, Cloudera’s Regional Vice President Benelux, Rein de Jong, joined us at the table. Cloudera’s data platform grew alongside the emergence of the first cloud providers, and over the years it has seen data and AI trends come and go. This has made Cloudera a platform that acts as a link between data and AI. Users prepare data on the platform and can then decide for themselves what to do with it. AI is one possible option, including agentic AI.
The data platform is therefore primarily concerned with preparing and presenting data for AI agents. What an AI agent can do and actually does is not necessarily relevant in the Cloudera context, although it is something to consider when organizing your data. According to De Jong, data must be processed correctly before it is used for AI applications. “We ensure that the right data with the right authorization level is available for different data products. We guarantee that the entire data pod is consistent: that data at moment A is also what it should be at moment B.”
The idea here is to bring computing power to the data, rather than the other way around, regardless of where the data is located. According to De Jong, only then can you get the desired performance from your data. “In this case, we bring the agents to the data, not the data to the agent. Moving data causes ingestion problems and is generally vulnerable. You don’t want that with AI agents,” says De Jong.
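As an illustration of that principle, here is a hedged sketch: the agent sends its question to the platform where the data lives, and only the aggregated answer travels back, instead of raw records being extracted first. The endpoint URL and the query API below are invented for illustration; they are not Cloudera’s actual interface.

```python
import json
import urllib.request


def query_in_place(endpoint: str, sql: str) -> list[dict]:
    """Send the computation (a query) to the platform; only the answer travels back."""
    payload = json.dumps({"query": sql}).encode()
    request = urllib.request.Request(
        endpoint, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)


# The agent asks a narrow question where the data lives; raw customer records never move.
# rows = query_in_place(
#     "https://data-platform.example/api/query",
#     "SELECT status, COUNT(*) AS orders FROM orders GROUP BY status",
# )
```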
The right source
The other roundtable participants broadly agree with De Jong’s description of the situation. The AI agents that are currently successful all have one thing in common: a great deal of time and effort was initially put into cleaning and perfecting the data. That is the key to effective AI agents. “The importance of data has never been greater,” Verbrugghe observes. “We have developed a data platform that unlocks data and makes it usable for AI applications and automation in general.” Combined with the domain knowledge from Salesforce’s software side, this can add real value when setting up such a system.
What is interesting here is that companies are increasingly opting for a hybrid approach. They use both cloud solutions and on-premise systems for their AI applications. Or they use multiple data platforms side by side, so that they have the most suitable tool available for each type of workload—all this to organize data as effectively as possible. De Jong has seen this with a customer using Cloudera and a competing platform. “It sounds contradictory, but using both platforms saves costs.”
If you want an AI agent to comply with the applicable standards, you must include real-time data processing from the programming phase onward. Only then can agentic AI take over or supplement human work in a business environment. At the same time, data ingestion is becoming increasingly important, as data comes from everywhere: sometimes structured, sometimes unstructured, possibly with real-time processing requirements. These are all things an agent can, and often must, be built around.
Keep people in mind
The technical setup gives an agentic AI project an immediate push in the right direction. However, a project can only succeed if you also think about the human factor. There is a social debate surrounding this human role: what will the future balance between automation and human input look like? “It is important to advocate for both AI and automation. Otherwise, AI will remain nothing more than a nice insight in the lab,” says Van der Putten. “If you really want to create actionable intelligence, you have to be able to combine it with automation.”
The way agents change work ultimately requires a different attitude from people. At least, that is what Botter observes. Agents are very different from what people are used to from previous technological innovations. Botter believes a distinction between a digital mindset and an AI mindset is necessary. “With digitization, we often simply digitized existing processes. With an AI mindset, however, we look at the desired result and let the AI itself figure out how to get there.”
The first trials and applications have already yielded some useful conclusions. Verbrugghe sees a clear lesson from use in companies: treat an agent like a human being. “The most efficient way our customers can use AI right now is to treat it like a human being. You give instructions and guidelines, not strict rules. You specify the data, the KPIs, and the desired outcome, and then let it solve the problem. An AI agent can arrive at a solution in a completely different way than would be possible with structured processes.”
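A minimal sketch of that idea, under the assumption that an agent is configured with a brief rather than a script: you describe the goal, guidelines, data sources, and KPIs, and leave the “how” to the agent. The AgentBrief structure and the run_agent stub are illustrative assumptions, not Salesforce’s actual configuration model.

```python
from dataclasses import dataclass, field


@dataclass
class AgentBrief:
    goal: str                                   # the desired outcome, not a step-by-step script
    guidelines: list[str] = field(default_factory=list)
    data_sources: list[str] = field(default_factory=list)
    kpis: dict[str, float] = field(default_factory=dict)


def run_agent(brief: AgentBrief) -> str:
    """Stub runtime: a real agent would plan its own steps toward the goal."""
    return (
        f"Working toward '{brief.goal}' using {', '.join(brief.data_sources)}, "
        f"respecting {len(brief.guidelines)} guidelines, "
        f"measured on {', '.join(brief.kpis)}"
    )


brief = AgentBrief(
    goal="Resolve customer cases on first contact",
    guidelines=["Always verify the customer's identity", "Escalate refunds above 500 euros"],
    data_sources=["CRM cases", "order history"],
    kpis={"first_contact_resolution": 0.8, "customer_satisfaction": 4.5},
)
print(run_agent(brief))
```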
How do you achieve acceptable use?
Embracing an agentic AI system also means thinking about governance. This involves making the right data and tools available and creating space in which the AI can perform actions independently. Only then can the technology truly contribute to a more efficient workflow or better customer interaction. Without these preconditions, AI remains limited to observation or analysis, while it is the action-oriented application that can make a difference in practice.
This leads to a discussion about the requirements for AI systems. Organizations often impose stricter standards on AI than on human employees. Where AI agents that communicate with customers must comply with strict rules and restrictions – for example, they are not allowed to learn anything from customer conversations – there is much less control over human employees. This imbalance raises questions about the consistency of policy, especially when human interactions in practice also entail risks in terms of privacy or incorrect information transfer.
Yet it is precisely this controllability of AI that offers opportunities. Whereas human actions are difficult to monitor, AI agents can be tracked and adjusted precisely. This allows compliance departments to operate more efficiently. Instead of checking everything manually, they can focus on deviations or exceptional situations. In that sense, using AI means more control over processes and a reduction in the workload involved in monitoring and compliance.
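To illustrate that controllability, here is a small sketch under stated assumptions: every agent action lands in an audit trail, and compliance reviews only the deviations rather than every interaction. The log format and the deviation rule (refunds above a threshold) are invented for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AgentAction:
    agent: str
    action: str
    amount: float      # e.g. the refund value an agent handled
    timestamp: str


AUDIT_LOG: list[AgentAction] = []


def record(agent: str, action: str, amount: float = 0.0) -> None:
    """Every action an agent takes is appended to an audit trail."""
    AUDIT_LOG.append(
        AgentAction(agent, action, amount, datetime.now(timezone.utc).isoformat())
    )


def deviations(threshold: float = 250.0) -> list[AgentAction]:
    """Compliance reviews only the exceptions, e.g. refunds above a threshold."""
    return [a for a in AUDIT_LOG if a.action == "issue_refund" and a.amount > threshold]


record("service-agent-1", "answer_question")
record("service-agent-1", "issue_refund", amount=40.0)
record("service-agent-2", "issue_refund", amount=900.0)
print(deviations())  # only the 900-euro refund needs human review
```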
Future prospects
Scale is a decisive factor in reaping the benefits of AI agents. Structural efficiency gains become apparent only when the technology is widely deployed within an organization. The effect remains limited if agents are only used in small, isolated applications. Large-scale deployment creates levers that accelerate and optimize processes in ways that would otherwise not be possible. This scale is therefore the key to achieving tangible impact and sustainable value within business operations.
AI agents are expected to play an increasingly prominent role in organizations’ daily functioning. They are ideally suited to take over repetitive and routine tasks.
Effective use ultimately requires clear instructions and guidelines, rather than rigid rules. AI agents function best when, like humans, they are given some autonomy within defined frameworks. Organizations that apply this principle find that agents learn faster, perform better, and adapt more easily to changing circumstances.
This was the first article following our agentic AI roundtable. In the next article, the experts discuss why waiting for AI agents is not an option.