Red Hat makes Ansible the execution layer for agentic AI systems

Ansible plays pivotal role in Cisco's full-stack approach

Red Hat is positioning its Ansible automation platform as the critical execution layer for agentic AI operations, introducing ephemeral MCP servers and expanded generative AI capabilities to help enterprises safely deploy AI agents in production environments.

At Cisco Live EMEA in Amsterdam, we sat down with Sathish Balakrishnan, VP and General Manager of Red Hat’s Ansible Business Unit. We discussed in some detail how the automation platform is evolving to support the shift from traditional AI implementations in 2025 to agentic AI systems in 2026. In practice, Red Hat is turning Ansible into a trusted execution layer: AI agents make the decisions, while Ansible handles the actual changes to critical infrastructure.

The Red Hat and Cisco partnership continues to deepen with integrated solutions spanning AI infrastructure, edge computing, and network automation. Cisco’s global price list now includes Red Hat’s software stack across multiple product lines, from AI Hub to Intersight, with Meraki integration forthcoming.

The role of automation in agentic operations

Balakrishnan emphasizes in our conversation that successful AI agent deployment requires two foundational elements: interconnected systems and trusted execution capabilities. “In order to use agents or in order to use AI in IT operations, all of your systems need to be interconnected and what interconnects all of your systems is an automation platform,” he explains.

Interconnecting systems is only one piece of the puzzle, though. There is also well-founded concern about the autonomous AI systems we are moving towards. AI agents may make decisions and inferences, but enterprises remain hesitant to allow direct execution on production systems. This is especially true for things like core banking applications or critical infrastructure. According to Balakrishnan, Ansible aims to provide the established, trusted execution layer that organizations have relied on for over a decade, but now also for the agentic world.

This architectural approach is interesting because it positions Ansible as the intermediary between AI decision-making and actual system changes. AI agents call into the Ansible automation platform to execute their intended actions through validated, secure workflows; they never get direct access to production environments.
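In practice, this pattern can look like an agent launching a pre-approved job template through the Automation Controller REST API instead of touching hosts directly. A minimal sketch, assuming a hypothetical controller host and service-account token; the `/api/v2/job_templates/{id}/launch/` endpoint exists in AWX/Automation Controller, but the agent wiring here is illustrative.

```python
import json
import urllib.request

CONTROLLER = "https://aap.example.com"  # hypothetical Automation Controller host
TOKEN = "changeme"                      # placeholder OAuth2 token for the agent

def build_launch_request(template_id: int, extra_vars: dict) -> urllib.request.Request:
    """Build a request that launches a pre-approved job template.

    The agent never reaches production hosts directly; it can only
    trigger workflows that operators have already validated.
    """
    url = f"{CONTROLLER}/api/v2/job_templates/{template_id}/launch/"
    body = json.dumps({"extra_vars": extra_vars}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
    )

# An agent that decided "restart the payments service" would then call, e.g.:
# urllib.request.urlopen(build_launch_request(42, {"service": "payments"}))
```

The key design point is that the agent's credentials only authorize launching templates, so the blast radius is bounded by what operators put in those templates.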

MCP protocol integration with ephemeral servers

Red Hat is implementing the Model Context Protocol (MCP) through a distinctive architectural approach that addresses security and access control concerns. The platform supports both traditional MCP client-server models and an ephemeral container-based implementation.

The ephemeral MCP server approach runs each server instance as a short-lived container within Ansible’s execution environment. “We are actually making sure that the MCP server is not long-living,” Balakrishnan noted. “So you don’t need to have MCP gateways to figure out the surface area.”
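The ephemeral pattern can be sketched as spawning a throwaway container per session and talking to it over its standard streams (MCP's stdio transport). This is an illustration of the concept, not Red Hat's implementation; the image name is invented and the exact container flags are assumptions.

```python
import subprocess

def ephemeral_mcp_command(image: str) -> list:
    """Command line for a throwaway MCP server container.

    --rm           : the container, and its attack surface, disappears when
                     the session ends -- there is no long-lived server left
                     behind that an MCP gateway would have to track.
    -i             : keep stdin open; MCP's stdio transport runs over the
                     container's standard streams.
    --network=none : the server talks only to its client, not the network.
    """
    return ["podman", "run", "--rm", "-i", "--network=none", image]

def run_session(image: str, request: bytes) -> bytes:
    """Spawn the server, hand it one request, collect the reply, done."""
    proc = subprocess.run(
        ephemeral_mcp_command(image), input=request, capture_output=True
    )
    return proc.stdout

# e.g. run_session("registry.example.com/mcp/servicenow:latest", b"...")
```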

This architecture provides several advantages, according to Balakrishnan. Role-based access control limits which MCP servers different administrators can access. A platform administrator might access Linux, ServiceNow, and Splunk MCP servers, while a ServiceNow-focused administrator would only access that specific service. The ephemeral nature eliminates the security vulnerabilities associated with persistent MCP server deployments.
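The role-to-server mapping Balakrishnan describes can be reduced to a simple access check. A minimal sketch; the role names and data shape are illustrative, not the platform's actual RBAC model.

```python
# Illustrative RBAC mapping: which MCP servers each role may reach.
ROLE_MCP_ACCESS = {
    "platform-admin": {"linux", "servicenow", "splunk"},
    "servicenow-admin": {"servicenow"},
}

def allowed(role: str, mcp_server: str) -> bool:
    """True if the given role is permitted to call this MCP server."""
    return mcp_server in ROLE_MCP_ACCESS.get(role, set())
```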

Red Hat is building new playbooks constructed as sequences of MCP calls rather than traditional Python module collections. This capability exists in parallel with existing automation approaches. “We are not replacing our playbooks, we are building in parallel so that the customers can pick either the old or the new way of doing things in the same platform,” Balakrishnan says.
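A playbook built as a sequence of MCP calls might read something like the sketch below. Red Hat has not published the task syntax for this, so the `mcp.call` module and its arguments are invented here purely for illustration.

```yaml
# Hypothetical sketch -- the real syntax for MCP-based playbooks has not
# been published; the mcp.call module and tool names are invented.
- name: Remediate a noisy service via MCP calls
  hosts: localhost
  tasks:
    - name: Look up the open incident
      mcp.call:
        server: servicenow
        tool: get_incident
        arguments: { number: "INC0012345" }
      register: incident

    - name: Restart the affected service
      mcp.call:
        server: linux
        tool: restart_service
        arguments: { name: "{{ incident.service }}" }
```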

Ansible Lightspeed expands model support

Ansible Lightspeed, Red Hat’s generative AI capability for playbook creation, has been in production for four years, making the company an early mover in applying AI to infrastructure automation. The platform initially relied exclusively on IBM WatsonX, which was extensively trained on Ansible knowledge bases to generate syntactically correct and functionally appropriate playbooks.

However, frontier AI models have evolved significantly over the past four years. As Balakrishnan puts it, “four years is like four decades in AI.” Other models have reached parity with the specialized WatsonX implementation, enabling Red Hat to support bring-your-own-model capabilities.

Lightspeed performs post-processing validation on AI-generated playbooks. “When an LLM creates a playbook, we do post processing. We say, hey, does this playbook make sense? Does it have the right syntax? Does it have all the right calls? Is it doing the function that it’s supposed to be doing?” This validation layer underpins trusted execution: arbitrary AI-generated code is never allowed to run against production systems.
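The kinds of checks Balakrishnan lists can be sketched as rules over an already-parsed playbook. This is a minimal illustration of the idea, not Red Hat's actual validation pipeline; the module allowlist and rules are assumptions.

```python
# Minimal sketch of post-processing checks on a generated playbook,
# represented here as already-parsed data. The allowlist is illustrative.
ALLOWED_MODULES = {"ansible.builtin.service", "ansible.builtin.copy",
                   "ansible.builtin.package"}

def validate_playbook(playbook: list) -> list:
    """Return a list of problems; an empty list means the playbook passes."""
    problems = []
    for play in playbook:
        if "hosts" not in play:
            problems.append("play is missing 'hosts'")
        for task in play.get("tasks", []):
            # Everything that is not a task keyword is treated as a module.
            modules = [k for k in task if k not in ("name", "register", "when")]
            if not modules:
                problems.append(f"task {task.get('name')!r} has no module")
            for mod in modules:
                if mod not in ALLOWED_MODULES:
                    problems.append(f"module {mod!r} is not on the allowlist")
    return problems

generated = [{"hosts": "web", "tasks": [
    {"name": "Restart nginx",
     "ansible.builtin.service": {"name": "nginx", "state": "restarted"}}]}]
```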

Red Hat recommends implementing human-in-the-loop approval for initial AI-generated playbooks, Balakrishnan tells us. As organizations gain confidence in the AI’s output quality, they can transition to automated approval workflows.
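That graduation from human review to automated approval can be expressed as a simple gate. The threshold and risk rule below are assumptions for illustration, not Red Hat's policy.

```python
# Illustrative approval gate: require a human until the AI has a track
# record, then allow auto-approval. The threshold is an assumption.
AUTO_APPROVE_AFTER = 50  # successful human-reviewed runs

def needs_human_approval(successful_runs: int, high_risk: bool) -> bool:
    """High-risk playbooks always need a human; others graduate to
    auto-approval once enough reviewed runs have succeeded."""
    if high_risk:
        return True
    return successful_runs < AUTO_APPROVE_AFTER
```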

Democratizing automation access

Balakrishnan also teases upcoming features during our conversation. At Red Hat Summit later this year, the company will announce a new drag-and-drop workflow engine designed to make automation accessible beyond traditional IT operations teams. This continues the company’s strategy of removing barriers to automation adoption, complementing existing capabilities like three-click self-service automation wizards.

The company is also building the Ansible Lightspeed Interactive Assistant chatbot, according to Balakrishnan, which enables natural language automation requests. While a workflow engine provides visual construction capabilities, natural language interfaces can directly generate playbooks in the backend. This can potentially eliminate the need for visual workflow tools in simpler use cases.

Balakrishnan emphasizes that Ansible remains focused on IT operations rather than expanding to business users. “We are only focused on IT operations. I think that’s a very big market and it’s the market that we are the leaders in. So we want to focus on where our strengths are.” The accessibility improvements specifically target IT professionals, particularly network administrators who may lack extensive programming backgrounds.

Edge computing and full-stack integration

The Cisco partnership extends significantly into edge computing: Red Hat’s software stack powers Cisco’s edge solutions, Balakrishnan says. Red Hat Device Edge, MicroShift (a lightweight OpenShift distribution), Red Hat Enterprise Linux, and Ansible form the foundation of Cisco’s edge offerings.

“Edge is something that’s very important for Red Hat because it’s part of our hybrid cloud story,” according to Balakrishnan. “It starts at the data center, it goes to the cloud and extends to the edge.” Customers benefit from consistent hardware and software platforms across their entire infrastructure, simplifying management and application development.

Cisco’s move toward full-stack integration through solutions like Unified Edge and Unified Branch aligns with this strategy. The consolidation reduces the number of integration points required. That can also mean that fewer MCP servers are needed in agentic AI architectures.

Red Hat also partners with Cisco’s industrial partners including Rockwell Automation, Siemens, and ABB for industrial edge deployments. This creates a comprehensive ecosystem for operational technology environments.

Adoption challenges and innovation timing

The conversation we have with Balakrishnan makes it clear that there is quite a bit of conceptual agreement on automation’s importance. However, he also acknowledges some persistent implementation challenges. “Everybody thinks it’s important for the other person, not for themselves,” he observes. Successful automation deployment requires CIO-level commitment and cross-organizational collaboration to break down silos.

The timing paradox of automation initiatives presents another obstacle. “Automation is like a peacetime initiative. When you’re at war, you’re not thinking about it, but actually, when you’re at war, that’s really what you need because then you don’t need to be firefighting.” Balakrishnan’s point: organizations must invest in automation before crisis situations arise, not during them.

The joint go-to-market strategy with Cisco, including GPL inclusion across multiple product lines, shows that Red Hat is well-positioned to reach the goals it has set for itself. If nothing else, there is definitely momentum. Balakrishnan indicates that customer deals are already closing for edge solutions. The partnership between Cisco and Red Hat continues to deliver value in the AI era.

Also read/watch: A deep-dive into Cisco’s AgenticOps approach to network operations