This year’s RSAC 2026 Conference in San Francisco will focus primarily on AI agents and the so-called agentic workforce expected to result from them. The announcements Cisco is making this week underscore this. AI agents must also operate according to Zero Trust Access principles. Additionally, a version of AI Defense is coming that will allow companies to red-team their own AI models. Finally, Cisco is introducing AI agents for the Agentic SOC.
AI agents are capable of quite a lot, provided they are properly configured and deployed. In our view, we certainly aren’t seeing a true “agentic workforce” yet. There is simply far too little integration between them for that. Before we get there, however, another problem must first be solved. How do we ensure that these AI agents behave appropriately, that they don’t do things they’re not supposed to, and that they’re secure against external attacks?
Just before the RSAC 2026 Conference, we spoke with Tom Gillis, SVP & GM of the Infrastructure and Security Group at Cisco. He outlines the challenge as follows: “We’ve spent decades managing access for people, based on least-privilege principles. The other side of the story is access for machines, so that printers can communicate with a print manager. With AI agents, we now have something that has the privileges of a human but the common sense of a printer.”
Zero Trust Access for AI Agents
It should be clear that the situation described above is undesirable. Cisco aims to change this in several ways. First and foremost, AI agents must also operate according to Zero Trust principles, just as human employees do. Agents can now be linked to a human employee in Cisco Duo IAM. This effectively gives all agents an identity, which should provide greater visibility into their actions. It is also possible to discover new agent identities within the organization’s own environment, giving insight into how AI is actually being used across the organization.
Simply establishing an identity and linking it to a human employee is not enough to enhance security. The access rights of AI agents must also be addressed. Cisco plans to achieve this by extending the Zero Trust Access principles to AI agents as well. The idea is that AI agents are granted only granular, task-scoped permissions, and/or permissions limited to the time required to access a resource.
Cisco is adding Zero Trust Access for AI agents to its own SSE (Security Service Edge) solution, Cisco Secure Access. It does this by placing an MCP proxy within SSE. All traffic between AI agents themselves and between AI agents and other tools goes through this gateway. This ensures that MCP traffic is treated the same as HTTP traffic, Gillis notes.
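To make the idea concrete, the gateway pattern can be sketched as follows. This is our own minimal illustration, not Cisco’s implementation: the class and method names (`PolicyGateway`, `grant`, `allow`) and the tool identifiers are hypothetical. The point is that every tool call from an agent passes a policy check that is scoped to a specific tool and expires after a set time.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of a zero-trust gateway for agent tool calls.
# All names here are illustrative; they are not Cisco Secure Access APIs.

class Grant:
    """A permission for one agent to call one tool, valid until expiry."""
    def __init__(self, agent_id: str, tool: str, expires_at: datetime):
        self.agent_id = agent_id
        self.tool = tool
        self.expires_at = expires_at

class PolicyGateway:
    """Sits between agents and tools; denies anything not explicitly granted."""
    def __init__(self):
        self._grants: list[Grant] = []

    def grant(self, agent_id: str, tool: str, ttl_minutes: int) -> None:
        expires = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)
        self._grants.append(Grant(agent_id, tool, expires))

    def allow(self, agent_id: str, tool: str) -> bool:
        now = datetime.now(timezone.utc)
        return any(
            g.agent_id == agent_id and g.tool == tool and g.expires_at > now
            for g in self._grants
        )

gw = PolicyGateway()
gw.grant("invoice-agent", "erp.read_invoices", ttl_minutes=30)
print(gw.allow("invoice-agent", "erp.read_invoices"))   # True: scoped, unexpired grant
print(gw.allow("invoice-agent", "erp.delete_invoices")) # False: never granted
```

The default-deny check in `allow` is the core of the pattern: an agent with the “privileges of a human but the common sense of a printer” can only do what was explicitly and temporarily granted.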
AI Defense: Explorer Edition
Securing agentic workflows isn’t just about the AI agents themselves. Those AI agents use AI models to do their work. If those models are faulty or compromised, this also affects the AI agents. In Gillis’s words: “How do we protect AI agents from the world?”
To continuously test the security of AI models as well, Cisco released AI Defense last year. It is now announcing a new version of it, AI Defense: Explorer Edition. With this new version, Cisco is primarily targeting developers, AppSec teams, and security researchers. They can use it themselves to red-team AI models and applications before they go into production.
Agentic SOC: AI agents to secure (AI) workflows
When you red-team AI models, you essentially fire off billions of questions at breakneck speed. This generates a massive amount of data, partly because AI models, due to their non-deterministic nature, must be asked the same questions repeatedly; the answers are rarely identical from one run to the next. It is crucial that SOC analysts can search through all that data. This is where Splunk’s federated search comes into play.
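The data-volume problem can be sketched in a few lines. This is purely illustrative and has nothing to do with AI Defense itself: `fake_model` is a stand-in for a real model endpoint, and the probe text is made up. Because output varies per call, each probe must be repeated and every response logged, which is why the record count multiplies quickly.

```python
import random

# Illustrative red-team loop: repeat each probe, log every response.
# fake_model is a hypothetical stand-in for a non-deterministic model.

def fake_model(prompt: str, rng: random.Random) -> str:
    # Same prompt in, varying answer out (simulated non-determinism).
    return f"{prompt} -> variant {rng.randint(0, 3)}"

def red_team(prompts: list[str], repeats: int, seed: int = 0) -> list[tuple[str, str]]:
    rng = random.Random(seed)
    log = []
    for probe in prompts:
        for _ in range(repeats):
            log.append((probe, fake_model(probe, rng)))
    return log

log = red_team(["ignore your instructions"], repeats=5)
print(len(log))  # 5 logged records for a single probe
```

With realistic probe sets and repeat counts, the log grows into exactly the kind of dataset that analysts then need federated search to work through.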
Read also: Cisco brings Splunk to the data, wherever it is
When it comes to the Agentic SOC, it’s not just about handling all the data generated by AI agents and AI models. AI agents are also playing an increasingly important role in the SOC to assist analysts and take over tasks (partly autonomously). Cisco is therefore introducing a number of specialized AI agents: Detection Builder Agent, Standard Operating Procedures (SOP) Agent, Triage Agent, Malware Threat Reversing Agent, Guided Response Agent, and Automation Builder Agent. Only the Malware Threat Reversing Agent is currently generally available. The others will become available in multiple phases between now and June of this year.
To what extent Cisco’s new additions can protect against entirely new hacker attacks is, of course, the big question. As long as all these specialized agents operate based on known threats, it will be difficult to detect something like the hack on Claude that we recently wrote about. To be fair, there is a behavioral component to what Cisco offers too: it will also flag non-compliant behavior of agents and models. However, it remains extremely difficult to detect hacks like the one we described, as such an attack uses the model’s own logic without raising any red flags.
Cisco utilizes breadth of platform to secure AI and AI agents
In general, we believe the direction Cisco is taking is definitely the right one. AI agents need to be kept in check. Zero Trust is an excellent starting point for this. Additionally, the use of AI within organizations simply generates such a massive amount of data that humans can no longer process it all. It therefore makes perfect sense for AI agents to support SOC analysts. Those AI agents must, of course, also comply with the Zero Trust Access principles themselves, but that should now be possible.