As agentic AI workflows spread across enterprises, security leaders face challenges in identity management, authentication, and governance, challenges that pose new questions and require new answers. At the RSAC 2026 Conference, we had a chance to talk to Sam Curry, CISO of Zscaler. We discussed the security pillars organizations need to get right to manage the AI agents that will soon dominate internet interactions. Among other things, doing so requires a renewed focus on some basic security tenets.
The addition of AI agents to the workforce is more than just another technology shift. It signals a fundamental transformation in how transactions and communications occur online. Curry predicts that silicon-based intelligence (his name for AI agents, which he contrasts with carbon-based agents, i.e. humans) will become the most common form of interaction on the internet. Agents will conduct thousands of transactions for every single human action. This shift renders traditional bot detection methods obsolete and demands entirely new security frameworks.
The two pillars of agentic security
When it comes to agentic workflows, we noticed a couple of trends during the RSAC 2026 Conference. There was a lot of focus on securing workloads at runtime. The second big topic was the concept of identity.
“Identity itself is still an ongoing thing with carbon-based entities, with human beings. And now we have introduced silicon,” Curry explains. The challenge lies in establishing proper identity binding for agents that may represent multiple people or distribute themselves across multiple workloads. At present, agents typically function as workloads bound to specific machines or cloud compute resources, but this may evolve.
The key is accountability. Organizations must implement systems that track which agents represent which subjects and what actions they’re authorized to perform. This requires moving beyond simple authentication to encompass authorization and authenticity verification. That is a more complex undertaking.
Building authentication frameworks for AI agents
Curry advocates for standards like SPIFFE and SPIRE to create proper identification, verification, and authentication frameworks for workloads. “We have to get the foundational identifier, identification, verification, authentication, and then we build the frameworks on top of it,” he emphasizes.
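SPIFFE's core artifact is the SPIFFE ID, a URI of the form spiffe://&lt;trust-domain&gt;/&lt;path&gt; that names a workload within a trust domain; SPIRE then attests workloads and issues verifiable identity documents (SVIDs) for those IDs. As a rough illustration of the naming layer only (the trust domain and agent path below are made up, and real deployments use SPIRE rather than hand-rolled parsing):

```python
from urllib.parse import urlparse

def parse_spiffe_id(uri: str) -> tuple[str, str]:
    """Split a SPIFFE ID into (trust_domain, workload_path).

    A SPIFFE ID has the form spiffe://<trust-domain>/<workload-path>:
    the trust domain names the identity authority, the path names the
    workload within it.
    """
    parsed = urlparse(uri)
    if parsed.scheme != "spiffe":
        raise ValueError(f"not a SPIFFE ID: {uri}")
    if not parsed.netloc:
        raise ValueError("SPIFFE ID is missing a trust domain")
    return parsed.netloc, parsed.path

# Illustrative ID for an agent workload:
domain, path = parse_spiffe_id("spiffe://example.org/agent/quoting-bot")
# domain == "example.org", path == "/agent/quoting-bot"
```

The point of starting from a stable identifier is exactly Curry's: verification and authentication frameworks can only be layered on top once every workload has a name that can be attested.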
The authentication challenge becomes more nuanced when agents perform specific transactions on behalf of humans. An agent might need permission to use a credit card in one context but not in another, requiring granular authorization controls. This specificity must be balanced against the efficiency that makes agents valuable in the first place.
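One way to picture such granular authorization is to scope grants to (action, context) pairs rather than blanket actions. This is only an illustrative sketch, not a real policy engine; the agent, subject, and context names are invented:

```python
from dataclasses import dataclass, field

@dataclass
class AgentGrant:
    """Permissions an agent holds on behalf of a human subject."""
    agent_id: str
    subject: str  # the human the agent acts for
    allowed: set[tuple[str, str]] = field(default_factory=set)

    def permit(self, action: str, context: str) -> None:
        self.allowed.add((action, context))

    def is_authorized(self, action: str, context: str) -> bool:
        # Authorization is scoped per (action, context) pair,
        # not granted as a blanket capability.
        return (action, context) in self.allowed

grant = AgentGrant("agent-42", "alice")
grant.permit("charge_card", "travel-booking")

grant.is_authorized("charge_card", "travel-booking")  # True
grant.is_authorized("charge_card", "ad-purchase")     # False
```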
Organizations need gateways and brokers to control agent sprawl, especially in cloud environments. Basic provisioning and deprovisioning processes become critical. This holds not only for security, but also for practical concerns like billing. “The last thing you want is to have runaway agents that are cloning themselves, creating new workloads,” Curry warns.
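A broker that enforces a hard quota per owner is one simple way to keep a cloning loop from turning into runaway workloads and a runaway bill. The sketch below is hypothetical; a real broker would also handle attestation, billing tags, and lifecycle events:

```python
class AgentBroker:
    """Gateway that provisions agents and caps sprawl.

    max_agents puts a hard ceiling on how many workloads a single
    owner may run, so a cloning loop hits the broker's quota instead
    of the cloud bill.
    """
    def __init__(self, max_agents: int):
        self.max_agents = max_agents
        self.active: dict[str, str] = {}  # agent_id -> owner

    def provision(self, agent_id: str, owner: str) -> None:
        owned = sum(1 for o in self.active.values() if o == owner)
        if owned >= self.max_agents:
            raise RuntimeError(f"{owner} hit the agent quota ({self.max_agents})")
        self.active[agent_id] = owner

    def deprovision(self, agent_id: str) -> None:
        self.active.pop(agent_id, None)  # idempotent teardown

broker = AgentBroker(max_agents=2)
broker.provision("a1", "sales")
broker.provision("a2", "sales")
# A third clone for "sales" would raise RuntimeError until
# deprovision() frees a slot.
```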
API security and MCP
The conversation also touches on API security. With the emergence of Anthropic’s Model Context Protocol (MCP), it is tempting to treat that as the layer at which to secure agentic workflows. However, MCP is not a prerequisite for AI agents to function. They can also access the information they need via other routes, such as the CLI or direct API calls.
Solutions include mutual TLS (mTLS), HTTP message signatures from recent IETF working groups, and protocols like QUIC. But architectural thinking remains paramount. “We have to look at architecture and frameworks,” Curry insists. He believes proper design can prepare organizations for future developments even as the technology continues to evolve rapidly.
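The idea behind HTTP message signatures is to bind a MAC or signature to selected parts of a request so intermediaries cannot alter them undetected. The sketch below is a heavily simplified HMAC illustration of that idea, not the actual RFC 9421 scheme, which defines precise component canonicalization, signature parameters, and key identification:

```python
import base64
import hashlib
import hmac

def sign_request(method: str, path: str, body: bytes, key: bytes) -> str:
    """MAC a simplified 'signature base' built from covered components."""
    digest = hashlib.sha256(body).hexdigest()
    base = f'"@method": {method}\n"@path": {path}\n"content-digest": {digest}'
    mac = hmac.new(key, base.encode(), hashlib.sha256).digest()
    return base64.b64encode(mac).decode()

def verify(method: str, path: str, body: bytes, key: bytes, signature: str) -> bool:
    # Recompute the signature and compare in constant time.
    return hmac.compare_digest(sign_request(method, path, body, key), signature)

sig = sign_request("POST", "/quotes", b'{"sku": 7}', b"shared-secret")
verify("POST", "/quotes", b'{"sku": 7}', b"shared-secret", sig)   # True
verify("POST", "/quotes", b'{"sku": 8}', b"shared-secret", sig)   # False: body changed
```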
Zero trust architecture for agent containment
Zero trust principles, when they genuinely mean least privilege, least function, and least exposure, provide a framework for managing agentic environments. By segmenting connections between users and apps, workloads, devices, and offices, organizations can draw barriers that contain risk to acceptable levels.
“We can start to say, well, I can contain this to some degree and I can put order on it and structure,” Curry explains. Proxy and reverse proxy techniques make systems less visible, while proper segmentation allows different departments to innovate at different risk levels. Sales might want messy, highly innovative environments for AI-powered quoting tools, while other functions require metronome-like stability.
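Default-deny segmentation reduces to a simple idea: a connection is permitted only if the (source, destination) pair is explicitly in policy; everything else is contained by default. The segment names below are invented for illustration:

```python
# Segments as explicit allow-lists: a connection is permitted only if
# the (source, destination) pair appears in policy; deny by default.
POLICY = {
    ("sales-users", "quoting-tool"),
    ("sales-agents", "quoting-tool"),
    ("finance-users", "ledger"),
}

def allowed(source: str, destination: str) -> bool:
    return (source, destination) in POLICY

allowed("sales-agents", "quoting-tool")  # True: inside the sales segment
allowed("sales-agents", "ledger")        # False: contained by default-deny
```

This is how different risk levels coexist: the sales segment can be as messy and innovative as it likes, while nothing in the policy lets its agents reach the stable systems next door.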
The goal is not to eliminate risk entirely; that is impossible for anything worth doing, Curry believes. It is about achieving acceptable risk levels through proper architecture and controls.
Automation challenges in adversarial environments
While automation offers efficiency gains, Curry cautions that “whenever you do automation with an intelligent opponent you become predictable.” If security systems automatically create tickets and incidents in predictable ways, attackers can exploit that predictability to create denial-of-service conditions.
This adversarial dynamic means certain security functions will be easier to automate than others. GRC (Governance, Risk, Compliance) tasks like collecting evidence and producing reports will become largely automated, potentially eliminating some jobs. But functions dealing with the real-time unpredictability of intelligent opponents will require agent-assisted human expertise rather than pure automation.
Curry draws an analogy to modern warfare, where stealth fighters work with unmanned drone assistance to extend capabilities in the battle space. The same model applies to cyber defense. AI assistance extends human capabilities without replacing human judgment in adversarial situations.
Also available as audio-only Techzine TV Podcast
The future security workforce
The impact on security jobs will vary by function. Curry predicts that roles focused on repetitive documentation will be largely automated, while positions dealing with intelligent adversaries will evolve into agent-assisted roles. The ratio of humans to agents may be one-to-one or different, but the adversarial component of cybersecurity ensures continued human involvement.
“This is what happened in chess,” Curry notes. “For 10 years, AI-assisted humans dominated the world of chess. Later AI took over. But this is a more complicated space.” The complexity of cybersecurity, combined with its adversarial nature, suggests a longer period of human-agent collaboration than occurred in more deterministic domains.
Beyond technical skills, the future requires mature dialogue about policies and lifecycle management for agents. “The difficulty isn’t in the tools,” Curry observes. “The difficulty is actually in the business process and cultural changes.” Organizations need human-to-human conversations to determine what should be done and how systems should look, even as they leverage tools to implement those decisions.
Practical steps for organizations
Curry recommends that organizations focus on fundamentals: clean up underlying infrastructure, implement proper identity and access management for agents, establish gateways and brokers for cloud environments, and apply zero trust segmentation. These basics create a foundation for innovation within acceptable risk boundaries.
It is very important to ensure auditability. In an incident, security teams must be able to track down who did what and learn from it to apply controls. This accountability extends to scenarios like privilege escalation, agent cloning, or unauthorized data access.
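One common pattern for tamper-evident auditability is a hash-chained, append-only log, where each entry commits to its predecessor, so editing history breaks verification. A minimal sketch, with illustrative agent IDs and actions:

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each entry hashes its predecessor,
    so tampering with recorded history breaks the chain."""
    def __init__(self):
        self.entries: list[dict] = []
        self._last_hash = "0" * 64

    def record(self, agent_id: str, action: str) -> None:
        entry = {"agent": agent_id, "action": action, "prev": self._last_hash}
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

    def verify_chain(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            if e["prev"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            prev = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["hash"] != prev:
                return False
        return True

log = AuditLog()
log.record("agent-42", "privilege_escalation_attempt")
log.record("agent-42", "read:customer_db")
log.verify_chain()                   # True
log.entries[0]["action"] = "noop"    # tampering with history...
log.verify_chain()                   # False: ...breaks the chain
```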
While the technology evolves rapidly, the fundamentals of good security architecture remain constant. Organizations that get these basics right position themselves to innovate safely as agentic AI capabilities continue to advance.
Also read: Identity has become malleable for cyber attackers