As artificial intelligence agents are set to become more autonomous in enterprise environments, organizations face a challenge: how do you govern and secure identities that aren’t human? Stephen McDermid, CISO for EMEA at Okta, sat down with us to record an interview for Techzine TV. He gave us some insights into this question and shared his views on managing AI identities in an evolving threat landscape.
Our conversation starts from a central premise: the security industry is grappling with rapid AI adoption while trying to avoid the mistakes of previous technology shifts. McDermid draws parallels to the early days of cloud computing, when organizations rushed to adopt new capabilities without proper governance, only to spend years retrofitting security controls.
AI governance in enterprise environments
Organizations today face a paradox: they must embrace AI to remain competitive, yet uncontrolled AI adoption creates significant security risks. McDermid states that companies can either lead AI adoption with proper governance or react to unauthorized usage by employees. “If you’re not embracing AI and you’re not leading with AI, then you are having to react to AI and that’s a much worse position to be in,” he says. When organizations fail to provide approved AI tools and clear guidelines, employees will find their own solutions, and in doing so they effectively define the organization’s risk profile themselves.
The first step for many organizations is simply discovering where AI is already being used. This discovery phase is critical and cannot be skipped if you want to implement governance frameworks. Once that visibility exists, organizations can open up AI capabilities more broadly with appropriate controls in place.
AI agents as primary identities
Okta’s approach treats AI agents as primary identities. That means they are subject to the same governance principles as human users. It also means establishing clear policies around agent creation, defining what permissions each agent possesses, determining what data it can access, and assigning responsibility for ongoing management.
“When you look at an AI agent, it’s either acting on my behalf, a business’s behalf, an application’s behalf, or a service’s behalf. That means it has an identity,” McDermid notes. “So you have to have the typical governance and controls around it.”
This governance extends throughout the agent’s lifecycle, from initial creation through ongoing operation to eventual decommission. Organizations must monitor what agents are doing in real-time to ensure they’re not performing unauthorized actions. The challenges this poses only become bigger with agentic workflows, where multiple agents interact with each other, potentially sharing data across numerous systems in east-west traffic patterns that traditional security controls might miss.
Regional differences in AI adoption
McDermid’s role as EMEA CISO provides some perspective on regional variations in technology adoption. Europe generally exhibits a lower risk appetite than the United States, with greater emphasis on compliance and data sovereignty. These concerns, about where data is stored, who can access it, and how it’s used, have shaped European attitudes toward American technology companies for over a decade.
Transparency is an important topic in conversations that McDermid has. Organizations that clearly communicate their data handling practices, access controls, and sovereignty measures can overcome initial hesitations. Okta’s Secure Identity Commitment represents this transparency approach, he says.
Despite initial caution, European organizations increasingly recognize that avoiding AI altogether poses greater risks than thoughtful adoption. The maturity curve that applied to cloud computing appears to be repeating with AI: initial apprehension giving way to confidence as organizations implement appropriate controls. Let’s hope organizations don’t repeat the mistakes from that time too.
Open standards and platform independence
The AI landscape evolves rapidly, which makes its future somewhat uncertain. That is why McDermid argues in favor of open standards: they provide the flexibility such a future requires. Rather than building proprietary solutions based on assumptions about how AI will develop, organizations benefit from frameworks that can adapt and mature alongside the technology.
“If you’re building a new solution or you’re building a new capability, it can’t be niche, it can’t be specific to the now,” McDermid explains. “It has to be something that can be open, can be adapted, can grow, can mature.”
This idea of openness recognizes that enterprises use multiple vendors and want to get the most out of existing investments rather than face vendor lock-in. Open standards enable real-time security decisions by allowing different systems to communicate about threats and authentication events. This interoperability becomes even more important as AI accelerates the pace of both business operations and security threats.
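The cross-system communication described above is what open standards such as the OpenID Shared Signals Framework and CAEP are designed for: one system emits a security event, another acts on it in real time. The sketch below is a hypothetical handler, assuming an invented in-memory session store; in practice such events travel between systems as signed Security Event Tokens (RFC 8417), not plain dictionaries:

```python
# CAEP's registered event-type URI for a revoked session.
SESSION_REVOKED = "https://schemas.openid.net/secevent/caep/event-type/session-revoked"

# Invented stand-in for a real session backend.
active_sessions = {"user-42": {"sess-1", "sess-2"}}

def handle_event(event: dict) -> bool:
    """React to a CAEP-style signal: terminate all of the subject's
    sessions when a revocation event arrives. Returns True if any
    sessions were actually removed."""
    if event.get("type") != SESSION_REVOKED:
        return False  # ignore event types we don't handle
    subject = event.get("subject")
    return active_sessions.pop(subject, None) is not None

print(handle_event({"type": SESSION_REVOKED, "subject": "user-42"}))  # True
```

The point of the standard is that the emitter and the handler can come from different vendors: interoperability, not a shared codebase, is what makes the real-time reaction possible.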
The evolving threat landscape
Identity-based attacks represent 86% of security breaches, making identity providers prime targets for adversaries. Okta and similar platforms face constant attention from threat actors precisely because compromising identity infrastructure provides broad access to multiple systems and data.
McDermid says that responding to threats at machine speed requires automated defenses that can make real-time decisions across application silos, data repositories, and infrastructure boundaries. This is where Okta’s visibility across multiple organizations and hyperscale providers creates interesting threat intelligence opportunities, he says.
Organizations must also prepare for AI-specific vulnerabilities. Agent credentials have already been discovered in the wild, sometimes because they follow predictable patterns that make them easy to guess. As agentic workflows become more complex, and agents interact across multiple platforms and services, the attack surface expands significantly.
Threat intelligence and industry collaboration
McDermid stresses that effective defense requires information sharing across the security community. Attackers already collaborate. Defenders must adopt similar collaborative approaches to have any hope of keeping pace.
Okta’s threat intelligence team monitors attacks across the platform, identifying patterns and tactics that can help protect all customers. This intelligence is published through security blogs and alerts, shared not just with Okta customers but with the broader security community. For organizations using Okta’s Identity Threat Protection, these insights automatically translate into protective measures, but even organizations not using that product benefit from the shared information, according to McDermid.
This collaboration extends to partnerships with other security vendors, law enforcement, and industry groups. Information Sharing and Analysis Centers (ISACs) facilitate threat intelligence exchange within specific sectors, though many organizations still lack access to these collaborative resources.
The fundamental principle is that attackers exploit silos, McDermid argues. They know or assume that competing organizations often don’t share threat information. By breaking down these barriers and establishing transparent communication about attacks and defensive techniques, the security community can take more effective defensive action.
Watch the video for much more on the topic of how to make sure AI agents can operate securely.
Also read: Okta launches platform to secure AI agents