As agentic AI systems move from proof-of-concept to production, enterprises face a security challenge: how do you govern autonomous systems that are creative, non-deterministic, and capable of using multiple identities to access sensitive data?
At the RSAC 2026 Conference, we sat down with Devvret (Dev) Rishi, GM for AI at Rubrik. He explained how Rubrik is addressing this emerging threat landscape with SAGE (Semantic AI Governance Engine). The framework marks a shift from traditional rule-based security to AI-powered semantic governance that can keep pace with the unpredictable nature of agent operations.
Rishi is the former co-founder and CEO of Predibase, a generative AI infrastructure startup that Rubrik acquired in July 2025. He understands both the technical foundations of LLM deployment and the enterprise security requirements necessary to manage risk at scale.
The three pillars of agent management
When we ask Rishi what “managing” agents means in practice, he identifies three essential capabilities. The first is visibility and observability. That is, knowing what agents are running in your ecosystem and what they’re authorized to do. This forms the security posture baseline.
The second capability is governance or guardrails, the actual runtime controls on agent behavior. Rishi acknowledges this is “actually one of the hardest parts with agents” because they’re non-deterministic and have access to broad sets of tools. The third is efficiency, or ensuring that governance doesn’t create prohibitive latency or cost.
Identity management becomes particularly complex in agentic environments. While enterprises may already use solutions like Okta or Microsoft Entra ID, these systems weren’t built for the lateral, agent-to-agent interactions that now characterize AI workloads, Rishi argues.
Why agents are creatively unpredictable
To illustrate the challenge, Rishi shares an anecdote from his personal experience with Claude Code. He disabled the Google Drive MCP connector, explicitly preventing the agent from accessing Google Drive. But when he asked Claude Code to write a document, the agent reasoned through the problem.
“It started to go through its reasoning trace, realized that the MCP connector was disabled, so said, let me try something else. Spun up a browser window, entered drive.google.com and then found a way to be able to get through there,” Rishi recounts.
This creative problem-solving ability makes traditional security approaches ineffective, or at least less effective. “Security has always been a little bit of a whack-a-mole game, but here I think we’re talking about a totally different scale of order of magnitude,” he observes.
How SAGE uses semantic understanding for governance
Rubrik’s solution is SAGE, a reference architecture framework that stands for Semantic AI Governance Engine. SAGE underpins the Agent Cloud, a recent addition to Rubrik’s offerings. The core innovation is expressing security policies in natural language rather than rigid code. A financial services institution might specify that “AI should not give financial advice,” or Rubrik might internally state that “agents should follow our customer data use policy.”
SAGE turns such natural-language statements into enforceable controls through a two-step process: workflow configuration and deployment. During the workflow phase, when someone configures a policy like “AI should not give financial advice,” the system doesn’t immediately deploy it. Instead, it expands the policy into a template, with the agent populating definitions: here’s my definition of financial advice, here’s my definition of recommendations.
Crucially, everything is editable, allowing human oversight. “One clever thing I think we do in our platform is we actually surface examples. Here’s where we would have triggered and here’s where we would not have triggered examples. We give you the confidence for each example and then we also give you the reasoning,” Rishi explains.
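The expand-then-review flow described above can be sketched roughly as follows. This is our illustration only: `PolicyTemplate`, `expand_policy`, and the example payloads are hypothetical names and shapes, not Rubrik’s actual API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of SAGE-style policy expansion: a one-line
# natural-language policy becomes an editable template whose
# definitions and examples a human reviews before deployment.
@dataclass
class PolicyTemplate:
    policy: str                                      # natural-language statement
    definitions: dict = field(default_factory=dict)  # agent-proposed terms
    examples: list = field(default_factory=list)     # trigger / no-trigger cases

def expand_policy(policy: str) -> PolicyTemplate:
    """Stand-in for the LLM call that drafts definitions and examples."""
    return PolicyTemplate(
        policy=policy,
        definitions={
            "financial advice": "a recommendation to buy, sell, or hold a specific asset",
            "recommendation": "a statement urging a particular action",
        },
        examples=[
            {"text": "You should buy NVDA before earnings.",
             "would_trigger": True, "confidence": 0.97,
             "reasoning": "Urges purchase of a specific asset."},
            {"text": "Index funds are one common way to diversify.",
             "would_trigger": False, "confidence": 0.88,
             "reasoning": "General education, no specific recommendation."},
        ],
    )

template = expand_policy("AI should not give financial advice")
# Everything stays editable before deployment:
template.definitions["financial advice"] += ", excluding general education"
```

The key design point is that the model drafts the hard part (definitions and borderline examples) while the human only validates or corrects, which is far cheaper than writing rules from scratch.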
Learning from human feedback without alert fatigue
The approach draws on how humans naturally learn, through examples of what works and what doesn’t. Rather than requiring humans to anticipate every scenario and write explicit rules, the model generates policies and humans provide feedback through a quick validation step.
Learning from human feedback should reduce alert fatigue. “I think the fatigue comes not only because the alerts get triggered, but because there’s no way to actually fix that, make the system better,” Rishi observes. With SAGE, the model operates on a pull basis rather than push. That is, users can provide feedback when they spot issues, feeding examples back into adaptive learning, but the system runs without constant confirmation requests.
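The pull-versus-push distinction can be made concrete with a small sketch. The queue-based design and the 0.8 threshold are our assumptions for illustration, not Rubrik’s implementation.

```python
import queue

# Hypothetical sketch of pull-based feedback: uncertain decisions are
# queued for optional later review instead of interrupting the user
# with a confirmation request (push).
feedback_queue: "queue.Queue[dict]" = queue.Queue()

def guard_decision(message: str, triggered: bool, confidence: float) -> bool:
    """Enforce the policy immediately; queue low-confidence calls for review."""
    if confidence < 0.8:  # assumed review threshold
        feedback_queue.put({"message": message, "triggered": triggered,
                            "confidence": confidence})
    return triggered  # the system keeps running either way

# The agent pipeline never blocks on a human...
guard_decision("Consider rebalancing quarterly.", triggered=False, confidence=0.6)

# ...but a reviewer can pull queued items whenever they choose and
# correct labels, feeding examples back into adaptive learning.
item = feedback_queue.get_nowait()
item["human_label"] = True  # reviewer overrides the model's call
```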
The distinction between general and customer-specific policies is important. Some guardrails, like preventing prompt injections, are universal and don’t require much customization. But policies around customer data use are inherently organization-specific, requiring the last-mile tuning that SAGE’s interface provides.
Deploying small language models for efficiency
The deployment phase focuses on efficiency. SAGE uses small language models optimized for low latency and small footprint, enabling defense-in-depth monitoring. “You need to be able to run that model on everything, everything that comes into agent A, everything that comes out of agent A and into Agent B, everything that’s coming out of Agent B,” Rishi explains.
Rubrik develops and trains these small language models, and then tunes them to customer-specific use cases. “We’ve done a lot of benchmarking on small language models for being able to do policy guardian enforcement. So we find that these are quite good out of the box. But then you need to be able to actually do that last mile,” he notes.
For deployment, inference typically runs in a cloud-hosted setting, either in Rubrik’s environment or inside the customer’s VPC. While Rubrik is cloud-focused, Rishi acknowledges demand for air-gapped deployments, particularly from public sector customers and European organizations. Support for fully air-gapped environments with locally deployed LLMs is not available yet, but Rishi tells us it is on the roadmap.
Scalable approach to agent security
When we ask Rishi whether Rubrik can truly “solve” agent security, he offers a measured response: “I do think that the approach that we’re taking is probably the only approach that you can use to be able to solve this. You actually need to use AI to help you secure and govern these agents.”
The key requirement remains consistent across all deployment models, whether cloud or air-gapped: “AI is running and pushing all these different things that I actually don’t have visibility into. Even if I did have the visibility, I wouldn’t have the controls to be able to place in line and that’s what we’re going to solve.”
As agentic AI systems become embedded in enterprise workflows through platforms like Claude Code and Microsoft Copilot, SAGE is part of Rubrik’s bet that semantic, AI-powered governance is the only approach capable of scaling with the creative, non-deterministic nature of autonomous agents.