A free AI Agent Scanner from DeepKeep is designed to identify where organizations are at risk from the introduction of AI agents. The tool gives security teams immediate insight into how much access agents have to sensitive environments, which tools they use, and what vulnerabilities exist in the software they run on.
DeepKeep’s platform supports frameworks such as those from Microsoft, Agentforce, OpenAI Agents, and Amazon Bedrock. Since agentic AI is still relatively new, it is unsurprising that it is not yet as well protected as IT tools that have been in use for longer. This gap must be closed before adoption can be called “enterprise-ready,” and DeepKeep hopes to contribute to that effort. The free AI Agent Scanner is one of the ways it intends to do so.
Agents are inherently more difficult to protect than traditional IT tools because they can communicate with external systems and are non-deterministic. Attackers also recognize the risks and hope to exploit the autonomous nature of these agents by, for example, providing them with false prompts or misleading them into sharing sensitive data.
Visual risk map based on OWASP standard
Such exploitation opportunities can only be closed once they have been identified. DeepKeep’s scanner therefore analyzes an agent’s entire threat environment and produces a visual risk map. This map clearly shows connected tools and their intended functions, data sources, and potential vulnerabilities. DeepKeep bases its analysis on the OWASP Top 10 for Agentic Applications, a security framework for autonomous AI systems published by OWASP in December. The framework identifies risks such as prompt injection, tool misuse, and supply chain attacks.
In addition to mapping risks, DeepKeep also offers runtime protection for a number of agentic frameworks. Based on observed agent behavior, the platform determines where AI firewalls and guardrails should be placed. The scanner currently supports Microsoft-based frameworks, Agentforce, OpenAI Agents, CrewAI, Amazon Bedrock AgentCore, n8n, and Make.
DeepKeep plans to expand its AI agent security offering further in 2026, with a red teaming solution in the pipeline.
Protected after the fact
DeepKeep’s setup is a consequence of the way agents have been introduced into IT environments. MCP (Model Context Protocol) is not inherently secure, yet it is considered the universal standard for agentic workflows, so protection must be added after the fact. That protection cannot rely solely on XDR solutions, which depend on the human behavior and automation present in the environments they monitor.
Agents occupy an intermediate position: they do not operate strictly within assigned privileges the way humans do, but they are also not as predictable as deterministic tooling. Above all, they need a certain latitude in accessing an IT environment to be deployed flexibly; without it, they are merely overcomplicated automation solutions. Agents should therefore be monitored like humans in terms of access level and scope of action, but they require different protections, because their potential vulnerabilities differ from those of humans.
Read also: Cisco launches agentic security tools for autonomous AI security