
Claude can now scan for complex vulnerabilities, but who will find them?

Anthropic has launched Claude Code Security, albeit in limited preview mode. The tool enables security teams to scan entire codebases and discover complex vulnerabilities. The ambitions are lofty, but what can the tool actually do? Is it a replacement or a supplement to traditional security tooling? And could it potentially be used by attackers?

Claude Enterprise and Team customers have been able to request early access since Friday. The promise behind Claude Code Security is that AI can take over some of the work of overburdened security teams. According to Anthropic, existing analysis tools fall short because they do little more than check code against lists of known vulnerabilities. AI, by contrast, can probe software for layered threats, such as exploits specific to a codebase that arise from its design.

Context-dependent

Finding subtle, context-dependent vulnerabilities usually requires manual work by qualified researchers, whose workload continues to grow due to limited staff, expanding codebases, and ever more complex attacks. According to Anthropic, Claude Code Security can help by detecting potential exploits that traditional methods cannot detect.

The new feature works differently from classic static analysis, the workhorse of security teams for securing codebases at a basic level. Instead of comparing code to known attack signatures, Claude Code Security reads the code “the way a human security researcher would,” according to Anthropic. In other words, the system understands how components interact, tracks how data flows through applications, and detects complex vulnerabilities such as logic errors or misimplemented access controls.

The danger is that such a tool primarily flags false positives, which only makes the work harder. It could also reason in ways that humans cannot follow. That should not happen, says Anthropic. Every finding goes through a multi-stage verification process before it reaches an analyst: Claude re-examines its own results, attempts to confirm or refute each finding, and filters out false positives. In addition, each finding receives a vulnerability score, along with a confidence score, so that teams can focus on the most important fixes. Validated findings appear in the Claude Code Security dashboard, where teams can review them, inspect proposed patches, and approve fixes.
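Anthropic has not published the internals of this pipeline, but the workflow it describes, detect, re-verify, score, and queue for human review, can be sketched roughly as follows. All names, thresholds, and score scales here are hypothetical illustrations, not Anthropic's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    description: str
    severity: float    # hypothetical vulnerability score, 0-10 (CVSS-like)
    confidence: float  # hypothetical confidence that the finding is real, 0-1

def triage(raw_findings, min_confidence=0.8):
    """Mimic the described pipeline: drop low-confidence results
    (likely false positives), then rank the rest by severity so
    analysts see the most important findings first."""
    verified = [f for f in raw_findings if f.confidence >= min_confidence]
    return sorted(verified, key=lambda f: f.severity, reverse=True)

findings = [
    Finding("SQL injection in login handler", severity=9.1, confidence=0.95),
    Finding("Possible race condition in cache", severity=6.5, confidence=0.4),
    Finding("Missing access check on admin API", severity=8.2, confidence=0.9),
]

for f in triage(findings):
    print(f"{f.severity:>4} {f.description}")
```

In this sketch, the low-confidence race-condition finding is filtered out before an analyst ever sees it, which is the point of the verification stage: the human only reviews findings the system itself could substantiate.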

Scores are useful in themselves, but also debatable. Today's CVSS scores are already meant to indicate the severity of a vulnerability, yet they are applied far from consistently. Anthropic is now introducing yet another metric, one that could eventually grow into a common standard for AI-driven scoring, feed into a new CVSS version, or build on the existing one. Which path will prevail is unclear at this point. The proliferation of scores and ratings must eventually come to an end, and however well-intentioned, Claude Code Security only adds to that problem in this initial phase.

Verification before it reaches the analyst

Humans remain in control, no matter what: nothing is applied by Claude Code Security without human approval. The tool identifies problems and suggests solutions, but developers always make the final decision. Anthropic mentions in its announcement that Claude Opus 4.6, released in early February, found more than 500 vulnerabilities in production open-source codebases. These were bugs that had gone unnoticed for decades, despite years of expert review. The company is now working with the projects' maintainers on triage and responsible disclosure.

Anthropic also uses Claude to review its own code. The company claims this has proven “extremely effective” in securing its own systems. Claude Code Security aims to make these defensive capabilities more widely available. Because it is built on Claude Code, teams can review findings and iterate on fixes within the tools they already use.

AI security or AI attack

Attackers will use AI to find exploitable vulnerabilities faster than ever. But defenders who act quickly can find those same vulnerabilities, patch them, and reduce the risk of an attack. According to Anthropic, Claude Code Security is a step toward the goal of more secure codebases and a higher security benchmark in the industry.

Claude has faced security issues before. Chinese hackers abused Claude for large-scale cyberattacks, with Claude executing 80 to 90 percent of the attack work. Claude has also proven vulnerable to prompt injection attacks in which private data was exfiltrated. The limited research preview is intended to develop the tool further in collaboration with customers and to ensure it is used responsibly. It is conceivable that attackers will use Claude Code Security itself to find vulnerabilities in others' code, for instance after a code leak or in open-source libraries.

In other words, we must assume that Claude will become a tool for both defenders and attackers. More than ever, complex vulnerabilities are relatively easy to find automatically. Assuming a compromise is therefore crucial, even though Anthropic created Claude Code Security precisely to reduce that risk.