
Anthropic is testing the Mythos AI model for cybersecurity

Anthropic has announced a preview of Claude Mythos, a new frontier model that, according to the company, shows remarkably strong capabilities in cybersecurity.

Unlike previous models, Mythos focuses not only on detecting vulnerabilities but also on exploiting them. The launch is part of Project Glasswing, an initiative in which a select group of organizations is using the model for defensive security applications. Participants include Amazon, Apple, Microsoft, and the Linux Foundation, reports TechCrunch.

In total, approximately twelve core partners are participating, while an additional forty organizations will gain access to the preview. The goal is for these parties to share their findings with the broader industry.

According to Anthropic, Claude Mythos is a general-purpose language model that has not been specifically trained for cybersecurity; rather, its improvements in code understanding and reasoning enable new applications. In tests, the model reportedly identified thousands of vulnerabilities in a short period, including many so-called zero-day vulnerabilities. According to the company, these often involve bugs that have been present in software for ten to twenty years. Many of the reported vulnerabilities have not yet been patched, limiting independent verification for the time being.

The claims align with a previously leaked internal description of the model, which referred to it as a new category more powerful than the existing Opus models. In that context, the system was positioned as the most advanced model Anthropic has developed to date. The leak occurred when internal documents were inadvertently made publicly accessible, which the company later attributed to human error.

Anthropic opts for controlled deployment

Anthropic acknowledges that the capabilities of such models entail risks. In the same statement, the company notes that technology capable of identifying and exploiting vulnerabilities can also be misused if it falls into the wrong hands. This is a key reason for limiting the rollout and initially collaborating with a small group of partners. At the same time, the company suggests that this development is difficult to halt, as the underlying progress stems from general improvements in AI models.

The timing is striking, given that Anthropic itself recently faced security issues. Due to an error in a software release, thousands of source code files were temporarily exposed, and attempts to fix the issue led to GitHub repositories being taken offline. This underscores that even parties developing advanced security tools are not immune to operational errors.