3 min Security

Google’s AI Big Sleep discovers twenty new security vulnerabilities in open source

Google’s AI Big Sleep discovers twenty new security vulnerabilities in open source

Google’s AI-powered vulnerability detector Big Sleep has found twenty previously unknown security vulnerabilities in widely used open source software, including FFmpeg and ImageMagick.

The discoveries are the result of a collaboration between Google DeepMind and the security department Project Zero. The details of the vulnerabilities remain confidential for the time being due to the usual disclosure procedures. All issues found in open source will eventually be shared via a public issue tracker so that the development community can verify and fix them.

The announcement follows an earlier breakthrough this year, when Big Sleep detected a critical zero-day in SQLite before cybercriminals could exploit it. It involved a bug with a CVSS score of 7.2 that stemmed from memory corruption caused by integer overflows, allowing malicious SQL input to read outside array boundaries.

Despite years of traditional fuzzing research and manual code inspections, the vulnerability had escaped researchers. Google’s Threat Intelligence team had already seen signs of preparatory abuse, but was unable to identify the root cause. Big Sleep succeeded in doing so and was able to prevent potential abuse in practice. According to Google, this is the first time an AI agent has directly prevented a vulnerability from being exploited.

In its summer update, Google said that since its introduction in November 2024, Big Sleep has found multiple real-world vulnerabilities, underscoring the potential of AI in vulnerability research. At the same time, the company is expanding its other AI-assisted security tools.

New features for Timesketch and FACADE

Timesketch, an open source platform for digital forensic investigations, is gaining new capabilities through Sec-Gemini. This will allow the initial incident analysis to be largely automated, saving forensic investigators valuable time. Google also unveiled FACADE, a method that uses contrastive learning to detect internal threats without the need for historical attack information.

In addition to technological innovations, Google is focusing on responsible AI design. In a white paper, the company describes how AI agents can be built safely and transparently, with human supervision, privacy protection, and the application of secure-by-design principles.

Google is also joining the Coalition for Secure AI, in which public and private parties are working together to develop AI applications securely. Google is making data from the Secure AI Framework available to accelerate research into secure AI.

To address new threats, the Vulnerability Rewards Program has been expanded to include categories specific to large language models, such as prompt injection and training data exfiltration. In its first year, more than $50,000 was paid out for AI-related vulnerabilities, with one in six reports leading to product changes.

Next month, Google will join DARPA at DEF CON 33 to present the final round of the AI Cyber Challenge, in which teams use AI to make open source software more secure.