
Mozilla: AI-powered bug detection produces very few false positives

Mozilla says it has used AI to detect and fix hundreds of security vulnerabilities in Firefox. The company is providing, for the first time, detailed insight into how it uses AI to analyze vulnerabilities in the browser on a large scale. According to Mozilla, this approach marks a fundamental shift in software security.

The browser maker previously announced that the AI model Claude Mythos Preview was involved in finding 271 security issues in Firefox 150. In a technical explanation, Mozilla now describes how those results were achieved. The company emphasizes that the quality of AI-generated bug reports has improved significantly in a short period of time.

Whereas AI reports consisted largely of false positives just a few months ago, Mozilla says modern models can now identify complex, reproducible vulnerabilities. Mozilla attributes this progress not only to more powerful models but also to better techniques for deploying AI systems specifically for security research.

According to Mozilla, the vulnerabilities spanned various Firefox components; some bugs had been present in the code for fifteen or even twenty years. A portion of the discovered issues were sandbox escapes: vulnerabilities that allow an attacker who has compromised a restricted browser process to gain additional privileges in the browser's main process.

Virtually no false positives

According to Ars Technica, Mozilla says the new approach produces virtually no false positives. This marks a significant difference from earlier generations of AI-powered code analysis, where developers spent a great deal of time on reports that ultimately turned out to be incorrect.

Mozilla Distinguished Engineer Brian Grinstead told Ars Technica that the so-called “harness” developed by Mozilla plays a central role in this. This system controls the language model while it analyzes the Firefox codebase. The AI receives narrowly scoped tasks, such as searching for a vulnerability in a particular source file, and can then independently generate and execute test scenarios.
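Mozilla has not published the harness itself. As a rough illustration only, the loop Grinstead describes can be sketched as follows, where `ask_model` and `run_test_case` are hypothetical stand-ins for the language-model call and the Firefox test runner:

```python
# Minimal sketch of a security-analysis harness loop. ask_model() and
# run_test_case() are hypothetical stubs, not Mozilla's actual tooling.

def ask_model(task: str) -> str:
    """Hypothetical LLM call: given a task, return a candidate test case.
    A real harness would call a model API here; this stub just echoes."""
    return f"// candidate test case for: {task}"

def run_test_case(test_case: str) -> bool:
    """Hypothetical runner: execute the test case against a Firefox build.
    Returns True if the build crashed. Stub heuristic for illustration."""
    return "attempt 2" in test_case

def analyze(source_file: str, attempts: int = 3) -> list[str]:
    """Ask the model for test cases against one file; keep those that
    actually crash the build -- the 'reproducible evidence' step."""
    findings = []
    for i in range(attempts):
        candidate = ask_model(f"find a memory bug in {source_file}, attempt {i}")
        if run_test_case(candidate):
            findings.append(candidate)
    return findings

findings = analyze("dom/ipc/ContentParent.cpp")
print(len(findings))  # → 1 (only the stubbed 'attempt 2' case crashes)
```

The key design point the article describes is that a report only survives if the model's own test case demonstrably reproduces the problem, rather than the model merely asserting a bug exists.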

The harness gives the models access to the same test infrastructure and special Firefox builds that are also used by Mozilla engineers. According to Mozilla, if a test case causes Firefox to crash within a sanitizer build, this is considered strong evidence of a memory safety issue.
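Mozilla has not detailed how those crashes are classified. A common way to recognize a sanitizer-confirmed memory bug is to scan the crashing process's output for sanitizer report signatures; the signature strings below match real AddressSanitizer/UBSan output formats, but the check itself is an illustrative sketch, not Mozilla's pipeline:

```python
# Sketch: treat a crash as a likely memory-safety finding when a
# sanitizer build's stderr contains a sanitizer report. The signature
# strings match real ASan/LSan/UBSan output; the pipeline is illustrative.

SANITIZER_SIGNATURES = (
    "ERROR: AddressSanitizer",   # e.g. heap-use-after-free, heap-buffer-overflow
    "ERROR: LeakSanitizer",      # memory leaks
    "runtime error:",            # UndefinedBehaviorSanitizer diagnostics
)

def is_memory_safety_crash(stderr: str) -> bool:
    """Return True if the build's output contains a sanitizer report."""
    return any(sig in stderr for sig in SANITIZER_SIGNATURES)

# A typical AddressSanitizer use-after-free report header:
asan_output = (
    "==12345==ERROR: AddressSanitizer: heap-use-after-free on address "
    "0x60300000eff0 at pc 0x000000400b07"
)
print(is_memory_safety_crash(asan_output))          # → True
print(is_memory_safety_crash("Segmentation fault"))  # → False
```

The second case shows why this signal is strong: a plain segfault proves little, but a sanitizer report pinpoints a specific class of memory error.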

Mozilla also uses a second language model to evaluate the results of the first model. In doing so, the company attempts to further filter out erroneous or unconfirmed reports before security researchers begin working on them.
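Mozilla has not published this review step. Conceptually, a second-model filter works like a judge that scores each report and drops anything below a threshold; in this sketch, `judge` is a hypothetical stand-in for the second language model:

```python
# Sketch of a two-model filter: a second model scores the first model's
# reports before humans see them. judge() is a hypothetical stub; a real
# system would prompt another LLM with the report and its evidence.

def judge(report: str) -> float:
    """Hypothetical second-model call: score a report's credibility 0..1.
    This stub simply rewards reports that include a reproducing test case."""
    return 0.9 if "test case:" in report else 0.2

def filter_reports(reports: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only the reports the judge considers credible."""
    return [r for r in reports if judge(r) >= threshold]

reports = [
    "use-after-free in nsDocShell; test case: poc.html attached",
    "possible overflow somewhere in networking code",
]
print(filter_reports(reports))
# → ['use-after-free in nsDocShell; test case: poc.html attached']
```

The vague second report is discarded before a security researcher ever spends time on it, which is exactly the false-positive cost the article says this step is meant to reduce.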

Sharp increase in the number of fixes

According to Mozilla, the use of AI has led to a sharp increase in the number of security updates. While Firefox typically resolved between twenty and thirty security bugs per month in 2025, that number rose to 423 fixes in April 2026.

Of the 271 previously announced AI-related bugs, 180 were labeled sec-high, according to Mozilla. These are vulnerabilities that can typically be exploited remotely, for example via a malicious website.

According to Ars Technica, Mozilla says it is now fully convinced of the approach internally. The company expects to further integrate AI analysis into the Firefox development process. Mozilla is currently investigating how models can automatically check new patches as soon as they are added to the codebase.

Mozilla is calling on other software developers to start experimenting with such techniques now. According to the company, organizations that invest early in AI-assisted security analysis will be better prepared for the rapid development of new models.