MIT Sloan School of Management recently retracted a study claiming that the majority of today's ransomware attacks are driven by artificial intelligence.
This was reported by The Register. The paper, now only available via the Wayback Machine, drew attention for claiming that around 80% of ransomware attacks in 2024 involved AI techniques. After strong criticism from security researchers, MIT removed the document from its website and announced that a revised version would follow.
The research was the result of a collaboration between MIT Sloan and cybersecurity company Safe Security. In April, the researchers published a working paper in which they concluded, based on an analysis of thousands of ransomware incidents, that AI plays an increasingly important role in cybercrime. MIT Sloan then highlighted the results in a blog post, which was picked up by various media outlets.
Report factually incorrect
However, the publication was met with skepticism within the security community. Several experts pointed out that the authors could not substantiate their findings, and critics noted that the report even referred to outdated malware projects that had not been active for years. Specialists also questioned the methodology, since it was unclear how the researchers had determined that AI was actually involved in the attacks.
One of the most prominent critics was security researcher Kevin Beaumont, who argued on social media that the report was factually incorrect and resembled marketing more than scientific work. Other experts agreed, emphasizing that such publications undermine confidence in cybersecurity research. Even Google's AI Overview, the search engine's automatically generated summary, indicated that there is no evidence for the cited percentage.
After the uproar, MIT Sloan removed the working paper. The accompanying blog post was given a new, more neutral title that emphasizes the broader role of AI in cyberattacks and the need to better arm organizations against new threats.
According to Michael Siegel, director of cybersecurity at MIT Sloan and one of the co-authors, a revised version of the report is in the works. He emphasized that the original goal was to draw attention to the growing use of AI in cyberattacks and to help companies think about their resilience.
Nevertheless, criticism of how the research was conducted remains. Observers point to a possible conflict of interest: two MIT professors also sit on the board of Safe Security, which funds the collaboration. According to critics, the affair shows how thin the line is between academic research and commercial interests in the rapidly growing market around AI and cybersecurity.