
While generative AI continues to improve, a Hoxhunt study found that humans still outperform it at crafting effective phishing emails. Analyzing over 53,000 email users in more than 100 countries, the study compared the success rate of simulated phishing attacks created by human social engineers with those created by large language models.

The findings revealed that although ChatGPT can be abused for phishing, human social engineers still outperformed the AI at inducing clicks on malicious links.

The study found a clear gap between the success rates of phishing emails written by humans and those generated by ChatGPT: human “red teamers” induced a 4.2% click rate versus 2.9% for ChatGPT across the sample, making the human-crafted emails roughly 45% more effective.

Humans are more convincing

Interestingly, the study also found that users with more experience in a security awareness and behavior change program were significantly better protected against phishing emails, whether written by humans or generated by AI.


Failure rates dropped from over 14% among less trained users to between 2% and 4% among experienced ones, evidence that an effective security awareness and behavior change program can protect against AI-augmented phishing attacks.

The research highlights that AI creates opportunities for attackers and defenders alike. Although phishing attacks augmented by large language models do not yet perform as well as human social engineering, the researchers expect the gap to close, and attackers are already using AI.

Protection is still the best posture

Melissa Bischoping, director of Endpoint Security Research at endpoint management company Tanium, commented that AI presents new opportunities for efficiency, creativity, and personalization of phishing lures. Still, the protections against such attacks remain unchanged.

Mika Aalto, co-founder and CEO at Hoxhunt, recommends embedding security as a shared responsibility throughout the organization, backed by ongoing training that equips users to spot suspicious messages and rewards them for reporting threats, until human threat detection becomes a habit.