Identity has become malleable for cyber attackers

Although cyber attackers prefer to target unprepared victims, they have the means to strike even well-defended organizations. Complex attacks combine convincing deepfakes with information harvested from previous data breaches, and threats often get in without exploiting a single software vulnerability. How can you arm yourself against such dangers? We discussed it with David Sancho, Senior Threat Researcher at TrendAI.

Cyberattacks today have a pervasive psychological dimension. Sancho, who has been with TrendAI (previously known as Trend Micro) for twenty years, has seen the attacker's toolbox grow remarkably diverse. It is almost a cliché to say that cyber attackers no longer just hack systems, instead logging in as if they were legitimate users. They regularly do so with credentials obtained through convincing phishing emails or on the dark web. The next step is even more cunning: Sancho mentions convincing deepfakes, extortion based on highly personal information, and even “virtual kidnapping.”

These forms of criminal deception are far from new; even staged kidnappings of loved ones or colleagues have a long history. The risk of falling into the trap, however, is much greater for organizations today. Two years ago, it became apparent that AI could convincingly mimic a voice in a matter of seconds. Attackers have since had time to abuse this tooling on a large scale.

Identity is malleable

Sancho emphasizes that affordable tools can now create convincing deepfakes for extortion. Once a niche, he says, they can be used to fabricate specific identities. Voice phishing is “extremely mature” and costs almost nothing for malicious actors to set up. In a study conducted in the middle of last year, TrendAI already showed how much freedom of choice attackers have in this regard. The same applies to video, which, although less advanced than AI audio, is still dangerous. Two years ago, a business video call was enough to impersonate a CFO convincingly enough that a finance employee transferred $25 million to the cyber attacker posing as their superior.

These developments have been accelerating for years, and Sancho and his fellow researchers at TrendAI now expect AI and automation to define the threat landscape for years to come. The company recently published a report full of predictions for 2026. The “AI-fication” of threats, as it turns out, goes beyond convincing imitation, but more on that later.

For now, it is clear that identity as we traditionally know it has become malleable. With enough software, time pressure, and an infiltration that is almost impossible to rule out, employees can come into contact with an attacker disguised as a colleague or key business partner. Sancho acknowledges that the culture surrounding such compromises could be improved: because organizations worldwide have not yet realized how convincing the deception can be, stigmas surrounding successful phishing remain. We are miles past the Nigerian prince; the safe assumption is that anyone can be deceived. As a result, it is no longer possible to rely on the rock-solid identity of yesteryear. The same conclusion was suggested in a World Economic Forum report (PDF) from January, to which Trend Micro contributed.

Deepfakes

When it comes to security tooling, it has long been insufficient to rely on known attack signatures alone. IT environments generate enough telemetry for defensive tools to learn complex patterns of normal use; as soon as XDR solutions detect deviations from those patterns, they issue alerts. Attackers are aware of this and therefore increasingly behave like normal users, using legitimate software and targeting accounts with high privileges. Detection will sometimes come too late, for example due to an abundance of alerts or a compromise via external channels (WhatsApp, LinkedIn, etc.).

To truly recognize deepfakes, people can no longer rely on themselves alone. No security training will provide enough advice to recognize every future AI-driven deception as such. That is why TrendAI’s solutions also monitor data flows for potential data leaks and the use of AI applications, in addition to recognizing AI-generated images through artifacts that are often invisible to humans. There is also a free tool, the TrendAI Deepfake Inspector, that spots deepfakes on behalf of ordinary users.

Autonomous threats

Another aspect of threats in 2026 is that they are largely automated. Agents not only scan for potential compromises, but can now carry out part of the attack themselves. According to TrendAI, cyber criminals can share infrastructure and access so that agents perform the deception on their own. We have already seen elsewhere that legitimate AI tools such as Claude can quickly be turned into malware producers. With increasingly powerful open-source LLMs running locally, malicious actors don’t even need an API to unleash advanced AI agents on victims.

Nevertheless, in our conversation with Sancho, we note that the deepfake layer on top of this AI automation is dominant. Organizations have always had to assume that they could be compromised. What Sancho emphasizes is that we must leave behind the mindset that verification alone establishes trust. Organizations, especially critical companies, the public sector, and multinationals, must unfortunately assume that they will be targeted and can be convincingly deceived. With that knowledge in mind, a different IT approach awaits us, in which complex defense tools must think one step ahead of the people they protect, and of the attackers armed with AI.

Read also: Trend Micro brings Vision One to AWS Sovereign Cloud