AI fraud is rapidly increasing: deepfakes threaten organizations

Fraudsters are increasingly using generative AI. Deepfake video calls and synthetic identities have become more realistic at record speed: 77 percent of anti-fraud professionals have seen a clear acceleration in the past two years, yet most feel ill-prepared.

This is according to research by the Association of Certified Fraud Examiners and SAS. Modern analysis techniques can recognize suspicious patterns and transactions in real time. Network analytics exposes hidden relationships that are barely visible to humans.
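As a simplified illustration of what network analytics does (this is a generic sketch, not SAS's actual method, and the account data is hypothetical), accounts can be linked whenever they share an identifier such as a device or phone number, and connected groups of accounts then surface as potential fraud rings:

```python
from collections import defaultdict, deque

# Hypothetical sample data: in practice these attributes would come
# from transaction and onboarding systems.
accounts = {
    "acct_1": {"device": "D1", "phone": "P1"},
    "acct_2": {"device": "D1", "phone": "P2"},  # shares device with acct_1
    "acct_3": {"device": "D2", "phone": "P2"},  # shares phone with acct_2
    "acct_4": {"device": "D3", "phone": "P3"},  # unconnected account
}

def find_rings(accounts):
    """Group accounts into connected components via shared attributes."""
    # Map each attribute value to the set of accounts that use it.
    by_attr = defaultdict(set)
    for acct, attrs in accounts.items():
        for key, value in attrs.items():
            by_attr[(key, value)].add(acct)

    # Two accounts are linked if they share any attribute value.
    adj = defaultdict(set)
    for group in by_attr.values():
        for a in group:
            adj[a] |= group - {a}

    # Breadth-first search collects each connected component ("ring").
    seen, rings = set(), []
    for acct in accounts:
        if acct in seen:
            continue
        ring, queue = set(), deque([acct])
        while queue:
            node = queue.popleft()
            if node in seen:
                continue
            seen.add(node)
            ring.add(node)
            queue.extend(adj[node] - seen)
        rings.append(ring)
    return rings

print(find_rings(accounts))
```

Here acct_1, acct_2, and acct_3 end up in one ring because of a shared device and a shared phone number, a relationship no single record reveals on its own; real systems apply the same idea at scale across millions of entities.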

During International Fraud Awareness Week (November 16 to 22), SAS warns of the threat of AI-driven fraud. International research by the Association of Certified Fraud Examiners (ACFE) and SAS shows that 83 percent of anti-fraud professionals expect these forms of deception to increase further in the coming years.

This blind spot makes fraud unnecessarily easy. Fraudsters combine deepfakes with personal data and social engineering to make deception seem lifelike. Because AI models learn at lightning speed, new fraud scenarios emerge faster than organizations can keep up: less than one in ten anti-fraud professionals feels well prepared for this new generation of threats.

SAS calls on organizations to look beyond technical measures. The basics must be in order: insight into data flows, clear controls, and understandable decision logic. Without that foundation, any AI solution is vulnerable.
