Deepfakes demand that employees develop a different attitude toward digital media: even famous faces can no longer be trusted. How can organizations best protect their employees?
Technologies used for online fraud continue to evolve and are becoming ever more convincing. Fake news already demonstrated the importance of reliable online sources, but with deepfakes it can be a familiar face that tries to fool you.
What is a deepfake?
Deepfakes are synthetically generated videos, photos or audio clips. The content draws on existing images or recordings to recreate a person’s appearance, posture and voice, using artificial intelligence (AI) and, in particular, deep learning techniques.
Although deepfakes have useful applications in fields such as entertainment and education, the technology increasingly turns up in cybercriminals’ fraud attempts. A recent example took place in Hong Kong, where a finance employee was persuaded to transfer $25.6 million to fraudsters during a video call in which the employee appeared to be meeting with several colleagues. The video turned out to have been assembled using deepfake technology.
New form of phishing
Although deepfakes may seem like a new concept, they are really a new technology for committing known forms of fraud. They are used most successfully where a higher degree of authenticity and authority benefits the attacker. That is always the case with phishing, but it applies especially to CEO fraud and spear phishing. These attack patterns rely on impersonating high-level officials via email or, now, deepfake videos delivered through messaging or online conferencing. Martin Kraemer, Security Awareness Advocate at KnowBe4, therefore describes the technology as a new form of social engineering. With these phishing techniques, hackers try to pressure or entice victims into acting quickly. “There is always a pressure point. For CEO fraud, that might be loss of business or simply the ask of a personal favour.”
According to him, the difficulty with deepfakes lies in how far the technology has progressed, which has made this form of fraud almost unrecognizable. “While it used to be possible to recognize a deepfake based on a few standard indicators, it is now increasingly impossible.”
Recognition impossible
Deepfakes have reached a level where they are no longer detectable by the human eye. “With previous versions of deepfakes, there were some standard red flags to look out for. For example, the movement of the lips would be out of sync with the audio, and the eyes would sometimes falter in an unnatural way.” Smarter AI has since filtered out such errors, giving fake images enormous persuasive power.
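Those historical red flags were simple enough to check mechanically. As a purely illustrative sketch, not a working detector, the once-useful blink heuristic might have looked something like the Python snippet below using OpenCV; the video file name and threshold are invented for illustration, and modern deepfakes pass such checks with ease.

```python
# Toy illustration of an outdated deepfake red flag: early generators rarely
# reproduced natural blinking, so an abnormally low blink rate was suspicious.
# A sketch only; modern deepfakes defeat this check. Requires opencv-python.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def closed_eye_ratio(video_path: str) -> float:
    """Return the fraction of face frames in which no open eyes are found."""
    cap = cv2.VideoCapture(video_path)
    face_frames = closed_frames = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            face_frames += 1
            # The Haar eye cascade mostly fires on open eyes, so zero hits in
            # the upper half of the face suggest a closed-eye (blink) frame.
            eyes = eye_cascade.detectMultiScale(gray[y:y + h // 2, x:x + w])
            if len(eyes) == 0:
                closed_frames += 1
    cap.release()
    return closed_frames / max(face_frames, 1)

# Humans blink roughly 15-20 times per minute; a near-zero ratio was once a
# deepfake indicator. "meeting_clip.mp4" and 0.01 are invented examples.
if closed_eye_ratio("meeting_clip.mp4") < 0.01:
    print("Suspiciously few blinks - an old-style deepfake red flag.")
```

Heuristics like this stopped working precisely because newer generators learned to reproduce natural blinking, which is why the advice below shifts from spotting artifacts to verifying identity.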
Security awareness training adapts to such developments. KnowBe4, a provider of these training courses, has new advice for organizations that want the best possible protection against deepfakes. Kraemer: “This is back to the basics. If something does not feel right, verify the questions asked or requests made in a second channel, such as a chat message. It is also useful to agree on a safe word per team or per organization. With a safe word, the authenticity of the sender is quickly clear.”
Available to everyone
The problem with deepfakes is actually twofold. Not only have AI-generated images become more convincing, but the number of deepfakes in circulation has also increased dramatically. “There are commercial tools available that allow anyone to create a deepfake video within three minutes, and that will only cost you ten euros per month. This is in addition to a range of open-source tools and algorithms.” Such tools make it quick and easy to set up a fraud attempt. Hackers find this attractive, and Kraemer already knows of companies whose employees have received deepfakes.
Thanks to commercial projects, deepfakes will only penetrate deeper into our daily lives in the coming years. Zoom believes the technology has great potential for video meetings. Zoom CEO Eric Yuan recently explained his vision of the possibilities on The Verge’s Decoder podcast. In short, he envisions a reality where employees can send a deepfake of themselves to an unnecessary video meeting, or any other meeting, in their place.
AI will therefore only become a larger part of daily tasks in the future. Currently, only a ‘good’ version of this idea is available: a filter placed over yourself so you can attend an online meeting as an AI avatar. Zoom’s AI Companion aims at a much higher level, which can also cause some anxiety. “In my opinion, such AI doppelgängers are not capable of critical thinking. For example, the technology has no knowledge of social constructs,” Kraemer responds to Zoom’s plans.
Detection tools still under development
Software to counter deepfakes is still in its infancy. In the Netherlands, DuckDuckGoose is making a worthwhile attempt to break through in the market. In June, the company raised 1.3 million euros to develop detection software that unmasks deepfake images and speech. The situation is not much better among the major AI developers. OpenAI waited until May of this year to release a tool that can detect AI-generated images. Moreover, the tool only works on images created with DALL-E, OpenAI’s own product.
In the meantime, companies are falling back on employee training for at least some protection against deepfakes. Such training is broad and covers all forms of social engineering, of which deepfakes are one.