OpenAI has developed a tool to spot deepfakes created with its own DALL-E image generator. The functionality is meant to counter abuse involving manipulated images, such as sexual content or material targeting children or the elderly.
OpenAI reported in a blog post that the tool is currently in a testing phase. It reportedly already recognizes 98 percent of all DALL-E-generated or DALL-E-edited images it processes. When presented with unedited, original material, it produces a false positive in 0.5 percent of cases.
When asked to analyze images created by other AI tools, however, the detector performs far worse: it then recognizes AI content in only 5 to 10 percent of cases.
The attempt to develop such a ‘reality check’ is motivated by fears of deepfakes and all kinds of potential copyright issues. Such imagery might show (actual, living) individuals appearing to be in situations they have never been in or performing actions they never took. Falsified historical images also come to mind.
In particular, there is the fear that social media platforms will in time be flooded with increasingly convincing but false visual information, for example to influence elections or public opinion.
Mark of authenticity
OpenAI has also joined the steering committee of the Coalition for Content Provenance and Authenticity (C2PA), which includes companies such as Google, Adobe, Meta, Microsoft, and Sony. The group provides certification to establish the authenticity of digital content.
In short, the technology behind C2PA exists to certify that content comes from a verified source and has not been forged or altered. The move is also meant to anticipate future legislation.
Transparency about origins
If an image was created or edited using AI, C2PA-supplied data must clearly indicate when and how it was altered. This should provide transparency about the origin and editing history of digital products, whether they are text, images, or video.
OpenAI has already added the metadata provided by the C2PA authentication standard to all images generated with DALL-E 3, the latest version of its image generator. The same technology will also come to Sora, OpenAI’s video generator, when that becomes generally available.
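To make this concrete: the short sketch below (an illustration only, not OpenAI’s or C2PA’s official tooling) scans a JPEG file for the APP11 segment wrapping a JUMBF box, which is where the C2PA standard embeds its provenance manifest. It only detects that a manifest is present; reading the editing history and verifying the signature chain requires a full C2PA implementation, such as the open-source c2patool. The function name and command-line usage are this sketch’s own inventions.

```python
import struct

def has_c2pa_manifest(path: str) -> bool:
    """Return True if the JPEG at `path` contains an APP11/JUMBF segment,
    the container in which C2PA embeds its manifest. Detection only; this
    performs no cryptographic verification. (Hypothetical helper.)"""
    with open(path, "rb") as f:
        if f.read(2) != b"\xff\xd8":  # SOI marker: not a JPEG
            return False
        while True:
            marker = f.read(2)
            # Stop at a truncated file, the start-of-scan marker (0xDA,
            # pixel data follows), or end-of-image (0xD9).
            if len(marker) < 2 or marker[0] != 0xFF or marker[1] in (0xDA, 0xD9):
                return False
            size = f.read(2)
            if len(size) < 2:
                return False
            (length,) = struct.unpack(">H", size)  # length includes these 2 bytes
            payload = f.read(length - 2)
            # APP11 is marker 0xFFEB; a C2PA payload wraps a JUMBF box,
            # whose four-byte box type "jumb" sits near the start.
            if marker[1] == 0xEB and b"jumb" in payload[:40]:
                return True

if __name__ == "__main__":
    import sys
    print(has_c2pa_manifest(sys.argv[1]))
```

Note that this sketch covers only JPEG; C2PA defines different embedding containers for other formats (PNG, for instance, uses a dedicated chunk), which is one reason real-world verification is left to dedicated SDKs.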
Societal resilience
OpenAI and its principal investor, Microsoft, have also established a Societal Resilience Fund. It currently holds a modest (by AI standards) 2 million dollars.
The fund will pay for initiatives that educate people about AI manipulation, especially groups thought to be more likely to fall victim to it, such as seniors.
Also read: ElevenLabs makes small step toward fighting audio deepfakes