Fake news stays due to challenges in flagging AI content

The AI industry still appears to have great difficulty flagging artificially generated content. That’s according to a study by Mozilla.

Images and text generated by AI rarely carry a marking that identifies them as AI content. That goes both for labels that users can read and for watermarks that machines can detect.

Difficult fight against fake news

The labels should help in the fight against fake news. “Artificial realities can be created in seconds without any skills and used as ‘evidence’ to manipulate people’s opinions. We have been facing this problem for years, but this is a new dynamic that is particularly concerning in a year with more than 70 elections around the world,” Ramak Molavi Vasse’i, co-author of Mozilla’s research report, told SiliconANGLE.

The researchers are particularly critical of human-readable AI labels, because these markings can be removed without much effort. Yet this is exactly the method the EU imposes on companies running social media platforms. Lawmakers assume that such labels address the problem, but the moment content is shared in another online environment, the tag can be stripped or tampered with.
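To illustrate how fragile such markings are, here is a minimal sketch (a hypothetical illustration, not taken from the Mozilla report) of how a routine re-encode with the Python Pillow library silently drops image metadata, much like what happens when a platform transcodes an uploaded image:

```python
from PIL import Image

# Open an image that carries an "AI-generated" note in its EXIF metadata.
# (The file name and the presence of the tag are hypothetical.)
original = Image.open("ai_generated.jpg")
print("EXIF before re-encode:", original.getexif().get(0x9286))  # UserComment tag

# Re-save the image, as a social platform might when transcoding uploads.
original.save("reshared.jpg", quality=85)

# The re-encoded copy no longer carries the EXIF data: Pillow, like many
# transcoding pipelines, does not copy metadata unless explicitly asked to.
reshared = Image.open("reshared.jpg")
print("EXIF after re-encode:", reshared.getexif().get(0x9286))  # -> None
```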

A better way to recognize AI content is through a watermark, the study notes. This marking is machine-readable and more difficult to remove, which is why the study prefers this method. A watermark, however, does not rule out an AI label. “None of the methods alone is a silver bullet,” argues Molavi Vasse’i.

Recent developments

OpenAI has only recently embraced the usefulness of such markings and started labeling AI-generated images. These receive a watermark according to the C2PA standard as well as a visible CR symbol. With that combination, OpenAI follows the best practices that Mozilla’s research now recommends for labeling AI content.
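To give an idea of what “machine-readable” means here, the sketch below does a naive scan for the byte label that C2PA manifests embed in a file (they live in so-called JUMBF boxes labeled “c2pa”). This is only a cheap hint that provenance data may be present; real verification requires a C2PA SDK and cryptographic signature checks, and the file names below are hypothetical:

```python
def has_c2pa_marker(path: str) -> bool:
    """Naive check: does the file contain the ASCII label 'c2pa'?

    C2PA manifests are embedded in JUMBF boxes whose label contains
    'c2pa', so its presence is a cheap first hint. A real verifier
    must parse the manifest and validate its signatures.
    """
    with open(path, "rb") as f:
        return b"c2pa" in f.read()

# Hypothetical usage: flag files that may carry provenance data.
for name in ("dalle_output.png", "reshared.jpg"):
    status = "possible C2PA manifest" if has_c2pa_marker(name) else "no marker found"
    print(name, "->", status)
```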

The researchers do add a critical note about these labels: transparency about AI-generated content is not, by itself, a solution that stops harmful AI content. They emphasize this because many laws impose rules around labeling only.

Also read: AI-generated images get a watermark