Last year, Adobe announced that it was working with Twitter and The New York Times to develop its Content Authenticity Initiative. Adobe will soon begin testing the technology, which uses tags to trace content back to its original source.
In a newly released whitepaper, Adobe says it will begin testing the system in Photoshop this year. More organizations are joining the initiative, including the BBC, Microsoft, Truepic, CBC/Radio Canada, the University of California, and Witness.
Content authenticity is a sensitive and timely topic right now. TikTok has already banned deepfake content on its platform in the months leading up to the U.S. election, and other social networks are making their own efforts to fight misinformation.
The dawn of a new, open standard
The Content Authenticity Initiative takes a different approach from the standard, largely ineffective methods used to fight misinformation on social media.
It works by labeling or tagging original content to credit the creator, then alerting users when the media has been doctored. Adobe’s aim is to provide an open standard that secures the metadata attached to images shared on platforms like Facebook or Twitter.
That metadata is the crux: if Adobe can make it genuinely hard for anyone to modify, a piece of content’s provenance remains trustworthy.
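The whitepaper doesn’t spell out the cryptographic details, but the general idea of tamper-evident metadata can be sketched in a few lines. The snippet below is purely illustrative, not Adobe’s actual scheme: it binds a creator’s metadata to a signing key with an HMAC, so that any later edit to the metadata fails verification. All names and the key-handling model here are assumptions for the sake of the example.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the creator or their editing tool.
# (The real initiative would presumably use public-key signatures and
# certificates rather than a shared secret.)
SECRET_KEY = b"publisher-signing-key"

def sign_metadata(metadata: dict) -> str:
    """Return a hex digest that binds the metadata to the signing key."""
    # Canonical serialization so the same metadata always hashes the same way.
    canonical = json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, canonical, hashlib.sha256).hexdigest()

def verify_metadata(metadata: dict, signature: str) -> bool:
    """True only if the metadata is unchanged since it was signed."""
    return hmac.compare_digest(sign_metadata(metadata), signature)

meta = {"creator": "Jane Doe", "tool": "Photoshop", "edits": ["crop"]}
sig = sign_metadata(meta)

print(verify_metadata(meta, sig))   # untouched metadata verifies
meta["creator"] = "Someone Else"    # doctoring the credit...
print(verify_metadata(meta, sig))   # ...is detected
```

The same principle extends to the pixels themselves: hash the image data alongside the metadata, and any alteration to either breaks the signature.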
AI is fine, but this could work too
Efforts to tackle content authenticity have so far focused on AI that can detect altered media and deepfakes, according to Andy Parsons, who co-authored the whitepaper. He adds that this AI-based work remains important.
However, there also needs to be a transparent way to let the public know who made a photo or video, and how the media has changed over time. He poses the question, “isn’t it equally important that creative professionals and photojournalists receive credit for their work?”