The tool Nightshade subtly alters artwork in a way that is invisible to humans but “poisons” the data for AI systems. The project taps into the debate surrounding AI and copyright, handing artists a tool that exploits the weaknesses of AI systems and could ultimately bring the developers behind those systems to their knees.
At this moment, it is not entirely clear how AI and copyright can coexist. Upcoming legislation gives only a vague idea of what the future may hold, and that legislation may vary from one region of the world to another. There is no consensus, except among artists: they are tired of their hard work being absorbed into AI systems without any compensation.
Poisoning art
This disaffected group may find a remedy in Nightshade. The tool “poisons” artwork in a way that only AI systems perceive. As a result, an AI system may no longer recognize an image of the Mona Lisa for what it is, but instead see, say, a cat in a dress, while a human notices nothing unusual. Nightshade triggers this misinterpretation by only slightly adjusting the pixels in an image.
Large amounts of this corrupted data, in turn, poison the AI system’s training set. As a result, an AI image generator eventually starts making mistakes, to which the companies behind these tools will have to respond. Suppose, for example, that the poisoned data consistently teaches the model to read an orange as an apple. In that case, the image generator will eventually produce an apple when a user asks for an image of an orange.
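To make the idea more concrete, the sketch below shows in broad strokes what a pixel-level poisoning step could look like. It is a toy illustration, not Nightshade’s actual algorithm: the random linear “feature extractor”, the 3x32x32 image size, the epsilon bound, and the step sizes are all placeholder assumptions, whereas the real tool optimizes against the encoders used by actual image generators.

```python
# Toy sketch of pixel-level data poisoning. This is NOT Nightshade's actual
# algorithm: the random linear "feature extractor", the 3x32x32 image size,
# epsilon, and the step count are all placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in feature extractor: a fixed random linear map, so the gradient of
# the feature distance is analytic and the sketch stays self-contained.
W = rng.normal(size=(64, 3 * 32 * 32)) / np.sqrt(3 * 32 * 32)

def extract_features(img):
    return W @ img.reshape(-1)

def poison_image(image, target_image, epsilon=8 / 255, steps=200, lr=0.002):
    """Nudge `image` so its features drift toward those of `target_image`,
    while every pixel stays within `epsilon` of the original."""
    target_feat = extract_features(target_image)
    poisoned = image.copy()
    for _ in range(steps):
        diff = extract_features(poisoned) - target_feat
        grad = (W.T @ diff).reshape(image.shape)  # gradient of squared feature distance
        poisoned -= lr * np.sign(grad)            # small signed step toward the target
        poisoned = np.clip(poisoned, image - epsilon, image + epsilon)  # stay imperceptible
        poisoned = np.clip(poisoned, 0.0, 1.0)    # stay a valid image
    return poisoned

# Example: make an "orange" photo read as an "apple" photo to the extractor.
orange = rng.random((3, 32, 32))   # placeholder pixel data in [0, 1]
apple = rng.random((3, 32, 32))
poisoned_orange = poison_image(orange, apple)
print(np.max(np.abs(poisoned_orange - orange)))  # never exceeds epsilon (~0.031)
```

The key property is the epsilon bound: every pixel stays within a tiny distance of the original, so a human still sees an orange, while a model trained on many such images is steered toward “apple”.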
According to the technical paper on Nightshade, the effects of the tool also bleed into related concepts. This became evident when the researchers poisoned images in the category “fantasy art” and later found that even the prompt “dragons” returned only corrupted results.
Skillfully exploiting weaknesses in AI systems
The protective tool is freely available on the internet. Nightshade is a project from the University of Chicago. The professor in charge, Ben Zhao, explains the thinking behind it: “We show that generative models in general, no pun intended, are just models.”
Zhao certainly does not want to market the project as a tool to kill large AI companies. He simply wants to show that AI systems have flaws and that the companies behind them are not untouchable. The professor is well aware of the discontent in the artistic world. Nightshade, then, is meant to bring AI companies to their knees by turning their own systems against them. “What it means is that there are ways for content owners to take more effective action than by writing to Congress or complaining via email or social media.”
The ultimate goal of bringing AI companies to their knees is to force negotiations between those companies and artists. Only through that dialogue can artists charge for the use of their artwork in training data. Such compensation is currently more often absent than present, and that bothers the researchers. “The real discussion here is about permission, about compensation. We are just giving content creators a way to combat unauthorized training.”
Not the final solution
The project from the University of Chicago may serve only as a temporary solution to the copyright problem. The researchers acknowledge that Nightshade is not perfect; for example, the adjustments the tool makes are sometimes visible. To counter this effect, Zhao recommends running an image through the Glaze tool after it has been processed with Nightshade.
The researchers are exploring the possibility of combining Nightshade and Glaze into a single tool. Both projects come from Professor Zhao’s lab, which makes that goal more attainable. The researchers nevertheless remain hopeful that such tools will eventually become obsolete once legislation better protects artists’ work from the appetite of AI systems.