Judge sides with Anthropic in groundbreaking AI copyright case

Anthropic has scored a significant victory for itself and the broader AI industry. A federal judge ruled that the company did not break the law by training its chatbot Claude on hundreds of legally purchased books that it digitized without the authors’ permission.

However, Anthropic remains liable for the millions of illegally obtained copies of books it downloaded from the internet and also used to train its models.

According to SiliconAngle, Judge William Alsup of the Northern District of California ruled that the way Anthropic’s models process information from countless texts and then generate their own original content qualifies as fair use under US copyright law. In his opinion, what the models produce is new and different.

He compared the models to a reader aspiring to become a writer: they do not use the original works to imitate or replace them, but to create something new.

The judge dismissed one claim in the class action lawsuit filed by three authors. However, he ruled that Anthropic must stand trial in December for allegedly using thousands of copyrighted works, because the company had no right to include pirated copies in its core library.

Large-scale theft of books

The lawsuit was filed last year by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson. They claim that Anthropic’s practices amount to large-scale theft of thousands of books, and that the company sought to profit by draining these works of their human creativity and originality.

Internal documents showed that Anthropic’s own researchers had doubts about the legality of using online pirate libraries, which led the company to later purchase hundreds of works legally.

The judge emphasized, however, that these later purchases do not undo the earlier violations, although they could influence the amount of damages awarded.

Analyst Holger Mueller of Constellation Research called the ruling a milestone and an important victory for AI companies in the US. In his view, the judge treats AI models like people who read a legally obtained book and learn something new from it.

He warned, however, that Anthropic still having to stand trial is less favorable for the sector, and significant settlements could follow.

This ruling could set a precedent for dozens of similar cases against other AI companies, including OpenAI, Meta, and Perplexity AI. Since 2023, many lawsuits have been filed by authors, media companies, and music labels accusing AI companies of copyright infringement. Several open letters have also been signed calling for stricter regulations for AI developers.

So far, the controversy has had limited impact. Some AI companies have since reached agreements with publishers to gain access to their material.