
German authors and performers want greater copyright protection from generative AI services.

This week Reuters reported that forty-two German associations and trade unions had called on the European Union to toughen the AI rules lawmakers are drafting as part of the pending European Artificial Intelligence Act. Together, the German organizations represent more than 140,000 authors and performers.

In doing so, these “content creators” pointed to the unique threat that ChatGPT poses to their copyrights.

Verdi and DGB, prominent trade unions for the creative sector in Germany, as well as associations for photographers, designers, journalists and illustrators, detailed their concerns in a letter to the European Commission, the European Council and Members of the European Parliament (MEPs).

Chatbot training raises “fundamental questions”

“The unauthorised usage of protected training material, its non-transparent processing, and the foreseeable substitution of the sources by the output of generative AI raise fundamental questions of accountability, liability and remuneration, which need to be addressed before irreversible harm occurs,” the letter said.

“Generative AI needs to be at the centre of any meaningful AI market regulation”, it added.

The European Commission first proposed the EU AI Act in 2021. The draft rules must now be discussed, negotiated and agreed with the European Parliament, and then approved by the bloc's member states, before the Act can become EU law.

Regulating the entire generative AI product cycle

The regulations should be comprehensive enough to cover not just the training process but the entire generative AI product cycle, the groups said. In particular, providers of foundation models need to be regulated, including tech giants such as Microsoft, Google, Amazon and Meta Platforms.

Specifically, the groups called for these foundation-model providers to be held liable for all content generated and disseminated by their AI, regardless of how it is used. The providers should be on the hook for infringements of personal rights and copyrights, for misinformation and for discrimination, they said.

The letter also said the providers of foundation models should not be allowed to operate central platform services to distribute digital content.