The contents of the AI Act were officially agreed upon last week. The agreement should clarify more concretely which types of AI are allowed within the EU and under what conditions. Obligations will follow, though they are not yet detailed; it is already clear that AI developers such as Google and OpenAI may not violate copyrights.
The agreement on the content of the AI Act was announced with words we have heard often enough around this legislation. “The EU AI Act is the first-ever comprehensive legal framework for artificial intelligence worldwide,” said Ursula von der Leyen, President of the European Commission. That says little about the content of the framework, even though that content is what matters most.
Criteria: impact and risk
The rules an AI system must comply with depend on two things: the potential risk attached to using the technology, and the impact the technology has. While both criteria are important, there is no single way to determine the risk and impact of AI. Lawmakers have already designated the high-risk systems, which include systems used in the banking and insurance sectors and systems that can influence election results. These systems will be required to undergo a risk assessment, although the agreement does not yet specify what that assessment will contain.
AI systems that serve a general purpose and do not target a specific group of users (general-purpose AI, or GPAI) get their own set of rules. Here, the legislators look at the technology itself rather than the criteria they proposed. These general-purpose models also expose the weaknesses of those criteria. Most people will use a general AI tool for something innocuous, like creating a summary or composing an e-mail. Other users may use the same tools to look up people’s personal data or create malware. Classifying the tools as “high risk” by default could therefore trigger many discussions with the developers of these AI tools.
Also read: Are OpenAI, Microsoft and Google lobbying their way out of the AI Act?
Training data shared, better copyright protection
Specifically, these technologies must comply with a transparency policy that requires creators to prepare technical documentation, comply with European copyright law and publish a summary of the training data. However, it remains open how in-depth or comprehensive these requirements will be.
General-purpose models with systemic risk face additional rules, including extra tests for evaluating those risks and an obligation to report “serious incidents”. Interestingly, these models must also report on their energy efficiency, while the sustainability of less risky models apparently does not matter.
The transparency policy keeps the form the Parliament previously agreed on. It was already known what the legislation would mean for ChatGPT and its creator OpenAI: OpenAI will have to disclose to the EU which datasets were used to train the language model and which of them are copyrighted. Furthermore, the EU will have to be informed of how the AI product works.
No clear labels
In the final agreement, the focus of the law has shifted to the production of AI systems. The earlier rules on labelling AI-generated content are gone, so online content originating from an AI tool remains difficult to identify.
All AI systems are still classified into four categories: minimal risk, limited risk, high risk and unacceptable risk. Lawmakers drew up a list of systems that will be banned; these carry the label of “unacceptable risk”. The bans include biometric identification and forms of scraping that collect pictures of people’s faces, although law enforcement retains scenarios in which it is allowed to deploy biometric identification. The “minimal risk” category is the most crowded, containing, for example, spam filters that weed out malicious emails.
Fines of up to 35 million euros
Violations of the legislation carry fines, despite earlier protests from Germany, France and Italy. These three European powers had hoped to turn the AI Act into a set of voluntary rules; they did not succeed. The AI Act carries fines between 7.5 million and 35 million euros, with the exact amount depending on the severity of the offence and the annual turnover of the AI producer.
These countries’ sudden resistance stemmed from a protective attitude toward AI start-ups in their own countries. The compromise to protect these companies is the ability to set up “regulatory sandboxes” and “real-world testing”, which allow smaller companies to safely experiment under the rules without immediately risking a large portion of their revenue.
According to the European Parliament, the legislation contains “comprehensive rules”. Given how general the criteria and obligations are, we can indeed agree with that statement. Such broad concepts are, however, necessary given how rapidly technology and the AI field evolve; otherwise, the legislation might already need to play catch-up upon publication.