
Europe divided: what will remain of the AI Act?

Events in the AI world invariably give Europe an opportunity to bring up the EU AI Act once again. The act is often mentioned in the same sentence as “leading agreement” and is presented as the solution to the problems and challenges facing the AI world today. But what will be left of the legislation as criticism grows?

In a joint letter submitted to the European Union on Nov. 23, companies including Apple, Ericsson, Google and SAP warned of the possible consequences of overregulating AI technology, which they argue could be especially pernicious for foundation models. “As European digital industry representatives, we see a huge opportunity in foundation models, and new innovative players emerging in this space, many of them born here in Europe. Let’s not regulate them out of existence before they get a chance to scale, or force them to leave.”

The position of these groups is not surprising. The letter also restated comments previously expressed by the creative sector: “The comprehensive EU framework for copyright protection and enforcement already contains provisions that can help address AI-related copyright issues, such as the exemption for text and data mining.”

Also read: Is Europe killing itself financially with the AI Act?

Responding to divisions in Europe

Yet these groups had good reason to reiterate roughly the same views. The trigger was an agreement between Germany, France and Italy on a shared vision for regulating AI. Broadly, the three countries want AI tools, rather than the underlying AI technology, to be regulated, so the focus of the legislation would no longer rest on foundation models. Moreover, the EU would not enforce the imposed rules with sanctions.

Brando Benifei, MEP and a key figure in pushing the legislation forward, said in response to the sudden crisis at OpenAI that he disagreed with the agreement in any case: “The understandable drama surrounding Altman’s resignation from OpenAI and joining Microsoft shows us that we cannot rely on voluntary agreements brought about by visionary leaders.”

The agreement between the three European heavyweights will undoubtedly spark a fierce debate in the European Parliament, and Germany, France and Italy will gladly seize on the swelling criticism in that discussion.

Much uncertainty

The result is much uncertainty surrounding the upcoming legislation. There is no longer a firm release date for the act, and it is becoming clearer that the law will be postponed until 2024. There is still little chance that the Council of the European Union, the European Commission and the European Parliament will reach an agreement in the upcoming negotiation round in December.

In addition, there is less and less certainty about the content of the AI Act itself. When the European Parliament voted in June, there was still a euphoric press conference in which a clear majority of Parliament approved the Act. There did not seem to be any major stumbling blocks to the legislation; the approved text appeared to offer a clear framework to initiate and delineate discussions on the final act.

Yet we should not forget that during the June vote, Parliament had already set the deadline at 2025. Perhaps by October, lawmakers thought there was more agreement on the final content, as the date was brought forward to 2023. But opinions are still divided at this moment, mainly around the rules for foundation models, and the penalties associated with the legislation are also once again under discussion.

Will we follow the global examples?

In the meantime, developments at the global level may have influenced the thinking and actions of European leaders. Just before November started, the G7 agreed on a voluntary code of conduct for AI developers, setting out guidelines that could serve as examples for leaders worldwide when developing their own regulations. The code of conduct comprises 11 principles for risk mitigation, information sharing and incident reporting, among others. Days later, the Bletchley Declaration, signed at the UK’s AI Safety Summit, addressed the technology risks associated with the rapid development of AI.

About the same time, the Biden administration pushed through an Executive Order (EO) that endorses the same ambitions as the Bletchley Declaration. The common thread in these documents is a best-practices principle: they give AI developers guidance on how the government believes AI can be developed safely, but otherwise impose no obligations. Is Europe now trying to follow suit?

France, Germany and Italy are clear proponents of this approach. Moreover, thanks to the Bletchley Declaration, these countries are backed by international institutions in developing legislation that guides but does not enforce. This is made clear by a new agreement, signed by France, Germany and Italy, that encourages developers to follow “secure-by-design” principles. Developers are thus encouraged to release LLMs only after adequate security testing has been done, but are not forced to do so. The competent US and UK authorities refer to the agreement as the first result of the Bletchley Declaration and speak of it as “a global effort”. However, the lack of support from the European Commission suggests otherwise.