The field of AI is evolving rapidly, and for now companies offering AI services still enjoy considerable freedom. Within six months, the EU intends to change that with legislation governing the new technology.
The European Union’s upcoming AI legislation has already sparked plenty of discussion. Some claim that regulation will stall the European economy, while others see benefits for the safety of AI tools. Either way, the provisionally approved legal text has come a long way, and the final word has not yet been spoken. How did the draft become what it is today, and what obstacles still lurk ahead?
Also read: EU votes on AI Act draft: should ChatGPT change?
Writing and rewriting
The first version of the AI Act expressed a completely different view of generative AI tools than the preliminary legal text that survived a vote in the European Parliament. Several companies sent their advice on that first version to the European Commission and Council. We previously analyzed OpenAI’s feedback in detail; the company mainly took issue with the “high-risk” category. Under the category’s earlier definition, generative AI tools such as Bard and Bing Chat would have been banned in Europe.
A company like OpenAI, with its generative AI tool ChatGPT, has much to gain from pushing back against such reasoning. A company like Workday, on the other hand, does not own foundation models but has woven AI into the core of its technology platform for human capital management and finance. The company nevertheless took a close look at the proposed rules in its own feedback, in which it aligned with OpenAI’s position on the high-risk category. We spoke with the company to hear what it thinks of the new AI Act.
According to Jens-Henrik Jeppesen, senior director of public policy at Workday and responsible for the company’s advice to the European Commission, the Commission’s initial draft demonstrates how complex the matter is: “The high-risk product category was actually created even before large AI tools such as ChatGPT launched.”
OpenAI and other significant players in the field of generative AI, such as Microsoft and Google, have already argued that their products should not be considered high-risk. They reason that a product should only be classified as high risk when it operates in a high-risk use case. Jeppesen agrees: “As ChatGPT doesn’t have an intended purpose, but can be used for multiple purposes, it doesn’t automatically fall into the high-risk category.”
New criticism
Jeppesen does not say foundation models should escape regulation altogether: “It is inevitable that there will eventually be rules for these products, but the regulation had to be softer than what the first proposal formulated.” The draft as it stands now should not pose problems for foundation models.
Also read: Research: ‘Foundation models largely non-compliant with AI Act’
Not everyone is as lenient toward the bill, though. Airbus and Siemens, among others, signed an open letter to the European Commission after the unveiling of the AI Act draft, detailing the Act’s potential dangers to the European economy. European companies are concerned that strict regulation will discourage AI developers from making their products available in the EU. After all, complying with the regulation requires a significant commitment of time and resources, which could become a hurdle.
‘Legislation to build trust’
Right now, Jeppesen sees two problems surrounding new AI products. The first has to do with the predominantly negative coverage of AI, which deters companies from experimenting with the new technology. In the media, you regularly hear stories about copyright infringement by AI and the harmful purposes the technology can serve (hacking, for example).
Jeppesen also notices that there is currently a race to release AI products. Everyone wants their products on the market as quickly as possible, which poses enormous risks to the security of those products. Regulation curbs this haste by giving developers enough time for the testing phase.
“By advocating for regulation now, we are raising the bar for AI tools and the rules will also reassure companies who are reading all these negative things about AI. The rules will allow European companies to bring in new technologies with more confidence. So for us, the rules will mainly ensure that there is more confidence in the AI market.”
Companies that ventured into AI despite these problems can tighten their internal rules for the technology at the EU’s pace. That, at least, is Workday’s approach. “At Workday, we have been working with ethical principles for the use of AI for a long time. We make sure that our AI governance evolves along with the European rules. So for us, being in compliance with the AI Act will not be a problem.”
The future impact of the AI Act
For the EU, it is as challenging as it is interesting that the AI Act is being drafted while new AI products keep entering the market. It is quite possible, for example, that an AI product launching in December is not yet covered by the AI Act proposal and does not fit into any category. Lawmakers will need to take that into account and be able to adapt the legislation quickly.
As a result, the ultimate impact of the AI Act is difficult to predict, and that uncertainty is still causing quite a bit of turmoil in the AI market. Over the next six months, Europe will continue to work on the legislative text, in consultation with the industry. Workday is part of that industry and intends to do its part: “We want to help define the concepts in the legislation so that they are clear for the companies getting started with AI technology, but also so that the rules serve the purpose the EU had in mind and do not have an unwanted effect.”