
For Europe, the AI Act is all about gaining trust. Companies and residents of the EU must be assured that available AI tools do not pose a risk to their privacy, for example. The EU says it has reconciled this goal perfectly with room to innovate. That claim holds for the milder rules start-ups receive, but not necessarily for the legislation as a whole.

The AI Act has moved another step closer to becoming reality. The European Parliament voted today in favour of the law, and the outcome was clear: 523 votes in favour, 46 against and 49 abstentions. That result was expected. The EU reached a political agreement on the content of the legislation last year, and the rules laid down then were approved by the member states in February, which made today's vote largely a formality.

Doubts about economic growth

However, the content of the AI Act still sparks debate, driven by concerns that Europe will fall behind technologically and economically in artificial intelligence. Among those concerns is the fear that the legislation will hinder the growth, or even the creation, of European AI companies.

First, it costs money simply to check carefully whether an AI tool complies with all the rules. Start-ups would rather invest their limited budgets in improving the AI tool itself; after all, a better tool gives the product a greater chance of catching on. But skipping that compliance work is not an option either, because a violation of the law carries a fine of between 7.5 million and 35 million euros. The exact amount depends on the severity of the offence and the annual turnover of the AI producer.

Milder rules for European start-ups

European start-ups do not need to worry about these risks as much. Germany, France and Italy put enough pressure on lawmakers to secure milder rules for European AI start-ups. The compromise to protect these companies is the option to set up "regulatory sandboxes" and "real-world testing". These allow smaller companies to test their AI systems safely under the rules, without immediately putting a large share of their revenue at risk.

The three European powers resisted the rules to protect important start-ups in their own countries. An interesting example here is France's Mistral AI. The company benefits considerably from the protection it enjoys as a European AI start-up, and its deal with Microsoft is proof of that: Microsoft is investing 15 million euros and, in return, takes a stake in Mistral AI.

EU lawmakers view the deal with suspicion, seeing it as an attack on the AI Act and specifically on the exemption rules for European AI start-ups. Mistral AI still falls into the start-up category despite its estimated value of two billion euros. Together with France, the start-up was not shy about advocating an AI Act based entirely on voluntary rules.

The act still contains fines, but MEPs now see what the true intention behind this lobbying was. "The Act almost collapsed under the guise of no rules for 'European champions,' and now look. The European regulators have been played," said Kim van Sparrentak, an MEP involved in drawing up the AI Act. At today's press conference, given by Brando Benifei and Dragoş Tudorache, the MEPs who put a great deal of effort into the AI Act, the discussion was steered back to the outcome of the law rather than how it came about. "What companies do with the legislation is up to them," they said. "Start-ups are protected by regulatory sandboxes, self-assessment assistance and free research and development, among other things."

Read also: LLM for Europe: Mistral AI puts Europe on the AI map

Companies less reluctant

There are also predictions that favour the AI Act and see it as an engine for the European economy. This camp often returns to the idea that the AI Act increases companies' confidence in the technology, and that higher confidence naturally leads to wider adoption of AI tools among European companies.

Workday, for example, supports this idea. According to the company, the legislation removes concerns about the privacy and security of the tools, which Workday sees as the reason European companies will have more confidence in them.

These predictions align fully with the goal Europe had in mind for the legislation. Benifei and Tudorache emphasize this again when asked about the impact of the legislation on start-ups: "Regulatory sandboxes give start-ups a chance to develop. However, this is not the point of the legislation. The AI Act should lay a foundation for models and tools so that citizens and businesses can trust the tools."

Older chatbots stay in service longer

MEPs' confidence in the legislation is unwavering: "A balance has been struck between the interest to innovate and the interest to protect." The fear that start-ups will bear the brunt of the law does not seem to apply to European start-ups. Start-ups from other countries, on the other hand, may still hold back from entering the European market. Large companies are already sending a clear signal: Google's chatbots, for example, tend to reach Europe only at a late stage, usually when the testing period for the tool is already over. So while the rules undoubtedly boost business confidence, companies in Europe have to make do with older versions of chatbots for longer.

The EU is now trying to convince other countries to pass similar legislation, which would give it more assurance about its capacity to innovate. Winning trust, then, seems to carry a cost after all.

Also read: AI Act: OpenAI and Google may not violate copyrights