
The EU wants to finalize an AI Act before the end of the year. The proposals unveiled over the past summer caused consternation among a variety of companies, including OpenAI, Airbus, Siemens and Workday. In particular, the classification of AI models raised concerns about potential curbs on innovation. A new proposal is set to categorise AI solutions into three levels, with all foundation models having to pass extensive security tests.

In August, we detailed the comments tech companies made about the AI Act proposals. Those responses showed that the private sector is eager to help formulate rules that guarantee the safety of citizens while still leaving room to innovate.

Tip: AI Act: legislation that plays catch-up with a new reality

The balance the EU has to strike is one that regulates both tech giants and smaller AI players. According to reporting from Bloomberg, there are now more concrete plans under which the EU would place any AI innovation into one of three categories.

Foundation models: transparency, protocols, opt-outs

Foundation models are often described as AI models that can be deployed for a wide variety of tasks. This is also pretty much the definition that EU negotiators are proposing. The requirements for such a foundation model are stringent. The training process has to be mapped out, with independent experts attempting to exploit the model in ways that expose undesirable behaviour. An evaluation would then take place, based on standardized protocols. Under the proposed AI Act, companies that deploy a particular foundation model would have the right to request information about, and testing of, the model in question. In addition, each customer will have to be offered an opt-out, whereby their data is not used for subsequent training.

The point has been made several times that foundation models are very difficult to monitor in this way. Even so, complying with the rules of the AI Act should be feasible, although foundation model developers still have work to do.

Computing power as a measuring point

Stricter rules are in the offing for “highly capable” foundation models, which will be defined by a yet-to-be-determined benchmark measured in FLOPs (floating-point operations). This is where the EU still has the most wiggle room when it comes to its AI regulation. As it happens, it’s also where companies can talk with policy makers about best practices and a voluntary code of conduct. It goes to show that the EU is still struggling to get a grip on the rapid development of AI, and that help from the industry is required to anticipate future developments.
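To make the idea of a compute-based cut-off concrete, here is a minimal sketch in Python. It assumes the benchmark would be expressed as total training compute and uses the common rule of thumb from the scaling-laws literature that training compute is roughly six times the number of parameters times the number of training tokens. The threshold value below is purely a placeholder, since the actual figure has yet to be determined.

```python
# Rough sketch of how a compute-based threshold for "highly capable" foundation
# models could work in practice. The threshold is a placeholder value, not the
# benchmark the EU will eventually set.

# Common rule of thumb: training compute ~ 6 * parameters * training tokens (in FLOPs).
def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    return 6 * parameters * training_tokens

# Hypothetical regulatory threshold in total training FLOPs (placeholder).
HYPOTHETICAL_THRESHOLD_FLOPS = 1e24

def is_highly_capable(parameters: float, training_tokens: float) -> bool:
    """Classify a model against the hypothetical compute threshold."""
    return estimate_training_flops(parameters, training_tokens) >= HYPOTHETICAL_THRESHOLD_FLOPS

if __name__ == "__main__":
    # Example: a 70-billion-parameter model trained on 2 trillion tokens.
    flops = estimate_training_flops(70e9, 2e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print(f"Above hypothetical threshold: {is_highly_capable(70e9, 2e12)}")
```

In practice, the legal question would be where exactly that cut-off lies and how training compute is reported and verified; the sketch only illustrates why a single numeric benchmark is attractive to regulators.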

There is also a proposal to assess the potential impact of a model based on the sorts of applications that depend on it. As a result, if there are high risks associated with a specific solution, the underlying AI will have to meet tougher requirements. Again, this would involve external experts who will use red-teaming to try to tease out all possible dangers.

This will remain a thorny issue for the open-source community, for example: as an inherently distributed group, it is difficult for it to consistently comply with strict rules. The EU will still need to formulate an answer to that, especially since Big Tech companies see open-source as a key player in the AI game.

User-based AI solutions

The last category is aimed at general-purpose AI systems. Any tool with 10,000 professional customers or 45 million end users can once again count on mandatory red-teaming. Risk analyses and mitigation methods must also be formulated.

Those user numbers, by the way, are not binding. The EU retains the right to nominate other systems for additional rules. Once again, there appears to be more wiggle room within the layered structure of the AI Act than appears at first glance.

Also read: Google to compensate its generative AI users for copyright claims