
General-purpose AI (GPAI) will be subject to transparency obligations under the AI Act. Models that pose a higher risk will face additional rules. It remains completely unclear, however, exactly whom those rules will apply to.

Many details of the EU AI Act have yet to be clarified. That was the overall message of the press conference the Commission held on Monday, which TechCrunch attended. At the conference, the Commission also confirmed that foundation models fall under the category lawmakers refer to as “general-purpose AI”. The broad term is intended to keep the law future-proof: “In the future, we may have different technical approaches. And so we were looking for a more general term.”

Uncertainty about classification

All models that meet the definition of a GPAI must comply with transparency obligations. These include preparing technical documentation, complying with European copyright legislation, and providing a summary of the training data.

Additional rules apply to models that carry systemic risk: when problems occur with these models, they could trigger a domino effect and ultimately destabilize the broader economy. The criterion for classification in that category, however, is far from clear. It is based on the computing power used to train a model: models trained with more than 10^25 FLOPs, or floating-point operations, fall into this category.
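For illustration, here is a minimal sketch of how a developer might estimate whether a model crosses that threshold. It uses the common rule of thumb from the scaling-law literature that training compute is roughly 6 × parameters × training tokens; that heuristic and the example figures are assumptions for this sketch, not a method prescribed by the Act:

```python
# Rough check of estimated training compute against the AI Act's 10^25 FLOP threshold.
# Uses the common approximation: total FLOPs ≈ 6 * parameters * training tokens.
# Both the heuristic and the example numbers below are illustrative, not from the Act.

AI_ACT_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer model."""
    return 6 * num_parameters * num_training_tokens

# Hypothetical example: a 70-billion-parameter model trained on 2 trillion tokens.
flops = estimate_training_flops(num_parameters=70e9, num_training_tokens=2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print(f"Exceeds systemic-risk threshold: {flops > AI_ACT_THRESHOLD_FLOPS}")
```

Under this heuristic, the hypothetical model lands at about 8.4 × 10^23 FLOPs, well below the threshold, which illustrates how few models the criterion would capture.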

During the press conference, the Commission indicated that this criterion was chosen to capture the major generative AI models. However, whether the criterion applies to existing AI models was reportedly not discussed. In addition, the Commission hinted that developers will have to determine for themselves whether their models are covered by the legislation. “We will see what the companies will assess because they are best placed to make this assessment,” it said.

Not aimed at specific companies

According to the Commission, this approach allowed lawmakers to remain impartial: “The rules were not written with certain companies in mind.” Moreover, the requirements still have to be fleshed out by the AI Office, a group of experts that will monitor compliance with the legislation but has not yet been set up.

Many other substantive aspects of the AI Act have yet to be determined. Eleven meetings will follow for that purpose, the first taking place as early as today, according to Reuters. At these sessions, government officials and lawmakers’ aides will hammer out the details.

Also read: AI Act: OpenAI and Google may not violate copyrights