
Final adoption of the EU AI Act this year is probably no longer feasible. Considerable disagreement reportedly remains between the various EU legislative bodies on how AI models should be regulated.

The EU legislative bodies, in particular the European Parliament and the EU member states, have yet to reach agreement on how to regulate AI models, Reuters writes.

Three rounds of negotiations have reportedly already taken place, and a fourth is scheduled for December. If the parties have not reached an agreement by then, the entire plan will be pushed back to 2024.

How to regulate?

The dispute centers on how specifically AI foundation models should be regulated in the AI Act. One proposed version would require developers of AI foundation models to assess the potential risks of these models.

In addition, developers would have to subject these models to testing both during development and after the foundation model is released to the market.

Furthermore, they would have to examine the training data for bias, validate that data and publish technical documentation before release.

Spain, which currently holds the EU presidency, has indicated that AI models should be tested more extensively for vulnerabilities. It also favors a tiered regulatory model based on the number of users an AI model has: the more users, the more regulation under the EU AI Act.

Other pressures on AI Act

The drafting of the EU AI Act is not only under pressure from the EU's own legislators. Open-source companies are also increasing the pressure, asking the EU to involve smaller companies in the drafting process, since these developers may find it difficult to comply with the regulations.

In addition, they argue there should be a distinction between companies that primarily want to earn money from AI models and hobbyist researchers and scientists.

Also read: AI Act takes shape: the rules to keep AI in check at every level