AI Act: transparency requirements for ChatGPT remain general

Negotiations on the European AI Act have reportedly produced an agreement on the level of transparency that AI developers must provide about their models. Discussions on the use of facial recognition and access to source code are expected to conclude today in an additional meeting.

On Wednesday, MEPs, members of the European Commission and EU member states gathered to negotiate the content of the EU AI Act. Europe is divided over the set of rules, as countries with considerable political weight have come out against overly severe penalties for non-compliance. Among them are France, Germany and Italy, which fear that strict legislation will devastate start-ups in their own countries.

Despite this sticking point, attendees were eager to lock in as much of the legislation's content as possible. They are motivated by the fact that few negotiating rounds remain before the legislation would have to be postponed until after the European elections.

Read also: Europe divided: what will remain of the AI Act?

However, the full content of the AI Act has not yet been determined, Bloomberg reports. After negotiating for nearly a full day, from Wednesday afternoon into Thursday, those present decided to schedule an additional meeting on Friday morning. That meeting started at 9 a.m.

Understanding training data

Today’s discussions will deal with the use of biometric identification in the form of facial recognition within the EU. The rules for foundation models, on which no agreement could previously be reached, do appear to be settled. Developers of these models will have to become more transparent about their training data. Further details are not known, other than that the rules involve a set of basic requirements. Today’s discussions will also address whether AI models will have to disclose their source code.

Furthermore, part of the legislation will still rely on voluntary participation. A voluntary code of conduct will be established to mitigate systemic risks: risks that can set off a domino effect when problems arise and could ultimately lead to a collapse of the broader economy. The European Commission is to maintain a list of AI models that fall into this category.