
According to a leaked draft of the artificial intelligence regulation envisioned by the European Union’s lawmakers, fines for prohibited use cases could reach up to 4% of global annual revenue (or €20 million, whichever is greater).

The regulation is expected to be delivered this week, though the plan to regulate AI has been in the works for a while. In February last year, the European Commission published a white paper outlining plans to regulate artificial intelligence applications deemed high risk. Initially, the lawmakers considered focusing on specific sectors.

A blanket AI policy

The sectors initially targeted included energy and recruitment. However, it soon became clear that a sector-specific approach might not be effective, so the lawmakers appear to have instead considered AI risk as a whole, without narrowing it down to particular industries.

The focus is now on compliance requirements for AI applications considered high risk. These could occur anywhere, from weapons and military applications to less defence-driven use cases. Among the EU’s aims are human oversight and, at a minimum, a kill switch to override an AI system. The Commission’s goal appears to be building public trust in AI.

Examples of high-risk AI systems:

  • AI systems that evaluate creditworthiness;
  • AI systems that make individual risk assessments;
  • AI systems used for recruitment;
  • AI systems and algorithms for predicting crime;
  • AI systems that prioritise the dispatching of emergency services;
  • AI systems that determine access to, or assign people to, educational institutions.

Artificial intelligence used by the military, and systems used by authorities, will be exempt from the new rules; they may use AI for purposes that are banned for commercial use.

Examples of AI systems that are likely to be banned:

  • AI systems designed to manipulate human behaviour, opinions or decisions;
  • AI systems for random surveillance applied in a general manner;
  • AI systems used for social scoring;
  • AI systems that exploit information or predictions to target a person’s vulnerabilities.

Encouraging the use of good AI

The aim seems to be to use a system of compliance checks and balances, born of EU values, to encourage trustworthy or “human-centric” AI applications that aren’t high risk. The new rules have also drawn criticism, however, with some provisions considered too vague.

Another part of the draft discusses supporting AI development in the EU. Member States could be pushed to set up regulatory sandboxing schemes, in which startups and SMEs get priority for developing and testing AI systems before bringing them to market.