
According to a legal expert’s examination of the proposal, the European Union’s planned risk-based framework for overseeing artificial intelligence gives oversight bodies the power to force the withdrawal of a commercial AI system from the market, or to require that an AI model be retrained, if it is judged to present a high risk.

That suggests the EU’s Artificial Intelligence Act, still under deliberation, carries significant regulatory power, assuming the bloc’s diverse array of Member State-level regulators can successfully aim it at harmful algorithms and compel product changes in the name of fairness and the public good.

Some work still needs to be done

The draft Act continues to draw criticism for various structural flaws and may yet fall short of its stated aim of establishing broadly “trustworthy” and “human-centric” AI, as promised by EU legislators.

A year ago, the European Commission proposed the AI Act, laying out a framework that prohibits a small number of AI use cases considered too dangerous to people’s wellbeing or fundamental rights to be permitted (such as a China-style social credit scoring system).

Even though it looks like it has teeth, the Act is still not as comprehensive as it could, or should, be.

The high-risk side of it

The proposed regulation also restricts other uses according to their perceived risk, with a subcategory of “high risk” use cases subject to both pre- and post-market monitoring.

High-risk systems fall under the following areas:

- Law enforcement
- Migration, asylum, and border control management
- Biometric identification and categorization of natural persons
- Education and vocational training
- Employment, workers management and access to self-employment
- Management and operation of critical infrastructure
- Access to and enjoyment of essential private services and public services and benefits
- Administration of justice and democratic processes

Many civil society groups are still calling for the Act to be tightened and made more comprehensive, arguing that it leaves too much to chance and to self-regulation.