Details concerning the EU AI Act remain scarce one year after the initial draft was released. Even though this legal framework is still being established — or, more accurately, precisely because of that — now is the moment to learn more about it.

We previously discussed several crucial details of the EU AI Act, including who it affects, when it will take effect, and what it covers. The Mozilla Foundation’s Executive Director, Mark Surman, and Senior Policy Researcher, Maximilian Gahntz, discussed the Act’s progress with George Anadiotis of ZDNet.

Mozilla’s interest

According to Surman, Mozilla’s interest in AI began at around the same time that the EU AI Act started its lifecycle. Mozilla has collaborated with people worldwide to develop a theory of change for making AI more trustworthy, with two long-term goals in mind: agency and accountability.

Mozilla’s proposals for improving the EU AI Act, and how people can get involved in Mozilla’s AI Theory of Change, remain under discussion.

The EU AI Act is on the way, with a projected implementation date of 2025, and its influence on AI could be comparable to GDPR’s impact on data protection.

What does the EU AI Act cover?

The EU AI Act covers users and providers of AI systems in the EU; providers based outside the EU who place an AI system on the market or put it into service in the EU; and providers and users of AI systems based outside the EU when the system’s outputs are used in the EU.

Its approach is based on a four-level classification of AI systems by perceived risk: systems posing an unacceptable risk are outright forbidden (with few exceptions); high-risk systems are subject to traceability, transparency, and robustness requirements; low-risk systems require disclosure by the provider; and minimal-risk systems face no restrictions.