The EU AI Act is revolutionary, but how should we interpret it?

On March 14, the AI Regulation – known as the AI Act – was adopted by the European Parliament and will soon come into effect. With the AI Act, the EU claims a world first: the first legislation to regulate the development and use of Artificial Intelligence (AI). The adoption comes amid a high-stakes debate on how to deal with AI. From solving healthcare shortages and models that predict climate change to biased or hallucinating AI: the current and future pros and cons of AI are a daily topic of conversation. And while some see the AI Act as a way to encourage the responsible use of AI, there are also concerns that the regulation could hold back innovation. How should we interpret this unique legislation? Two experts from data, analytics and AI specialist SAS explain what to look out for in the coming period.

Regulation, innovation, or both?

At 272 pages, the final version of the AI Act is by no means a text to read through leisurely in an afternoon. But in essence, according to Kalliopi Spyridaki, Chief Privacy Strategist EMEA & Asia Pacific at SAS, the act is about countering risk: “The EU AI Act prohibits AI systems that pose unacceptable risks. Think of systems that use manipulative techniques to influence individuals, systems that exploit vulnerable groups such as children and the elderly, or systems that assign a score to individuals based on how socially desirable their behavior is. Once the law comes into force, such banned systems can no longer be placed on the European market. Moreover, the EU will regularly review this list of prohibited systems.”

The AI Act contributes to the further development of responsible AI by making it clear to companies which applications are and are not classified as responsible. But what should companies watch out for, and how can they make sure they are not caught off guard when the law comes into force?

“Companies need to prepare and make sure they have a plan for trustworthy AI. In principle, you can do almost anything with AI, but we need to make sure it reflects the values of our society. Data is the basis of everything: if the input of an AI model is wrong, the output will be too,” explains Josefin Rosén, Trustworthy AI Specialist at SAS. “We need to assess the quality of our data and make sure the data is inclusive and representative of all stakeholders. In addition, we need to track where the data comes from and how it is used, so we understand why AI makes certain decisions. Finally, when deploying AI in a business context, we need to ensure that it continues to work properly over time. SAS is well positioned to help companies create a trusted AI culture.”

SAS puts trustworthy AI – AI that is safe, explainable and responsibly applied – at the heart of its business. It is also leading the way in promoting trusted AI in a number of sectors, for example through the REAiHL project, which researches the application of AI in healthcare.

Far-reaching implications for the world, and your business

According to Kalliopi, the impact of the legislation will extend beyond the EU:

“The EU AI Act has the potential to shape the regulatory landscape around artificial intelligence globally. Like GDPR, the EU’s approach has been criticized over fears that it could hold back innovation. Yet today, GDPR-like legislation exists in many countries around the world. Now Europe is once again taking the lead with the first AI-specific legislation of its kind.”

With the EU taking a leadership role, companies inside and outside the EU that are quick to adopt responsible AI could actually benefit. Kalliopi: “Whether the AI Act will have the same impact globally as the GDPR remains to be seen, but it will certainly be a market stimulus that encourages AI developers and users in Europe and beyond to increase their efforts to understand and implement the requirements and obligations of the new law. Responsible AI in Europe will not only be a legal requirement, but also a strategic choice for organizations worldwide that want to remain competitive and responsible. Ultimately, AI developers who start developing technologies in compliance with the EU AI Act can gain a competitive advantage.”

As the AI Act makes its mark on the era of artificial intelligence, we stand at a crossroads of opportunities and challenges. Regulation and innovation must continually balance responsibility against progress. The introduction of the AI Act reflects not only the continuing evolution of technology, but also the growing awareness that human values and ethics need to be integrated into our digital progress. As implementation progresses, it will become clear what the Act actually delivers. For now, organizations in the AI field that are already engaged in responsible AI appear to have an advantage, and those that are not should consider looking into it.

This article was submitted by SAS.