OpenAI, Microsoft and Google are all tinkering with AI products, together or on their own. Behind the scenes, they also appear to be tinkering with something else: a way to keep the AI Act from applying to their products.
Even before the AI Act can officially go into effect, major tech companies appear to be wriggling out from under the upcoming rules. Microsoft, Google and OpenAI have all already paid visits to European institutions in an attempt to lobby their way out of the AI Act.
Specifically under discussion is the “high risk” category, in which the EU wants to place AI tools that pose a risk to society. Products and solutions in this category face the strictest requirements before they are allowed on the European market, which is why Microsoft, Google and OpenAI are already trying to secure their spot in the European AI market.
According to the companies themselves, there is a good reason why their tools fall outside this category. In their view, the strict rules should only be imposed on companies that deploy artificial intelligence for a high-risk use case.
Large tech companies are typically not concerned with a specific use case but simply want to gain traction with the general public. That does not mean their products cannot end up serving a high-risk use case: users can misuse language models to extract personal information, or leverage text-to-speech technology to create audio deepfakes.
The two-faced man
In public appearances, Sam Altman, CEO of OpenAI, takes a different approach. He has toured regulatory agencies advocating for some form of legislation. According to him, clear rules for AI developers are needed to ensure the safety of the tools.
Behind the scenes, those same regulators hear a different tune from the CEO. He maintains that rules should be in place, but prefers to put the safeguards in place himself. This is evident from the white paper OpenAI sent to EU officials, which TIME obtained and published. “By itself, GPT-3 is not a risky system. But it has capabilities that could potentially be used in high-risk use cases.”
Milder final draft
OpenAI sent the white paper to the EU Commission and Council in September 2022. The final draft of the law was only approved in June 2023. That draft no longer states that general-purpose AI systems should be considered high risk by definition, whereas earlier drafts did. The compromise is now that foundation models, such as the language model behind ChatGPT, require transparency about the datasets used in training.
In the white paper, OpenAI argued that its tools are not high risk: “We believe that our approach to mitigating risks arising from the generality of our systems is industry-leading.” Meanwhile, however, it has long been known that bypassing the built-in safeguards in OpenAI’s language models is possible. For example, you’ll find plenty of example prompts for abusing the language model on Jailbreak Chat.
Further, OpenAI saw no need for a rule that classifies all AI products as “high risk” when they generate content indistinguishable from human creation. According to the company, that rule did not have to be carried over into the final draft because another provision already requires companies to clearly label AI-generated content. In the white paper, the company expressed fear that the initial rules would automatically place its products in the strictest category and therefore asked regulators to make only the labelling of AI content mandatory.
“This Article can sufficiently require and ensure that providers put into place reasonably appropriate mitigations around disinformation and deepfakes, such as watermarking content or maintaining the capability to confirm if a given piece of content was generated by their system,” the company concludes. Indeed, the final draft approved by the EU Parliament retains only the labelling obligation.
Lobbying or consulting?
On a first reading, the white paper somewhat undermines one’s faith in the AI Act. The law is meant to regulate artificial intelligence and make it safe for the public, which matters most for products and models used by many Europeans. Large tech companies, then, should not simply be allowed to get away with skirting it.
On the other hand, it is simply wise for the EU to consult companies already deeply engaged in AI when designing the rules. Otherwise, the AI Act quickly risks becoming a brake on innovation. So can we blame these companies for trying to put themselves in a good position ahead of time while offering that advice?
“We recognize and appreciate the enormity of the EU’s work in understanding and encouraging development of critical AI technology while ensuring that the development and use of these systems respects fundamental human rights and values. We remain ready to assist and advise however needed,” concludes OpenAI.