
The European Council wants to regulate the development of open-source artificial intelligence, but researchers warn of the plan’s risks.

The European Commission introduced a bill to regulate artificial intelligence (AI) in 2021. Brussels wants to prevent AI from contributing to discrimination, privacy violations, dictatorship and disruption.

The rules depend on how AI is applied. Developers of AI systems for education, for example, are held to different standards than developers of general-purpose chatbots. The so-called AI Act is far from final; the details are currently being discussed by policymakers.

One of the sticking points is the regulation of general-purpose AI. Most policymakers agree that AI systems for education, law enforcement and other sensitive areas should be heavily supervised. Opinions on the regulation of AI systems for general-purpose tasks, however, are divided.

Regulating open-source AI developers

The European Council proposed to hold developers of general-purpose AI (GPAI) responsible for risk management, data governance, transparent documentation and cybersecurity. The Brookings Institution recently warned that the proposal could have dire consequences for open-source AI development.

The European Council defines GPAI systems as AI systems with “generally applicable functions” and use cases in a “plurality of contexts”. Brookings notes that open-source AI projects fit this description. According to the research institute, the proposal could make open-source developers liable for third-party violations.

The danger

Suppose a third party incorporates an open-source project into an AI application, and the application turns out to violate the AI Act. The third party faces sanctions. Brookings is concerned that the third party may shift the blame to the open-source project’s developers.

Stable Diffusion is a real-world example. The open-source AI system generates images from text descriptions. The system itself is harmless, but several developers have incorporated the technology into tools for pornographic deepfakes. Under the European Council’s proposal, Stable Diffusion’s developers could be held responsible for the deepfakes.

According to Oren Etzioni, CEO of the Allen Institute for AI, the proposal creates a climate in which developers are afraid to work on open-source AI projects. “Open source developers should not be subject to the same burden as those developing commercial software”, Etzioni told TechCrunch.

“Consider the case of a single student developing an AI capability; they cannot afford to comply with EU regulations and may be forced not to distribute their software, thereby having a chilling effect on academic progress and on reproducibility of scientific results.”

Two sides

Mike Cook, an AI researcher at Knives and Paintbrushes, sees things differently. According to Cook, setting a universal standard encourages developers to work with integrity. “The fearmongering about ‘stifling innovation’ comes mostly from people who want to do away with all regulation and have free rein, and that’s generally not a view I put much stock into”, Cook said.

“I think it’s okay to legislate in the name of a better world, rather than worrying about whether your neighbour is going to regulate less than you and somehow profit from it.”

To be continued

Brussels will finalize the AI Act in the coming months. The bill is expected to be presented for a vote this fall. There is no guarantee that the European Council’s proposal will find its way into the final act.