Reports about the potential dangers of AI are coming thick and fast. We often hear these warnings from politicians and ethicists, but they’re far from alone. Strikingly, it’s the CEOs of Google and OpenAI who can be heard expressing their wish for AI to be regulated. Both companies are leaders in the current generative AI race. It was recently revealed that Sundar Pichai is actively involved in drafting AI regulations at an EU level. Should we really want these two AI developers to be active participants in regulating their own playing field?
At first glance, it seems obvious to involve parties such as Google and OpenAI when drafting supranational regulations. After all, they constitute the highest concentrations of AI expertise on the planet, and they will be the first to know about emerging developments in the field. Still, it’s crucial that policymakers aren’t overly swayed and keep a respectful distance from them. They can still fire questions at ethicists and industry alumni, such as ‘godfather of AI’ Geoffrey Hinton, who recently left Google. In his particular case, there’s good reason to think he’ll have a lot to say to regulators about the dangers of AI.
We can expect some resistance to legislation from companies focused on AI. This is nothing out of the ordinary, as just about every economic sector prefers to be regulated as lightly as possible. To ensure innovation remains possible, prominent figures such as OpenAI CEO Sam Altman are pushing for a loose interpretation of AI regulations regarding privacy and copyright. AI applications are bursting at the seams with assurances about data protection, and chatbots are quick to downplay their own capabilities thanks to careful moderation by their makers. For the end user, the message from Big AI is: give us the benefit of the doubt. After all, we have no easy way of seeing what actually happens to our chatbot prompts. This is in stark contrast to, say, Microsoft’s cloud offerings, where customers know far more about the strict rules the tech giant has to abide by.
Input-output
Above all, let’s not give the tech industry a free pass when it comes to privacy concerns. The EU eagerly dishes out fines to Meta and other Big Tech firms, and these companies have a pretty underwhelming track record in this area, to put it nicely. When it comes to AI, how can this issue be properly handled?
In and of itself, it’s not actually all that complex. Requiring transparency about the datasets in play and about the transfer of user data would clarify how generative AI models arrive at their answers. That way, privacy issues and copyright violations can be avoided. Indeed, with so many third parties as potential buyers of AI tools, Big Tech itself stands to benefit from being held to clear standards.
OpenAI no longer tells us how many parameters its GPT-4 LLM (large language model) uses. That’s not really a problem in itself; the actual complexity of the model isn’t captured by such specs anyway, and too much focus on these details gets us nowhere. Privacy and copyright violations can be limited if we know what enters the AI’s knowledge base. That kind of transparency does not, however, address the larger fears over AI development.
Dual personality
This is where the positions of both Sundar Pichai and Sam Altman appear to be conflicted. On the one hand, they’re busy advocating overarching AI rules; on the other, they’re keen to stress the need for broad latitude in AI development. The versions of Pichai and Altman we hear on podcasts are quick to point out the potential hazards of the technology. However, in their day-to-day CEO roles they are ardent supporters of greenlighting AI advancements, clearly feeling the need to keep up in the race for an ever-improving artificial intelligence. This explains their apparent dual personalities: they’re talking about two different issues. One is AI tech being profitable and potent in the here and now; the other is the future capabilities of an all-conquering intelligence that could scare the living daylights out of humanity.
Google and OpenAI essentially find themselves on the side of politicians and citizens when it comes to the long-term risks of AI. They will want to studiously avoid being labelled a threat to society. Yet the AI arms race forces their hand: every step they take to advance AI brings us closer to a point of no return. Wait too long with regulation, and that is where the road leads. At least, that’s what those in the know tell us. What’s the solution?
Principles
AI’s complexity can lead to regulatory paralysis. How can you tame what you don’t understand? To tackle the problem, we need a change of approach. We need to think about what manifestation of AI is desirable to us all: one that assists us, can be kept in check and behaves as a reliable automaton wherever that applies. AI takes us a step beyond the automation we already see in car assembly or vacuum cleaners. However, AI is hard to predict, just like humans are.
Laws keep us from indulging our worst tendencies. Reckless driving, the spread of misinformation and outright scamming are all meant to be curbed by law. AI cannot simply abide by those same rules, but it does come a step closer to human behaviour than earlier technology. It’s important for Big Tech to know who will be responsible for the negative effects of AI in the future, if only to have a product it can keep selling. These companies will be eager to blunt the threat of open-source AI by pushing for rules that only they can realistically comply with. The open-source community ensures the democratization of technology. That sounds like a positive, and in many cases it certainly is. As far as AI is concerned, the open-source route may even provide a degree of self-regulation through the openness of the code. However, we may end up at a point where no one can be held accountable when an open-source AI model enables undesirable applications.
For both Big Tech and open source to have any direction, AI principles need to be put in place. Those principles will remain fluid when it comes to specific applications around privacy and copyright. We don’t really need parties like OpenAI and Google to draw them up. However, we may do well to sound out how they view the broader issues surrounding the containment of AI.