
The development of AI models more powerful than OpenAI’s GPT-4 should be paused for six months, and that pause should be used to develop and implement safety protocols. That is the demand of more than a thousand AI experts and other signatories, including Elon Musk and Steve Wozniak, in an open letter.

The open letter, published by the nonprofit Future of Life Institute, calls for a six-month pause in the development of advanced AI models. During this period, experts should develop safety protocols to ensure that new AI models far more powerful than the current GPT-4 cannot pose a risk to society and humanity at large.

More powerful AI models should be designed and put into production only once it is established that their effects will be positive and their risks manageable. The safety protocols for these more powerful AI models should be developed, implemented and audited by independent outside experts.

Threats posed by AI

In the letter, the signatories point to several threats posed by AI models that compete with human intelligence. Such models could flood information channels with propaganda and disinformation. They could automate away many jobs and, more philosophically, outsmart humans, make them obsolete and eventually “replace” them.

According to the signatories, the question arises whether training very powerful AI models could cause humanity to lose control of its own civilization.

What measures are needed?

The signatories propose several measures to counter these risks. In addition to developing, implementing and overseeing safety protocols, AI developers must also work with regulators to build robust AI governance systems. At a minimum, this requires the creation of new, well-equipped regulators for AI. There also needs to be oversight and tracking of highly capable AI systems and large “pools of computing power”, to help monitor such systems.

Furthermore, AI systems should carry watermarks to help distinguish real content from synthetic content. There also needs to be a robust auditing and certification system, and clarity on who is legally liable for damage caused by AI systems. Last but not least, there should be proper funding for technical research on AI safety, and research institutes should be created to help counter the economic and political disruption AI may cause.

Elon Musk’s and Steve Wozniak’s signatures

The open letter was signed by several tech and AI experts, including, quite remarkably, Elon Musk, who is himself no stranger to developing artificial intelligence. Apple co-founder Steve Wozniak also signed the letter, as did CEOs of AI companies and researchers from Alphabet subsidiary DeepMind.
