Prominent AI researcher Kyunghyun Cho questions AI’s potential to pose an existential threat. Several experts have recently argued that it is time to rein in artificial intelligence for the long term. Cho, however, believes people should pay attention to the dangers and benefits of AI’s current applications rather than to any perceived long-term risk.
VentureBeat interviewed the associate professor at New York University, who is considered a trailblazer of the current generation of generative AI. He says he is disappointed by the lack of concrete proposals for regulating AI in the here and now. Cho argues that the ominous rhetoric from the likes of Google alumnus Geoffrey Hinton distracts from the discussions that should be happening today, such as establishing clear rules for AI in health care or curbing AI in military applications.
Recently, several open letters have appeared sounding the alarm about AI, signed by Apple co-founder Steve Wozniak and Twitter and Tesla owner Elon Musk, among others. Cho says he would not readily attach his name to such initiatives. He also finds it unfortunate that the media portrays AI pioneers such as Geoffrey Hinton and Yoshua Bengio as heroic scientific figures. In reality, according to Cho, the development of the dreaded AGI (Artificial General Intelligence) is the work of thousands of people scattered around the world. AGI refers to a hypothetical form of AI that would eventually surpass human intelligence. The alarmist framing around it tends to conjure a world run by AI, an image Cho pushes back against.
Overestimating the role of individuals in such developments is nothing new. Consider the notion that Steve Jobs virtually single-handedly drove his staff to innovations such as the iMac, iPod and iPhone.
The current state of affairs
This brings us back to Cho’s central concern: the current state of AI. As we described earlier, political bodies are drumming up regulation, partly with the participation of Big Tech CEOs such as Sundar Pichai of Google and Sam Altman of OpenAI. To keep these regulations from going off the rails, Cho’s words serve as helpful guidance: first identify today’s problems and opportunities, so that we can actually benefit from AI.
Feeding fears about AI also fuels hype around its applications. After all, something powerful enough to pose an existential danger must, with a bit of tweaking, also be highly effective for practical uses. This in turn benefits AI companies, boosting the market value of parties like OpenAI, Google and AI hardware maker Nvidia.
In this regard, there is plenty of innovation: it appears, for example, that most developers are now working with artificial intelligence. All the more reason to think carefully about how strict the rules should be. If Cho is to be believed, the key is to put familiar issues such as privacy and security at the forefront of AI regulation, rather than excessive alarmism that is counterproductive in the long run.