Ilya Sutskever wants to do what OpenAI can’t: develop superintelligence safely

Ilya Sutskever has once again founded his own AI company. He previously co-founded OpenAI, the company known for its chatbot ChatGPT. Last month, he resigned there without giving a clear reason. The founding of Safe Superintelligence Inc. now seems to reveal what that reason must have been.

Sutskever did not share a reason for leaving OpenAI. What is clear now is that he remains active in AI development, a field in which he has already made a major name for himself: he played a key role in the development of generative AI and in accelerating the training process of LLMs.

So it is a good thing that the “special genius,” as OpenAI CEO Sam Altman described his former colleague, remains active in the AI world. Incidentally, OpenAI was not Sutskever’s first project. In 2012, three years before co-founding OpenAI, he set up DNNResearch. It took only a few months for that company to catch Google’s attention, and Sutskever’s company was acquired. With that acquisition, Sutskever became a research scientist at Google, a tech company that was investing heavily in AI development early on.

Reclaiming control

This is not the first time Sutskever has started over to reclaim control he sees slipping away as a company expands. Within OpenAI, this was illustrated by the tumultuous weekend in which Altman was ousted as CEO. The move was initially supported by Sutskever, who had clashed with Altman many times in the past over OpenAI’s strategy. Sutskever later backtracked and said he regretted his participation. His sudden reversal did not save him: after Altman’s dismissal was rescinded, it was the board of directors, with Sutskever as the only founder remaining on it, that had to make room.

The weekend made clear just how much value Altman holds within OpenAI. Microsoft, OpenAI’s largest partner, immediately offered help and asked Altman to lead an advanced AI research team. The staff’s opinion spoke for itself: 90 percent of employees sided against the decision, sending the board a letter threatening to resign unless Altman returned and the board itself stepped down.

It became apparent that Altman can no longer be sidelined within OpenAI. The company’s employees, shareholders, and partners will not allow it, and even as a public figure, Altman has taken on a role that is inextricably linked to OpenAI.

Also read: OpenAI considers for-profit structure to protect CEO Sam Altman

Safe Superintelligence Inc.

Sutskever can now start over with a new project. In doing so, he is likely trying to do right what he believes OpenAI is doing wrong. His new company, Safe Superintelligence Inc., will focus on creating the superintelligence its name refers to, a form of artificial general intelligence (AGI), with an emphasis on developing this technology safely.

The company also includes Daniel Gross and Daniel Levy. Gross previously led AI initiatives at Apple and is known primarily as an entrepreneur. Levy joins from OpenAI, where he had been part of the technical team since March 2022.

Little by little, OpenAI’s position is crumbling

The company’s website further explains its mission: “Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical problem of our time.” That statement has two notable aspects that seem to offer insight into what is going on within OpenAI. Is superintelligence within reach because OpenAI is close to a major breakthrough? And are developments there progressing unsafely, possibly without regard for the risks of AI as smart as humans?

On the subject of safety, OpenAI has recently come under fire. Whenever Altman speaks publicly, however, nothing seems to be wrong within OpenAI. He regularly speaks to legislators, for example, and invariably advocates policies that prioritize safety and hold AI developers to rules. People who are or were involved in OpenAI’s work, in turn, describe a culture in which the pace of releasing AI tools wins out over safe development. In response to increasingly loud public criticism, the company established a safety committee. Its makeup, however, led by Altman and filled with OpenAI insiders who were not removed from the board after the CEO’s return, suggests it is little more than an empty shell.

That Sutskever is putting his exceptional talent for AI development to work in a company that takes the risks of that development seriously strikes us as a good thing. But since the company rejects the commercial pressures of AI development, it remains to be seen when Safe Superintelligence Inc. will actually get to a product release.

Also read: OpenAI board learned of ChatGPT launch through Twitter