The soap opera surrounding OpenAI continues apace. Shortly after yet another round of executive departures, former board member Helen Toner has spoken about the mess the AI company brought on itself. The conversation also covers the reasons behind CEO Sam Altman's dismissal.
Helen Toner was one of the board members who pushed for the firing of CEO Sam Altman in late 2023. That decision was reportedly influenced by the fact that Altman had never informed the board about his prominent position within the OpenAI Startup Fund, over which he had full control. After a brief power struggle, Altman returned; Toner and other directors left the company soon after.
Learned about it on Twitter
However, the turmoil within OpenAI had been going on for much longer. Not even the board of directors knew about ChatGPT’s sudden launch in advance. “When ChatGPT came out in November 2022, the board was not informed in advance about that. We learned about ChatGPT on Twitter,” Toner tells The TED AI Show podcast.
The chatbot proved to be a hit, demonstrating to a global audience what generative AI has to offer. ChatGPT became the fastest-growing app of all time and now lets users hold a “natural” conversation with the chatbot, discuss images and sift through documents. Its core technology fuels Microsoft’s gargantuan Copilot effort, which aims to bring AI functionality to every Windows machine.
Tip: ChatGPT now talks in real-time, new GPT-4o model available for free
While Altman and his inner circle understandably could not have known just how successful ChatGPT would be, Toner’s revelation is telling. OpenAI constantly stresses the importance of safe AI development and calls for clear legislation to ensure it, yet it contributes to a climate in which AI applications are shipped at lightning speed. Toner further indicates that Altman misinformed the board on multiple occasions about the safety processes OpenAI used. That, too, reportedly influenced the decision to oust him.
Altman described as a manipulator
During the show, it becomes clear that Altman knows how to manipulate people. Toner recounts a personal confrontation with the CEO, reportedly triggered by the publication of one of her research papers. “Sam began lying to other board members in an attempt to push me off the board.” Toner was reportedly not the only one at OpenAI with such experiences. She says two executives told the board about a toxic work environment, accusing Altman of “psychological abuse.” The executives were able to substantiate the allegations with evidence, which proved enough for the board to take action.
The CEO’s dismissal came out of the blue. According to Toner, that was exactly the intention: “It was very clear to all of us that as soon as Sam had any suspicion that we were going to do anything against him, he would pull out all the stops, do everything in his power to undermine the board, to prevent us from even getting to the point where we could fire him.”
Security initiative
Two weeks ago, OpenAI co-founder Ilya Sutskever left as well. A pioneer of generative AI, he had served as Chief Scientist since 2018, putting him in charge of developing OpenAI’s new AI models.
Both the exodus of higher-ups and criticism from former employees seem to have forced OpenAI into action. Yesterday, the company announced that it has established a “Safety and Security Committee.” Within the next 90 days, this committee will develop recommendations on AI safety. It is unlikely to criticize the decision to leave the OpenAI board in the dark about ChatGPT, however.
After all, the committee consists only of OpenAI insiders who survived the power struggle of late 2023. Sam Altman is joined by chairman Bret Taylor and fellow board members Adam D’Angelo and Nicole Seligman.
In passing, OpenAI confirmed that it has begun training its newest frontier model, presumably GPT-5. This continues the path toward “AGI,” or artificial general intelligence: OpenAI hopes, in time, to develop an intelligence that surpasses the capabilities of the human brain. All the more reason to look closely at AI safety, although the new committee is simply a self-policing body without any outside oversight.
Update 09:50 am by Laura Herijgers. Added more context on Sam Altman’s firing.