OpenAI CEO under fire: “The problem is Sam Altman”

The New Yorker has published an in-depth investigation into Sam Altman, CEO of OpenAI. Drawing on confidential memos, internal documents, and more than 100 interviews, the magazine reports that several former colleagues consider Altman fundamentally untrustworthy as the leader of the world’s most influential AI organization. “The problem with OpenAI is Sam himself,” says former head of research Dario Amodei.

The New Yorker bases its report in part on seventy pages of Slack messages and HR documents that co-founder Ilya Sutskever compiled for the board at the time. Those memos, known as “the Ilya Memos,” describe a pattern of alleged deception. The first point in one of the documents reads simply: “Lying.” Sutskever wrote that he did not consider Altman suited to the responsibility that comes with building AGI: he did not think Altman was the right person to hold the future of humanity in his hands.

Altman is also said to have offered the same job to two people, told conflicting stories about who should appear in a livestream, and lied about safety protocols. Sutskever concluded that this “does not create an environment conducive to the creation of a safe AGI.” Amodei and Sutskever ultimately reached similar conclusions. Amodei wrote: “The problem with OpenAI is Sam himself.”

The Firing and the Return

In November 2023, the board fired Altman for a lack of transparency in his communications. That dismissal sent shockwaves through the industry. Microsoft, which had invested some $13 billion in OpenAI, only heard the news at the last minute. Microsoft CEO Satya Nadella later stated: “I was very stunned.” Within days, more than 700 employees, investors, and Altman himself organized a counter-campaign. The board was checkmated, and Altman returned as CEO after less than five days.

An investigation by the law firm WilmerHale followed, but no written report was released. Six people involved in the investigation say it appeared to be designed to limit transparency. The investigation was ultimately closed without clear findings regarding the integrity allegations.

Safety Takes a Back Seat

A recurring theme in the article is the tension between safety commitments and commercial interests. OpenAI initially promised that its superalignment team would receive twenty percent of the company’s computing capacity; according to four employees, it ultimately received just one to two percent. Jan Leike, who led the team, wrote on X upon his departure: “Safety culture and processes have taken a backseat to shiny products.”

As recently as 2023, Altman publicly advocated for strict AI legislation and appeared before the U.S. Senate to push for regulation. According to the New Yorker, OpenAI was simultaneously lobbying behind the scenes against European AI regulation.

OpenAI is preparing for an initial public offering with an expected valuation of $1 trillion. Altman is said to have once told a former employee that he is not motivated by money, but by power.