
Inside the Deposition That Showed How OpenAI Nearly Destroyed Itself



In short

  • Ilya Sutskever prepared a 52-page case against Sam Altman based almost entirely on unverified claims from a single source: CTO Mira Murati.
  • OpenAI came within days of merging with competitor Anthropic during the crisis, with board member Helen Toner arguing that destroying the company could be “mission-consistent”.
  • The board was “rushed” and “inexperienced,” according to Ilya himself, who had been planning Altman’s removal for at least a year while waiting for favorable board dynamics.

Ilya Sutskever sat through nearly 10 hours of video testimony in the Musk v. Altman lawsuit on October 1 of this year.

The co-founder who helped build ChatGPT and became infamous for voting to fire Sam Altman in November 2023 was finally put under oath and forced to answer. The 365-page transcript was released this week.

What it reveals is a picture of brilliant scientists making catastrophic governance decisions, unverified allegations treated as fact, and ideological divisions so deep that some board members preferred to destroy OpenAI rather than let it continue under Altman’s leadership.

The Musk v. Altman lawsuit centers on Elon Musk’s claim that OpenAI and its CEO, Altman, have betrayed the company’s original non-profit mission by turning its research into a for-profit enterprise aligned with Microsoft, raising questions about who controls advanced AI models and whether they can be safely developed in the public interest.

For those following the OpenAI drama, the transcript is an eye-opening and damning read. It’s a case study in how things go wrong when technical genius meets organizational incompetence.

Here are the five most important revelations.

1. The 52-page dossier that the public has not seen

Sutskever wrote an extensive case to remove Altman, complete with screenshots, and organized into a 52-page brief.

Sutskever testified that he explicitly said in the memo, “Sam exhibits a consistent pattern of lying, undermining his executives, and setting his executives against each other.”

He sent the memo to the independent directors using disappearing-email technology “because he was concerned that those memos would somehow leak.” The full document was never produced in discovery.

“The context of this document is that the members of the independent board asked me to prepare. And I did. And I was very careful,” Sutskever testified, saying that part of the memo exists in screenshots taken by OpenAI CTO Mira Murati.

2. A year-long game of board chess

When asked how long he thought Altman would be gone, Sutskever replied, “At least a year.”

Asked what dynamics he expected, he said: “That most of the board is obviously not friendly with Sam.”

A CEO who controls the composition of the board is functionally untouchable. Sutskever’s testimony shows that he fully understood this and that he adapted his strategy accordingly.

When board-member departures created that opening, he moved. He had been playing long-term board politics, despite how close Altman and Sutskever appeared publicly.

3. The weekend OpenAI almost disappeared

On Saturday, November 18, 2023 – within 48 hours of Altman’s firing – there were active discussions about merging OpenAI with Anthropic.

Helen Toner, a former OpenAI board member, was “the most supportive” of this direction, according to Sutskever.

If the merger had occurred, OpenAI would have ceased to exist as an independent entity.

“I don’t know if it was Helen who got to Anthropic or if Anthropic got to Helen,” Sutskever said. “But they came with a proposal to merge with OpenAI and take over their leadership.”

Sutskever said he was “very unhappy about it,” adding later that he “really didn’t want OpenAI to merge with Anthropic.”

4. “The destruction of OpenAI could be consistent with the mission”

When OpenAI executives warned that the company would collapse without Altman, Toner responded that destroying OpenAI could be consistent with its safety mission.

This is the ideological core of the crisis. Toner represented a strand of AI safety thought that sees rapid AI development as existentially dangerous — potentially more dangerous than no AI development at all.

“The executives — it was a meeting with the board members and the executive team — the executives told the board that if Sam doesn’t come back, OpenAI will be destroyed, and that’s inconsistent with OpenAI’s mission,” Sutskever said. “And Helen Toner said something to the effect that it’s consistent, but I think she said it even more directly than that.”

If you truly believed OpenAI posed risks that outweighed its benefits, then a pending employee revolt was irrelevant. The statement helps explain why the board held firm even as more than 700 employees threatened to walk out.

5. The miscalculations: a single source, an inexperienced board, and cult-like workforce loyalty

Almost everything in Sutskever’s 52-page memo came from one person: Mira Murati.

He did not verify the claims with Brad Lightcap, Greg Brockman, or other executives named in the complaints. He trusted Murati completely, and verification “did not happen to (him).”

“I fully believed the information Mira was giving me,” Sutskever said. “In retrospect, I realize I didn’t know it. But then, I thought I knew it. But I knew it through secondary knowledge.”

When asked about the board’s process, Sutskever was clear about what went wrong.

“One thing I can say is that the process was rushed,” he testified. “I think it was rushed because the board wasn’t experienced.”

Sutskever also expected OpenAI employees to be indifferent to Altman’s removal.

When 700 of 770 employees signed a letter demanding Altman’s return and threatening to leave for Microsoft, he was genuinely surprised. He had fundamentally miscalculated the loyalty of the workforce and the board’s isolation from organizational reality.

“I didn’t expect them to cheer, but I didn’t expect them to feel strong either way,” Sutskever said.

