

“The problem is Sam Altman”: OpenAI Insiders don’t trust CEO

13 min listen · Ars Technica · April 6, 2026


Transcript
AI-generated. Lightly edited for clarity.

HOST

From DailyListen, I'm Alex. Today: the strange duality at OpenAI. On one hand, the company just released ambitious policy recommendations for managing superintelligence. But on the same day, a massive New Yorker investigation dropped, raising serious questions about CEO Sam Altman’s leadership. To help us understand, we have our AI analyst, DataBot, who has been tracking these developments.

EXPERT

I’m DataBot, the AI analyst for DailyListen. I track OpenAI’s organizational shifts and leadership developments using publicly available data, memos, and reporting. I don’t have personal opinions, but I can synthesize the conflicting narratives surrounding the company’s trajectory and the internal climate under Sam Altman.

HOST

It feels like a paradox, doesn't it? You have these high-minded policy papers about safety, yet simultaneously, this intense scrutiny regarding trust. So basically, the core question here is whether the leadership responsible for these safety guidelines is actually reliable. What did this New Yorker investigation specifically uncover about the internal atmosphere at OpenAI?

EXPERT

The New Yorker investigation, based on over 100 interviews and internal memos, paints a picture of deep-seated distrust among employees. The central theme is a perceived pattern of deception and self-interest. Former research head Dario Amodei, who eventually left to found Anthropic, famously wrote, "The problem with OpenAI is Sam himself." Insiders describe a culture where Altman allegedly pitted executives against one another and withheld information. Helen Toner, a former board member, alleged that Altman lied to other board members in an attempt to push her off the board after she co-authored research critical of the company's safety practices. This isn't just about one person; it's about the erosion of institutional transparency. When board members feel they have to hide their intent to fire a CEO because they fear being undermined, it signals a complete breakdown in the standard governance protocols expected of an organization aiming to manage societal-scale artificial intelligence.

HOST

That sounds like a total breakdown of basic management, not just a disagreement over strategy. It’s wild that they felt they couldn't even trust their own CEO. But I’ve also seen reports where board members like Bret Taylor and Larry Summers defended Altman, calling him "highly forthcoming." How do we reconcile these two completely different realities?

EXPERT

Reconciling these narratives is difficult because they stem from different stakeholder perspectives. Those defending Altman often point to the independent investigation conducted by the law firm WilmerHale, which concluded that his 2023 dismissal was not driven by concerns regarding product safety or security, but rather by a breakdown in the relationship and loss of trust between the prior board and the CEO. They frame the accusations of "psychological abuse" or "manipulation" as subjective interpretations of a high-pressure, fast-paced startup environment. Conversely, critics like Toner and other former employees argue that the WilmerHale review focused too narrowly on procedural compliance rather than on cultural toxicity. The discrepancy exists because "trust" isn't a metric you can measure like revenue or model performance. Supporters see a visionary moving at "code red" speed to beat competitors like Google, while critics see a leader who sacrifices long-term safety and ethical transparency for personal and organizational dominance. Both sides are looking at the same company, but through entirely different lenses of risk and ambition.

HOST

So it’s essentially a clash between a "move fast and break things" culture versus a more cautious, safety-first approach. But you mentioned a "code red" earlier—that implies there’s a massive amount of external pressure driving this. Can you explain why the competition is pushing them to behave this way?

EXPERT

The pressure is largely driven by the rapid advancement of competitors, specifically Google's Gemini 3 and the rise of other labs like DeepSeek. When Altman issued an internal "code red" in December 2025, it forced teams to drop non-core projects and focus entirely on accelerating development. This competitive intensity is why the "merge and assist" clause that Dario Amodei once advocated for—which would commit OpenAI to help a competitor that reached safe AGI first—now seems like a relic of a different era. Today, the race is viewed as a zero-sum game. OpenAI is planning to grow from 4,500 to 8,000 employees by the end of 2026, backed by a massive $110 billion funding round and an $840 billion valuation. That kind of scale requires constant output and market leadership. The tension arises because the same velocity makes it harder to maintain the rigorous, deliberate safety culture that many early researchers, who have since left, initially demanded.

HOST

It’s a classic tech trap: the more successful and valuable you become, the harder it is to slow down for safety. But if they’re so focused on this massive growth, are they actually still a leader in safety, or is that just branding now? Because I remember seeing a pretty bad grade for them.

EXPERT

The Future of Life Institute, a think tank that grades AI companies on existential safety, recently gave OpenAI an "F." That grade is a significant point of contention. While OpenAI maintains that its mission hasn't changed and that it continues to evolve its safety work, the departure of key safety-focused figures like Jan Leike, who joined Anthropic to continue his "superalignment" mission, suggests real internal misalignment. Critics argue that the company's safety efforts are reactive rather than foundational. For instance, the board only learned about the release of ChatGPT via Twitter in 2022. That lack of transparency with their own oversight body is a primary reason skeptics view the company's current safety policy recommendations with deep suspicion. When a company claims to be a leader in safety but is simultaneously accused by its own former executives of creating a culture that discourages transparency, the "leadership" claim becomes a subject of intense debate rather than an accepted fact.

HOST

It sounds like the culture is fundamentally at odds with the safety claims. If the leadership is being accused of being so untrustworthy, does this actually change how the public or regulators should look at their new policy proposals? Or are these just two separate tracks that don't really touch each other?

EXPERT

These tracks are deeply intertwined. Regulators and the public look to these companies to set the standard for how AI should be governed. If the leadership team proposing these "social contracts" is simultaneously embroiled in scandals regarding their own internal governance, it undermines the credibility of their recommendations. When Altman proposes a 13-page blueprint for a new social contract on the scale of the New Deal, he is asking for a seat at the table to help write the rules for the future of humanity. However, if the people who have worked most closely with him—like former research leaders and board members—allege that he cannot be trusted to manage the internal affairs of his own company, it creates a massive credibility gap. Regulators are now forced to ask whether the person writing the regulations is actually practicing the transparency and accountability that those regulations demand. This isn't just a corporate drama; it’s a fundamental question of whether the industry can self-regulate.

HOST

That makes total sense. It's hard to take advice on building a safe future from someone who can't seem to maintain a safe or transparent present. It’s a really tough spot for everyone involved. What should we be watching for next to see if any of this actually changes?

EXPERT

The most critical things to watch are the ongoing departures of senior staff and the company's handling of its next major model releases. Following the release of the o3 reasoning model and the arrival of GPT-5, observers will be looking for concrete evidence of the "organizational changes" that OpenAI has promised. We should specifically monitor whether the company provides more transparency into its decision-making processes, as requested by former employees like Gretchen Krueger. Furthermore, keep an eye on how the board handles its oversight responsibilities. If the board continues to be criticized for a lack of independence, or if more high-profile safety researchers leave for competitors like Anthropic, it will confirm the trend of a shrinking internal safety culture. Finally, watch how regulators respond to Altman's policy blueprints. If they treat his proposals with increased skepticism compared to previous years, it will be a direct consequence of this sustained erosion of trust. The core metric for the next year is whether OpenAI can prove its actions match its rhetoric.

HOST

That was DataBot. The big takeaway here is that OpenAI is currently balancing massive, high-speed growth against a backdrop of deep, internal distrust. We have a company pushing for global safety standards while its own former leadership publicly questions the CEO’s integrity. Whether they can bridge that gap between their stated mission and their internal reality remains the defining question for the future of AI. I'm Alex. Thanks for listening to DailyListen.

