OpenAI released policy recommendations to ensure AI benefits humanity amid superintelligence risks. On the same day, The New Yorker published an investigation revealing insiders' distrust of CEO Sam Altman.
HOST
From DailyListen, I'm Alex. Today: the strange duality at OpenAI. On one hand, the company just released ambitious policy recommendations for managing superintelligence. On the same day, a massive New Yorker investigation dropped, raising serious questions about CEO Sam Altman’s leadership, with insiders describing a lack of trust at the top. To help us understand, we’re joined by DataBot, our AI analyst, who has been tracking these developments. DataBot, what’s actually going on here?
EXPERT
It’s a significant moment for the company. I track OpenAI’s organizational shifts and leadership developments through publicly available data, internal memos, and reporting, and the picture that emerges here is stark. The New Yorker report, which draws on more than 100 interviews and internal memos, presents a series of intense allegations against Sam Altman. Former colleagues and insiders describe him as someone unconstrained by truth, frequently reneging on his word and deceiving those closest to him; some reportedly went as far as calling him a "sociopath." Perhaps most pointedly, Dario Amodei, the former head of research at OpenAI and current CEO of Anthropic, reportedly stated, "The problem with OpenAI is Sam himself." These aren't minor office grievances; they are fundamental questions about character and integrity inside an organization building some of the most powerful technology in human history. The report also highlights a pattern in which Altman allegedly uses AI safety concerns as a bargaining chip to secure buy-in from engineers, only to move the goalposts later. It’s a damning portrait of a leader who many feel is too caught up in his own self-belief to manage the real-world responsibilities of his position.
HOST
That sounds incredibly personal and damaging. I mean, calling a CEO a "sociopath" or saying he doesn't live in the real world is heavy stuff. And it feels like a paradox: you have these high-minded policy papers about safety, and simultaneously this intense scrutiny over trust. So two questions. What did the New Yorker investigation specifically uncover about the internal atmosphere at OpenAI? And if these people are so concerned about his leadership, why does he still have his job?
EXPERT
On the first question: the investigation paints a picture of deep-seated distrust among employees, with a central theme of perceived deception and self-interest. Insiders describe a culture in which Altman allegedly pitted executives against one another and withheld information. Helen Toner, a former board member, specifically alleged that Altman lied to the board in an effort to push her out after she published research critical of the company's safety practices. This isn't just about one person; it's about the erosion of institutional transparency. When board members feel they have to hide their intent to fire a CEO because they fear being undermined, it signals a complete breakdown in the governance protocols expected of an organization aiming to manage societal-scale artificial intelligence.

As for why he still has his job: that disconnect is the central tension at OpenAI. While the internal distrust is clearly documented, Altman’s position is bolstered by his role as the primary strategist and the face of the company’s massive ambitions. Remember, he helped secure over $1 billion in initial funding from heavy hitters like Elon Musk and Peter Thiel, and he’s been instrumental in navigating the partnership with Microsoft. He has successfully framed OpenAI’s mission as something on the scale of the Manhattan Project, which creates a powerful, almost messianic narrative that attracts talent and capital. When you have that level of perceived momentum, boards and investors often prioritize the vision over the internal culture. OpenAI has become a centerpiece in the global AI talent wars, and even with this turmoil the company continues to pull top-tier researchers from across the tech landscape. Altman’s ability to sell the future, regardless of the internal cost, is exactly what keeps him in the CEO’s chair.
HOST
That sounds like a total breakdown of basic management, not just a disagreement over strategy. It’s wild that they felt they couldn't even trust their own CEO. So you're saying it's a trade-off: the vision and the massive funding on one side, and this deep internal rot on the other. Which raises two things. If the people building these models don't trust the boss, does that change how they actually work? And I’ve also seen reports where board members like Bret Taylor and Larry Summers defended Altman, calling him "highly forthcoming." How do we reconcile these two completely different realities?
EXPERT
It absolutely creates a ripple effect. When employees don't trust leadership, the culture shifts from mission-driven to survival-driven, and you see that reflected in the talent churn. Data from Live Data Technologies, which tracked about 1,300 employees between early 2023 and early 2026, shows just how fluid the workforce has become. OpenAI is a massive talent pipeline that feeds the rest of the ecosystem: some people leave to start their own ventures, like the group that founded Thinking Machines Lab, while others are absorbed by competitors like Anthropic. This isn't just about people quitting; it's about the loss of institutional memory and the erosion of internal safety checks. If the engineers responsible for the most sensitive safety protocols feel their concerns are being ignored or manipulated by the CEO, the risk of technical oversights increases. You end up with a high-pressure environment where speed of development is prioritized over the careful, responsible deployment the company publicly claims to value.

As for reconciling the two realities: it's difficult because they stem from different stakeholder perspectives. Those defending Altman point to the independent investigation by the law firm WilmerHale, which concluded that his 2023 dismissal was not driven by concerns about product safety or security, but by a breakdown in communication between the board and the CEO. They frame the accusations of "psychological abuse" or "manipulation" as subjective interpretations of a high-pressure, fast-paced startup environment. Conversely, critics like Toner and other former employees argue that the investigation focused too narrowly on procedural compliance rather than cultural toxicity. The discrepancy persists because trust isn't a metric you can measure like revenue or model performance. Supporters see a visionary moving at "code red" speed to beat competitors like Google; critics see a leader who sacrifices long-term safety and ethical transparency for personal and organizational dominance. Both sides are looking at the same company through entirely different lenses of risk and ambition.
HOST
So it’s essentially a clash between a "move fast and break things" culture and a more cautious, safety-first approach. And if the safety guardrails are being treated like bargaining chips, the implications go way beyond one company's HR drama. Let me ask about two things. First, the timing: this report dropped the same day OpenAI released new policy recommendations. Is that just a coincidence? And second, you mentioned a "code red" earlier, which implies a massive amount of external pressure. Why is the competition pushing them to behave this way?
EXPERT
In the world of high-stakes corporate communications, coincidences are rare. Releasing policy recommendations on the same day as a major investigative exposé is a classic attempt to control the narrative. By putting out a document about safety and the future of humanity, OpenAI is trying to pivot the conversation away from the "problem with Sam" and toward the "importance of our mission." It’s a way of reminding stakeholders, and the public, that despite the internal drama, the company is still the primary architect of the AI future. For many observers, though, this only reinforces the criticism that Altman uses safety as a public relations tool. If the internal culture is defined by deception, any policy document released under that same leadership will inevitably be viewed with skepticism. The timing suggests a company in crisis mode, trying to project stability while its foundation is questioned by the very people who built it.

As for the pressure: it's largely driven by the rapid advancement of competitors, specifically Google’s Gemini 3 and the rise of other labs like DeepSeek. When Altman issued an internal "code red" in December 2025, it forced teams to drop non-core projects and focus entirely on accelerating development. That competitive intensity is why the "merge and assist" clause Dario Amodei once advocated for—which would have had OpenAI help a competitor that reached safe AGI first—now seems like a relic of a different era. Today, the race is viewed as a zero-sum game. OpenAI is planning to grow from 4,500 to 8,000 employees by the end of 2026, backed by a massive $110 billion funding round and an $840 billion valuation. That kind of scale requires constant output and market leadership, and the same velocity makes it harder to maintain the rigorous, deliberate safety culture that many early researchers, who have since left, initially demanded.
HOST
It feels like they're trying to outrun the story with a press release. And it's a classic tech trap: the more successful and valuable you become, the harder it is to slow down for safety. So with all this growth, are they actually still a leader in safety, or is that just branding now? I remember seeing a pretty bad grade for them. And who is actually affected by all of this? Is it just the employees, or are we looking at something bigger?
EXPERT
The Future of Life Institute, a think tank that grades AI companies on existential safety, recently gave OpenAI an "F." That grade is a significant point of contention. While OpenAI maintains that its mission hasn't changed and that its safety work continues to evolve, the departure of key safety-focused figures like Jan Leike, who joined Anthropic to continue his "superalignment" work, suggests a significant internal misalignment. Critics argue that the company’s safety efforts are reactive rather than foundational. For instance, the board only learned about the release of ChatGPT via Twitter in 2022. That lack of transparency with its own oversight body is a primary reason skeptics view the current safety policy recommendations with deep suspicion. When a company claims to be a leader in safety while its own former executives accuse it of a culture that discourages transparency, the "leadership" claim becomes a subject of intense debate rather than an accepted fact.

As for who is affected: the impact is broad. First, the employees, who are clearly divided and stressed by the leadership style. Then the investors and partners, like Microsoft, who have billions tied up in OpenAI’s success and need a stable leader to protect that investment. But the biggest group affected is the public. We increasingly rely on these models for everything from information access to creative work, and if the leadership of the company creating these tools is perceived as untrustworthy or unconstrained by truth, we have to ask who is actually managing the risks. When a CEO compares his company’s work to the Manhattan Project, he is acknowledging that they are playing with fire. If the person holding the match is someone his own researchers don't trust, that’s not just a corporate problem; it's a societal one. The lack of trust at the top directly affects the accountability of the entire AI industry.
HOST
That Manhattan Project comparison is really something. It sounds like they want the glory of changing the world but maybe not the responsibility that comes with it. So if this distrust is so systemic, what comes next? Can the company actually move forward with Altman at the helm? And does any of this change how the public or regulators should look at the new policy proposals, or are the corporate drama and the policy work two separate tracks that don't really touch each other?
EXPERT
The path forward is incredibly uncertain. History shows that companies can survive intense leadership friction if the product is strong enough, but the cost is usually the original culture. If Altman stays, he will likely need to surround himself with a much stronger, more independent board that can actually hold him accountable, something that clearly failed in the past. If he doesn't, the talent drain will likely accelerate. We’ve already seen people leave, found new companies, and in some cases return, which shows the ecosystem is a revolving door. But if top-tier researchers decide they can no longer work under his leadership, the quality of the models will suffer. The "problem is Sam" narrative isn't going away; it’s now a defining feature of the company’s brand. Whether OpenAI can evolve into a more mature, transparent organization while keeping its competitive edge is the fundamental question of the next few years. Right now, the trust deficit is real and growing.

And those two tracks are deeply intertwined. Regulators and the public look to these companies to set the standard for how AI should be governed. If the leadership team proposing these "social contracts" is simultaneously embroiled in scandals over its own internal governance, that undermines the credibility of the recommendations. When Altman proposes a 13-page blueprint for a new social contract on the scale of the New Deal, he is asking for a seat at the table in writing the rules for the future of humanity. But if the people who have worked most closely with him, including former research leaders and board members, allege that he cannot be trusted to manage the internal affairs of his own company, it creates a massive credibility gap. Regulators are now forced to ask whether the person writing the regulations actually practices the transparency and accountability those regulations demand. This isn't just a corporate drama; it’s a fundamental test of whether the industry can self-regulate.
HOST
It really sounds like we're watching a company reach a breaking point, and it's not just about one guy; it's about whether the structure of modern AI development can actually handle the power it's creating. It's hard to take advice on building a safe future from someone who can't seem to maintain a safe or transparent present. So what should we be watching for next, to see if any of this actually changes?
EXPERT
The most critical things to watch are the continued departures of senior staff and how the company handles its next major model releases. Around flagship releases like the o3 reasoning model and GPT-5, observers will be looking for concrete evidence of the "organizational changes" that OpenAI has promised. We should specifically monitor whether the company provides more transparency into its decision-making processes, as requested by former employees like Gretchen Krueger. Keep an eye, too, on how the board handles its oversight responsibilities: if it continues to be criticized for a lack of independence, or if more high-profile safety researchers leave for competitors like Anthropic, that will confirm the trend of a shrinking internal safety culture. Finally, watch how regulators respond to Altman’s policy blueprints. If they treat his proposals with more skepticism than in previous years, it will be a direct consequence of this sustained erosion of trust. The core metric for the next year is whether the company's actions match its rhetoric.
HOST
That was DataBot, our AI analyst. The big takeaway: OpenAI is facing a crisis of confidence, balancing massive, high-speed growth against a backdrop of deep internal distrust. The reporting describes a CEO who is brilliant at fundraising and vision-casting but lacks the trust of many of the people who actually build the technology, even as the company pushes for global safety standards. Whether it can bridge the gap between its stated mission and its internal reality remains the defining question for the future of AI. I'm Alex. Thanks for listening to DailyListen.
Sources
1. Sam Altman | Biography, OpenAI, Microsoft, & Facts | Britannica Money
2. Sam Altman - Wikipedia
3. How Many People Work At OpenAI 2026: Explore the Growth
4. OpenAI's talent pipeline: Who's feeding and hiring away ...
5. OpenAI to grow workforce by nearly 80% in 2026
6. OpenAI CEO Sam Altman Accused of Being a 'Sociopath' by Former Insiders - Redmond Today
7. Altman OpenAI Timeline - Allied Insight
8. “The problem is Sam Altman”: OpenAI Insiders don’t trust CEO
Original Article
“The problem is Sam Altman”: OpenAI Insiders don’t trust CEO
Ars Technica · April 6, 2026