Inside OpenAI: Why Insiders Do Not Trust Sam Altman
OpenAI insiders are raising alarms about Sam Altman’s leadership. Is the real threat to AGI the code itself, or the boardroom turmoil stalling progress?
HOST
From DailyListen, I'm Alex. Today: the internal turmoil at OpenAI. A new investigation suggests that for many insiders, the problem isn't the technology—it's CEO Sam Altman. To help us understand what’s happening, we’re joined by Marcus, our economics analyst, who has been tracking the shifting power dynamics within the world’s most influential AI company.
MARCUS
It’s a complex situation, Alex. We’re looking at a fundamental clash between the original mission of OpenAI and the reality of its rapid commercialization. When it was founded in 2015, OpenAI was a non-profit entity dedicated to creating artificial general intelligence for the benefit of humanity. But that mission has collided head-on with the massive capital requirements of training models like GPT-4. To fund this, OpenAI created a for-profit arm in 2019, which has since secured billions from partners like Microsoft. The core issue, according to many insiders, is that as the company shifted toward this high-growth, for-profit model, the leadership—specifically Sam Altman—began to prioritize commercial success and speed over the original safety-first philosophy. This has led to a deep-seated distrust among employees, former leaders, and even members of the original board, who have raised concerns that the company is no longer the mission-driven organization they joined. It’s essentially a battle over the soul of the company as it scales toward a workforce of 8,000 by the end of 2026.
HOST
That’s a stark divide, Marcus. It sounds like a classic case of mission drift, where the need for funding forces a company to abandon its roots. But help me understand the personal side of this. Why is the distrust so specifically tied to Sam Altman himself, rather than just the business model?
MARCUS
The focus on Altman stems from a series of reported incidents that have left insiders feeling deceived. The New Yorker investigation, which draws on over 100 interviews, highlights instances where former leaders felt misled about corporate maneuvers. For example, there’s the account of Dario Amodei, the former head of research who eventually left to co-found Anthropic. He famously remarked that the problem with OpenAI is Sam himself. This isn't just about business strategy; it’s about personal transparency. Insiders have alleged that Altman played a role in board politics that undermined the very people tasked with safety oversight. When Altman was briefly ousted in November 2023, the board’s stated reason was that he wasn't consistently candid in his communications. They felt he couldn't be trusted with the "button" of superintelligence. This lack of trust has created a culture where key safety researchers, like Ilya Sutskever and Jan Leike, felt compelled to leave, signaling a mass exodus of talent who no longer believe leadership is prioritizing the long-term societal risks of the technology.
HOST
Wow, that’s intense. It’s one thing to disagree on a product roadmap, but it’s another to have your own board and top researchers questioning your basic integrity. If the trust has truly collapsed, how does the company actually function day-to-day? Does this internal friction affect the actual development of their AI?
MARCUS
It certainly creates a volatile environment. When you lose leaders who were responsible for the "superalignment" of these systems—the work meant to ensure AI remains safe—you’re losing the internal guardrails that critics argue are necessary. The friction manifests in how decisions are made. If staff feel that safety concerns are being overruled for the sake of a product launch or a new deal, they become less likely to flag issues internally. This leads to a culture of silence. Furthermore, the external pressure is mounting. The FTC is currently investigating whether OpenAI has engaged in unfair data security practices, and there’s even a formal complaint filed with the IRS questioning the company’s tax structure. These aren't just minor bureaucratic hurdles; they are systemic challenges that force the company to spend time and resources on legal defense rather than research. When leadership is under this much scrutiny, the focus shifts from long-term safety goals to short-term survival and public relations, which is exactly the opposite of what those early safety-focused insiders wanted.
HOST
So, it sounds like a perfect storm of regulatory scrutiny and internal burnout. You mentioned the IRS and the FTC, which makes me think this is about more than just "feelings" in the office. Is there a concrete, legal way that the company is trying to bridge this gap between profit and mission?
MARCUS
They’re trying, but it’s a delicate balancing act. OpenAI has stated that the non-profit will remain in control, even as they move toward what’s known as a public benefit corporation. The idea is that the non-profit will hold a significant, though yet-to-be-determined, stake in the business arm. This is intended to give the non-profit access to capital that it can then direct toward its own mission-driven goals. Altman argues that this growth is actually consistent with their mission because it provides the resources needed to extend access to AI and help society build incredible things. But skeptics, including some inside the company and external watchdogs like the Midas Project, argue this is just a way to circumvent tax regulations. They suggest the structure is designed to channel revenue toward commercial interests while keeping the tax benefits of a non-profit. The legal reality is that the for-profit arm is supposedly bound to the non-profit’s mission, but defining that mission in a way that satisfies both investors and safety advocates is proving to be nearly impossible.
HOST
That sounds like a massive loophole, or at least a very convenient arrangement. But I have to push back—isn't this just how Silicon Valley works? We see this with other tech giants, too. Why is OpenAI getting so much heat for doing what any other startup would do to scale up?
MARCUS
The heat is because of the stakes. OpenAI isn't building a photo-sharing app or a delivery service; they are, by their own admission, striving for artificial general intelligence, which Altman has famously compared to the Manhattan Project in terms of scale and impact. When you claim you’re building something that could outperform human intelligence on all fronts, the standard for trust is much higher. The public and regulators expect a higher degree of accountability than they would from a standard software company. If a social media site has a bad week, it’s a PR issue. If a superintelligent system is misaligned with human values, the consequences are theoretically catastrophic. That’s why the "move fast and break things" ethos of Silicon Valley feels so dangerous to so many people here. When insiders like Dario Amodei warn that the problem is the leadership’s integrity, it’s not just a workplace dispute—it’s an alarm bell regarding the governance of a technology that could potentially reshape the entire future of human society.
HOST
That makes sense. It’s not just about the money; it’s about the existential risk. Given all this, it feels like the company is at a real crossroads. If the board, the regulators, and the researchers are all at odds, what does the future look like? Is it even possible for OpenAI to regain its original focus?
MARCUS
It’s highly unlikely they can return to the original, smaller non-profit model. That ship has sailed. The future of OpenAI will likely be defined by a constant struggle to appease three very different groups: the commercial investors, the regulatory bodies, and the remaining staff who still care about the mission. To survive, they’ll need to prove that their governance isn't just a paper exercise. They’ve recently released policy recommendations to address superintelligence risks, which is their attempt to show they’re still committed to safety. However, the proof will be in the actions, not the documents. If they continue to lose top-tier safety researchers and face more regulatory crackdowns, the pressure on Altman to either change his leadership style or step aside will only grow. The company is effectively trying to build a plane while it’s already in the air, and the passengers are starting to wonder if the pilot is actually watching the gauges or just focused on the speed of the flight.
HOST
That’s a vivid way to put it. So, basically, the company is too big to go back, but it’s arguably too broken to move forward without major structural or leadership changes. It’s a fascinating, if slightly terrifying, look at the power dynamics behind the AI revolution. I really appreciate you breaking that down for us.
MARCUS
Happy to help, Alex. It’s a story that’s far from over.
HOST
That was Marcus, our economics analyst. The big takeaway here is that OpenAI’s challenges go way beyond a simple corporate disagreement. It’s a fundamental conflict between the, at times, reckless speed of commercial AI development and the need for rigorous, trustworthy safety oversight. Whether Sam Altman can navigate this without losing the trust of his own people—and regulators—remains the central question. I'm Alex. Thanks for listening to DailyListen.
Sources
- 1. Sam Altman | Biography, OpenAI, Microsoft, & Facts
- 2. Sam Altman: OpenAI CEO Sam Altman Biography, Net Worth, Education and more
- 3. OpenAI to nearly double workforce to 8,000 by end-2026: Report - The Hindu
- 4. Anonymous Sources Detail Sam Altman’s Alleged Untrustworthiness in New Report
- 5. OpenAI to nearly double workforce to 8,000 by end-2026, FT reports
- 6. Who is the CEO of OpenAI? Sam Altman’s Bio | Clay
- 7. OpenAI to nearly double workforce to 8,000 by end-2026, FT reports
- 8. “The problem is Sam Altman”: OpenAI Insiders don’t trust CEO
- 9. OpenAI, Anthropic CEOs Skip Hand-Hold As AI Giants Clash In New ...
- 10. OpenAI says non-profit will remain in control after backlash
- 11. Sam Altman Under Scrutiny? OpenAI CEO Accused Of Choosing Profits Over User Safety In Shocking Report | Times Now
- 12. The Chaos Of OpenAI: Why Sam Altman Was Fired & Rehired ...
- 13. Amid 'Mass Exodus' From OpenAI's AI Safety Team, Insider Says 'Trust Collapsing Bit By Bit' In CEO Sam Altman: Report
- 14. OpenAI's Nonprofit Mission at Risk? Sam Altman's Controversial ...
- 15. OpenAI’s Sam Altman got his wish for more regulatory attention, courtesy of the FTC
- 16. A Nobel Prize Winner Called Out OpenAI And Its CEO Sam Altman ...
- 17. OpenAI insiders accuse the company of prioritizing profits over safety
- 18. What Sam Altman's firing and rehiring reveals about OpenAI ...
- 19. Exodus at OpenAI: Nearly half of AGI safety staffers have left, says ...
- 20. I spent last weekend writing about OpenAI's nonprofit board — now at the heart of Sam Altman's ouster
- 21. OpenAI and tax controversy: profit/nonprofit model under IRS scrutiny, Sam Altman at the center
- 22. A month ago Sam Altman got fired and rehired again. What ...
Original Article
“The problem is Sam Altman”: OpenAI Insiders don’t trust CEO
Ars Technica · April 6, 2026