
SF Standard

I lived through Google’s AI-military crisis. Here’s why engagement still matters.

12 min listen


Transcript
AI-generated. Lightly edited for clarity.

HOST

From DailyListen, I'm Alex. Today: a tech industry veteran who lived through one of Silicon Valley's biggest ethical flashpoints is weighing in on how AI companies should work with the military. Diane Greene, the former Google Cloud CEO who was at the center of the company's controversial Project Maven crisis, just published an opinion piece arguing that deep collaboration with the Pentagon is the right path forward, even though it's harder than just saying yes or no. To help us understand what this means for the tech industry, we have Maya Chen, our AI analyst who's been tracking the intersection of artificial intelligence and defense policy. Maya, let's start with who Diane Greene is and why her voice carries weight on this topic.

EXPERT

Diane Greene isn't just another tech executive with opinions. She's someone who's been in the room when these decisions get made. We're talking about a woman who co-founded VMware in 1998 and took it public at a roughly nineteen billion dollar first-day valuation. Google acquired her startup Bebop for three hundred eighty million dollars, then made her head of Google Cloud in 2015. But here's why her voice matters on military AI partnerships: she was running Google Cloud when Project Maven exploded into a company-wide crisis. Project Maven was a Pentagon program in which Google was helping develop AI to analyze drone footage. Thousands of Google employees signed petitions against it. People resigned. The backlash was so intense that Google eventually declined to renew the contract and published a new set of AI principles. Greene lived through that whole mess as one of the senior executives making decisions. So when she says collaboration is possible but complicated, she's not speaking theoretically.

HOST

Right, so she's not coming at this as an outsider. What exactly is she arguing in this new piece?

EXPERT

Greene's making what I'd call a nuanced argument in the SF Standard. She's saying that tech companies have basically three options when the military comes calling for AI help. Option one is complete withdrawal: just say no to any defense work. Option two is compliance: take the contracts and don't ask hard questions. But Greene's pushing for option three: deep collaboration. By that, she means tech companies should engage with the military but do it thoughtfully. Ask the hard questions about how the technology will be used. Build in safeguards. Maintain ongoing oversight. She acknowledges this is the hardest path because it requires constant judgment calls and ongoing responsibility, but she argues it's better than either extreme.

HOST

That sounds reasonable in theory, but I imagine the devil's in the details. What would this deep collaboration actually look like?

EXPERT

That's where it gets tricky, and honestly, Greene doesn't spell out all the specifics in her piece. But based on what we know about her experience and the broader debate, deep collaboration would mean tech companies becoming partners rather than just vendors. Instead of the Pentagon saying "build us this AI system" and the company saying "here you go," there'd be ongoing dialogue about use cases, limitations, ethical boundaries. Think about it like this: when Google was working on Project Maven, the controversy wasn't necessarily that they were helping the military analyze video footage. The problem was that employees felt like they were being kept in the dark about how their work might be used in actual combat situations. Deep collaboration would mean those conversations happen upfront and continuously. Companies would have a say in how their technology gets deployed. They'd build in kill switches or usage restrictions. They'd require regular audits.

HOST

But I'm curious about the business reality here. Can tech companies really tell the Pentagon how to use their products?

EXPERT

That's the million-dollar question, and it gets to why Greene calls this the hardest option. The military isn't used to vendors dictating terms about operational use. And tech companies aren't used to taking ongoing responsibility for how their products get used after they ship. But Greene's argument is that AI is different. It's not like selling trucks or radios where the use case is pretty straightforward. AI systems can be adapted and applied in ways that the original developers never intended. And they can have consequences that ripple far beyond their immediate use. Look at what happened with facial recognition technology. Companies developed it for things like photo tagging, but then it got used for surveillance in ways that raised serious civil liberties concerns. Greene's saying that with military AI, you can't just hand over the technology and walk away. The stakes are too high.

HOST

So this isn't just about Google or one company. This is happening across the industry?

EXPERT

Exactly. Pentagon AI partnerships are growing fast. Microsoft has a ten billion dollar cloud contract with the Defense Department. Amazon was competing for that same contract. Palantir has built their entire business around government and military contracts. And it's not just the big players. The Pentagon has programs specifically designed to work with smaller AI startups. So Greene's argument isn't really about whether tech companies should work with the military. That ship has sailed. The question is how they do it. And right now, most companies are choosing either the compliance route or trying to stay out of defense work entirely. Very few are attempting what Greene calls deep collaboration.

HOST

What's driving this trend toward more military AI partnerships?

EXPERT

Two big factors. First, the Pentagon knows it's behind in AI compared to countries like China, and it's trying to catch up fast. Military leaders have been pretty open about this. They're saying American tech companies have the best AI talent and technology, so they need those partnerships to maintain military advantage. Second, the technology itself has reached a point where it's genuinely useful for military applications. We're not talking about science fiction anymore. AI can analyze satellite imagery, predict equipment failures, optimize logistics, help with cybersecurity. These are real capabilities that can make military operations more effective and potentially save lives. But that same technology can also be used for autonomous weapons, mass surveillance, or other applications that make people uncomfortable. That's why Greene's arguing for this middle path where companies stay engaged but maintain some control over how their work gets used.

HOST

Looking ahead, do you think other companies will follow Greene's approach?

EXPERT

It's going to depend on a few things. First, whether the Pentagon is actually willing to accept these deeper partnerships with more oversight and restrictions. The military values operational security and independence, so they might not want tech companies looking over their shoulders. Second, whether tech companies can figure out how to make this work practically. It's one thing to say you want ongoing oversight. It's another thing to build systems and contracts that actually make that possible. And third, public pressure. Google pulled out of Project Maven because of employee activism. If that kind of pressure continues, more companies might try Greene's approach as a way to thread the needle. But if the public debate dies down, companies might just go with the easier compliance route. I think we're still in the early stages of figuring out how this relationship between Silicon Valley and the Pentagon is going to work long-term.

HOST

That was Maya Chen. The big takeaway here is that as AI becomes more powerful and more central to military operations, tech companies are being forced to make hard choices about their role in national defense. Diane Greene's argument for deep collaboration represents a middle path between complete withdrawal and blind compliance. But whether that approach can work in practice remains an open question. The stakes are high because these decisions will shape not just individual companies, but how America develops and deploys AI for national security. I'm Alex. Thanks for listening to DailyListen.


Original Article

I lived through Google’s AI-military crisis. Here’s why engagement still matters.

SF Standard · April 3, 2026
