
MIT TECHNOLOGY REVIEW

The Growing Divide in Public and Expert Views on AI

11 min listen · MIT Technology Review

A new report highlights a major divide in AI perception. While 73% of experts are optimistic about job impacts, only 23% of the public shares this view.

Transcript
AI-generated. Lightly edited for clarity.


HOST

From DailyListen, I'm Alex. Today: why we're all seeing such a massive divide in how we view artificial intelligence. To help us understand, we have Priya, our technology analyst, who has been covering the latest Stanford AI Index report. Priya, it’s great to have you here.

PRIYA

Thanks for having me, Alex. It’s a fascinating time to be looking at these numbers. The 2026 AI Index report from the Stanford Institute for Human-Centered AI really puts a spotlight on this friction. We’re seeing a widening gap between how researchers and industry experts view this technology compared to the general public. Michelle Kim, who wrote about this for The Algorithm, hit the nail on the head when she said that if you’re following the news, you’re probably getting whiplash. The core of the issue is that AI is advancing at a pace that’s frankly faster than society’s ability to understand, govern, or even trust it. We’re in a phase where technical capability is racing ahead, while public perception is trying to catch up, but often getting stuck on the risks and the unknowns rather than the potential benefits we see in the labs.

HOST

Wow, that’s a pretty stark way to put it. So, you’re saying the people building it are looking at what it can do, while the rest of us are mostly worried about what it might break. Can you break down that 73% versus 23% statistic I’ve been hearing about?

PRIYA

That specific data point comes from the surveys analyzed within the 2026 AI Index. It’s a massive indicator of the disconnect. When researchers and experts are asked about the impact of AI on jobs, 73% are genuinely optimistic. They see the productivity gains, the coding assistance, and the technical problem-solving capabilities firsthand. But when you look at the general public, only 23% share that same optimism. Most Americans are telling pollsters they believe AI will actually lead to fewer jobs over the next two decades. It’s a classic case of different vantage points. Experts are often working with AI on structured, technical tasks where it excels and provides immediate, measurable value. Meanwhile, the public encounters AI in much more open-ended, unpredictable scenarios where the limitations are far more obvious. That creates a fundamentally different emotional and intellectual experience with the tech, leading to this massive, measurable gap in confidence.

HOST

That makes sense, but it still feels like a huge disconnect. If the experts are so sure it’s going to be a net positive for the economy, why aren’t they doing a better job of convincing the rest of us? Is it just a communication problem, or are they ignoring something?

PRIYA

I think it’s less about a communication failure and more about the nature of the work. Experts are essentially living in a future where they see the efficiency gains in real-time. They see how AI can handle the repetitive, manual coding tasks that used to take days. It’s a marathon, not a sprint, from their perspective. But for the average person, the anxiety over jobs, healthcare, and the wider economy is very real and very immediate. There isn’t a clear, comforting roadmap for how these massive structural shifts will play out for the average worker. When you hear about AI scaling faster than the world can adapt, that’s not just a technical observation; it’s a social one. There’s a lack of concrete evidence for the public to feel secure, and until those benefits are felt broadly across the workforce, that skepticism is going to remain the dominant narrative for most people.


HOST

So, the experts are looking at the potential, but the public is looking at their own paycheck. But couldn't you argue that the public is right to be skeptical? I mean, we've seen plenty of tech "revolutions" before that promised the moon and left a lot of people behind.

PRIYA

That’s a fair point, and it’s why the 2026 report is so important. It doesn't just celebrate the technical jumps; it documents the rising security issues and the very real growing pains. Stanford researchers aren’t just cheering from the sidelines. They’re documenting that the technology is outpacing our current regulatory and trust frameworks. There’s a legitimate concern that the speed of deployment is happening without enough guardrails. When experts say concerns about job loss might be overblown, they’re usually looking at historical trends where technology creates new roles even as it displaces old ones. But the public’s fear isn't just about the *eventual* outcome; it’s about the messy, painful transition period in between. Whether or not those fears are overblown, they’re definitely rational given how little control the average person has over these rapid, industry-wide decisions. It’s not just about the final state of the economy; it’s about the disruption to people’s lives today.

HOST

That really highlights the tension between abstract economic theory and the lived reality of working people. If the experts are worried about the technology moving too fast to govern, what exactly are they seeing? Are there specific areas where the risks are starting to outweigh the potential benefits right now?

PRIYA

The 2026 Index highlights that we’re seeing significant jumps in model capabilities, but those same jumps bring new security vulnerabilities. We’re talking about more advanced models that can be misused for things like automated misinformation or more sophisticated cyberattacks. It’s not just that the AI is getting smarter; it’s that the barrier to entry for malicious actors is dropping significantly. That’s a major point of contention even among the experts. While they might be optimistic about job growth in the long run, many of those same researchers are deeply concerned about the lack of robust security measures for the current generation of models. It’s a bit of a double-edged sword. The same power that allows an AI to write code or analyze medical data also gives it the capacity to do harm if it’s not properly constrained. The consensus is that we’re in a precarious moment where the technology is outrunning our ability to secure it.

HOST

It sounds like we’re essentially building the plane while we’re flying it. But I want to push back a little on the "expert" label. If these experts are the ones building the tech, aren’t they biased toward it succeeding? Is there anyone looking at this without a vested interest?

PRIYA

That’s a critical question, and it’s why the Stanford AI Index is so widely cited. It’s an attempt to provide an objective, data-driven look at the landscape, specifically because there’s so much hype and bias in the industry. The people behind the Index are researchers, not just company spokespeople. They’re the ones tracking the actual performance benchmarks, the investment flows, and the public sentiment surveys. They’re documenting the gaps, not just the successes. Even within the expert community, there is a lot of debate about the ethical implications and the pace of development. It’s not a monolith. You have researchers who are sounding the alarm on safety, and others who are pushing for faster, more open development. The Index tries to capture that complexity rather than just giving us a single, glowing report card. It’s meant to be a resource for policymakers and the public to ground the conversation in actual, measurable data.


HOST

I appreciate that distinction. It’s good to know there’s some attempt at objectivity, even if the picture is messy. What comes next, then? If the gap between expert optimism and public anxiety keeps widening, where does that leave us? Are we headed for a total breakdown in trust?

PRIYA

That’s the big question. If we don’t find a way to bridge that gap, we’re likely going to see a lot more friction between the tech industry and the public. We’re already seeing it in the form of increased calls for regulation and more scrutiny from government bodies. The 2026 report suggests that as these models become more embedded in our daily lives—in healthcare, in finance, in our workplaces—the need for transparency is going to become non-negotiable. If the public doesn't understand how these systems work, or if they feel like they’re being used as guinea pigs for unproven tech, that trust gap is only going to grow. The next few years are going to be defined by whether we can move from this "move fast and break things" mentality to a more responsible, inclusive approach that actually addresses the public’s very real concerns.

HOST

That’s a lot to consider. It sounds like the technology itself isn't the only thing that needs an upgrade; our social and regulatory systems are clearly struggling to keep up as well. Priya, thanks for walking us through the data.

PRIYA

It was a pleasure, Alex. The data is clear that we’re at a crossroads, and it’s going to be crucial to keep tracking these trends as they evolve.

HOST

That was Priya, our technology analyst. The big takeaway here is that the divide over AI isn't just a difference of opinion; it’s a fundamental difference in experience. Experts are seeing the raw potential in controlled settings, while the public is feeling the anxiety of rapid, unpredictable change. We’re in a period where technology is moving faster than our social and regulatory systems can adapt, and that gap is fueling real, valid concerns about the future. I'm Alex. Thanks for listening to DailyListen.


Original Article

Why opinion on AI is so divided

MIT Technology Review · April 13, 2026
