YouTube Expands AI Likeness Detection to Celebrities and Talent Agencies
YouTube is expanding its AI-powered likeness detection to protect celebrities from unauthorized deepfakes, curbing synthetic fraud in entertainment.
HOST
From DailyListen, I'm Alex. Today, YouTube is widening its AI likeness detection tools to shield celebrities and the entertainment industry from unauthorized deepfakes. To help us understand what this changes—and what it doesn't—we’re joined by Priya, our technology analyst, who’s been covering how platforms manage synthetic media.
PRIYA
What this unlocks is a new, proactive layer of defense for high-profile figures who’ve become prime targets for synthetic fraud. Think of it as a face-focused cousin of YouTube’s existing Content ID system, which has spent years policing copyrighted music and film clips. Instead of looking for copyrighted audio or video files, this system scans incoming uploads for faces that match a registered likeness profile, hunting for AI-generated or altered versions of that person. When the system detects a potential deepfake of a celebrity, it flags it for the rights holder, in this case talent agencies like CAA, UTA, or WME. They can then choose to have it removed or, in some cases, monetize it if they’ve authorized the content. It’s a shift from reactive, manual reporting to automated, policy-driven enforcement. By bringing talent agencies into the loop, YouTube is essentially outsourcing the identification of these fakes to the people with the most incentive to protect their clients’ brand and reputation from unauthorized endorsements or scams.
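YouTube has not published how its matcher works internally, but the general shape Priya describes, comparing faces in new uploads against a reference registered for a protected person, can be illustrated with a simple embedding-similarity check. The sketch below is purely hypothetical: the embedding size, the SIMILARITY_THRESHOLD value, and the flag_likeness_matches helper are assumptions for illustration, not YouTube's actual pipeline.

```python
import numpy as np

# Hypothetical likeness matcher. Illustrative sketch only, not YouTube's system;
# the embedding model, threshold, and data shapes are all assumptions.

SIMILARITY_THRESHOLD = 0.8  # assumed cutoff; a real system would tune this per identity


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def flag_likeness_matches(upload_embeddings, registered_profiles):
    """Compare every face found in an upload against registered likeness profiles.

    upload_embeddings: list of embedding vectors for faces detected in the video.
    registered_profiles: dict mapping identity name -> list of reference embeddings.
    Returns (identity, similarity) pairs above the threshold, which a rights holder
    such as a talent agency would then review and act on.
    """
    flags = []
    for face in upload_embeddings:
        for identity, references in registered_profiles.items():
            best = max(cosine_similarity(face, ref) for ref in references)
            if best >= SIMILARITY_THRESHOLD:
                flags.append((identity, best))
    return flags


# Toy usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
reference = rng.normal(size=128)
profiles = {"example_celebrity": [reference]}
upload = [reference + 0.01 * rng.normal(size=128)]  # a near-identical face
print(flag_likeness_matches(upload, profiles))
```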
HOST
So, you’re saying this is basically Content ID for human faces. But we’ve known for a while that deepfakes are a massive issue, especially after that Tom Hanks incident last year where his likeness was used in a fake ad. Why has it taken until today, April 22, 2026, to give this protection to celebrities?
PRIYA
The interesting piece is the sheer scale of the challenge. Managing likeness rights is orders of magnitude more complex than managing a copyrighted pop song. When you police music, you’re dealing with a fixed, digital file. When you police likeness, you’re dealing with infinite, generative variations; AI models can create a thousand different, slightly altered versions of a face in seconds. YouTube first had to roll this out to the 4 million creators in its Partner Program to refine the detection logic before expanding it to the broader entertainment industry. They’ve also been testing it with politicians, journalists, and government officials, gathering data on how to remove harmful fakes without accidentally suppressing legitimate parody or satire. It’s a delicate calibration. They’re effectively building a digital gatekeeper that has to distinguish between a malicious scam and a fan-made tribute. The technical hurdle isn’t just detecting a face; it’s detecting a specific, unauthorized, AI-simulated face while leaving room for the creative expression that defines the platform.
HOST
That balance between protection and creative freedom is always the sticking point. I’m curious about the effectiveness here. You mentioned this is a shield, but we’re still seeing reports that deepfakes are everywhere. Does YouTube have any hard numbers on how many of these videos they’ve actually taken down?
PRIYA
That’s the gap in the current data. YouTube hasn’t shared specific, public statistics on how many videos have been pulled down since they started this pilot. They’ve only stated that the volume of content removed so far has been, quote, "very small." This is a common pattern with these early-stage enforcement tools. Platforms are often hesitant to release precise numbers because it reveals the size of their problem and the limitations of their detection tech. We know from industry research that roughly 48% of deepfake incidents now involve the unauthorized use of celebrity likenesses, which is a massive incentive for this expansion. However, even if this tool catches the most egregious, high-traffic scams, it’s not a total solution. It’s a filter, not a wall. The sheer volume of content uploaded to YouTube every minute means that even with automated detection, some fakes will inevitably slip through, especially those that use newer, more sophisticated generation techniques that haven’t been registered in the system yet.
HOST
If the numbers are small, I have to wonder if this is more about public relations than actual safety. You mentioned they’re supporting the NO FAKES Act in Washington, too. Are they just trying to get out ahead of regulation by showing they can police themselves before the government forces their hand?
PRIYA
What this unlocks is a stronger position for YouTube in federal policy debates. By implementing these tools voluntarily, they’re arguing that they don't need heavy-handed, one-size-fits-all legislation to handle likeness rights. They’re showing lawmakers that they can build the infrastructure to identify and remove unauthorized synthetic content effectively. Supporting the NO FAKES Act is part of that long-term strategy. It signals that they want a clear, legal framework—one where the platform’s responsibility is defined—rather than being left to adjudicate every single likeness dispute on their own terms. It also gives them a degree of cover. If they can point to a federal law that establishes clear rules for voice and visual likeness, it makes their own enforcement actions feel more legitimate and less like they’re acting as a private, unaccountable arbiter. It’s a move to align their private governance with a public, legal standard, which is a very calculated way to reduce their own long-term liability.
HOST
That makes sense from a legal perspective, but let’s talk about the people on the other side of this. We’ve established that agencies like CAA and WME are getting access. Does an individual celebrity actually need a YouTube channel to be protected, or is their image just automatically shielded once the agency signs up?
PRIYA
You don’t need a channel to be shielded. That’s a major part of this expansion. By giving talent agencies and management companies direct access to the tool, YouTube is decoupling the protection from the creator ecosystem. An agent can now register a client’s likeness and monitor the platform for deepfakes, regardless of whether that celebrity has ever uploaded a single video. This is vital because the biggest risk for a celebrity isn’t usually someone re-uploading their clips; it’s someone using their face to sell a fake product or spread misinformation in a video that looks like an authentic endorsement. The agency acts as the rights holder, using the platform’s tools to scan for any unauthorized use of their roster’s faces. It shifts the burden of monitoring away from the individual—who might not be tech-savvy or even active on the site—and places it on the professional representatives whose job is to protect that intellectual property.
HOST
So, this isn't just about protecting a brand's channel, but their entire existence on the platform. But let me push back: if the system relies on these agencies to flag content, doesn't that create a massive loophole? What happens to the celebrities who aren't represented by these major agencies, or the ones who don't have a team monitoring their digital footprint 24/7?
PRIYA
That is the core limitation of this model. It creates a tiered system of protection. If you’re a top-tier celebrity with a powerhouse agency like UTA or CAA, you have a professional team actively policing the platform on your behalf. You have the resources to ensure your likeness is registered and your reputation is guarded. But if you’re a mid-level entertainer, a local public figure, or someone who doesn't have that kind of institutional support, you’re effectively on your own. You’re left relying on the platform’s baseline detection or your own ability to manually report violations, which is a massive disadvantage. We’re seeing a shift where digital safety is becoming a service that you have to be big enough to afford or manage. It’s not a universal safety net; it’s an enterprise-grade tool for those who have the clout to demand it. That leaves a large portion of the population vulnerable to the same scams, just without the automated assistance.
HOST
That sounds like we’re moving toward a web where your safety depends on your status. You also mentioned earlier that this tool has to balance free expression, like parody. But who decides what’s parody and what’s a malicious deepfake? Is it the agency, or is it YouTube’s algorithm?
PRIYA
The final call remains with the platform, but the process is heavily influenced by the initial detection. YouTube’s algorithm flags the content, but the human or automated review process that follows is where the tension lies. When an agency requests a removal, they’re making a claim that the content violates the platform’s policy on unauthorized likeness. If the uploader disagrees—say, they claim it’s a protected parody—then the dispute process kicks in. This is exactly where the risk of over-censorship lives. Agencies are naturally incentivized to be aggressive; they want to protect their clients at any cost, which often means flagging anything that casts their client in a negative light, even if it’s satire. YouTube then has to act as the judge. They have to decide if the claim is valid under their policies. It’s a massive, ongoing struggle to maintain a platform that allows for critical commentary while still providing the protections that major rights holders are demanding.
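The claim-and-dispute flow Priya outlines, automated detection, a rights-holder claim, an uploader dispute, and a final platform review, can be thought of as a small state machine. The sketch below models that flow under assumed states and transitions; none of the names come from YouTube's actual policy tooling.

```python
from enum import Enum, auto

# Assumed model of a likeness-claim lifecycle, for illustration only; the states and
# transitions are inferred from the discussion above, not from YouTube documentation.

class ClaimState(Enum):
    FLAGGED = auto()      # automated detection surfaces a possible likeness match
    REMOVED = auto()      # platform enforces the rights holder's removal request
    REINSTATED = auto()   # platform sides with the uploader (e.g. protected parody)


def resolve(claim_filed: bool, dispute_filed: bool, claim_upheld: bool) -> ClaimState:
    """Walk a flagged upload through the assumed claim/dispute flow."""
    if not claim_filed:
        return ClaimState.FLAGGED   # rights holder takes no action; video stays up
    if not dispute_filed:
        return ClaimState.REMOVED   # uncontested claims are enforced as requested
    # Contested claims fall to the platform's own policy review.
    return ClaimState.REMOVED if claim_upheld else ClaimState.REINSTATED


# Example: an agency files a claim, the uploader asserts parody, and review
# sides with the uploader.
print(resolve(claim_filed=True, dispute_filed=True, claim_upheld=False))
```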
HOST
It sounds like YouTube is trying to build a system that is both a courtroom and a police force. But even with all this tech, people are still pretty good at spotting these things themselves. I read that people identify deepfake audio correctly 73% of the time. Does this technology actually make us safer, or just lazier?
PRIYA
That’s a sharp way to frame it. The technology isn't a replacement for human critical thinking. That 73% accuracy rate for audio is encouraging, but it also means that over a quarter of the time, people are being fooled. As the tech gets better, that percentage will likely drop. The danger of relying on these tools is that it creates a false sense of security. If we assume that anything left on the platform must be "verified" or "safe," we become more susceptible to the fakes that do manage to bypass the filters. This technology is a necessary response to the sheer volume of synthetic media, but it’s not a panacea. It’s a tool that helps, but it doesn't solve the underlying problem of misinformation. We’re in a race where the generators of deepfakes are constantly iterating to stay one step ahead of the detectors. It’s a cycle of innovation and detection that isn't going to end anytime soon.
HOST
That race between generation and detection is going to define the next few years for sure. I want to circle back to the controversy. You’ve been very neutral, but is there any actual public pushback against this? Are there any concerns from creators or privacy advocates about YouTube having this much power over who gets to use whose face?
PRIYA
There is significant, underlying concern. Privacy advocates have been vocal about the potential for these tools to be misused. If a platform has the ability to detect and remove a face, what’s to stop them from expanding that power to other types of content that they deem "unauthorized" or "harmful"? The power to block a specific face is the power to control a narrative. When you give talent agencies the ability to trigger these removals, you’re effectively handing them a degree of editorial control over what can be said about their clients. While the current focus is on deepfakes and scams, the precedent it sets is what worries people. It’s a move toward a more curated, platform-controlled environment where the rules of what is allowed are increasingly determined by the interests of the powerful, rather than an open, democratic exchange of ideas. That’s the real, long-term trade-off we’re looking at here.
HOST
It really sounds like we’re trading a bit of the wild-west internet for a more corporate-controlled space. Before we wrap up, what should our listeners be watching for next? Is this just the beginning of the expansion for this likeness tool?
PRIYA
The next phase will be about refinement and integration. We’ll be watching to see whether YouTube starts sharing more data, or whether they continue to keep the efficacy numbers quiet. We’re also looking to see how other platforms respond. If YouTube becomes the "safe" place for celebrities to exist, will platforms like TikTok or X be forced to adopt similar, industry-facing tools just to remain competitive in the eyes of talent agencies? The pressure will mount for a standardized, cross-platform approach to likeness rights. We’re moving toward a reality where digital identity is a protected asset, and the companies that can offer the most robust, reliable protection for that asset are going to win the favor of the biggest names in entertainment. It’s not just about content anymore; it’s about the management of identity itself, and that’s a shift that will play out for years.
HOST
That was Priya, our technology analyst. The big takeaway here is that YouTube is shifting toward a more proactive, rights-holder-focused model to combat deepfakes. While this offers a new, powerful shield for celebrities and talent agencies, it also raises real questions about who gets that protection and how much power we’re concentrating in the hands of the platform. It’s a step toward safety, but it’s definitely not a total solution. I’m Alex. Thanks for listening to DailyListen.
Sources
1. YouTube Expands Likeness Detection To Celebrities, Talent Agencies (04/22/2026)
2. YouTube expands its AI likeness detection technology to celebrities - TechCrunch
3. YouTube AI Likeness Detection Unleashed: Major Expansion Shields Celebrities from Deepfake Threats
4. YouTube is expanding its AI likeness detection tool to celebrities ...
5. YouTube expands AI likeness detection tool to celebrities
6. YouTube expands AI deepfake detection to politicians, government officials, and journalists - TechCrunch
7. YouTube expands likeness detection to protect entertainers and ...
8. Deepfake Attacks & AI-Generated Phishing: 2026 Statistics
9. YouTube expands its AI likeness detection technology to celebrities
10. YouTube has announced an expansion of its AI-powered likeness ...
11. YouTube’s AI ‘Likeness Detection’ Tool and the Emerging Law of Digital Identity - University of Baltimore Law Review
12. YouTube Likeness Detection Tool: How It Works & Who It Protects ...
Original Article
YouTube expands its AI likeness detection technology to celebrities
TechCrunch · April 21, 2026