Anthropic has put Claude Security into public beta for Enterprise customers. Formerly called Claude Code Security, the product has been generalized. It runs on Opus 4.7 (the same model behind Claude Code) and performs agentic static analysis on customer codebases: tracing data flows, examining how components interact across files and modules, reading source directly, then generating patch instructions for human review. It lives in the Claude.ai sidebar at claude.ai/security, enabled from the admin console; webhook integrations for Slack and Jira are available, along with CSV/Markdown export for audit pipelines. Team and Max plans are expected to follow.
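No export schema has been documented, so here is only a hedged sketch of what routing the CSV export into an audit pipeline might look like; the column names (`file`, `severity`, `title`) are hypothetical, not the product's documented format:

```python
import csv
import io
from collections import Counter

# Hypothetical finding export. The columns below are illustrative
# placeholders, NOT a documented Claude Security schema.
EXPORT = """\
file,severity,title
src/auth.py,high,Missing ownership check on invoice lookup
src/util.py,low,Unvalidated log message
src/api.py,high,SQL built by string concatenation
"""

rows = list(csv.DictReader(io.StringIO(EXPORT)))
by_severity = Counter(r["severity"] for r in rows)

# In a real pipeline, "high" findings might fan out to the Slack/Jira
# webhooks while everything lands in the audit archive.
high_titles = [r["title"] for r in rows if r["severity"] == "high"]
```

The point is only that a flat CSV export is trivially scriptable into whatever triage flow you already run; the actual columns will depend on what the beta ships.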

The architectural choice that matters is *model-driven analysis* versus pattern-driven analysis. Snyk, Semgrep, and GitHub Advanced Security all work primarily by maintaining curated rule libraries (CWE patterns, known-bad API usage, CVE-matched dependencies) and matching code against them. They're fast, deterministic, and well suited to vulnerabilities that show up as recognizable code shapes. Claude Security instead reads the code with a frontier reasoning model and reasons about it, which has the opposite tradeoff: probably better at logic bugs, business-logic flaws, and multi-file data-flow issues that don't fit a static rule; probably worse on coverage completeness for known patterns. That's a real architectural shift, not a wrapper around an existing scanner.
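A contrived example makes the tradeoff concrete. The IDOR-style flaw below (all names invented) calls no dangerous API, so a pattern rule has nothing to match on; catching it requires reasoning about which user is allowed to see which record:

```python
# Contrived business-logic flaw: insecure direct object reference.
# No sink like eval() or raw SQL appears, so shape-based rules stay
# silent; the bug is purely in the authorization logic.

INVOICES = {
    101: {"owner": "alice", "total": 250},
    102: {"owner": "bob", "total": 900},
}

def get_invoice(session_user: str, invoice_id: int) -> dict:
    # BUG: the record is fetched by id alone; session_user is never
    # compared to the invoice's owner, so any authenticated user can
    # read any invoice.
    return INVOICES[invoice_id]

def get_invoice_fixed(session_user: str, invoice_id: int) -> dict:
    invoice = INVOICES[invoice_id]
    if invoice["owner"] != session_user:
        raise PermissionError("not your invoice")
    return invoice
```

A reasoning model tracing who owns what across the handler and the data layer can plausibly flag this; a rule library has no signature to fire on.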

Here's the load-bearing missing piece: there is no public eval data. No supported-language list disclosed. No false-positive rate. No precision/recall on a standard benchmark. No comparison run against Snyk or Semgrep on the same codebase. No pricing. The announcement reads as "we built this; trust us, evaluate it on your code", which is fine for a public beta, but it means builders evaluating this against their existing tooling have to do their own measurement work. The honest ecosystem signal is that frontier-lab vertical products are now coming online (this; OpenAI's Codex; Google's Big Sleep; various Cursor/Anthropic/OpenAI enterprise plays). A lab acting as product vendor, competing directly with the application layer it once merely powered, is a real ecosystem move worth tracking, regardless of which specific product wins.
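"Do your own measurement work" is less daunting than it sounds. One minimal approach, sketched here with invented finding IDs: seed a codebase you control with known vulnerabilities, run the scanner, and score what comes back.

```python
# Minimal eval-harness sketch. Finding IDs are illustrative strings;
# in practice you'd normalize scanner output to a comparable key
# (e.g. "CWE:file:line").

def score(findings: set, ground_truth: set):
    """Precision/recall of reported findings against seeded vulns."""
    tp = findings & ground_truth          # true positives
    fp = findings - ground_truth          # noise
    fn = ground_truth - findings          # misses
    precision = len(tp) / len(findings) if findings else 0.0
    recall = len(tp) / len(ground_truth) if ground_truth else 0.0
    return precision, recall, sorted(fp), sorted(fn)

truth = {"CWE-89:src/api.py:42", "IDOR:src/auth.py:17", "XSS:tpl/home.html:3"}
reported = {"CWE-89:src/api.py:42", "IDOR:src/auth.py:17", "CWE-798:cfg.py:1"}

precision, recall, false_pos, missed = score(reported, truth)
```

Two numbers and two lists are enough to have a grounded conversation that the announcement itself doesn't support.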

If you're on Claude Enterprise, turn it on, run it against a codebase you know well, and see what it finds and misses relative to your existing scanner stack. The eval discipline is on you: "AI vulnerability scanner" claims have been around long enough that you should stay skeptical until you've measured. Pay attention to overlap with Snyk/Semgrep results: where the model finds something the patterns miss, that's signal; where the patterns catch something the model misses, that's the limitation of model-driven analysis at this generation. The notable absence of GitHub PR or CLI integration is also worth flagging: most production security tooling lives in PR review, and this currently lives in claude.ai. That's an interesting product choice, probably temporary.
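Once findings from both sides are normalized to comparable keys (the normalization scheme here is an assumption), the overlap triage above is plain set arithmetic:

```python
def triage_overlap(model_findings: set, pattern_findings: set) -> dict:
    """Split findings into the three buckets worth inspecting by hand."""
    return {
        # Both agree: high-confidence, fix first.
        "both": model_findings & pattern_findings,
        # Model-only: the candidate signal a rule library can't express.
        "model_only": model_findings - pattern_findings,
        # Pattern-only: the coverage gap in model-driven analysis today.
        "pattern_only": pattern_findings - model_findings,
    }

buckets = triage_overlap(
    model_findings={"IDOR:auth.py:17", "CWE-89:api.py:42"},
    pattern_findings={"CWE-89:api.py:42", "CWE-798:cfg.py:1"},
)
```

Reviewing the `model_only` bucket by hand is where you learn whether the reasoning-model pitch holds up on your code.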