OpenAI published a cybersecurity action plan on April 29, summarized by Help Net Security, that frames the company's approach as racing to arm defenders rather than restricting model access on safety grounds. Sasha Baker, OpenAI's Head of National Security Policy, described the threat: "Malicious actors are using AI to improve phishing, automate reconnaissance, accelerate malware development, evade detection, and increase the scale of cyber operations. These groups don't need the most advanced frontier models to cause real harm; even capable mid-tier systems can provide meaningful operational advantage." The plan is structured around five pillars: democratizing cyber defense, coordinating across government and industry, strengthening security around frontier cyber capabilities, preserving visibility and control in deployment, and enabling users to protect themselves. The vehicle for defender access is the Trusted Access for Cyber (TAC) program: tiered access for vetted defenders, with stricter controls on more powerful capabilities. Help Net Security explicitly contrasts this with Anthropic's "more cautious stance, emphasizing tighter control and restricted access to advanced AI capabilities."
The OpenAI/Anthropic policy split is now an explicit market segmentation. OpenAI's argument: if attackers can use mid-tier systems to scale phishing, reconnaissance, malware development, and detection evasion, then restricting frontier capability only for defenders simply loses the race. The implication is that the right policy lever is access-tiering, not capability-gating, which is exactly what TAC institutionalizes. Anthropic's counter-argument is that mass defender access also means mass dual-use exposure, and that the Pentagon-refusal precedent we covered earlier this week (Anthropic's supply-chain-risk lawsuit) generalizes: restrict the use cases where you cannot enforce safety controls. Both stances are defensible. Which one wins probably depends less on technical merit than on the regulatory environment and on which Tumbler-Ridge-style lawsuit lands first against either company.
Three patterns matter. First, Baker's framing that "even capable mid-tier systems can provide meaningful operational advantage" is the same thesis Wiz Research demonstrated against GitHub last week, when AI was used to find a critical RCE in a closed-source binary. The AI-driven vulnerability-discovery threat model is now mainstreamed in OpenAI's own policy framing. Expect every major security vendor to publish similar reports through 2026, and expect insurance carriers to start pricing AI-attack exposure into premiums. Second, the TAC program's tiered-access design is a template: vet defenders, scale capability access by trust level, and keep the most powerful capabilities behind heavier controls. That structure is portable to any provider that wants to offer safety-tiered capabilities for defensive use; expect Anthropic, Google, Microsoft, and AWS Bedrock to publish their own equivalents within 12 months. Third, the Anthropic-vs-OpenAI split mirrors the broader 2026 industry split: OpenAI takes contracts and capability bets that Anthropic refuses (Pentagon contracts, defender access). The market will eventually price which side is right; for now, builders need to read both companies' policies and place their bets explicitly.
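The tiered-access template is simple enough to sketch. The following is a minimal, illustrative model of the pattern (vet defenders, gate capability classes by trust level); the tier names and capability labels are hypothetical, not OpenAI's actual API or TAC's actual tiers.

```python
from enum import IntEnum

# Illustrative sketch of the tiered-access pattern TAC describes:
# a caller's trust tier gates which capability classes it may use.
# Tier and capability names here are hypothetical.

class TrustTier(IntEnum):
    PUBLIC = 0   # unvetted users
    VETTED = 1   # identity-verified defenders
    TRUSTED = 2  # partners under contractual safety controls

# Minimum trust tier required for each capability class (illustrative).
CAPABILITY_FLOOR = {
    "log_triage": TrustTier.PUBLIC,
    "threat_hunting": TrustTier.VETTED,
    "vuln_discovery": TrustTier.TRUSTED,
}

def allowed(tier: TrustTier, capability: str) -> bool:
    """Return True if the caller's tier meets the capability's floor."""
    return tier >= CAPABILITY_FLOOR[capability]

print(allowed(TrustTier.VETTED, "threat_hunting"))   # True
print(allowed(TrustTier.VETTED, "vuln_discovery"))   # False
```

The design choice worth noting is that the most sensitive capability sits behind the heaviest vetting, while baseline capability stays broadly available, which is the access-tiering-over-capability-gating lever the plan argues for.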
For builders, three concrete takeaways. First, if you build security tooling, the TAC tiered-access program is something to evaluate now, not just bookmark. The defender-side use case for advanced models is real (vulnerability discovery, log triage, threat hunting, RAG over CTI feeds), and OpenAI is signaling that it wants vetted partners. Get in early or watch competitors get there first. Second, the "mid-tier systems are also dangerous" framing is a regulatory signal. If OpenAI is publicly arguing that mid-tier capability is enough for meaningful attack scaling, a legislative response will follow: expect compliance frameworks for mid-capability-tier models that did not previously exist, and pre-emptively prepare your own safety documentation. Third, the OpenAI-Anthropic policy divergence is now an explicit strategic choice for any company building on either. If your customers treat restricted access as a safety feature, Anthropic is the upstream supplier whose stance aligns. If your customers want broad capability access for defenders, OpenAI's TAC is the supplier vehicle. The "agnostic on which model you use" pitch from a year ago is no longer fully agnostic; you now have to explain which side of the policy split you align with.
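Of the defender-side use cases listed above, RAG over CTI feeds is the most straightforward to prototype. Below is a deliberately naive sketch of the retrieval step, assuming keyword overlap as the relevance score; the feed entries are invented, and a production pipeline would use embeddings and a vector store instead.

```python
# Minimal sketch of "RAG over CTI feeds": rank threat-intel entries by
# relevance to an analyst question, then assemble a model prompt.
# Feed contents and the keyword-overlap scoring are illustrative only.

CTI_FEED = [
    "APT group observed using AI-generated phishing lures against finance sector",
    "New infostealer variant evades EDR via signed driver abuse",
    "Ransomware affiliate scanning for exposed RDP endpoints",
]

def retrieve(question: str, feed: list[str], k: int = 2) -> list[str]:
    """Rank feed entries by naive keyword overlap with the question."""
    q_words = set(question.lower().replace("?", "").split())
    scored = sorted(feed, key=lambda e: -len(q_words & set(e.lower().split())))
    return scored[:k]

def build_prompt(question: str, feed: list[str]) -> str:
    """Assemble the retrieved context plus question into one prompt string."""
    context = "\n".join(f"- {entry}" for entry in retrieve(question, feed))
    return f"Context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Which actors are using phishing lures?", CTI_FEED)
```

The prompt string would then go to whichever model your policy alignment points you at; the retrieval logic itself is provider-agnostic.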
