Treasury Secretary Scott Bessent and Fed Chair Jerome Powell called bank executives to Washington this week, encouraging them to test Anthropic's new Mythos model for detecting security vulnerabilities. JPMorgan Chase is the only officially announced partner, but Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are reportedly testing the model as well, according to Bloomberg.
This Treasury push creates a bizarre contradiction at the heart of government AI policy. While one arm of the Trump administration is actively promoting Anthropic's technology to the financial sector, the Department of Defense has designated the company a supply-chain risk. That designation followed Anthropic's refusal to give the government unrestricted access to its models, a dispute that has since landed in court and remains unresolved. The mixed signals show agencies taking wildly different approaches to AI security and national interests.
Anthropic's marketing around Mythos, which claims the model is "too good" at finding vulnerabilities despite never being trained for cybersecurity, smells like enterprise sales theater. The company is limiting access partly because of these supposed capabilities, but that restriction conveniently creates scarcity that drives demand. Meanwhile, U.K. financial regulators are reportedly discussing the risks Mythos poses, a sign of international concern about a model the U.S. Treasury is simultaneously championing.
For developers working in financial services, this creates a complicated landscape. Banks are clearly hungry for AI-powered security tools, but the regulatory environment remains fractured. If you're building in this space, expect continued policy whiplash as different government agencies figure out whether AI companies are partners or threats.
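To make the developer angle concrete, here is a minimal sketch of what an AI-assisted vulnerability triage call might look like. It assumes the standard Anthropic Messages API via the `anthropic` Python SDK; the `mythos` model identifier is a placeholder drawn from this article rather than a published model ID, and the vulnerable snippet is invented for illustration.

```python
import os

import anthropic  # pip install anthropic

# Placeholder client setup; expects ANTHROPIC_API_KEY in the environment.
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

# Invented example code with a deliberate flaw: user input interpolated
# directly into SQL, a classic injection vulnerability.
SNIPPET = '''
def transfer(conn, src, dst, amount):
    conn.execute(f"UPDATE accounts SET balance = balance - {amount} WHERE id = '{src}'")
    conn.execute(f"UPDATE accounts SET balance = balance + {amount} WHERE id = '{dst}'")
'''

response = client.messages.create(
    model="mythos",  # placeholder name from the article, not a published model ID
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "Review this code for security vulnerabilities and rank them "
            f"by severity:\n{SNIPPET}"
        ),
    }],
)

# The Messages API returns a list of content blocks; print the text ones.
for block in response.content:
    if block.type == "text":
        print(block.text)
```

Nothing here is Mythos-specific; the integration surface for these tools is small, which is exactly why the policy environment, not the engineering, is the hard part.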
