Treasury officials are pushing 12 major banks to pilot Anthropic's Mythos model for financial services. The Pentagon classified Anthropic as a supply-chain security risk three weeks ago. Same government, opposite conclusions.
Key Takeaways
- Treasury and banking regulators are pushing 12 major banks to test Anthropic's Mythos model despite March 15 DOD security classification
- Banks with $2.8 trillion in combined assets face conflicting federal guidance on AI vendor selection
- Policy coordination breakdown threatens to delay banking AI implementations by six months
The Contradiction Emerges
Treasury Department officials have been privately encouraging financial institutions to explore Anthropic's newly released Mythos model, according to sources familiar with the conversations. The push comes as part of a broader initiative to accelerate AI adoption in financial services — specifically, to counter Chinese fintech firms, which have captured 34% of the cross-border payments market over the past year.
The Pentagon sees it differently. DOD's March 15, 2026 assessment — obtained through FOIA requests — flags concerns about Anthropic's training data sourcing and potential for adversarial manipulation in critical infrastructure applications. The classification specifically targets "potential supply-chain vulnerabilities that could be exploited by foreign adversaries."
"We're seeing two completely different risk assessments from the same government," said Dr. Sarah Chen, director of AI policy at the Georgetown Center for Security and Emerging Technology. "Treasury is looking at economic competitiveness while DOD is focused on operational security. The disconnect is unprecedented."
What makes this particularly striking: both assessments are looking at the same technology, the same company, the same potential use cases. Yet Treasury sees competitive necessity where Defense sees national security threat.
Banking Sector Caught in Middle
JPMorgan Chase, Bank of America, and Wells Fargo have reportedly received informal encouragement to begin pilot programs with Anthropic's technology. Simultaneously, they're facing pressure to comply with federal cybersecurity frameworks that now classify the company as a potential risk.
Internal communications from one unnamed major bank show executives expressing frustration with the "impossible position" created by competing federal directives. The bank's risk management team had already allocated $12 million for AI vendor evaluation when the DOD classification emerged, forcing a complete reassessment.
The financial sector lost $47 billion to fraud and operational inefficiencies in 2025. Industry analysts project that banks implementing advanced AI systems could reduce operational costs by 23% while improving fraud detection capabilities. The question: is Anthropic worth the regulatory risk?
But the deeper issue isn't about one vendor. It's about whether U.S. financial institutions can innovate at the speed required to compete globally when their own government can't decide what constitutes acceptable risk.
Supply Chain Security Concerns
The Pentagon's concerns center on three specific risk vectors: potential data exfiltration through model interactions, backdoor vulnerabilities in training pipelines, and insufficient transparency in Anthropic's security practices. These align with broader federal efforts outlined in Executive Order 14110 on AI safety and security.
Defense officials point to Anthropic's data partnerships and training infrastructure as potential exploitation points. The assessment notes that U.S. financial institutions hold $28 trillion in assets — making any systemic vulnerability a potential national security threat, not just a commercial risk.
Anthropic has pushed back, with company officials arguing that its constitutional AI approach and safety measures exceed industry standards. The company's security team has offered additional documentation to federal agencies, though the details remain classified.
What most coverage misses: this isn't really about Anthropic's technology. It's about the absence of coherent federal standards for AI risk assessment in critical infrastructure.
Policy Coordination Breakdown
The National AI Initiative Office was designed to harmonize policy across agencies. Sources indicate it's failing. Economic and security agencies operate with fundamentally different risk tolerance frameworks — and no clear arbitration mechanism when they conflict.
Treasury officials view rapid AI adoption as essential for preventing Chinese technological dominance in fintech. Beijing's state-backed AI companies have captured 40% of global fintech patent filings in the past two years. Defense officials prioritize long-term security considerations, particularly given how deeply financial systems are interconnected with national security infrastructure.
This fits a broader pattern we've tracked in AI security vulnerabilities across enterprise model integrations: agencies issue contradictory guidance, the private sector freezes deployment, and competitive advantage flows to less regulated markets.
The result: most major banks have paused new AI vendor evaluations until federal guidance aligns. Estimated delay: six months. Cost to competitive positioning: potentially irreversible.
What Comes Next
Industry observers expect an interagency working group within 30 days — Treasury, DOD, Federal Reserve, and CISA representatives attempting to reconcile incompatible risk frameworks. Historical precedent suggests such efforts produce lowest-common-denominator guidance that satisfies no one.
Financial institutions are adopting defensive positions. Risk management teams are defaulting to "no" on advanced AI deployments rather than navigating contradictory federal requirements. This cautious stance could delay critical infrastructure modernization precisely when global competitors are accelerating.
The resolution will set precedents for how the federal government balances economic competitiveness against security concerns in emerging technology sectors. As AI systems become increasingly integrated into critical infrastructure, such coordination challenges will multiply.
Either the administration establishes clear AI governance frameworks that can balance innovation with security, or U.S. financial institutions will continue falling behind competitors operating under more coherent regulatory regimes. The next 90 days will determine which path we're on.