The Trump administration is telling banks one thing about Anthropic while the Pentagon says the opposite. Treasury officials are quietly pushing major financial institutions to pilot Anthropic's Mythos model for fraud detection. The Department of Defense classified the same company as a supply-chain risk six weeks ago.

Key Takeaways

  • Treasury encourages 12 major banks to test Anthropic Mythos despite DoD supply-chain classification
  • Banks spending $50 million collectively on compliance frameworks while waiting for policy clarity
  • JPMorgan and Bank of America demand written clarification before proceeding with AI testing

The Policy Contradiction

Treasury's Office of the Comptroller of the Currency has hosted closed-door briefings with 12 major banks since early March, presenting Mythos as transformative for financial crime detection. The model, released in limited beta in February 2026, outperforms Claude-3 on fraud-detection benchmarks.

But DoD restrictions bar defense agencies and contractors from procuring Anthropic services for sensitive operations. The classification cites Anthropic's $4 billion AWS partnership and investments from entities with foreign government ties.

"We have a situation where one part of the government is saying this company poses national security risks, while another is actively promoting their technology to critical infrastructure operators," said Sarah Chen, AI policy analyst at Georgetown's Center for Security and Emerging Technology.

What most coverage misses: this isn't really about Anthropic. It's about the administration's fundamental inability to coordinate AI policy across agencies when financial innovation conflicts with security protocols.

Banking Sector Response

JPMorgan Chase and Bank of America requested written clarification from both departments before proceeding with any Anthropic testing, according to industry sources. They're not alone.

"We need clear, consistent guidance from the federal government before we can responsibly deploy any AI system that handles customer financial data." — Michael Rodriguez, Chief Technology Officer at Regional Banking Consortium

Three regional Fed banks issued informal guidance suggesting institutions document AI vendor risk assessments more thoroughly — code for 'we don't know what Treasury is doing either.' Smaller community banks face worse problems: many fintech providers have integrated Anthropic models into white-label fraud detection services without disclosure.

The Federal Reserve's supervision teams are reviewing existing AI deployment guidelines to determine whether banks using Anthropic models could face regulatory scrutiny. Translation: nobody wants to be the regulator who approved the AI system that caused the next financial incident.

Security Implications and Foreign Investment Concerns

Pentagon officials worry about more than data residency. AI models processing financial transactions create maps of American economic activity — exactly what foreign adversaries want.

"When you're analyzing millions of banking transactions for fraud patterns, you're essentially creating a map of economic activity that could have strategic value to foreign adversaries," explained David Kim, former NSA analyst now at the Atlantic Council. Intelligence community assessments suggest Anthropic's funding structure includes investments from entities under CFIUS review, though details remain classified.

The timing isn't coincidental. The restriction follows broader scrutiny of Chinese-backed AI investments, and of security vulnerabilities that make model testing itself a new attack vector.

But here's what security analysts aren't saying publicly: the same pattern recognition that makes Mythos effective at fraud detection makes it valuable for economic intelligence gathering.

Industry Impact and Regulatory Uncertainty

Banks are spending an estimated $50 million collectively on AI governance frameworks while waiting for Washington to figure itself out. Several fintech companies delayed product launches incorporating Anthropic models. Others are scrambling to find alternative providers that don't face similar restrictions.

The American Bankers Association submitted a formal request for interagency coordination, noting that continued uncertainty could push banks toward less capable but compliant alternatives. That means weaker fraud detection — exactly what nobody wants during a period of increasing financial crime.

Regulatory compliance teams are hiring additional legal and risk personnel. The situation mirrors previous conflicts over cryptocurrency regulation, but AI deployment in banking carries higher stakes due to potential impacts on both national security and financial stability.

The deeper issue: banks need AI capabilities to compete, but they can't afford to guess wrong on federal policy.

What Comes Next

The National Security Council will lead interagency meetings to establish unified AI governance principles, though no timeline for resolution exists. Banking regulators face pressure to issue guidance before April 30, when several major banks plan to begin expanded AI testing programs.

Without resolution, institutions will likely default to the more restrictive DoD interpretation — effectively blocking Anthropic adoption across financial services. The House Financial Services Committee requested briefings on AI policy coordination, which could accelerate resolution but highlights the administration's coordination problems.

Congressional oversight might force faster decisions, but it won't solve the underlying problem: an administration that hasn't decided whether it wants to promote AI innovation or restrict it. The next 90 days will determine whether America's financial sector leads in AI deployment or gets paralyzed by its own government's mixed signals.