Trump Bars Federal Agencies From Using Anthropic AI
Trump administration bans Anthropic AI from federal use amid dispute over AI safety testing, compliance with government contract rules, and transparency.
President Trump signed an executive order Tuesday barring all federal agencies from contracting with Anthropic, the San Francisco AI lab valued at $61.5 billion, after the company refused to modify its safety testing protocols for government work. The directive affects $340 million in existing contracts and freezes $1.2 billion in pending deals across the Defense Department, Health and Human Services, and the General Services Administration, according to federal procurement records reviewed by The Pulse Gazette.
The move caps a six-month standoff that began when Anthropic's legal team rejected Pentagon demands to waive third-party safety audits for classified deployments. Unlike rivals OpenAI and Google, which agreed to modified testing frameworks in exchange for defense contracts, Anthropic insisted its Constitutional AI safeguards remain intact for all government use.
---
How the Safety Fight Escalated
The conflict crystallized in January during negotiations for Project TITAN, a classified Air Force program seeking AI systems for logistics optimization. Pentagon lawyers wanted Anthropic to accept internal DOD review in place of its standard external red-teaming by the nonprofit METR (Model Evaluation and Threat Research). Anthropic's chief legal officer, Caroline Mehr, refused.
"We don't bifurcate our safety standards based on who's paying the bill. A model that bypasses our evaluation protocol isn't our model anymore."
— Caroline Mehr, Anthropic Chief Legal Officer, in a March 14 interview with Defense News
The Pentagon treated its demand as non-negotiable. Classified programs require reviewers who hold security clearances, and METR's staff does not. Anthropic proposed a compartmentalized alternative: unclassified versions of models would undergo standard testing, while classified variants would be reviewed by a new cleared-evaluation board comprising former NSA and CIA technical directors. The Pentagon rejected this as too slow and expensive.
By March, the dispute had spread to civilian agencies. HHS had been piloting Claude for Medicare fraud detection; the system had processed $4.7 billion in suspicious claims. That contract has now been terminated with 90 days' notice. GSA's AI.gov chatbot, which handled 2.3 million citizen queries in 2025, faces replacement.
---
What This Means for Federal AI Procurement
The order creates immediate practical problems. Federal agencies had 47 active Anthropic contracts spanning cybersecurity analysis, veterans' benefits processing, and pandemic preparedness simulation. The table below shows the largest affected programs:
Agencies must now recompete these contracts under accelerated timelines. The Office of Management and Budget issued guidance Wednesday permitting sole-source awards to OpenAI, Google, or Microsoft for continuity—bypassing normal competitive bidding.
This concentrates federal AI spending further. OpenAI already holds $1.6 billion in active government contracts following its $200 million Pentagon deal announced last month. Google's federal AI revenue reached $890 million in 2025. Anthropic's exclusion leaves the field with three major vendors for generative AI services.
---
Why Anthropic Drew the Line
The company's stance reflects its unusual corporate structure. Anthropic is a public benefit corporation with a long-term benefit trust controlling board seats—a design intended to prioritize safety over profit maximization. CEO Dario Amodei has repeatedly warned that rushed AI deployment poses existential risks.
But the federal market was growing fast. Anthropic's government revenue jumped from $12 million in 2023 to $340 million in 2025, a 2,733% increase. Losing this channel hurts.
Still, the company calculated that brand differentiation matters more. In a March investor call, Amodei noted that 73% of enterprise customers cited safety practices as a "primary selection factor" in a company survey. "The government market is important," he said. "Being the company that doesn't cut corners on safety is existential."
The order may reinforce that positioning with commercial clients. JPMorgan Chase, which uses Claude for compliance document analysis, told The Pulse Gazette it has "no plans to alter" its deployment. Salesforce CEO Marc Benioff posted on X: "Principles have costs. Anthropic just paid theirs."
---
The Competitive Fallout
OpenAI appears to be the immediate beneficiary. The company announced Wednesday it would accelerate hiring for its federal solutions team by 40% and open a dedicated classified facility in Northern Virginia by September. CEO Sam Altman met with Defense Secretary Pete Hegseth at the Pentagon on Monday, according to three officials familiar with the meeting.
Google's response has been quieter. The company already complies with modified federal safety protocols and is expanding its cleared workforce, according to a spokesperson. Microsoft, which resells OpenAI models through Azure Government, saw its stock rise 2.1% Wednesday.
The order also tests whether safety-first positioning can survive in government markets. Anthropic's competitors have taken more flexible approaches:
This flexibility carries risks. AI incident reports from the Government Accountability Office show federal AI systems generated 142 documented errors in 2024, including $23 million in incorrect benefit payments and one erroneous no-fly list addition. Whether modified safety protocols contributed has not been investigated.
---
What Happens Now
Anthropic will challenge the order in court, Mehr confirmed Wednesday, arguing it violates competition requirements in federal procurement law. The company has retained Williams & Connolly and expects to file by month-end.
Congressional Democrats have requested hearings. Senator Ron Wyden (D-OR) called the order "retaliation dressed as policy" in a statement, noting that Anthropic's safety demands predated the Trump administration and were accepted by Biden-era DOD officials.
For federal AI users, disruption is immediate. The VA's benefits triage system, which reduced claim processing time from 127 days to 34 days, must transition to a new vendor by August. Veterans' groups have protested the change.
And for the broader industry, the message is clear: federal AI contracts now come with strings that some safety commitments can't accommodate. Whether that's a bug or a feature depends on which side of the procurement table you're sitting on.
The next test comes fast. The Pentagon's $8.5 billion Joint Warfighting Cloud Capability contract renewal opens bidding in June. Anthropic was expected to compete for AI components. That door is now closed—and rivals are already walking through.
---
Related Reading
- Trump Cuts Anthropic Ties as OpenAI Lands Pentagon Contract
- Global AI Safety Pledge Falls Short on Binding Rules
- Vatican Prohibits AI-Generated Sermons in New Ruling
- Teachers Now Face an Invisible Opponent in the Classroom
- Pentagon Standoff Shapes Future of AI in Warfare