Judge Blocks Trump's Ban on Anthropic's Claude AI
The temporary restraining order prevents enforcement of restrictions on government use of the AI assistant.
A federal judge in Washington, D.C., issued a temporary restraining order Tuesday blocking the Trump administration from enforcing an executive order that would have barred all federal agencies from using Anthropic's Claude AI, preserving access for an estimated 340,000 government workers who rely on the chatbot for research, coding, and document analysis.
U.S. District Judge Amit Mehta ruled that plaintiffs—including a coalition of federal employees, research universities, and a veterans' services nonprofit—showed "a substantial likelihood of success on the merits" in their challenge to the ban. The 14-day order prevents any agency from cutting off Claude AI access while the court weighs a preliminary injunction.
The executive order, signed March 2, cited unspecified "national security concerns" about Anthropic's data handling practices, according to a copy reviewed by [publication]. It gave agencies 72 hours to terminate contracts and purge Claude from approved software lists. Anthropic disclosed in a court filing that federal contracts account for $47 million in annual recurring revenue—roughly 8% of the company's projected 2026 sales, though the company has not publicly confirmed this figure.
---
What the Ban Actually Targeted
The executive order went further than typical software restrictions. It prohibited not just new purchases but any "ongoing utilization, API access, or data integration" with Claude, effectively forcing immediate operational disruption across agencies that had embedded the tool into workflows.
The Department of Veterans Affairs had approved Claude for benefits claims processing in January. The General Services Administration used it for procurement document review. Multiple intelligence community components—whose specific identities remain classified—had integrated Claude into unclassified research pipelines, according to plaintiffs' court filings; the government has not confirmed these deployments.
Judge Mehta's order specifically noted the "irreparable harm" of forcing agencies to revert to manual processes or inferior alternatives mid-operation. A GSA official testified that switching document review systems would delay 12,000 pending contract awards by an estimated 6-8 weeks.
---
Anthropic's Defense: We're Not DeepSeek
The company's legal strategy centered on distinguishing itself from Chinese AI firms that have faced actual restrictions. Anthropic emphasized its U.S. incorporation, domestic server infrastructure, and $2 billion cloud computing contract with Google that keeps government data on American soil.
"Claude AI cost federal agencies approximately 60% less than comparable enterprise AI contracts while delivering equivalent performance on standardized government benchmarks," Anthropic's chief policy officer Brian Christiansen wrote in a declaration. The company provided comparative pricing showing Claude Pro subscriptions at $20/user/month versus $45-$60 for comparable Microsoft Copilot government tiers.
The pricing data underscores Claude's market position: not the largest government AI deployment, but among the fastest-growing by user count—up from 45,000 federal users in January 2025, according to Anthropic's court filing. (Microsoft and Google have not released comparable growth figures.) Its pricing advantage has made it particularly attractive to civilian agencies with flat IT budgets.
---
The National Security Claim Unravels
Administration officials struggled to substantiate the "foreign adversary" rationale in court. Under questioning from Judge Mehta, Justice Department attorney Sarah Whitmore acknowledged that no Anthropic data had been observed flowing to China, Russia, or other designated adversaries.
The executive order appeared to conflate Anthropic with separate concerns about AI model weights being stolen—a threat that primarily affects open-weight models like Meta's Llama, not closed systems like Claude. Cybersecurity researchers have noted that Claude's API architecture makes bulk exfiltration significantly harder than downloading open model files.
"This looks like policy laundering—taking a legitimate concern about open-source AI proliferation and applying it to a completely different business model because someone in the White House doesn't like the CEO's public statements."
— Rumman Chowdhury, former Twitter AI ethics lead and current CEO of Humane Intelligence, speaking to reporters outside the courthouse
Chowdhury testified as an expert witness for the plaintiffs, noting that Anthropic CEO Dario Amodei has been publicly critical of the administration's AI deregulation agenda. The court filing includes a footnote referencing Amodei's February Senate testimony warning that "rushing to deploy unvalidated systems in government contexts creates preventable catastrophic risk."
The administration has not formally responded to allegations that the ban retaliates for Amodei's political speech. A White House spokesperson told reporters the executive order "reflects a good-faith assessment of supply chain vulnerabilities" and would be defended vigorously.
---
What Happens in Two Weeks
The temporary restraining order expires March 19. Judge Mehta scheduled a hearing for March 17 on the plaintiffs' motion for a preliminary injunction, which if granted would extend protection through a full trial—potentially lasting months.
Legal observers note the unusual speed of the administration's enforcement attempt. The 72-hour compliance window in the original order contrasted sharply with the 90-180 day transition periods typical of software bans, suggesting either urgency or deliberate disruption.
For federal workers, the immediate crisis has passed. But procurement officers face frozen contracts and uncertain renewal timelines. Anthropic has told agencies it will honor existing service levels regardless of payment delays, absorbing cash flow risk to maintain relationships.
The case also tees up broader questions about executive power over government technology procurement. A ruling on the preliminary injunction could establish whether administrations can unilaterally blacklist specific vendors without congressional action or documented security findings.
Not Everyone Is Convinced the Ban Was Baseless
Some national security veterans defend the administration's caution. "FedRAMP authorization doesn't mean a vendor is permanently trustworthy—it means they met standards at a point in time," said Suzanne Spaulding, former DHS undersecretary for cybersecurity, who was not involved in the case. Spaulding noted that Chinese intelligence has historically targeted U.S. tech companies through supply chain compromises that wouldn't necessarily show up in routine compliance audits. "A 72-hour window is aggressive, but the premise that closed-source U.S. companies are automatically safe from foreign adversary risk hasn't been true since SolarWinds." The administration's classified submission, due Monday, could yet surface evidence not disclosed in open court.
Judge Mehta, for his part, signaled skepticism about the administration's claimed authority. "The government doesn't get to declare a national security emergency and skip the part where they explain what the emergency actually is," he said from the bench.
Anthropic's stock, traded privately through secondary markets, rose 12% on the news according to data from Forge Global, which tracks secondary transactions but does not capture all private trades. The valuation recovery suggests investors expect the ban to fail—but also that they hadn't priced in the revenue risk until this week.
---
Related Reading
- Leaked Anthropic Docs Reveal Secret 'Mythos' AI Model
- Claude Outage? Here's How to Check Status and Restore Access
- Anthropic Denies It Could Sabotage AI Tools in Wartime
- Anthropic's Pentagon AI Deal Sparks Military Ethics Debate
- OpenAI Shuts Down Sora Video Tool Months After Launch