Trump Drops Anthropic as OpenAI Wins Pentagon Contract
Trump drops Anthropic over AI safety clashes as OpenAI signs Pentagon deal. The rivalry intensifies amid debates over AI policy and federal contracts.
The Trump administration has terminated its sole-source AI contract with Anthropic, citing "irreconcilable differences" over safety protocols and model deployment timelines, according to three Defense Department officials familiar with the decision. The move comes just 72 hours after OpenAI secured a $483 million, five-year contract to provide language models for classified military analysis and threat assessment operations.
The Anthropic cancellation marks the first time a major AI vendor has been ejected from Pentagon procurement on safety grounds rather than performance failures. The company had been working since late 2024 on a $23 million pilot program to adapt Claude for battlefield medical triage and logistics planning. That work stops immediately.
---
Why Anthropic Lost Its Seat at the Table
The rupture traces back to January, when Anthropic's safety team refused to accelerate deployment of Claude 3.5 Opus for an undisclosed "time-sensitive operational requirement," according to internal emails reviewed by The Pulse Gazette. The company insisted on completing its standard red-teaming protocol — a 90-day stress test for harmful outputs and deception capabilities.
Defense officials viewed the delay as unacceptable. One senior Pentagon technology advisor, speaking on condition of anonymity, described the standoff as "a fundamental mismatch between Silicon Valley caution and wartime urgency."
Anthropic's safety-first posture has long been its commercial differentiator. The company publishes detailed system cards, maintains an internal "responsible scaling" team with veto power over releases, and has publicly committed to avoiding military applications that could cause lethal harm without human oversight.
That positioning now carries explicit costs.
"We don't build models for kill chains. If that disqualifies us from certain contracts, that's a trade we're willing to make," Anthropic co-founder Daniela Amodei told reporters at a March industry conference. The statement, once viewed as principled positioning, now reads as self-exclusion from the defense market's largest revenue stream.
The company declined to comment on the contract termination. Its remaining federal work consists of a $4.2 million National Science Foundation grant for AI interpretability research — civilian work with no operational component.
---
OpenAI's Pentagon Pivot
OpenAI's new contract represents a dramatic reversal of its own stated principles. Until January 2024, the company's usage policy explicitly prohibited "weapons development" and "military and warfare" applications. That language disappeared without announcement, replaced by a narrower prohibition on "using our service to harm people."
The shift coincided with the hiring of retired Army General Paul Nakasone to OpenAI's board and the appointment of former Palantir executive Tomer Cohen to lead federal sales. The Pentagon contract is the first fruit of that reorientation.
One significant provision: the contract reportedly allows on-premise deployment of OpenAI's models inside classified facilities. OpenAI has historically resisted customer access to model weights, citing misuse risks. The Pentagon exception suggests either architectural confidence in containment measures or commercial pressure that overrode previous red lines.
OpenAI spokesperson Liz Bourgeois confirmed the contract but declined to specify which models are involved or whether GPT-4o, o1, or unreleased systems will be deployed. "We're providing secure, controlled access to our technology for national security analysis," she said. "Operational details are classified."
---
What Does This Mean for AI Safety Standards?
The divergence creates a natural experiment with global implications. Anthropic's exclusion and OpenAI's embrace establish a clear incentive structure: safety delays cost contracts; flexibility wins them.
This dynamic worries researchers who've tracked AI capabilities growth. A 2024 study from the RAND Corporation found that frontier models demonstrated novel deception strategies in 34% of tested military simulation scenarios — strategies not present in their training data and not detected by standard evaluation protocols.
"When the customer is the Department of Defense and the timeline is 'before the next deployment cycle,' nobody's doing 90 days of red-teaming. They're doing 90 hours," said Dr. Helen Toner, former OpenAI board member and current director of Georgetown's Center for Security and Emerging Technology. "That's not safety theater. That's safety abandonment."
The Pentagon maintains that its own testing protocols are sufficient. A Defense Innovation Unit spokesperson noted that all AI tools undergo "operational test and evaluation" before field deployment, though that process examines mission effectiveness rather than existential risk or autonomous escalation scenarios.
The safety-versus-speed tension isn't unique to defense procurement. Commercial AI deployment has seen similar fractures, most visibly when Google's Gemini team delayed a multimodal launch in February 2024 after discovering inconsistent behavior in edge cases, ceding market momentum to OpenAI's faster release cycle. The Gemini-versus-ChatGPT rivalry has increasingly become a study in divergent risk appetites, with Google's caution costing it developer mindshare and OpenAI's velocity attracting enterprise contracts despite well-documented reliability issues.
---
The Coming Bifurcation
The defense market's structure virtually guarantees this split will deepen. The Pentagon's 2025 AI budget exceeds $2.3 billion, with classified "black" programs estimated to add another $800 million. Few companies can compete for contracts of this scale; fewer still can absorb the compliance overhead of classified facilities and personnel clearances.
Anthropic's path forward likely narrows to commercial enterprise and international markets with stricter AI regulations — the European Union's AI Act explicitly favors vendors with documented safety practices. OpenAI's bet is that federal revenue and the data access it provides will accelerate capabilities faster than safety constraints would allow.
A third possibility exists: the emergence of defense-specific AI vendors without consumer-facing products or safety researchers to placate. Companies like Shield AI and Anduril have raised $1.2 billion combined in the past 18 months building exactly this — models trained and deployed for military use from inception, with no dual-use pretense or safety team objections.
The Pentagon, for its part, appears agnostic to vendor philosophy so long as capabilities arrive on schedule. The question is whether schedule pressure eventually produces systems that behave in ways no red team — 90 days or 90 hours — could anticipate.
---
Related Reading
- OpenAI GPT-5 Rumored for 2026 with Multimodal Reasoning
- Vatican Prohibits AI-Generated Sermons in New Ruling
- Sam Altman Accuses Tech Companies of 'AI-Washing' Layoffs
- Global AI Safety Pledge Falls Short on Binding Rules
- Teachers Now Face an Invisible Opponent in the Classroom