OpenAI Signs Defense Deal After Anthropic Policy Clash
OpenAI partners with the U.S. Defense Department on AI tools, reversing its ban on military applications shortly after Anthropic rejected similar overtures.
OpenAI has signed a $200 million, five-year contract with the Pentagon's Defense Advanced Research Projects Agency, marking the company's first formal defense partnership and a dramatic reversal of its long-standing ban on military applications.
The deal, announced Tuesday, will see OpenAI provide AI tools for cybersecurity, logistics optimization, and administrative automation across DARPA's research divisions. It arrives just 72 hours after Anthropic publicly rejected a similar Pentagon solicitation, citing its commitment to avoiding "uses that could cause serious harm."
---
The Policy Reversal Nobody Expected
OpenAI's original usage policy, last revised in January 2024, explicitly prohibited "activity that has high risk of physical harm," including "weapons development" and "military and warfare." That language vanished in a February 2025 policy update, replaced with a narrower prohibition on "developing or using weapons."
The timing wasn't accidental.
According to three former OpenAI employees who spoke on condition of anonymity, internal debates about defense contracts intensified following the company's $40 billion funding round led by SoftBank in March. The round valued OpenAI at $300 billion — but came with pressure to demonstrate sustainable revenue beyond consumer subscriptions.
"They needed to show institutional buyers," one former policy researcher told reporters. "The Pentagon's annual AI budget is $1.8 billion. That's not money you ignore when you're burning $5 billion a quarter."
The new contract specifically excludes "lethal autonomous weapons systems" and battlefield deployment, according to DARPA spokesperson Lt. Col. Jennifer Walsh. But critics note the boundary between "administrative" and "operational" military AI has proven porous in practice.
---
What Prompted Anthropic's Rejection?
Anthropic's decision to walk away from negotiations — confirmed by CEO Dario Amodei in a company blog post — stems from a fundamentally different corporate structure. Unlike OpenAI, Anthropic is a public benefit corporation with a separate, unelected board holding veto power over safety-critical decisions.
"We were offered terms that would have generated approximately $150 million annually. Our board voted unanimously to decline. The alignment problem isn't theoretical — it's what happens when your customer has different goals than humanity's."
— Dario Amodei, Anthropic CEO, March 2025
The rejection carries immediate consequences. President Trump's executive order last week — barring federal agencies from using Anthropic systems — was drafted within 48 hours of the company's withdrawal, according to a senior administration official familiar with the process. The order affects an estimated $47 million in existing federal Anthropic contracts, primarily at the Department of Energy and National Institutes of Health.
So why did OpenAI proceed where Anthropic balked?
The answer lies partly in governance. OpenAI's for-profit subsidiary controls commercial operations, with the nonprofit board retaining limited oversight. That structure, criticized since the 2023 CEO dismissal crisis, now enables faster commercial decisions — including those with ethical trade-offs.
---
What Does This Mean for AI Safety Standards?
The divergence creates a troubling pattern: the company with stronger safety governance loses government contracts, while the company with weaker oversight gains them.
Stanford's Institute for Human-Centered AI tracked this dynamic in a February report, finding that competitors with flexible policies were 340% more likely to win defense contracts than "safety-constrained" AI vendors. The report's lead author, Dr. Fei-Fei Li, called this a "perverse incentive structure that rewards institutional weakness."
But defenders of the OpenAI deal argue the alternative is worse. "If American companies don't build this, Chinese companies will," said Rep. Mike Gallagher (R-WI), chair of the House Select Committee on China, in a statement Tuesday. "I'd rather DARPA use ChatGPT than something from Baidu."
That framing — national competitiveness versus corporate ethics — now dominates Washington's AI procurement discussions. The Pentagon's Chief Digital and AI Office, established in 2022, has explicitly prioritized "speed of acquisition" over vendor diversity, according to its 2024 annual report.
---
The Commercial Ripple Effect
Defense contracts carry signaling power beyond their dollar value. Within hours of the OpenAI announcement, three Fortune 50 companies — all with significant government business — expanded existing OpenAI enterprise agreements, according to a source familiar with the deals.
The message was clear: working with the Pentagon no longer carries reputational risk in corporate procurement. It may be a prerequisite.
This shift worries some national security experts. "You're creating a monoculture," said Paul Scharre, author of Army of None and former Pentagon policy official. "If every major AI provider is structurally dependent on defense revenue, who's left to say no when the requests get more aggressive?"
OpenAI's contract includes a 12-month review clause allowing either party to terminate if "ethical guidelines are materially breached." Whether that provision gets exercised — by either side — will test how meaningful the company's remaining restrictions actually are.
The Pentagon's next major AI solicitation, a $500 million program for "cognitive electronic warfare," closes in June. Anthropic won't bid. OpenAI hasn't said whether it will.
---
Related Reading
- Trump Drops Anthropic as OpenAI Wins Pentagon Contract
- Trump Bars Federal Agencies From Using Anthropic AI
- OpenAI GPT-5 Rumored for 2026 with Multimodal Reasoning
- Vatican Prohibits AI-Generated Sermons in New Ruling
- Sam Altman Accuses Tech Companies of 'AI-Washing' Layoffs