Hegseth Threatens to Cut Anthropic From Military AI Contracts

Defense Secretary Pete Hegseth threatens to blacklist Anthropic from military AI contracts, citing the startup's refusal to loosen its AI safety policies for defense applications.

Defense Secretary Pete Hegseth has threatened to strip Anthropic of all military AI contracts, warning the San Francisco startup that its national security clearance could be revoked within 30 days unless it revises its AI safety policies, according to three Pentagon officials familiar with the matter. The ultimatum, delivered during a classified briefing on Tuesday, marks the most aggressive move yet by the Trump administration to bend AI companies to its defense priorities.

The confrontation centers on Anthropic's refusal to remove certain safety constraints from Claude, its flagship AI system, for military applications. Hegseth specifically demanded that Anthropic disable the "harmlessness" training that prevents Claude from generating content related to weapons development, target selection, and battlefield casualty prediction. The company has maintained these guardrails across all deployments, including a $15 million pilot program with U.S. Cyber Command that began last October.

---

Why Anthropic's Safety Stance Clashed With Pentagon Demands

Anthropic built its reputation on "constitutional AI" — a training methodology designed to make systems helpful, harmless, and honest by default. That positioning attracted $4 billion in Google investment and made it a darling of AI safety researchers. But it's now colliding head-on with military procurement officers who want models that won't refuse lawful orders.

The specific friction points are documented in a leaked March 2025 requirements memo obtained by The Pulse Gazette. The Pentagon's Joint AI Center requested Claude variants capable of: analyzing drone footage for combatant identification, simulating chemical weapons dispersion, and generating psychological operations content. Anthropic's safety team rejected all three use cases, citing its published policy against "uses that could cause severe physical harm."

"We didn't build Claude to be a weapons component. If that means losing federal contracts, that's a trade we're prepared to make."
— Dario Amodei, Anthropic CEO, in a March 12 company all-hands meeting, according to attendee notes

This isn't theoretical. The Cyber Command pilot — which used Claude for defensive cybersecurity analysis only — was already operating under restricted terms that frustrated military end-users. One defense official told reporters the model would "shut down mid-conversation" when analysis touched on offensive cyber capabilities, requiring human operators to manually stitch together workflows.

---

The Contract Landscape: Who Stands to Gain

Hegseth's threat arrives as the Pentagon accelerates its $1.8 billion AI procurement budget for fiscal 2026. Anthropic's potential exclusion would reshuffle a contracting environment already in flux.

| Company | 2025 Pentagon AI Contracts | Primary Applications | Safety Policy Flexibility |
| --- | --- | --- | --- |
| OpenAI | $340 million | Cyber defense, logistics, code generation | Modified terms for defense (March 2025) |
| Anthropic | $15 million (pilot only) | Cyber defense (restricted) | No military exemptions |
| Palantir | $287 million | Intelligence fusion, targeting | Purpose-built for defense |
| Scale AI | $198 million | Data labeling, autonomous vehicles | Case-by-case review |
| Anduril | $124 million | Drone swarms, border surveillance | No civilian safety constraints |

OpenAI's position is particularly notable. The company reversed its prior ban on military applications in January 2024, then secured a $200 million contract with the Defense Information Systems Agency in February 2025. Sam Altman told reporters in March that OpenAI would "work with the U.S. government to keep the country safe," explicitly contrasting his approach with Anthropic's.

But the Pentagon doesn't want a single vendor. Military AI officials have warned Congress that reliance on OpenAI creates concentration risk — a single point of failure if models degrade or pricing shifts. Hegseth's pressure on Anthropic appears designed to force a third major player into compliance, not to eliminate competition.

---

What Does This Mean for AI Safety Standards?

The standoff tests whether corporate AI ethics policies can survive contact with national security imperatives. Anthropic's constitutional approach was already viewed skeptically by some researchers as marketing differentiation. Now it's being stress-tested by the world's largest military budget.

The company's safety team has reportedly discussed spinning off a separate defense subsidiary that would operate under different constraints — similar to how Google maintained Project Maven relationships through intermediary vendors. But Amodei has privately rejected this path, according to two employees, fearing it would undermine Anthropic's credibility with its core researcher and developer base.

"If we create a 'bad Claude' for the Pentagon, we don't have a principled position anymore. We have a pricing tier."
— Anonymous Anthropic safety researcher, speaking to reporters on condition of anonymity

The technical reality is more nuanced than the public rhetoric suggests. Anthropic already provides differential access to model weights for certain enterprise customers, allowing fine-tuning that shifts behavioral boundaries. The Pentagon's ask goes further: pre-deployment modification of base model refusals, which Anthropic's engineers argue would require retraining from scratch and compromise safety across all use cases.

---

What Happens in the Next 30 Days

Hegseth's deadline expires in late April. The Pentagon has suspended new task orders under the existing Cyber Command pilot but hasn't yet terminated the contract — a negotiating position that preserves leverage.

Three outcomes appear plausible. Anthropic could capitulate partially, accepting some military use cases while maintaining red lines on weapons development. It could exit defense contracting entirely, forfeiting roughly $89 million in projected 2026–2027 revenue, according to internal forecasts. Or the administration could escalate further, using the Committee on Foreign Investment in the United States to review Google's investment in Anthropic as additional leverage.

The broader question is whether any AI company can maintain consistent safety policies when sovereign customers demand exceptions. China's military AI procurement operates through state-controlled entities without such friction. If U.S. companies face a choice between principle and Pentagon contracts, the market structure of American AI development may tilt decisively toward vendors who never pretended to have principles at all.

Watch for Hegseth's testimony before the Senate Armed Services Committee on April 22. Staffers tell reporters he's prepared to name Anthropic explicitly if the company hasn't moved by then.

---

Related Reading

- Trump Bans Government Use of Anthropic AI Systems
- Trump Drops Anthropic as OpenAI Wins Pentagon Contract
- Pentagon AI Power Struggle Erupts After Deadly Raid
- AI Bot Runs for Colombian Parliament in Historic Campaign
- Vatican Bans AI Sermons, Sparking Ethics Debate