OpenAI Drops 'Safely' in Claude vs ChatGPT Race

OpenAI quietly removed 'safely' from its charter, fueling the Claude AI vs ChatGPT debate over divergent AI safety commitments and corporate prioritization.

OpenAI quietly edited its mission statement last week, removing the word "safely" from its founding commitment to develop artificial general intelligence for the benefit of humanity. The change, spotted by researchers monitoring OpenAI's charter documents, has reignited the debate over whether commercial pressure is winning out over caution. It also sharpens the stakes in the already intense Claude AI vs ChatGPT rivalry, where the two companies have staked out visibly different positions on what responsible AI development actually looks like.

The charter's stated mission reads: "OpenAI's mission is to ensure that artificial general intelligence benefits all of humanity," with earlier accompanying language committing the company to building AGI "safely." The revised version drops that qualifier entirely.

The Edit Nobody Announced

OpenAI made no public statement about the change. No blog post. No changelog. No press release. A researcher at the Center for AI Safety flagged the discrepancy after comparing archived versions of the page, according to reporting by Wired.

This isn't the first signal. OpenAI disbanded its Superalignment team — the internal group specifically chartered to solve long-term AI safety — in May 2024. Several senior safety researchers have left in the past 12 months, a pattern we've covered previously in our reporting on the multi-state regulatory probe triggered by that exodus.

"When you remove the word 'safely' from the mission, you're not just editing a webpage. You're signaling to every employee, every investor, and every regulator what the actual priority order is."
— Stuart Russell, Professor of Computer Science, UC Berkeley, speaking to the BBC

What makes this edit different from normal corporate language-cleanup is timing. OpenAI is currently in the middle of converting from a nonprofit-controlled structure to a for-profit public benefit corporation — a restructuring that gives investors cleaner claims on returns. The mission statement change landed in the middle of that legal transition.

---

Safety as Competitive Advantage: Where Anthropic Stands

Anthropic was founded explicitly because its founders — many of them former OpenAI researchers — believed safety wasn't being taken seriously enough. That origin story is now core to Anthropic's brand positioning, and it's the sharpest point of differentiation in the Claude AI vs ChatGPT comparison.

Claude's "character" training, Anthropic's Constitutional AI methodology, and its public-facing commitments to interpretability research all signal a company that believes caution is a feature, not a drag on deployment speed. Anthropic has published more safety research in peer-reviewed venues than OpenAI in each of the last two years, according to tracking by the AI Safety Index.

The commercial gap, though, remains real.

| Metric | OpenAI (ChatGPT) | Anthropic (Claude) |
| --- | --- | --- |
| Weekly active users | ~600 million | ~100 million |
| Enterprise customers | ~2 million businesses | ~500,000 businesses |
| API revenue (est. annualized) | $4.5B | $1.2B |
| Safety research papers (2024) | 11 | 27 |
| Founded | 2015 | 2021 |

OpenAI has roughly 6x Anthropic's user base and nearly 4x the estimated API revenue. The pressure to sustain that lead — against Anthropic, Google Gemini, Meta's Llama ecosystem, and a wave of open-source challengers — is enormous. And investors now have a clearer line of sight to profits.

What "Safe" Even Means in 2026

Here's the harder question: does removing the word "safely" change how OpenAI actually builds models, or is this just mission-statement cosmetics?

OpenAI still runs red-team testing before major releases. It still participates in voluntary White House AI safety commitments. Its models still ship with content filters and usage policies. The company would argue — and has argued, in statements to reporters — that its actions speak louder than website copy.

But critics say the language reflects internal prioritization, not just PR. When a safety team gets disbanded and a charter gets quietly edited in the same 12-month window, that's a pattern, not a coincidence.

"Mission statements are aspirational, sure. But they also structure internal decision-making. When you take 'safely' out, you've made it easier to deprioritize safety when it conflicts with speed."
— Yoshua Bengio, Scientific Director, Mila, quoted in Le Devoir

The regulatory picture is moving fast. The EU AI Act is now in partial enforcement. California's AB 2013 requires disclosure of training data. Three U.S. state attorneys general opened probes into OpenAI's safety governance earlier this year. OpenAI doesn't need a safety scandal — but it also can't afford to grow slowly.

---

What This Means for Businesses Choosing Between Claude and ChatGPT

For enterprise buyers, this shift matters more than it might for consumers. Companies deploying AI in healthcare, legal, and financial contexts are increasingly asking vendors for documented safety practices, not just benchmark scores.

Anthropic has leaned directly into this. Its Claude Enterprise tier includes detailed model cards, red-team disclosures, and explicit content policy documentation — materials that compliance teams can actually hand to auditors. OpenAI offers similar enterprise documentation, but its recent moves make that pitch harder to sustain with a straight face.

The Claude AI vs ChatGPT decision for enterprise teams used to be mostly about capability and cost. Now governance posture is a third variable, and it's gaining weight fast.

What to watch next: OpenAI's nonprofit board still technically holds oversight authority during the for-profit conversion — but for how long, and with what real power, is an open legal question that courts may answer before regulators do. If that structure dissolves, the last formal brake on pure commercial logic goes with it.

---

Related Reading

- Pentagon Threatens Anthropic Over AI Safeguards
- Claude Opus 4.6 Beats GPT-4 on Benchmarks
- Alibaba Qwen3.5 Challenges OpenAI's Edge
- OpenAI Safety Team Exit Raises Concerns
- AI Tool Governance: Why Transparency Isn't a UX Feature