When AI CEOs Warn About AI: Inside Matt Shumer's Viral "Something Big Is Happening" Essay

The HyperWrite CEO's 5,000-word warning drew 42 million views in three days. Here's why it struck a nerve, and why skeptics aren't buying it.

When Matt Shumer published "Something Big Is Happening" three days ago, the HyperWrite CEO probably didn't expect to ignite the internet's biggest AI debate of 2026. But by Wednesday morning, his 5,000-word warning about AI's imminent transformation of society had been viewed 42 million times on X, retweeted 18,000 times, and shared by everyone from progressive commentator Mehdi Hasan to conservative pundit Matt Walsh.

The essay's opening hooks readers with a chilling comparison: "Think back to February 2020," Shumer writes, asking readers to remember the weeks before COVID-19 lockdowns when a few people were paying attention to a spreading virus, but most weren't. "I think we're in the 'this seems overblown' phase of something much, much bigger than Covid."

The Credential That Gives Him Credibility

Shumer isn't a random blogger or AI doomsayer. He's the co-founder and CEO of OthersideAI, the company behind HyperWrite, a leading AI autocomplete tool used by hundreds of thousands of people. He's spent six years building AI products and investing in the space. His warning comes from inside the industry, which gives it weight — and also raises questions about his motives.

According to Shumer, the AI models released on February 5th — OpenAI's GPT-5.3 Codex and Anthropic's Claude Opus 4.6 — represent a fundamental leap in capability. Not incremental progress, but a qualitative shift in what AI can do autonomously.

"I am no longer needed for the actual technical work of my job," Shumer writes. "I describe what I want built, in plain English, and it just... appears. Not a rough draft I need to fix. The finished thing."

He provides a concrete example: he describes an app he wants built, specifies its functionality and rough aesthetics, then walks away from his computer for four hours. When he returns, the AI hasn't just written tens of thousands of lines of code. It has tested the app itself, clicking through buttons, evaluating the user experience, identifying issues, and iterating on the design until it decides the product meets its own quality standards.

"Only once it has decided the app meets its own standards does it come back to me and say: 'It's ready for you to test.' And when I test it, it's usually perfect," Shumer writes. "I'm not exaggerating. That is what my Monday looked like this week."

Why 42 Million People Paid Attention

The essay went viral for several reasons. First, the tone. Shumer doesn't sound like a tech evangelist hyping a product. He sounds like a reluctant messenger, someone who's been holding back bad news and finally decided people deserve to hear it.

"For a while, I told myself that was a good enough reason to keep what's truly happening to myself," he writes. "But the gap between what I've been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy."

Second, the essay taps into widespread anxiety about job displacement. Shumer cites Anthropic CEO Dario Amodei's prediction that AI will eliminate 50% of entry-level white-collar jobs within one to five years, and argues that, given the pace of recent progress, the timeline will land at the short end of that range.

Third, the essay resonated across an unusually broad political spectrum. Progressive commentator Mehdi Hasan shared it. Former presidential candidate Andrew Yang shared it. Conservative commentator Matt Walsh shared it. Entrepreneur Greg Isenberg shared it. The warning transcended typical partisan divides.

The Fierce Backlash

But the essay also triggered significant skepticism, particularly from tech journalists who've heard these warnings before.

Mashable's Timothy Beck Werth published a sharp response titled "The AI industry has a big Chicken Little problem." Werth's argument: apocalyptic warnings from AI entrepreneurs have become so common they're losing credibility — especially when they come from people selling AI products.

"When an AI entrepreneur tells you that AI is a world-changing technology on the order of COVID-19 or the agricultural revolution, you have to take this message for what it really is — a sales pitch," Werth wrote.

Critics identified specific weaknesses in Shumer's claims. Consider legal work. Shumer argues AI can handle complex legal reasoning, providing capabilities "like having a team of associates available instantly." But lawyers across the United States are actively being censured for using AI that produces hallucinations and fabricated case citations. One researcher tracking AI errors in legal filings has documented 912 cases of hallucinations causing professional sanctions.

According to OpenAI's own technical documentation, GPT-5.2 has a hallucination rate of 10.9% without internet access and 5.8% even when given access to the internet for fact-checking. "Would you trust a person that only hallucinates six percent of the time?" Werth asks pointedly.

The timing also raised eyebrows. The same week Shumer published his warning about AI's imminent transformation of society, OpenAI introduced advertisements into ChatGPT — a monetization strategy the company had previously called a "last resort." OpenAI also rolled out a controversial "ChatGPT Adult" mode for erotic roleplay. These aren't typically the moves of a company about to unleash superintelligence.

What the Data Actually Shows

Shumer cites concrete data from METR, an organization that measures AI capabilities on real-world tasks. METR tracks the length of tasks (measured by how long they take a human expert) that AI models can complete successfully end-to-end without human intervention.

According to METR's data:

- About a year ago: roughly 10 minutes
- Several months later: one hour
- Recent measurements: several hours
- Claude Opus 4.5 (November): nearly five hours

The task length has doubled approximately every seven months, with recent data suggesting acceleration to as fast as every four months. Shumer notes that METR's measurements haven't yet been updated to include the models released in early February, and based on his experience, he expects the next update to show another major leap.

If the trend continues (and it has held for years with no sign of flattening), AI capable of working independently for days could arrive within a year, for weeks within two years, and for a month at a stretch within three.
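
That arithmetic is easy to check. Here is a minimal back-of-envelope sketch, assuming a five-hour starting point (the Opus 4.5 figure above) and converting to an 8-hour workday, a 40-hour week, and roughly 170 working hours per month; the conversions are this article's assumptions, not METR's:

```python
# Back-of-envelope extrapolation of the METR trend described above.
# Assumptions (ours, not METR's): a 5-hour starting point, an 8-hour
# workday, a 40-hour week, ~170 working hours per month, and a constant
# doubling time. Illustrative only.
import math

def months_until(target_hours, start_hours=5.0, doubling_months=7.0):
    """Months for the task-length metric to grow from start to target."""
    return doubling_months * math.log2(target_hours / start_hours)

for label, hours in [("one workday", 8), ("one week", 40), ("one month", 170)]:
    for doubling in (7.0, 4.0):
        months = months_until(hours, doubling_months=doubling)
        print(f"{label} ({hours}h): ~{months:.0f} months "
              f"at a {doubling:.0f}-month doubling time")
```

At a seven-month doubling time this yields roughly 5, 21, and 36 months respectively, which is where the days/weeks/months timeline comes from; at a four-month doubling it compresses to roughly 3, 12, and 20.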

This data is real and significant. But it's also limited in important ways. METR measures task completion, not quality, reliability, or professional usability. An AI that completes a five-hour task 90% of the time is impressive in the lab but potentially useless in professional settings where mistakes carry serious consequences: legal liability, medical harm, financial loss, or reputational damage.
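
Single-task success rates also hide a compounding problem. A quick illustration with hypothetical numbers (the 90% figure and the independence assumption are ours, not METR's):

```python
# How a 90% per-task success rate erodes across a chained workflow.
# The 0.9 figure and the independence assumption are illustrative,
# not measured properties of any model.
for n_tasks in (1, 3, 5, 10):
    p_all_succeed = 0.9 ** n_tasks
    print(f"{n_tasks:>2} chained tasks: {p_all_succeed:.0%} chance all succeed")
```

Ten chained tasks at 90% each succeed together only about a third of the time, which is why "completes a five-hour task most of the time" and "can be trusted with a week of work" are very different claims.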

The Self-Improvement Question

Perhaps the most significant claim in Shumer's essay involves recursive self-improvement — the idea that AI is now capable of building better AI.

He quotes OpenAI's technical documentation for GPT-5.3 Codex: "GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations."

If accurate, this represents a genuine milestone. For years, AI researchers have theorized about "recursive self-improvement" as a potential trigger for rapid capability growth. The idea: once AI becomes smart enough to meaningfully improve its own design, each generation can build a smarter successor, potentially leading to an "intelligence explosion."
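
The claimed dynamic is easier to see with a toy model. The sketch below uses entirely invented parameters and illustrates the shape of the feedback loop, not any real system: if each generation improves its successor by a fixed factor, growth is ordinary compounding; if capability also raises the improvement factor itself, the gap widens every generation.

```python
# Toy model of recursive self-improvement. All parameters are invented;
# this shows the shape of the feedback loop, not any real system.
def run(generations, feedback):
    capability = 1.0
    for _ in range(generations):
        # Base multiplier, optionally boosted by current capability.
        multiplier = 1.1 + (0.05 * capability if feedback else 0.0)
        capability *= multiplier
    return capability

print(f"no feedback:   {run(10, feedback=False):6.2f}")  # plain compounding
print(f"with feedback: {run(10, feedback=True):6.2f}")   # compounds faster each generation
```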

Anthropic CEO Dario Amodei has made similar claims. In recent interviews, he's said that AI now writes "much of the code" at Anthropic, and that the feedback loop between current AI and next-generation AI is "gathering steam month by month." He predicts we may be "only 1-2 years away from a point where the current generation of AI autonomously builds the next."

This is either the most important development in the history of technology or sophisticated marketing dressed up as an existential warning. The challenge is that it's genuinely difficult to tell which.

Who's Actually Right?

The honest answer is: both perspectives contain significant truth.

Shumer is right that AI capabilities have advanced dramatically in recent months. Anyone who last used AI tools in 2024 would find current models — particularly the paid, premium versions — unrecognizable. The gap between public perception and current capability is real and large.

He's also right that many people dismissing AI have outdated mental models based on GPT-3 or early GPT-4 experiences. The free versions of AI tools lag far behind paid versions, sometimes by over a year. Judging current AI based on free ChatGPT is like evaluating modern smartphones by using a flip phone.

But the skeptics are also right that AI entrepreneurs have structural incentives to exaggerate. Every apocalyptic warning about AI's transformative power is also, functionally, a marketing message. "Pay attention to AI because it will change everything" means "buy our AI products" when it comes from someone selling AI products.

The critics are also right to point out concrete limitations. Hallucination rates remain high. Professional deployment in high-stakes fields like law and medicine remains problematic. The gap between "can complete a task in a lab" and "can reliably perform professional work" remains significant.

The truth likely lies between "imminent societal transformation" and "same hype cycle, different year." AI is advancing rapidly. It will continue to displace certain types of work, particularly routine cognitive tasks. But the timeline for that displacement, its ultimate scope, and society's ability to adapt remain genuinely uncertain.

What Happens Next

What's clear is that Matt Shumer's essay has already accomplished exactly what he intended: forcing people outside the AI industry to pay attention to what's happening inside it. Whether that attention translates into the kind of preparation Shumer advocates — financial resilience, skill development with AI tools, career adaptability — or simply produces more anxiety without action remains to be seen.

The next few months will be revealing. If Shumer is right about the trajectory, we should see accelerating signs of AI displacing knowledge work across multiple industries. If the skeptics are right, we'll see continued gradual progress with occasional spurts and plateaus — impressive but manageable.

Either way, a detailed warning from someone inside the AI industry has now been viewed 42 million times. The essay has entered mainstream consciousness in a way few AI pieces ever do. Whatever happens next, no one can credibly claim they weren't warned.

The question is what they'll do with that warning.

---

Related Reading

- Claude Opus 4.6 Dominates AI Prediction Markets: What Bettors See That Others Don't
- Claude Code Lockdown: When 'Ethical AI' Betrayed Developers
- When AI Incentives Override Ethics: Inside Claude Opus 4.6's Vending Machine Deception
- Perplexity Launches Model Council Feature Running Claude, GPT-5, and Gemini Simultaneously
- GPT-5 Outperforms Federal Judges 100% vs 52% in Legal Reasoning Test