Sam Altman: AGI With Automated AI Researcher by 2028
OpenAI's CEO says the company will have an 'AI research intern' by September 2026 and a 'true automated AI researcher' by 2028.
Category: news Tags: OpenAI, Sam Altman, AGI, AI Research, Timeline, Automation
The 2028 timeline represents a dramatic acceleration from OpenAI's previously cautious public stance. For years, the company deliberately avoided specific AGI predictions, with Altman himself stating in 2023 that such forecasts were "impossible to make with any precision." This shift suggests either genuine technical confidence or strategic positioning amid intensifying competition from Anthropic, DeepMind, and Chinese labs like DeepSeek. The definition Altman appears to be using—an AI capable of autonomous research—also differs from the more expansive "economic value exceeding all human labor" benchmark he has cited in the past, raising questions about whether this represents a narrowing of ambition or a more pragmatic intermediate milestone.
Industry reaction has been predictably polarized. Yann LeCun, Meta's chief AI scientist, has publicly dismissed the claim as "corporate hype," arguing that current systems lack the persistent memory, causal reasoning, and world modeling necessary for genuine scientific discovery. Conversely, researchers at the Machine Intelligence Research Institute warn that even a constrained definition of automated research could trigger recursive self-improvement loops with unpredictable consequences. The discrepancy between these assessments highlights a fundamental unresolved tension: there remains no consensus metric for measuring progress toward AGI, leaving timelines vulnerable to interpretation and marketing spin.
The economic implications of Altman's forecast extend far beyond Silicon Valley. If realized, automated AI research would collapse the cost of scientific discovery across pharmaceuticals, materials science, and energy, potentially compressing decades of innovation into months. Yet this same capability would render obsolete the career pipelines that currently feed the $200 billion global R&D workforce. Governments have shown little capacity to prepare for such disruptions; the EU AI Act and U.S. executive orders on AI safety contain no provisions for mass technological unemployment in knowledge sectors. Altman's timeline, whether accurate or not, forces these conversations into the immediate policy window rather than the distant future.