OpenAI Accused of Violating California AI Safety Law
The Legal Challenge Under California’s SB 53
California’s SB 53, enacted in September 2025 and effective January 2026, is the nation's most comprehensive regulatory regime for frontier AI systems. The law applies to developers of models trained with more than 10^26 floating-point operations (FLOP) — a threshold that currently captures state-of-the-art large language models from leading developers. The statute layers several governance mechanisms:

- mandatory publication of detailed safety frameworks describing risk-assessment methodologies and mitigation strategies;
- annual third-party compliance audits;
- incident reporting for safety-critical events;
- whistleblower protections for personnel raising safety concerns; and
- enhanced safeguards for models classified as high-risk across specified hazard dimensions, including cybersecurity, biological and chemical weapons, radiological and nuclear threats, and autonomous capabilities.
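To give a sense of scale, the compute threshold can be sketched with the widely used "6ND" rule of thumb (training compute ≈ 6 × parameter count × training tokens). This is a standard estimation heuristic, not the statute's own accounting method, and the model sizes below are hypothetical illustrations, not any developer's actual figures.

```python
# Rough sketch: does a training run cross SB 53's 10^26 FLOP threshold?
# Uses the common 6ND approximation: compute ≈ 6 * parameters * tokens.
# All run sizes here are illustrative assumptions.

SB53_THRESHOLD_FLOP = 1e26

def training_flop(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6ND rule of thumb."""
    return 6 * params * tokens

# Hypothetical frontier-scale run: 1 trillion parameters, 20 trillion tokens.
frontier = training_flop(params=1e12, tokens=20e12)   # ~1.2e26 FLOP
print(f"frontier run: {frontier:.2e} FLOP, covered: {frontier > SB53_THRESHOLD_FLOP}")

# Hypothetical small model: 1 billion parameters, 1 trillion tokens.
small = training_flop(params=1e9, tokens=1e12)        # ~6e21 FLOP
print(f"small run:    {small:.2e} FLOP, covered: {small > SB53_THRESHOLD_FLOP}")
```

Under this approximation, only runs at the very largest scale clear the bar, which is why the threshold functions in practice as a frontier-lab filter.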
The Midas Project’s Allegations Against OpenAI
The Midas Project’s allegations focus specifically on GPT-5.3-Codex's cybersecurity risk classification and the compliance obligations it triggers. OpenAI CEO Sam Altman acknowledged that the model achieved the company's first 'high' classification under its internal Preparedness Framework, an evaluation system that assesses models across risk vectors and imposes escalating safeguard requirements. This 'high' designation indicates that OpenAI's own assessment judged the model capable of facilitating significant cyber harm if misused at scale or automated for malicious purposes. Despite this classification, The Midas Project contends, OpenAI released the model without implementing the protections its framework prescribes. Those required safeguards include evaluation protocols resistant to deceptive behavior directed at safety evaluators, mechanisms to preserve the integrity of safety research and prevent sabotage, and transparency requirements that prevent companies from concealing a system's true capabilities.
OpenAI’s Defense and Regulatory Interpretation
OpenAI has vigorously disputed these allegations. The company's defense hinges on conditional logic embedded in both its internal framework and SB 53's regulatory text. Enhanced safeguard requirements, OpenAI argues, activate only when high risk appears alongside 'long-range autonomy' — defined as sustained independent operation over extended periods without continuous human oversight. Since GPT-5.3-Codex operates through turn-based interaction that requires active human prompting, the company maintains it lacks the autonomous capabilities that would trigger the strictest requirements. This interpretive position, if accepted by regulators, would substantially narrow SB 53's practical reach. Most current large language models operate through similar turn-based interaction under human direction; characterizing such systems as lacking long-range autonomy would exclude them from enhanced safeguards despite potentially dangerous capabilities, gutting much of the law's protective intent.
Implications for AI Risk Governance and Regulatory Enforcement
The case exposes deeper vulnerabilities in AI risk governance. Classification frameworks that rely on developer self-assessment create structural incentives to understate risk: companies can minimize compliance costs and accelerate deployment while maintaining plausible deniability against regulatory challenges. The 'long-range autonomy' trigger exemplifies this concern, as reasonable analysts might disagree about whether specific capabilities constitute autonomous operation. California's Attorney General now faces complex enforcement decisions. Formal action would require substantial technical investigation, potentially including independent expert evaluation of GPT-5.3-Codex's actual operational characteristics against OpenAI's documented classifications. Settlement negotiations offer an alternative to adversarial litigation but might sacrifice the precedent value that formal adjudication would provide. The outcome will set critical precedents for the entire frontier AI ecosystem.
The Broader Impact on Global AI Regulation
Other developers — Anthropic, Google DeepMind, Meta AI, and emerging competitors — are monitoring closely, as California's approach will shape compliance strategies industry-wide. A robust enforcement posture would strengthen compliance incentives; weak enforcement could signal regulatory impotence. International regulatory dynamics compound these effects: the European Union's AI Act with its risk-based classification, China's algorithmic governance emphasizing social stability, and emerging standards elsewhere create a complex multinational compliance environment. California's enforcement credibility will influence whether global standards converge or fragment. For OpenAI specifically, adverse outcomes could include mandatory safeguard implementation, restrictions on model availability, financial penalties, or enhanced ongoing oversight. The company reportedly maintains a $5 billion annual burn rate amid intense competition, making regulatory complications potentially consequential for its competitive position.
The Future of Democratic Governance Over AI Technologies
The case tests fundamental questions about democratic governance of transformative technologies. AI systems present novel challenges — opacity, rapid evolution, dual-use potential, systemic risk — that may exceed traditional regulatory capacities. The California experiment provides crucial evidence about governance feasibility as societies worldwide grapple with artificial intelligence's profound implications. The AI industry, regulatory community, and concerned public are watching closely as this landmark case develops. Its resolution will shape the governance landscape within which transformative capabilities continue to emerge, determining whether safety commitments become legally enforceable obligations or remain voluntary industry practices.
Related Reading
- California's SB-1047 Successor Is Even More Aggressive
- Deepfake Detection for the 2026 Election: Can Technology Save Democracy?
- The AI Model Users Refuse to Let Die: Inside the GPT-4o Retirement Crisis
- The White House AI Czar Has 449 AI Investments
- California Just Passed the Strictest AI Hiring Law in America