US Senate Passes AI Safety Act with Bipartisan Support. Labs Must Report Capabilities to Government.

The bill requires disclosure of dangerous capabilities and safety testing before deployment. Industry reaction is mixed.

What the Bill Requires

Mandatory Reporting

AI labs training models above 10^26 FLOPs must report:

| Requirement | Timeline |
|---|---|
| Training compute used | Within 30 days |
| Dangerous capability evaluations | Before deployment |
| Safety testing results | Before deployment |
| Red team findings | Within 90 days |
| Cybersecurity measures | Annually |

Safety Testing

| Capability | Required Test |
|---|---|
| Bioweapon assistance | Third-party red team |
| Cyberattack capability | CISA evaluation |
| Deception/manipulation | Independent audit |
| Autonomous replication | Contained testing |

---

Who's Covered

Threshold: 10^26 FLOPs

| Model | Training Compute | Covered? |
|---|---|---|
| GPT-5 | ~10^27 FLOPs | Yes |
| Claude Opus 4 | ~10^27 FLOPs | Yes |
| Gemini 2 Ultra | ~10^27 FLOPs | Yes |
| Llama 4 | ~10^26 FLOPs | Yes |
| Mistral Large 3 | ~10^25 FLOPs | No |
| Most startups | <10^24 FLOPs | No |

Only ~5-10 organizations worldwide meet the threshold.
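For a rough sense of the arithmetic behind the threshold, the sketch below estimates a training run's total compute using the common ~6 × parameters × tokens approximation for dense transformers and checks it against the 10^26 FLOP line. The figures are hypothetical illustrations, not reported numbers for any named model, and the bill's actual compute-accounting rules would be set by the implementing regulations.

```python
# Illustrative sketch only: estimate training compute and compare it to the
# bill's 10^26 FLOP reporting threshold. Uses the common ~6 * N * D rule of
# thumb for dense transformer training; all example numbers are hypothetical.

REPORTING_THRESHOLD_FLOPS = 1e26  # threshold set by the bill


def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer (~6 * N * D)."""
    return 6 * parameters * tokens


def must_report(parameters: float, tokens: float) -> bool:
    """True if the estimated training run crosses the reporting threshold."""
    return estimated_training_flops(parameters, tokens) >= REPORTING_THRESHOLD_FLOPS


# ~1 trillion parameters on ~20 trillion tokens -> ~1.2e26 FLOPs -> covered
print(must_report(parameters=1e12, tokens=20e12))  # True

# ~70 billion parameters on ~15 trillion tokens -> ~6.3e24 FLOPs -> not covered
print(must_report(parameters=70e9, tokens=15e12))  # False
```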

---

The Bipartisan Coalition

Supporters

| Party | Key Senators | Stated Reason |
|---|---|---|
| Democrat | Schumer, Klobuchar | Safety, accountability |
| Republican | Romney, Young | National security |

The Vote

- Passed: 78-19
- Democratic yes: 45
- Republican yes: 33
- Opposed: Mostly libertarian-leaning

---

Industry Reactions

OpenAI

'We support this legislation. Responsible AI development requires oversight. We've been doing voluntary safety testing; mandatory testing levels the playing field.'

Anthropic

'We welcome clear requirements. Ambiguity about expectations has been harder than having standards.'

Meta

'We're reviewing the bill carefully. Our open-source approach presents unique considerations for how testing requirements apply.'

Critics

'This creates regulatory capture for incumbents. Startups can't afford the compliance costs.' — Tech Policy Researcher

---

What It Doesn't Do

| Not Covered | Why It Matters |
|---|---|
| Smaller models | Most AI isn't frontier |
| Deployment restrictions | Only requires reporting |
| Liability rules | No legal consequences for harms |
| Open-source carveout | Unclear how it applies |
| Preemption | States can add more rules |

---

Enforcement

Who Enforces

| Agency | Role |
|---|---|
| Commerce Dept | Registration, reporting |
| CISA | Cybersecurity evaluation |
| OSTP | Technical standards |
| FBI | Criminal violations |

Penalties

| Violation | Penalty |
|---|---|
| Failure to report | Up to $10M |
| False reporting | Criminal charges |
| Repeated violations | Training prohibition |

---

The Path Forward

House Status

- Similar bill passed Energy & Commerce Committee
- Floor vote expected Q2 2026
- White House indicated support

Implementation Timeline

| Milestone | Target |
|---|---|
| Bill signed | March 2026 |
| Regulations issued | September 2026 |
| Reporting begins | January 2027 |
| Full enforcement | July 2027 |

---

Comparison to Other Regulations

| Region | Approach |
|---|---|
| US (this bill) | Reporting and testing for the largest models |
| EU AI Act | Risk-based, comprehensive |
| UK | Sector-specific, light-touch |
| China | State control, censorship focus |

---

The Debate

Pro-Regulation View

- Dangerous capabilities need oversight
- Voluntary commitments aren't enforceable
- Public has a right to know about risks
- Levels the playing field vs. irresponsible actors

Anti-Regulation View

- Slows innovation
- Favors incumbents over startups
- US companies face rules competitors don't
- Government lacks technical expertise

---

Bottom Line

The US is finally regulating AI—but narrowly. Only the largest models are covered. Only reporting is required. The question is whether this is:

- A good first step that will expand
- A minimal compromise that won't be effective
- The beginning of regulatory creep that stifles innovation

We'll find out over the next few years.

---

Related Reading

- Anthropic Quietly Updated Its AI Safety Policy. Here's What Changed.
- California's AI Safety Bill Passes: What It Actually Requires
- The EU AI Act Is Now Enforced: Here's What Actually Changed
- The 2026 AI Safety Report Is Out: 'Testing Can't Keep Up With Advancing AI'
- EU AI Act Enforcement Begins: What Companies Need to Know