White House Unveils Federal AI Framework, Sparks Preemption Fight
The White House released a sweeping AI policy framework that sets federal standards while allowing states to add stricter rules — sparking a heated preemption fight.
The Biden administration released its long-awaited federal AI governance framework on March 12, mandating baseline safety testing for all AI systems deployed by government agencies but explicitly declining to establish nationwide rules that would override state laws. The 94-page document, developed over 18 months by the Office of Science and Technology Policy, creates immediate compliance obligations for federal contractors while leaving a patchwork of state regulations—from California's SB 1047 to New York's algorithmic hiring laws—intact and enforceable.
That's the problem.
---
What the Framework Actually Requires
Federal agencies must now conduct third-party risk assessments before deploying any AI system affecting public benefits, law enforcement, or critical infrastructure, according to the framework text. Systems classified as "high-impact" face mandatory red-teaming, bias audits, and continuous monitoring requirements. The General Services Administration will maintain a public registry of approved vendors, with non-compliant companies barred from new contracts starting October 1.
But the framework stops deliberately short of creating a federal licensing regime or preempting state authority. Section 7.3 states explicitly that "this guidance does not displace or limit state or local laws governing AI deployment in the private sector."
For tech companies, this creates a compliance nightmare. A single AI product might need to satisfy California's disclosure requirements for training data, Illinois's biometric consent rules, New York City's audit mandates for hiring algorithms, and now federal procurement standards—none of which perfectly align.
"We're looking at a Balkanized regulatory environment where the cost of compliance scales with the number of states you operate in," said Suresh Venkatasubramanian, who co-authored the White House's AI Bill of Rights and now directs Brown University's Center for Tech Responsibility. "The administration punted on the hard question, and now companies are stuck navigating 50 different rulebooks."
---
The Preemption Battle Lines
The framework's silence on preemption wasn't accidental—it was political. Senior administration officials told reporters that explicit federal preemption language would have triggered a veto threat from progressive Democrats and civil rights groups who view state laws as essential safeguards against algorithmic discrimination.
But that calculation may not survive the transition. Republican lawmakers, led by House Energy and Commerce Chair Cathy McMorris Rodgers, immediately announced plans to introduce legislation establishing federal AI standards that would supersede state rules. The proposed "American AI Leadership Act," expected next month, would create a single national licensing framework administered by a new Federal AI Administration.
The stakes extend beyond compliance costs. California's SB 1047, which requires safety evaluations for large AI models and mandates "kill switch" capabilities, has already prompted Anthropic and OpenAI to modify their training practices nationwide rather than maintain separate systems. Federal preemption would eliminate that leverage.
---
What This Means for Federal Contractors
The immediate impact falls on companies selling to government buyers. The framework's procurement rules apply retroactively to existing contracts worth more than $10 million, forcing vendors to submit compliance documentation by September 30 or risk contract termination.
Palantir, which holds $2.3 billion in active federal AI contracts, disclosed in an SEC filing that it expects $47 million in additional compliance costs this year. Smaller vendors fare worse: the framework requires liability insurance policies covering AI-specific harms, a product that barely exists in the commercial market.
"The insurance requirement is going to kill off a generation of federal AI startups," predicted Melissa Flagg, former Deputy Assistant Secretary of Defense for Research and now a partner at the venture firm Lux Capital. "You can't get covered for algorithmic discrimination at any price right now. The big players will absorb this; the challengers won't."
The framework also creates a structural advantage for cloud providers. Amazon Web Services, Google Cloud, and Microsoft Azure can offer "compliance-as-a-service" packages that handle federal documentation requirements—effectively bundling infrastructure and regulatory clearance. Independent software vendors without cloud platforms must navigate the process alone.
---
The Courts Will Decide
Legal challenges are already forming. The Chamber of Commerce announced it will file suit arguing that the framework's procurement rules exceed executive authority under the Federal Acquisition Regulation. Separately, California Attorney General Rob Bonta prepared a defensive brief asserting states' rights to regulate AI safety beyond federal minimums—positioning for the preemption fight to come.
The Supreme Court's 2024 decision in Loper Bright, which overturned Chevron deference to agency interpretations of ambiguous statutes, complicates everything. Lower courts may be less inclined to uphold the framework's technical requirements if challenged, creating uncertainty that persists until Congress acts or the next administration rewrites the rules.
So where does this leave the average AI buyer? For now, federal agencies get marginally safer systems with documented testing. Everyone else gets the same fragmented landscape, plus new procurement requirements that ripple through vendor pricing. The framework moved the ball forward on government use cases while explicitly avoiding the harder question of whether AI safety should be a national standard or a state-by-state experiment.
That experiment continues. The only certainty is that someone—likely the Supreme Court—will eventually pick a winner.