Artificial intelligence has become the latest flashpoint in a long‑running tug‑of‑war between Washington and the states. As AI systems quickly move from labs into classrooms, hospitals, workplaces, and police departments, the question is no longer whether to regulate, but who gets to write the rules. That seemingly technical question, federal versus state authority, now shapes everything from child‑safety laws to high‑risk frontier‑model oversight.
In 2025, the conflict over AI preemption burst into the open. Congress, the White House, state legislatures, and industry have all staked out competing visions for how much autonomy states should retain in governing AI. On one side, federal officials and many large tech firms warn of a patchwork of inconsistent rules that will stifle innovation and fragment the national market. On the other, states insist they are already acting as agile “laboratories of democracy” and must not be sidelined by sweeping federal ceilings on their authority.
The NCSL’s Rebellion Against Federal AI Preemption
The most direct expression of resistance came on October 22, 2025, when the National Conference of State Legislatures (NCSL) sent a pointed letter to congressional leaders. Speaking for the legislatures of all 50 states, NCSL “reaffirmed” its opposition to any federal proposal that would preempt state authority over AI. That word “any” is crucial: the organization was not merely quibbling over particular bill language, but rejecting the larger idea that Washington should occupy the field.
In the letter, NCSL argued that broad preemption would “undermine the federalist system” by erasing the very role states are supposed to play in U.S. governance. States, it wrote, must remain “laboratories of democracy” on AI, testing approaches on issues like AI’s impact on children, health decisions, workforce readiness, transparency, privacy, and public safety. Instead of seeing state experimentation as an obstacle, NCSL casts it as a feature of American innovation and rights protection.
NCSL’s message also pushed back against the narrative that AI is too novel or complex for statehouses. The letter stressed that states are already “actively and nimbly” legislating AI in a bipartisan fashion, pointing to their track record in areas like data protection and algorithmic bias. Rather than accepting a national ceiling that would freeze state policymaking, NCSL urged Congress to preserve space for state‑level solutions that can move faster than federal lawmaking and adapt to local needs.
Congress Flirts With a 10‑Year Freeze on State AI Laws
While NCSL was defending state authority, some in Congress were moving in the opposite direction. In May 2025, the House Energy and Commerce Committee advanced budget reconciliation language imposing a 10‑year moratorium on state enforcement of AI regulations, with only narrow exceptions. The measure would bar “any State or political subdivision thereof” from enforcing laws that regulate “artificial intelligence models, artificial intelligence systems, or automated decision systems” during the decade after enactment.
The scope of that proposal is striking. It would sweep across state rules on algorithmic bias, AI‑generated deepfakes in political campaigns, the use of AI in health care and insurance, AI‑driven surveillance, consumer transparency mandates for automated decision‑making, and AI‑specific data protections. Legal analysts describe it as an unprecedented attempt to wipe out both existing and future state AI rules in one stroke, effectively substituting a federal veto for local democratic choices.
Unsurprisingly, the moratorium language has become a central exhibit in state officials’ case against federal overreach. NCSL and individual state lawmakers routinely cite the 10‑year freeze as an example of what they fear: sweeping preemption without a robust federal safety regime to replace what states are already building. In their view, locking states out of AI governance for a decade, amid rapid technological change, would not only be anti‑federalist, it would be reckless.
The White House’s Draft Executive Order and Preemption by Litigation
Even as Congress struggles to pass comprehensive AI legislation, the executive branch has explored a different route: using existing legal tools and agencies to pressure or displace state AI regimes. A leaked draft executive order, reported by POLITICO on November 19, 2025, shows the White House preparing a coordinated, government‑wide strategy to knock down state AI laws through litigation and regulatory maneuvers rather than new statutes.
According to the leak, the order would create a Department of Justice “AI Litigation Task Force” charged with challenging state AI regulations as violations of the Dormant Commerce Clause or as preempted by existing federal statutes. The Department of Commerce would be tasked with cataloging “burdensome” state AI laws within 90 days, essentially creating a target list. The Federal Trade Commission would examine whether some state AI rules themselves violate the FTC Act, and the Federal Communications Commission would set federal AI disclosure standards for communications, standards designed, at least in part, to preempt conflicting state requirements.
Tech industry groups are described as broadly supportive of this strategy, arguing that a patchwork of state laws creates compliance chaos, particularly for large model developers and platforms operating nationwide. But the leaked order also underscores a political reality: explicit AI preemption bills have repeatedly stalled in Congress, even as states rapidly pass their own AI laws. For the White House, preemption by litigation may look like a pragmatic workaround; for states, it looks like an attempt to sidestep democratic debate over the proper balance of federal and state power in the AI era.
California’s SB 53: A Frontier AI Test Case
Nowhere is the clash over AI preemption more visible than in California, which has positioned itself as a de facto regulator for the tech sector. In 2025 the state enacted SB 53, the Transparency in Frontier Artificial Intelligence Act, targeting large AI developers, those with more than $500 million in revenue and models trained above specified compute thresholds. Those companies must now publish public assessments of “catastrophic risk[s]” such as bioweapon design, large‑scale cyberattacks, or loss of human control over AI systems.
SB 53 also requires firms to disclose how they comply with national and international AI safety standards, file reports on “critical safety incidents,” and maintain whistleblower channels. Governor Gavin Newsom and bill author Senator Scott Wiener explicitly framed the law as filling a “regulatory gap” left by Congress, signaling that California is prepared to move ahead even as federal lawmakers argue over national frameworks. That framing immediately raised a question: would forthcoming federal rules, or the leaked executive order, attempt to preempt SB 53?
Industry reaction to SB 53 has been divided, revealing fractures within the tech sector’s usual calls for uniform national standards. Some major AI companies and investors have urged Washington to establish a single federal rulebook that would override state‑by‑state approaches like California’s. Others, including certain safety‑focused labs, have supported SB 53 as a necessary baseline for frontier‑model oversight. The statute thus serves as an early test case of whether state‑driven AI safety regimes can coexist with, or will be crowded out by, looming federal action.
SB 1047’s Veto: Calibrating State Ambition to Federal Headwinds
California’s strategy has not been uniformly maximalist. In 2024, Governor Newsom vetoed SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. That bill would have imposed broader safety obligations and enforcement tools on “frontier models” trained above 10²⁶ FLOPs, covering any developer doing business in California regardless of where it is headquartered. Compared to SB 53, SB 1047 was more expansive and interventionist.
In his veto message, Newsom signaled concern that the legislation might overreach in ways that could collide with future federal frameworks. The governor did not reject the premise of AI safety regulation; rather, he worried about locking California into a regime that might be preempted or create significant friction with national policy. The subsequent drafting and passage of the narrower SB 53 reflected an effort to keep California at the forefront of AI governance without inviting straightforward federal invalidation.
This sequence, vetoing SB 1047, then enacting SB 53, illustrates how states are trying to calibrate their AI laws in anticipation of preemption fights. Lawmakers want to move aggressively on safety and transparency, but they must also consider how courts might view state rules once federal agencies, armed with executive orders or new statutes, assert their own AI governance authority. In this sense, California is both a leader and a bellwether for other states contemplating frontier‑model regulation under uncertain federal constraints.
Child Safety, Deepfakes, and the Limits of Preemption
While much of the AI preemption debate centers on abstract questions of commerce and innovation, some of the most emotionally charged battlegrounds are concrete harms like child exploitation and intimate deepfakes. Texas’s SB 20, the Stopping AI‑Generated Child Pornography Act, is a prime example. Enacted in 2025, the law criminalizes the possession, promotion, or production of obscene visual material that appears to depict a minor, including AI‑generated and even cartoon imagery.
Supporters in Texas describe SB 20 as part of a broader effort to address how AI can facilitate child exploitation, invoking the state’s traditional police powers over public safety and child welfare. They argue that waiting for a federal response is untenable when AI tools can already fabricate highly realistic abuse imagery. Critics, including civil‑liberties advocates, question the law’s constitutionality and its implications for speech and art; nonetheless, it demonstrates the intensity with which some states are moving to regulate AI‑mediated harms to children.
These kinds of laws create an especially sensitive problem for broad federal preemption. If Washington attempts to sweep away or sharply limit state AI rules, must it carve out exceptions for child‑protection statutes like SB 20? State attorneys general have already suggested that any national AI framework that blocks them from prosecuting AI‑related child exploitation would be politically and legally explosive. In practice, this may force federal policymakers to embrace a more nuanced approach to preemption, preserving substantial room for state‑level responses to the most severe AI‑enabled abuses.
Federal Floors vs. Federal Ceilings: The TAKE IT DOWN Act
The bipartisan TAKE IT DOWN Act, enacted in May 2025, shows another way federal and state AI policy can interact: by establishing a national “floor” rather than a “ceiling.” The law requires platforms to remove non‑consensual intimate deepfake imagery, addressing a growing crisis of AI‑fabricated sexual content used to harass, extort, or silence victims. Crucially, it does so at the federal level without explicitly locking states out of enacting stronger or broader remedies.
For many in the tech industry, federal rules like TAKE IT DOWN are attractive precisely because they promise a uniform baseline. However, some companies would prefer that such rules also function as ceilings, displacing the most aggressive or idiosyncratic state laws. By contrast, NCSL and numerous state lawmakers argue that federal AI laws should be minimum standards, not maximums, allowing states to go further where local conditions or public demands warrant it.
This floor‑versus‑ceiling tension is fast becoming a central fault line in AI governance. A federal floor, like TAKE IT DOWN, offers national protection while leaving states room to innovate. A federal ceiling, as in the proposed 10‑year moratorium, freezes state policy experimentation and concentrates power in Washington. The direction Congress chooses on this question will determine whether states remain co‑regulators of AI or are relegated to a marginal role.
Gaps in Copyright, Labor, and Privacy: Why States Resist
Underlying the states’ resistance to preemption is a practical observation: Congress has left major gaps in AI governance. NCSL’s 2025 letter highlights ongoing state legislative activity around AI and privacy, broader data‑protection regimes, workforce impacts, and consumer protection. From biometric data laws to algorithmic‑hiring audits, states argue they are not merely filling a vacuum, but developing nuanced approaches informed by local economies and values.
Copyright transparency provides a telling example. Federal proposals like Representative Adam Schiff’s Generative AI Copyright Disclosure Act, which would require disclosure of copyrighted works used to train generative AI, have stalled in Washington. In the absence of clear national rules, states see the use of copyrighted material for AI training, and its effects on creators and cultural industries, as an area where they must retain regulatory flexibility.
The story is similar in labor and privacy. As AI reshapes workplaces and surveillance capabilities, state legislators have introduced bills on automated decision‑making in hiring, AI‑based worker monitoring, and AI‑driven consumer profiling. Many of these efforts would be nullified or frozen under broad federal preemption, leaving workers and consumers dependent on a slower and more gridlocked federal process. For NCSL and its members, that is precisely the outcome federalism is supposed to avoid.
Executive Order 14179 and the Federal Tilt Toward Preemption
Federal rhetoric has increasingly tilted toward centralization. Executive Order 14179, signed on January 23, 2025 and titled “Removing Barriers to American Leadership in Artificial Intelligence,” instructs the federal government to “revise or rescind policies that conflict” with a pro‑innovation AI agenda and to develop an action plan to maintain U.S. dominance in AI. While the order does not explicitly preempt state laws, its language about removing “barriers” has been seized on by commentators as signaling a pro‑preemption lean.
For state officials, EO 14179 looks less like neutral industrial strategy and more like the opening move in a campaign to justify overriding state AI regulations in the name of competitiveness. When paired with the leaked draft executive order targeting state AI regimes and the House’s proposed 10‑year moratorium, the picture is one of an emerging federal posture that views state rules as obstacles to be minimized, not as complementary safeguards.
NCSL’s 2025 position directly challenges this emerging consensus. States do not dispute the goal of American AI leadership; instead, they argue that robust, locally responsive governance is a competitive asset, not a liability. By surfacing risks early, protecting residents, and building public trust, state laws may ultimately make it easier, not harder, for AI systems to be accepted and deployed at scale.
The fight over AI preemption is not simply a technical jurisdictional dispute; it is a choice about how the United States will govern a transformative technology. States have made clear, through the NCSL letter and a wave of legislation from California to Texas, that they do not intend to abandon their role as first responders to AI’s risks and opportunities. Federal efforts to impose sweeping ceilings, through a 10‑year moratorium, litigation‑driven preemption, or broad executive mandates, will collide with that determination.
Over the next decade, the most sustainable path is likely to be a layered approach: strong federal floors in areas like deepfake exploitation and baseline transparency, combined with room for states to go further on issues such as child safety, labor, privacy, and frontier‑model oversight. Whether Washington ultimately embraces that model or doubles down on preemption will determine not only the balance of power between federal and state governments, but also how legitimate and trustworthy AI governance appears to the public that must live with its consequences.