Washington’s latest move on artificial intelligence marks a decisive turn in the battle over who gets to write the rules for one of the most consequential technologies of the century. With a new executive order and a pending federal bill that explicitly target state-level AI statutes, the White House is signaling that it intends to set a single, national framework and to curb what it calls a "patchwork" of burdensome state rules. This shift comes after two years in which state capitals, not Congress, have driven most concrete AI regulation.
For businesses, developers, and civil society, the stakes are high. States such as California and New York have already enacted far‑reaching safety and transparency laws for advanced AI systems, while dozens of other states are experimenting with rules touching everything from deepfakes to hiring algorithms. Washington’s emerging preemption strategy could freeze or unwind many of those efforts, reshaping incentives for innovation, compliance, and consumer protection across the United States.
The new executive order: Asserting a national AI policy
In December 2025, the White House issued an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence.” The order declares that it is federal policy to preserve U.S. “national and economic security and dominance” in AI through a minimally burdensome national framework. It frames some state AI laws as threats to innovation and directs the federal government to check what it calls “onerous and excessive” state rules until Congress enacts comprehensive legislation.
The order does not itself create a full federal AI code, but it lays the procedural groundwork for federal preemption. It instructs agencies to align existing regulations with the national policy and to identify existing authorities that can be used to override conflicting state requirements. It also explicitly links AI policy to major federal funding streams for broadband and digital infrastructure, creating financial leverage over states that attempt to move ahead with their own strong regulations.
Perhaps most importantly, the order is framed as a bridge to future legislation: it instructs senior advisors to draft a bill that would codify a uniform federal AI framework and expressly preempt state laws that clash with its policy goals. In doing so, it sets the stage for a prolonged tug‑of‑war between federal and state power over AI governance.
AI Litigation Task Force and funding pressure on the states
A cornerstone of the executive order is the creation of an AI Litigation Task Force within the U.S. Department of Justice. Within 30 days of the order, the Attorney General must stand up this unit with a singular mission: challenge state AI laws that are inconsistent with the order’s policy. The task force is authorized to argue that certain state measures violate the Commerce Clause, are preempted by existing federal regulations, or are otherwise unlawful.
The order pairs this litigation strategy with a potent fiscal tool. It directs the Department of Commerce to issue a policy notice conditioning access to remaining funds under the Broadband Equity, Access, and Deployment (BEAD) program on states’ AI regulatory posture. States whose AI laws are deemed “onerous” or inconsistent with the federal approach can be declared ineligible for certain non‑deployment funds. In effect, Washington is using infrastructure money to discourage aggressive, independent state regulation of AI.
This combination of lawsuits and funding conditions is unprecedented in the AI space. It signals that the administration is prepared not only to argue for legal preemption in court, but also to penalize states financially if they press ahead with stringent AI rules. For governors and legislatures considering ambitious AI bills, that threat could significantly alter the political calculus.
Targeting state AI safety laws: California, New York, and beyond
Washington’s preemption push lands at a moment when several states are moving aggressively to regulate powerful AI systems. California’s Transparency in Frontier Artificial Intelligence Act (SB 53) requires companies building high‑risk frontier models to publish detailed documentation about catastrophic risks and safety measures, and establishes whistleblower protections tied to AI safety incidents. New York has followed with its own Responsible AI Safety and Education (RAISE) Act, imposing rigorous planning and incident‑reporting requirements on large AI developers and creating a dedicated state AI safety office.
These statutes were designed to fill what state lawmakers view as a dangerous gap at the federal level. They mandate publicly accessible risk assessments, accelerate disclosure timelines for serious AI‑related incidents, and require high‑revenue developers to shoulder specific safety obligations. Industry groups have criticized them as overly prescriptive and duplicative, but supporters argue they are necessary guardrails for systems that can affect financial markets, critical infrastructure, and public safety.
Under the new federal order, such laws are prime candidates for challenge. The Department of Commerce is tasked with cataloging existing state AI laws and labeling those that are “onerous” or inconsistent with the federal policy. The AI Litigation Task Force can then sue to invalidate or narrow them, arguing that they burden interstate commerce or conflict with a national framework that favors minimal, innovation‑oriented regulation. In practice, California, New York, and a handful of other early‑adopter states could become test cases for the scope of federal preemption in AI.
Congressional preemption: The American AI Leadership and Uniformity Act
While the executive branch is acting through orders and agency power, Congress is also weighing an explicit statutory route to preemption. The American Artificial Intelligence Leadership and Uniformity Act, introduced in the House in September 2025, would establish a national AI framework and impose a temporary moratorium on certain state laws that restrict AI models and systems engaged in interstate commerce. The bill frames this moratorium as a way to prevent a fragmented regulatory environment while the federal government finalizes its own standards.
The Uniformity Act is part policy blueprint, part political statement. It codifies the idea that AI leadership requires regulatory predictability for companies operating nationally or globally. At the same time, it would sharply constrain states’ ability to experiment with stricter rules on issues like model transparency, training data disclosures, or mandatory safety evaluations. Because so many modern AI services operate across state lines by default, the bill’s interstate‑commerce hook is capacious.
Whether the bill passes in its current form remains uncertain, but its very existence strengthens the administration’s argument that Congress is moving toward a uniform federal approach. It also gives states, industry, and civil society a legislative focal point: lobbyists and advocacy groups are already lining up to push amendments that either expand consumer protections or deepen preemption. The outcome will shape how durable the executive branch’s preemption strategy really is.
State pushback and the federalism debate over AI
State leaders are not accepting Washington’s preemption push quietly. The National Conference of State Legislatures (NCSL) has formally reaffirmed its opposition to broad AI preemption, warning congressional leaders that stripping states of authority would undermine the “laboratories of democracy” model that has historically driven innovative responses to new technologies. State lawmakers point to their work on AI’s impacts on children, health decisions, and workforce readiness as evidence that states are nimble and close to the ground.
From their perspective, Washington’s moves threaten to freeze experimentation just as AI’s real‑world harms are becoming visible. States have been among the first to confront issues such as non‑consensual deepfake imagery, algorithmic discrimination in housing and employment, and the mental‑health risks of AI companions used by teenagers. They argue that federal agencies, bound by slower rulemaking cycles and broader political constraints, are ill‑suited to respond quickly to emerging harms or to reflect diverse local values.
This conflict over AI governance is, at its core, a modern chapter in a long‑running federalism debate. The executive order and pending federal legislation treat AI as a domain where national economic competitiveness and security justify strong centralization. State officials counter that the same technology’s social and ethical impacts are highly contextual and demand local tailoring. The resolution of this tension will shape not only AI policy but also the broader balance between federal and state power in digital regulation.
Business implications: Compliance relief or new uncertainty?
For AI developers and deployers, Washington’s preemption strategy offers both potential relief and fresh uncertainty. Many large firms have long complained about the complexity of navigating dozens of evolving state laws covering privacy, AI transparency, automated decision‑making, and deepfakes. Industry groups have touted studies estimating that a patchwork of state AI rules could impose hundreds of billions of dollars in compliance costs over the next decade, especially for companies operating at national scale.
A single federal framework could simplify compliance by creating one set of baseline obligations and enforcement mechanisms. Companies may welcome litigation that strikes down the most onerous or idiosyncratic state requirements, particularly those that mandate extensive public disclosures about model internals or impose short deadlines for incident reporting. For smaller firms, reduced regulatory fragmentation could lower barriers to entry and make it more feasible to offer services across multiple states.
Yet preemption also introduces new risks. The contours of the federal framework are still unsettled, and the AI Litigation Task Force’s early lawsuits will likely face prolonged challenges in the courts. Until those cases are resolved and Congress finalizes legislation, companies will be operating in a gray zone where state laws exist, but their enforceability is contested. Firms must weigh how aggressively to adjust their compliance programs in anticipation of preemption, knowing that an adverse court ruling or legislative shift could swing the pendulum back toward a more fragmented landscape.
What this means for consumers, workers, and civil society
For consumers and workers, the impact of Washington’s preemption push will hinge on how robust the eventual federal standards are. If the national framework preserves or exceeds the strongest state protections on issues like safety testing, transparency, discrimination, and recourse, then preemption could bring the best of both worlds: strong guardrails and consistent rights across state lines. Advocates note that some sectors, such as financial services and healthcare, already rely on national standards to prevent regulatory arbitrage.
However, if federal rules are weaker than existing state statutes, preemption could roll back protections that residents of certain states currently enjoy. For example, California’s and New York’s AI safety laws were designed to compel early disclosure of catastrophic risks and serious incidents. If those requirements are neutralized without being replaced by equally strong federal mandates, the net effect could be less transparency and slower responses to dangerous failures in high‑impact systems.
Civil society groups are responding by shifting their advocacy to Washington. Organizations that previously focused on statehouses are now lobbying Congress and federal agencies to embed strong public‑interest standards into the national framework. They are calling for clear audit rights, explainability requirements for automated decisions that significantly affect people’s lives, and meaningful enforcement mechanisms. The outcome of these debates will determine whether “uniformity” in AI rules ultimately serves primarily corporate convenience, public protection, or some mix of both.
Washington’s bid to preempt state AI rules marks a pivotal moment in the governance of artificial intelligence in the United States. Through a new executive order, the creation of an AI Litigation Task Force, and support for legislation like the American Artificial Intelligence Leadership and Uniformity Act, the federal government is moving to assert primacy over a field that states have helped shape. This effort is driven by concerns about regulatory fragmentation and a desire to safeguard U.S. leadership in AI, but it directly collides with states’ ambitions to craft their own, sometimes stricter, guardrails.
Over the next several years, the balance that emerges between federal dominance and state experimentation will define the regulatory environment for AI developers, businesses, and ordinary citizens. Court battles over preemption, congressional negotiations over the scope of national standards, and continued state innovation at the edges will together determine whether the U.S. AI regime prioritizes speed and uniformity, or allows for a more pluralistic approach to managing risk. For now, one thing is clear: the center of gravity for AI regulation is shifting decisively toward Washington, even as the debate over who should be in charge is only beginning.