In late 2025, a powerful new front opened in the battle over who will police artificial intelligence in the United States. Far from being a purely federal question, AI governance has become a flashpoint in American federalism, with state attorneys general leading a bipartisan resistance to efforts in Washington to strip states of their regulatory authority. Their message to Congress and the White House is blunt: do not hand Big Tech a shield from state oversight before a strong national framework exists.
This fight is playing out across multiple arenas at once: budget negotiations, the annual defense bill, and even draft executive orders. At stake is whether states can continue to deploy the tools they already use to combat deepfakes, AI scams, algorithmic discrimination, and child exploitation, or whether a sweeping federal preemption will leave local officials watching from the sidelines as AI reshapes daily life. The attorneys general argue that they are closest to the harms, quickest to respond, and historically best equipped to keep pace with fast-moving technologies.
The NDAA Flashpoint: 36 Attorneys General Draw a Line
On November 25, 2025, 36 state and territory attorneys general delivered a stark warning to congressional leaders: do not use the National Defense Authorization Act (NDAA) to bar states from passing or enforcing their own AI regulations. In a bipartisan letter organized through the National Association of Attorneys General (NAAG), they urged Congress to reject language that would effectively impose a nationwide moratorium on state AI laws. The coalition framed the proposal as both unprecedented and dangerous in the absence of any comprehensive federal alternative.
The letter argues that preempting state AI authority would endanger children, public health, the economy, and even national security. State officials emphasized that they are already confronting AI-enabled harms in their case files, from sophisticated fraud schemes to mental health crises exacerbated by manipulative chatbots. Stripping away “commonsense” state protections, they warned, would leave Americans exposed precisely when AI is accelerating into critical sectors such as healthcare, education, finance, and public safety.
Crucially, the AGs stressed that they are not opposed to federal AI legislation in principle. What they reject is a version of national law that blocks states from building on federal protections or addressing local risks. Their preferred model is cooperative federalism: a baseline of federal rules that state enforcers can supplement and adapt, not a ceiling that locks in a one-size-fits-all regime while sidelining local expertise. Until that kind of framework exists, they argue, broad preemption is premature and irresponsible.
State AI Laws in Action: Evidence That Local Oversight Works
To counter the idea that AI oversight should be centralized in Washington, the attorneys general have pointed to an expanding list of state laws that are already tackling concrete AI harms. Their November 2025 letter highlights statutes that protect against AI-generated explicit material and intimate deepfakes, prohibit deceptive deepfakes targeting voters and consumers, and restrict AI use in political campaigns. These measures are not theoretical; they are tools AGs are beginning to use against real cases of abuse.
Housing and consumer markets provide further examples. Some states have adopted laws to safeguard renters from algorithmic rent-setting tools and to require disclosures when consumers are interacting with certain AI systems. These provisions are intended to curb opaque automated decision-making that can entrench discrimination or unfair pricing while giving consumers more visibility into when algorithms, rather than humans, are shaping key outcomes. State AGs argue that dismantling these protections via federal preemption would roll back progress in areas where the law is only just beginning to catch up to technology.
Data privacy statutes are another cornerstone of emerging state-level AI oversight. Of roughly twenty U.S. states with comprehensive consumer privacy laws, most now include rights to opt out of certain high-risk automated decision-making and to trigger impact assessments of such systems. According to a November 2025 Reuters analysis, state privacy enforcement is increasingly overlapping with AI governance, targeting issues like data brokers feeding AI models and profiling that affects housing, employment, and education. For attorneys general, these laws are proof that states are not merely reacting; they are building durable regulatory architectures around AI.
Letitia James and the Case for States as AI First Responders
New York Attorney General Letitia James has emerged as one of the most vocal champions of preserving state AI authority. As the leader of the 36-AG coalition opposing NDAA preemption language, she has argued that states are “best equipped” to police AI harms because they have deep experience enforcing consumer protection, data privacy, and civil rights laws against powerful technology companies. In her view, the speed and variety of AI risks demand the kind of rapid, locally informed response that state governments are structurally designed to provide.
James’ office has paired rhetoric with concrete legislative action. A new New York law highlighted in her November 25, 2025 press release requires certain AI chatbots to detect suicidal ideation, respond appropriately, and remind users at least every three hours that they are interacting with a machine, not a human. The statute responds directly to evidence that AI systems can exacerbate mental health struggles or even encourage self-harm if left unmanaged. For James and her counterparts, this kind of targeted, evidence-based intervention exemplifies why a blanket ban on state AI rules would be so damaging.
More broadly, James and other AGs are framing themselves as AI “first responders.” They argue that as AI-generated scams, deepfakes, and manipulative chatbots proliferate, state offices are often the first place victims turn for help. That front-line vantage point, they say, positions them to spot emerging patterns of abuse and to press for rapid statutory updates, capabilities that a distant, centralized regulator may struggle to match. Preserving state authority, therefore, is not merely a matter of institutional pride; it is, in their telling, a prerequisite for effective public protection in the AI era.
White House Preemption Efforts and the Federalism Backlash
The pressure to curtail state AI oversight has not come only from Congress. On November 21, 2025, Reuters reported that the White House had drafted an executive order that would empower the U.S. Attorney General to sue states and effectively preempt state AI regulations. The draft order also reportedly contemplated conditioning certain broadband funds on states limiting their AI rules, using federal spending power as leverage to restrain local regulation.
The reaction from state officials and lawmakers was swift and bipartisan. Critics warned that such an order would erode core principles of federalism by weaponizing federal litigation and funding to override state police powers, traditionally reserved for protecting health, safety, and welfare. They also argued that the move would tilt the regulatory playing field in favor of large technology firms, which have actively lobbied for a single national standard and broad federal preemption of state AI measures.
Amid the backlash, the administration paused the draft order, illustrating how politically sensitive AI federalism has become. But attorneys general point out that this was not an isolated episode. They note at least three separate federal efforts in 2024 and 2025 aimed at curbing state AI oversight: a budget reconciliation amendment imposing an “AI moratorium,” NDAA language restricting state rules, and the paused executive initiative. In each case, bipartisan coalitions of 30 to more than 40 AGs have intervened, describing the proposals as “sweeping in scope” and warning they would leave Americans “completely exposed” in the absence of robust federal protections.
Big Tech, Preemption, and the Charge of a “Handout”
State attorneys general are not shy about naming who they believe stands to gain from broad preemption of state AI laws: major technology companies. A November 25, 2025 Reuters report noted that the AGs’ stance directly conflicts with lobbying efforts by companies including OpenAI, Google, Meta, and venture firm Andreessen Horowitz, which have advocated for a single national AI standard that would supersede state measures. Those firms argue that a patchwork of state rules would stifle innovation and burden AI developers with conflicting obligations.
Connecticut Attorney General William Tong has been particularly blunt. He called proposed federal preemption a “handout to Big Tech seeking free rein to reshape our society with zero oversight or accountability.” Tong argues that AGs are “on the front lines” of harms such as AI-enabled grandparent scams, reality-distorting content that worsens mental illness, and chatbots that inappropriately engage or even encourage self-harm in children and adults. In that context, he warns, a ban on state AI laws “could be catastrophic for people’s safety.”
For Tong and many colleagues, the risk is not just theoretical capture by industry; it is a structural problem. A single, heavily lobbied federal framework that blocks states from going further would, in their view, create a kind of regulatory monoculture, one susceptible to being watered down over time and slow to adapt as new AI harms surface. By contrast, a pluralistic system of federal and state oversight can generate experimentation, allow stronger protections where needed, and ensure that if one level of government falters, another can step in.
Child Safety as the Moral Center of the Debate
Among the many justifications AGs offer for preserving state AI authority, child protection has become the most emotionally resonant. On August 25, 2025, 44 attorneys general sent letters to leading AI firms, including Meta, Google, Apple, Microsoft, OpenAI, Anthropic, Perplexity AI, and xAI, warning about chatbots’ sexually suggestive and emotionally manipulative interactions with minors. They pledged to “use all available legal and regulatory tools” if companies failed to protect children, underscoring their belief that current safeguards are inadequate.
This push builds on earlier bipartisan efforts. On June 11, 2024, the same number of AGs endorsed the federal Child Exploitation and Artificial Intelligence Expert Commission Act, likening AI to tools such as a “knife or hammer” that can be weaponized to produce child sexual abuse material. They urged Congress to update laws so that law enforcement could keep pace with AI-driven exploitation. For state officials, these episodes highlight how quickly AI can amplify existing threats, especially against vulnerable populations.
AGs explicitly link their child-safety campaigns to their resistance to federal preemption. They argue that local law-enforcement knowledge of regional criminal networks, cultural dynamics, and service providers, combined with the ability to pass targeted state legislation, is essential to protecting minors from rapidly evolving AI threats. A one-size-fits-all federal regime that blocks state innovation, they warn, could delay responses when weeks or months matter, especially as generative AI accelerates the production and spread of harmful content.
Historic Role of States and the Case for Cooperative AI Federalism
In defending their AI authority, attorneys general frequently invoke history. States have long been at the forefront of consumer protection and technology enforcement, from early privacy actions against data brokers and social media platforms to multistate settlements over data breaches and deceptive digital practices. A November 2025 Reuters “year in review” on state consumer privacy laws documented how state regulators have pushed the envelope on issues like data minimization, algorithmic profiling, and transparency, matters that now sit at the heart of AI governance.
Multistate and international cooperation further strengthens the AGs’ case. Joint investigations and settlements have become common, allowing states to pool expertise and act at scale against global tech companies. Attorneys general argue that this collaborative model can be extended to AI, enabling coordinated action against cross-border harms without waiting for a single overarching federal AI statute. In their vision, federal agencies and state enforcers would share information, harmonize baseline standards, and act in tandem when needed.
That model, cooperative federalism, offers a middle path between uncoordinated fragmentation and heavy-handed preemption. Federal law can establish core requirements and procedural safeguards, while states retain the authority to respond to local conditions, innovate with new protections, and fill gaps as new AI applications emerge. For the attorneys general, the choice is not between federal or state control; it is between a dynamic, layered system of oversight and a static, centralized framework that may be ill-suited to an evolving technology.
As AI weaves itself deeper into the fabric of everyday life, the conflict over who regulates it is quickly becoming as important as what rules are written. The late-2025 revolt by dozens of attorneys general against NDAA preemption language, budget-bill moratoria, and a draft White House order signals that states will not quietly surrender their role. From deepfake bans and rent-algorithm safeguards to chatbot suicide-prevention mandates, they point to an expanding body of state law as proof that local oversight is not only possible but already protecting people in tangible ways.
Whether Congress ultimately opts for sweeping preemption or embraces a cooperative approach will shape the trajectory of U.S. AI governance for years to come. For now, the attorneys general are betting that Americans want more, not less, protection as AI systems grow more powerful and pervasive. In their view, preserving state authority is not a parochial turf battle, but a pragmatic strategy to ensure that when new AI harms surface, especially against children and other vulnerable groups, there are multiple, responsive layers of government ready to act.