Trump Signs Executive Order to Preempt State AI Laws
Summary
On December 11, 2025, President Trump signed an Executive Order directing federal agencies to use available legal authorities to preempt state AI laws that conflict with national AI policy objectives. The EO directed the Department of Justice to establish an AI Litigation Task Force to challenge conflicting state laws, the FCC to adopt federal AI reporting standards that preempt inconsistent state requirements, and the FTC to issue an AI policy statement. Agencies were directed to consider state AI law postures when making federal grant decisions. Narrow carve-outs preserved state authority over child safety, data center siting, and state governments' own AI procurement.
What Happened
By December 2025, more than twenty states had enacted or were advancing AI-related legislation. California's SB 53 — signed in September — was the most prominent frontier-AI-specific law, but other states had enacted sector-specific AI rules in employment, housing, healthcare, and consumer protection. The patchwork was creating compliance burdens for companies operating nationally and, from the administration's perspective, regulatory fragmentation that impeded US AI competitiveness.
The December 11 EO took a multi-channel approach to preemption that stopped short of formal field preemption — which would have required congressional action — while using available executive authorities to maximum effect.
The DOJ AI Litigation Task Force was directed to identify state laws that conflict with federal AI policy and to challenge them in federal court under the Supremacy Clause, Commerce Clause, and field preemption doctrines where applicable. The Task Force was also directed to provide legal guidance to state attorneys general on the federal preemption landscape.
The FCC was directed to adopt federal AI reporting and labeling standards under its existing telecommunications authority, with the intent that federal standards would preempt inconsistent state requirements in communications-adjacent AI applications.
The FTC was directed to issue a policy statement clarifying that the FTC Act's unfair or deceptive practices framework applied to AI systems — framing federal consumer protection law as providing a complete national framework that left no room for supplementary state rules in its core domain.
The grant conditionality provision — directing agencies to consider state AI law postures in federal grant decisions — was the most legally contested element. Critics argued it constituted unconstitutional coercion; the administration argued it was standard exercise of spending power.
Carve-outs preserved state authority to regulate child safety applications of AI, data center siting and construction, and state governments' own internal AI procurement policies — concessions designed to deflect the most politically resonant objections.
Why It Matters
The preemption EO represented the Trump administration's most aggressive direct intervention in the federal-state AI governance balance. While the administration had previously removed federal safety requirements and accelerated federal procurement, it had not previously attempted to constrain state-level regulation. The December EO changed that.
The constitutional questions were significant. States have broad police powers to regulate commerce within their borders; federal preemption of state law requires either a clear congressional statement or a direct conflict with federal law or regulation. Executive action directing agencies to bring preemption litigation was an aggressive use of the executive's power to shape the litigation environment — but courts would ultimately decide whether specific state laws were preempted.
For California specifically, the EO set up a direct confrontation. California's SB 53 applied to companies regardless of federal policy, and the state had a well-developed enforcement infrastructure and a record of defending its regulations against federal preemption challenges. The DOJ Task Force's preemption strategy was likely to be judged by how its first challenges fared against California.
The broader implications for democratic accountability were stark: if the federal executive could effectively block state AI legislation through litigation threats and grant conditionality, the most feasible pathway for binding AI regulation in the US — state-level action — would be closed. This would leave frontier AI development effectively unregulated at the domestic level pending the uncertain prospect of federal legislation.