
AI Governance Evolution

How are governments and international bodies racing to regulate frontier AI — and how are competitive pressures already bending those rules back toward permissiveness?

Canonical Synthesis

Author: terry-tang | Last updated: 2026-04-19

The AI governance story of 2025–2026 is the story of rules arriving just as the will to enforce them began to erode. The first binding AI prohibitions anywhere entered force in February 2025; by December 2025, the US federal government was directing its Justice Department to preempt the states trying to fill the gap. The EU activated obligations in the morning and proposed delays in the afternoon. The arc is not one of steady regulatory progress: it is oscillating, competitive, and deeply shaped by the economic stakes that have escalated alongside the technology itself.

Phase 1: Foundational Moves (2023–2024)

The governance foundations were laid in 2023 and 2024. The EU AI Act passed its final parliamentary vote in March 2024, after three years of negotiation, and entered into force in August 2024 as the most comprehensive AI regulation in the world. The Biden administration's October 2023 Executive Order established a US framework of safety testing and reporting obligations that was ambitious by American standards but rested on executive authority rather than statute, leaving it revocable by a successor. The Bletchley Park AI Safety Summit in November 2023 produced the Bletchley Declaration, a voluntary commitment to frontier safety evaluation, and seeded the national AI safety institutes that would later form an international network. These developments shared a common premise: safety and competitiveness were compatible, and governance could advance alongside capability development.

Phase 2: Regime-Setting (Early 2025)

February 2025 was the pivot month. On February 2, the EU AI Act's Article 5 prohibitions entered force, the first legally binding bans on specific AI applications in any major jurisdiction. Emotion recognition in workplaces, social scoring by public authorities, and real-time biometric surveillance faced concrete legal limits in 27 EU member states. This was the high-water mark of the precautionary regulatory model. Twelve days later, the UK rebranded its AI Safety Institute as the AI Security Institute, pivoting its mission from frontier existential risk toward security threats such as cybercrime. And on January 20, thirteen days before the EU prohibitions took effect, Trump had already revoked the Biden AI Executive Order, eliminating the US federal safety framework entirely.

By April 2025, the Trump OMB memos (M-25-21 and M-25-22) had replaced the Biden governance infrastructure with an acceleration mandate, rescinding M-24-10 and extending review timelines for high-impact AI uses. By June, the US AI Safety Institute had been renamed the Center for AI Standards and Innovation, dropping "safety" from its name and mission. Within six months of the EU prohibitions entering force, both national bodies created at Bletchley to evaluate frontier AI risk had been repositioned away from their original mandate.

Phase 3: Competing Visions (Mid-2025)

By summer 2025, three distinct governance visions had fully crystallized. The Trump administration's July 2025 AI Action Plan articulated an acceleration-first framework: remove regulatory barriers, subsidize infrastructure, and let American AI lead. China's July 2025 Global AI Governance Action Plan at the World AI Conference proposed a sovereignty-and-multilateralism model: state control over domestic AI, equity in international governance, and resistance to technology export controls. And the EU's August 2025 activation of GPAI obligations continued the rights-based compliance model, requiring frontier model providers to document and summarize their training data and conduct model evaluations, with a voluntary GPAI Code of Practice offering a presumptive route to compliance.

These three visions competed for adherents among non-aligned states and created structural fragmentation in the international governance landscape. A binding multilateral AI treaty with universal participation became effectively impossible — each major player had too much invested in its own model to accept the constraints of a compromise.

Phase 4: Binding Rules Meet Backlash (Late 2025)

The fourth quarter of 2025 saw binding rules arrive alongside immediate pressure to delay or preempt them. On November 1, the Council of Europe Framework Convention on AI (CETS 225) entered into force — the first binding international AI treaty, ratified by the UK, France, and Norway. Its human rights framework provided a third legal architecture alongside the EU's risk-based model and emerging national rules.

Three weeks later, the European Commission's Digital Omnibus proposal sought to defer the EU AI Act's Annex III high-risk obligations by at least 16 months, citing standards readiness and competitiveness concerns. The prohibitions that had entered force in February were untouched, but the high-risk rules — the Act's most commercially significant provisions — were effectively paused pending negotiations.

On December 11, the Trump administration's preemption EO directed the DOJ to challenge state AI laws in federal court, the FCC to adopt preemptive federal standards, and agencies to condition federal grants on states' AI-law posture. California's SB 53, signed in September as the first US frontier AI safety law, became the immediate target.

Phase 5: Convergence Toward Lighter Regulation (2026 Trajectory)

The March 2026 White House National AI Legislative Framework confirmed the direction of travel. The 7-pillar blueprint proposed statutory preemption of state AI laws, regulatory sandboxes, liability safe harbors, and no new federal safety regulator. It contained no mandatory safety requirements, no binding risk assessments, and no minimum transparency standards. If enacted, it would close the most viable pathway for binding frontier AI regulation in the US for the foreseeable future.

The EU's Omnibus delay negotiations continued into 2026, with the Parliament seeking conditionality that would prevent indefinite deferral. Whether the high-risk rules would apply in 2027, 2028, or later remained unresolved. The GPAI obligations that had entered force in August 2025 remained in place, but the Commission's enforcement powers would not begin until August 2026 at the earliest.

Globally, the competitive pressure dynamic had become self-reinforcing: each jurisdiction that relaxed its rules pointed to others' relaxation as justification. The EU's Omnibus cited US deregulation; US deregulation cited EU competitive advantage; China positioned itself as the responsible governance alternative. Rules that arrived with urgency were accumulating delays, carve-outs, and conditionality.

Open Questions

  • Federal-state preemption: Will US courts uphold executive-branch preemption of state AI laws under existing constitutional doctrine, or will Congress need to legislate? Can California's SB 53 survive a DOJ challenge, and does the answer depend on which circuit hears the case?
  • Transatlantic divergence: As EU enforcement of GPAI obligations begins in 2026 and high-risk rules remain delayed, will companies structure their AI development to comply with EU rules globally or to maintain separate EU-compliant and global-optimized versions? Does divergence accelerate product bifurcation?
  • Durability of binding rules: The Council of Europe Convention and EU AI Act are both in force on paper. But enforcement capacity, political will, and standards readiness are all uncertain. Will the prohibitions that entered force in February 2025 actually be enforced at scale, or will they prove to be paper rules that frontier developers navigate without material consequence?
  • China's governance model: Will the proposed World AI Cooperation Organization (WAICO) attract sufficient membership to function as a genuine governance institution, or will it remain a diplomatic positioning vehicle? Does China's domestic regulatory framework, which includes binding generative AI rules, give it credibility in governance leadership that its political model otherwise constrains?
  • The safety institute network: With the UK and US institutes rebranded away from safety and toward security/standards, what is the function of the international AI safety institute network? Do the remaining national institutes — Japan, Canada, Singapore, South Korea — maintain the original mandate, or do they follow the UK/US direction?
