European Parliament Passes the EU AI Act
Summary
The European Parliament voted overwhelmingly to approve the EU AI Act, the world's first comprehensive legal framework for artificial intelligence. The legislation established a risk-based classification system for AI applications, with outright bans on certain uses and strict requirements for high-risk systems, setting a global regulatory precedent.
What Happened
On March 13, 2024, the European Parliament approved the EU AI Act with 523 votes in favor, 46 against, and 49 abstentions. The legislation, first proposed by the European Commission in April 2021, had undergone nearly three years of negotiation and revision — with generative AI's explosive growth in 2023 forcing significant last-minute additions to address foundation models and general-purpose AI systems.
The Act established a tiered risk framework:
- Unacceptable risk (banned): Social scoring by governments, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), emotion recognition in workplaces and schools, and untargeted scraping of facial images to build facial recognition databases.
- High risk: AI systems used in critical infrastructure, education, employment, law enforcement, and migration. These faced mandatory requirements including risk assessments, data quality standards, transparency, human oversight, and regulatory conformity assessments.
- Limited risk: Chatbots and deepfakes required transparency labels; users had to be informed that they were interacting with an AI system or viewing AI-generated content.
- Minimal risk: Most AI applications faced no additional requirements.
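The tiered framework above can be pictured as a simple lookup from use case to obligation level. The sketch below is purely illustrative: the tier names follow the Act's categories, but the example use-case mapping and all identifiers are hypothetical assumptions, not legal definitions.

```python
from enum import Enum

class RiskTier(Enum):
    """The four tiers of the EU AI Act's risk framework (labels are informal summaries)."""
    UNACCEPTABLE = "banned outright"
    HIGH = "mandatory conformity requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional requirements"

# Illustrative mapping of example use cases to tiers, following the summary above.
# This is a sketch, not an authoritative classification.
EXAMPLE_USE_CASES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "emotion recognition in schools": RiskTier.UNACCEPTABLE,
    "AI-assisted hiring": RiskTier.HIGH,
    "critical infrastructure control": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a known example use case."""
    return EXAMPLE_USE_CASES[use_case]

print(classify("AI-assisted hiring").value)  # mandatory conformity requirements
```

In practice, classification under the Act depends on detailed legal criteria and context of deployment, not a static lookup; the table simply makes the tier structure concrete.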
For general-purpose AI models (including large language models), the Act created a separate framework requiring transparency about training data, compliance with EU copyright law, and — for models classified as posing "systemic risks" — mandatory safety evaluations, incident reporting, and adversarial testing.
The Act included a phased implementation timeline: prohibitions on unacceptable-risk practices took effect first (February 2025), obligations for general-purpose AI models followed (August 2025), and most remaining provisions applied from August 2026.
Why It Matters
The EU AI Act was the world's first comprehensive AI law and represented a fundamentally different approach from the US reliance on executive orders and voluntary commitments. By establishing legally binding requirements with enforcement mechanisms including fines of up to €35 million or 7% of global annual turnover, whichever is higher, the EU created real consequences for non-compliance.
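For the most serious violations, the penalty cap works as the higher of two figures: a fixed €35 million floor or 7% of global annual turnover. A minimal sketch of that arithmetic (function name and the turnover figure are illustrative):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on penalties for the most serious violations:
    the higher of EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a firm with EUR 1 billion in annual turnover, the 7% term dominates:
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

Because the cap scales with turnover, the largest AI providers face exposure well beyond the fixed €35 million floor, which is what gives the enforcement regime teeth against global firms.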
The Act's most significant long-term impact may be its "Brussels effect" — the tendency for EU regulations to become de facto global standards because companies find it easier to comply universally than to maintain separate products for different markets. Whether AI companies would adopt EU standards globally or maintain separate approaches remained an open question, but the history of GDPR suggested that the EU's regulatory power extended well beyond its borders.
The treatment of open-source AI was particularly contested during negotiations. The final text included limited exemptions for open-source models, but the scope and adequacy of these exemptions remained debated — particularly regarding whether open-source developers could realistically comply with the Act's transparency and documentation requirements.