EU Institutions Reach Political Agreement on the AI Act
After years of debate, EU institutions have reached political agreement on the AI Act. The framework bans certain AI uses such as social scoring, mandates conformity assessments for high-risk systems, and creates transparency rules for foundation models.
Fact-checked and reviewed — Kodi C.
After marathon trilogues, EU lawmakers reached political agreement on the AI Act on 8 December 2023. The compromise sets obligations for providers and deployers of high-risk AI, adds transparency and risk controls for general-purpose and foundation models, and tightens limits on biometric identification and emotion recognition in public spaces.
The agreement paves the way for final votes and phased enforcement beginning in 2025, with the bans on prohibited practices applying first. AI teams building for the EU should prepare for conformity assessments, technical documentation, and incident-reporting duties while monitoring how the final text defines systemic risk for large models.
Trilogue Negotiations and Compromise
The agreement concluded intensive trilogue negotiations involving the European Parliament, Council, and Commission. Major compromise areas included the scope of prohibited practices, treatment of general-purpose AI models, law enforcement exemptions, and biometric identification restrictions. Negotiators balanced innovation concerns against fundamental rights protections.
The Parliament secured stronger prohibitions on biometric categorization and emotion recognition while accepting narrower law enforcement exemptions than originally proposed. The Council obtained clearer definitions of high-risk systems and transition provisions for existing deployments. The Commission's risk-based approach survived largely intact.
Prohibited AI Practices
The agreement bans AI systems that manipulate human behavior to circumvent free will, exploit the vulnerabilities of specific groups, enable social scoring by governments, or perform real-time biometric identification in publicly accessible spaces except under strictly defined conditions. These prohibitions take effect six months after entry into force.
Organizations must audit AI portfolios for practices potentially triggering prohibitions. Manipulation assessment requires evaluating whether systems influence decisions through subliminal techniques or exploitation of psychological vulnerabilities. Social scoring evaluations should examine whether systems rate individuals based on behavior across unrelated contexts.
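As a rough sketch, that portfolio audit can start as a rule-based screen that flags systems touching any prohibited category for legal review. The flag names and example systems below are hypothetical illustrations, not terms from the agreement, and no such screen substitutes for case-by-case legal analysis.

```python
# Illustrative screen of an AI portfolio for prohibited-practice risk.
# ASSUMPTION: flag names and example systems are hypothetical; a real
# audit needs legal review of each system's actual behaviour.
PROHIBITION_FLAGS = {
    "subliminal_manipulation",       # influences decisions below awareness
    "exploits_vulnerable_groups",    # targets age, disability, social position
    "government_social_scoring",     # rates people across unrelated contexts
    "realtime_public_biometric_id",  # live biometric ID in public spaces
}

def screen(portfolio):
    """Return (system_name, flagged_practices) pairs needing legal review."""
    return [
        (s["name"], sorted(s["flags"] & PROHIBITION_FLAGS))
        for s in portfolio
        if s["flags"] & PROHIBITION_FLAGS
    ]

portfolio = [
    {"name": "hiring-ranker", "flags": set()},
    {"name": "mood-ads", "flags": {"subliminal_manipulation"}},
]
print(screen(portfolio))  # flags "mood-ads" for review
```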
High-Risk AI Requirements
High-risk systems face mandatory requirements including risk management systems, data governance practices, technical documentation, human oversight capabilities, accuracy standards, and cybersecurity measures. Providers must establish quality management systems and conduct conformity assessments before market placement.
High-risk categories cover AI in critical infrastructure, education, employment, essential services, law enforcement, migration, and justice administration. Deployers face transparency, human oversight, and impact assessment obligations. Documentation must support regulatory audits and user inquiries.
Foundation Model Governance
General-purpose AI models face tiered obligations based on computational resources and systemic risk potential. All foundation models require technical documentation, training process transparency, and copyright compliance mechanisms. Models with systemic risk face additional evaluation, red-teaming, incident reporting, and cybersecurity requirements.
The agreement introduces compute thresholds for triggering systemic risk classification, with Commission authority to adjust thresholds as technology evolves. Open-source model exemptions exist for smaller models meeting specific criteria, though systemic risk obligations apply regardless of licensing.
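The tiering logic reads like a simple threshold check. The 1e25 training-FLOP figure below is an assumption drawn from reporting around the agreement; the final text sets the actual value, and the Commission can adjust it as technology evolves.

```python
# Tiering sketch for general-purpose AI models under the agreement.
# ASSUMPTION: 1e25 training FLOPs is used for illustration only; the
# binding threshold comes from the final text and may be adjusted.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def gpai_tier(training_flops: float, open_source: bool = False) -> str:
    """Classify a general-purpose model into an obligation tier."""
    if training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD:
        # Systemic-risk duties apply regardless of licensing.
        return "systemic risk: evaluation, red-teaming, incident reporting"
    if open_source:
        return "possible open-source exemption (criteria apply)"
    return "baseline: documentation, transparency, copyright compliance"

print(gpai_tier(5e25))                    # systemic-risk tier
print(gpai_tier(1e24, open_source=True))  # possible exemption
```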
Governance and Enforcement Structure
The AI Office within the Commission coordinates enforcement and oversees general-purpose model compliance. National authorities supervise high-risk systems within their jurisdictions. The agreement establishes penalty frameworks with fines reaching the greater of €35 million or 7% of global revenue for prohibition violations.
Regulatory sandboxes enable controlled testing of new AI systems before full compliance requirements apply. The Commission will develop implementing acts, delegated acts, and technical standards detailing specific requirements across system categories.
Rollout Timeline
The phased timetable gives organizations 6 months to comply with the prohibitions, 12 months for foundation model obligations, and 24 months for most high-risk requirements, all counted from entry into force. Some high-risk systems embedded in regulated products receive 36 months. Affected organizations should develop compliance roadmaps aligned to the deadlines that apply to them.
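The schedule above translates into concrete dates once entry into force is known. The date below is a placeholder assumption; the real date follows final adoption and publication in the Official Journal.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 28 for safety)."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, min(d.day, 28))

# ASSUMPTION: placeholder entry-into-force date for illustration only.
ENTRY_INTO_FORCE = date(2024, 8, 1)

MILESTONES = [
    ("prohibited practices", 6),
    ("foundation model obligations", 12),
    ("most high-risk requirements", 24),
    ("high-risk AI embedded in regulated products", 36),
]

for duty, months in MILESTONES:
    print(f"{duty}: applies from {add_months(ENTRY_INTO_FORCE, months)}")
```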
Sources
- European Commission press release summarizing the political deal and next steps.
- Council press release highlighting obligations for high-risk AI and general-purpose systems.