
EU Council Gives Final Approval to the AI Act — May 21, 2024

The Council of the European Union formally adopted the Artificial Intelligence Act, clearing the final legislative hurdle before publication in the EU Official Journal and triggering phased compliance deadlines.

Fact-checked and reviewed — Kodi C.


The Council of the European Union formally adopted the Artificial Intelligence Act on 21 May 2024, clearing the final legislative hurdle before publication in the EU Official Journal. Adoption triggers the phased compliance timeline that organizations must navigate over the following two years.

Legislative Process Completion

Council adoption represents the final step in the EU legislative process, following Parliament approval in March 2024. The formal vote confirmed the text agreed in trilogue negotiations without significant changes. Publication in the Official Journal followed on 12 July 2024, starting the 20-day period before entry into force on 1 August 2024.

The adopted text locks in the full framework addressing AI systems across risk categories. Prohibitions on unacceptable AI practices, obligations for high-risk systems, transparency requirements for general-purpose AI models, and enforcement mechanisms with significant penalties now have binding legal effect across all EU member states.

Compliance Timeline Structure

The AI Act implements phased deadlines rather than a single compliance date. Prohibited practices, including social scoring systems and most real-time remote biometric identification, face a six-month deadline from entry into force: organizations operating such systems must discontinue them by 2 February 2025.

General-purpose AI model obligations take effect at 12 months (2 August 2025), requiring providers to implement transparency measures including technical documentation, training data summaries, and copyright compliance procedures. Models posing systemic risk face additional requirements, including model evaluations, adversarial testing, and incident reporting.

High-risk AI system obligations apply at 24 months from entry into force, establishing 2 August 2026 as the primary compliance deadline. Organizations deploying high-risk AI in areas including employment, credit, law enforcement, and critical infrastructure must complete conformity assessments, establish risk management systems, and implement quality management processes.
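For planning purposes, the staggered application dates can be tracked programmatically. A minimal Python sketch, hard-coding the official application dates; the milestone names are our own labels, not terms from the regulation:

```python
from datetime import date

# Application dates under Regulation (EU) 2024/1689
# (entry into force 1 August 2024; application is staggered).
DEADLINES = {
    "prohibited_practices": date(2025, 2, 2),    # +6 months: banned practices
    "gpai_obligations": date(2025, 8, 2),        # +12 months: GPAI transparency
    "high_risk_obligations": date(2026, 8, 2),   # +24 months: high-risk systems
}

def days_remaining(milestone: str, today: date) -> int:
    """Days until a milestone applies (negative once it has passed)."""
    return (DEADLINES[milestone] - today).days

for name, deadline in DEADLINES.items():
    print(f"{name}: applies from {deadline.isoformat()}")
```

Feeding `days_remaining` into project dashboards makes it obvious which workstreams need buffer, per the timeline-planning recommendation below.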

Governance Structure

The regulation establishes multi-layered governance combining EU-level coordination with national implementation. The European Artificial Intelligence Board brings together representatives of national regulatory authorities to coordinate enforcement approaches and interpretation guidance, promoting consistent application while respecting member state authority.

The EU AI Office within the European Commission provides central coordination, particularly for general-purpose AI models. The Office develops guidance, coordinates scientific expertise, and exercises direct enforcement authority over GPAI providers. This centralized approach addresses concerns about regulatory fragmentation for foundation models.

National market surveillance authorities are responsible for high-risk AI system oversight. Member states must resource these authorities to conduct audits, investigate complaints, and enforce penalties. If you are affected, identify the relevant national authorities and prepare for supervisory engagement.

Penalty Framework

The AI Act establishes significant penalties scaled to violation severity and organizational size. Maximum penalties reach 35 million euros or 7% of global annual turnover, whichever is higher, for prohibited-practices violations. Violations of high-risk system obligations face penalties up to 15 million euros or 3% of turnover. SMEs and startups are capped at the lower of the two amounts.
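The two-sided cap (a fixed amount or a share of worldwide annual turnover, whichever is higher, with SMEs subject to whichever is lower) can be expressed as a small helper. A hedged Python sketch: `max_fine_eur` and its parameters are our own illustration, not terminology from the Act.

```python
def max_fine_eur(turnover_eur: float, tier_fixed_eur: float, tier_pct: float,
                 sme: bool = False) -> float:
    """Upper bound on an administrative fine for one penalty tier.

    Large undertakings: the higher of the fixed amount or the turnover share.
    SMEs and startups: the lower of the two (our reading of the SME rule).
    """
    turnover_share = tier_pct * turnover_eur
    if sme:
        return min(tier_fixed_eur, turnover_share)
    return max(tier_fixed_eur, turnover_share)

# Prohibited-practices tier: EUR 35 million or 7% of global annual turnover.
print(max_fine_eur(1_000_000_000, 35_000_000, 0.07))        # large firm
print(max_fine_eur(1_000_000_000, 35_000_000, 0.07, True))  # SME cap
```

For a firm with 1 billion euros in turnover, the 7% share exceeds the fixed amount, so the percentage governs; under the SME rule the fixed amount binds instead.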

Member states must implement penalty frameworks through national legislation, potentially adding criminal sanctions beyond administrative fines. If you are affected, monitor national implementation for jurisdiction-specific requirements that may exceed the regulation's baseline.

Implementation Planning Priorities

If you are affected, establish project governance for AI Act compliance with appropriate executive sponsorship and cross-functional coordination. Legal, compliance, technology, and business functions all have roles in implementation, and dedicated project management ensures coordinated progress across workstreams.

AI system inventory provides the foundation for compliance scope assessment. Organizations must identify AI systems within regulatory definitions, classify systems by risk category, and document deployment contexts. Inventory completeness determines accuracy of subsequent compliance assessments.
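A compliance inventory can start as one structured record per system. A minimal Python sketch; the field names and `RiskCategory` labels are our own illustration, not the regulation's taxonomy:

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"      # banned practices (must be discontinued)
    HIGH_RISK = "high_risk"        # high-risk uses such as employment or credit
    LIMITED_RISK = "limited_risk"  # transparency obligations only
    MINIMAL_RISK = "minimal_risk"  # no specific obligations

@dataclass
class AISystemRecord:
    name: str
    owner: str               # accountable business function
    deployment_context: str  # e.g. "employment screening"
    risk_category: RiskCategory
    gpai_based: bool = False # built on a general-purpose model?

def high_risk_scope(inventory: list[AISystemRecord]) -> list[AISystemRecord]:
    """Systems needing conformity assessment ahead of the 24-month deadline."""
    return [r for r in inventory if r.risk_category is RiskCategory.HIGH_RISK]
```

Classifying each record by risk category then scopes the conformity-assessment workstream directly from the inventory, which is why inventory completeness drives assessment accuracy.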

Implementation Recommendations

  • Timeline planning: Establish project timelines aligned with 6-month, 12-month, and 24-month milestones with appropriate buffer for complexity.
  • Conformity assessment: Begin evaluating high-risk AI systems against Annex III criteria and conformity assessment requirements.
  • Documentation preparation: Develop technical documentation templates meeting Annex IV requirements for high-risk systems.
  • GPAI assessment: Evaluate general-purpose AI model usage and provider obligations for transparency requirements.
  • Governance structure: Establish or improve AI governance structures to support ongoing compliance and risk management.

Source material

  1. Artificial intelligence act: Council gives final green light — Council of the European Union
  2. Regulation (EU) 2024/1689 on artificial intelligence — European Union
  3. ISO/IEC 42001:2023 — Artificial Intelligence Management System — International Organization for Standardization
