EU Council agrees general approach on AI Act
EU telecom and digital ministers adopted a general approach on the Artificial Intelligence Act on 6 December 2022, locking in the Council’s negotiating mandate on risk tiers, prohibited uses, and conformity assessments before trilogue talks.
Fact-checked and reviewed — Kodi C.
Council Agreement and Legislative Process
The Council of the European Union adopted its general approach on the AI Act on 6 December 2022, establishing the member state position for trilogue negotiations with the European Parliament and Commission.
The general approach represented the culmination of 18 months of Council working group deliberations, during which member states negotiated amendments addressing scope concerns, high-risk classification criteria, enforcement mechanisms, and support for innovation. Achieving consensus among 27 member states with varying economic interests and regulatory philosophies required significant compromise, particularly regarding provisions affecting national security systems and AI applications in regulated sectors with existing oversight mechanisms.
Risk-Based Framework Refinements
The Council approach maintained the Commission's risk-based regulatory architecture while refining risk classification criteria and requirements. High-risk AI system definitions received particular attention, with member states seeking clarity on which applications trigger enhanced obligations. The general approach expanded exemptions for AI systems used in national security contexts, addressing sovereignty concerns from member states reluctant to subject defense and intelligence applications to Commission oversight. Classification debates foreshadowed significant trilogue discussions, as Parliament's more expansive high-risk categories conflicted with the Council's narrower approach.
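The tiered logic described above can be sketched as a simple screening routine. This is a minimal illustration only: the tier names follow the Act's architecture, but the example use cases and the national-security carve-out are assumptions for demonstration, not the Act's legal definitions.

```python
# Illustrative sketch of risk-tier screening under an AI Act-style framework.
# The use-case lists below are hypothetical examples, not legal categories.

PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_USES = {
    "recruitment",
    "credit_scoring",
    "law_enforcement",
    "critical_infrastructure",
}

def classify_ai_system(use_case: str, national_security: bool = False) -> str:
    """Return a coarse risk tier for an AI system's intended use."""
    if national_security:
        # The Council general approach broadened exemptions for
        # defense and intelligence applications.
        return "out_of_scope"
    if use_case in PROHIBITED_USES:
        return "prohibited"
    if use_case in HIGH_RISK_USES:
        return "high_risk"
    return "limited_or_minimal_risk"
```

A real classification turns on detailed annexes and intended-purpose analysis rather than keyword matching; the sketch only shows why the boundary between tiers was the central negotiating question.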
General Purpose AI Provisions
The general approach introduced provisions addressing general purpose AI systems that could be integrated into multiple downstream applications, a category not fully addressed in the Commission's original proposal. Recognizing that foundation models and large language models create novel regulatory challenges, member states established frameworks for obligations that would apply to providers of general purpose AI regardless of specific deployment contexts. These provisions became central trilogue discussion points as the technological landscape evolved rapidly following ChatGPT's release days before Council adoption.
Innovation and Competitiveness Considerations
Member states emphasized innovation-friendly elements including regulatory sandboxes, support for SME compliance, and proportionality requirements for small-scale providers. The general approach strengthened sandbox provisions, requiring member states to establish controlled environments where AI developers could test innovations with regulatory guidance and temporary exemptions from certain requirements. Support measures for startups and SMEs addressed concerns that compliance costs could disadvantage European companies against competitors in less regulated jurisdictions. The balance between protection and innovation remained contentious throughout negotiations.
Enforcement and Governance Structure
The general approach addressed AI Act enforcement mechanisms, establishing the relationship between national authorities and EU-level oversight. Member states retained primary enforcement responsibility while accepting a coordinating role for the Commission and the establishment of a European AI Board. Penalties for non-compliance could reach significant percentages of global turnover, though member states sought proportionality safeguards and due process protections. The enforcement architecture drew on lessons from GDPR enforcement, where varying national approaches created compliance complexity for organizations operating across multiple jurisdictions.
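The turnover-linked penalty structure works like the GDPR's: the ceiling is the greater of a fixed amount and a percentage of worldwide annual turnover. The figures below (EUR 30 million or 6%, as in the Commission proposal's top tier) are illustrative assumptions; the applicable numbers vary by infringement category and were adjusted in later texts.

```python
# GDPR-style penalty ceiling: the maximum fine is the greater of a fixed
# amount and a share of worldwide annual turnover. The default figures
# (EUR 30M / 6%) are illustrative, not the final AI Act numbers.

def penalty_ceiling(global_turnover_eur: float,
                    fixed_cap_eur: float = 30_000_000,
                    turnover_pct: float = 0.06) -> float:
    """Upper bound of an administrative fine for a given undertaking."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Under these assumed figures, an undertaking with EUR 2bn turnover faces
# a ceiling of EUR 120m, well above the fixed cap; a smaller firm with
# EUR 100m turnover is bounded by the fixed EUR 30m cap instead.
```

The "greater of" construction is what makes the regime scale with company size, which is why proportionality safeguards for SMEs were a recurring Council concern.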
Trilogue Positioning
Adoption of the general approach enabled Council participation in trilogue negotiations with Parliament, which had adopted its own position containing significant differences from both Council and Commission approaches. Key negotiating points included high-risk classification scope, general purpose AI obligations, enforcement mechanisms, and prohibited AI applications.
The political agreement eventually reached in December 2023 required significant compromise from all three institutions, with the final text reflecting negotiated positions rather than any single institution's preferred approach. Organizations tracking AI Act development should understand the evolution from Council general approach through trilogue to final text.
Industry and Stakeholder Reactions
Industry associations generally welcomed Council positions emphasizing proportionality and innovation support while expressing concerns about compliance complexity and definitional ambiguities remaining in the text. Civil society organizations criticized weakened provisions compared to Commission proposals, particularly regarding fundamental rights protections and biometric surveillance restrictions.
The general approach represented a snapshot in an ongoing legislative evolution rather than final requirements, with significant modifications occurring during subsequent trilogue negotiations. Organizations affected by the Act should monitor all three institutional positions to anticipate the range of possible final outcomes.
Source material
- Council press release announces general approach adoption and summarizes key positions.
- General approach text provides the complete Council negotiating position.
- Legislative observatory tracks AI Act progress through institutional negotiations.