EU AI Act
The Council of the European Union gave final approval to the AI Act, clearing the last legislative hurdle before publication and staged enforcement.
Accuracy-reviewed by the editorial team
The Council of the European Union formally adopted the Artificial Intelligence Act, completing the legislative process for the world's first comprehensive AI regulatory framework. The regulation establishes a risk-based approach to AI governance, with requirements ranging from transparency obligations for limited-risk systems to outright prohibitions on AI applications deemed incompatible with EU values.
Risk-Based Classification Framework
The AI Act categorizes AI systems into four risk tiers, with regulatory requirements increasing with the potential for harm. This graduated approach permits innovation in low-risk applications while ensuring strong protections for systems that could significantly affect individuals' rights, safety, or wellbeing.
- Prohibited AI practices. The regulation bans AI systems that manipulate human behavior to bypass free will, exploit vulnerabilities of specific groups, enable social scoring by governments, or use real-time remote biometric identification in public spaces for law enforcement purposes, subject to narrow exceptions.
- High-risk AI systems. Systems used in critical infrastructure, education, employment, essential services, law enforcement, migration management, and the administration of justice face comprehensive requirements, including conformity assessments, technical documentation, human oversight, and accuracy standards.
- Limited risk systems. AI systems interacting with humans, generating synthetic content, or categorizing individuals based on biometric data must meet transparency requirements ensuring users know they are interacting with AI or viewing AI-generated content.
- Minimal risk systems. Most AI applications face no mandatory requirements under the regulation, though providers should adopt voluntary codes of conduct.
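The four tiers above can be sketched as a simple decision routine. This is an illustrative assumption only: the keyword sets and matching logic below are hypothetical placeholders, and real classification requires legal analysis of a system's intended purpose against the Act's annexes.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical rule sets for illustration -- not an exhaustive or
# legally authoritative reading of the AI Act.
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_DOMAINS = {"critical infrastructure", "education", "employment",
                     "essential services", "law enforcement",
                     "migration management", "administration of justice"}
TRANSPARENCY_TRIGGERS = {"chatbot", "synthetic content generation",
                         "biometric categorization"}

def classify(intended_use: str, domain: str) -> RiskTier:
    """Map a system to a risk tier, checking the strictest tier first."""
    if intended_use in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if intended_use in TRANSPARENCY_TRIGGERS:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Checking the tiers from strictest to most permissive mirrors the regulation's structure: a banned practice is prohibited regardless of deployment domain.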
Obligations for AI Providers and Deployers
The AI Act creates distinct obligations for different actors in the AI value chain, recognizing that providers developing AI systems and deployers using them in specific contexts face different responsibilities and have different capabilities for ensuring compliance.
- Provider obligations. Organizations developing high-risk AI systems must implement risk management systems, ensure training data quality, maintain technical documentation, enable human oversight, and demonstrate accuracy, robustness, and cybersecurity. Providers must also register systems in an EU database before placing them on the market.
- Deployer obligations. Organizations using high-risk AI systems must operate them in accordance with the provider's instructions, ensure human oversight, monitor for risks, keep logs, and inform individuals when they are subject to automated decisions with significant impact.
- Importer and distributor obligations. Organizations bringing AI systems into the EU market or distributing them must verify that providers have completed required conformity assessments and documentation.
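The role-based obligations above can be captured as a lookup table, which is a common starting point for a compliance checklist. The structure and labels here are illustrative assumptions summarizing the bullets in this section, not the Act's own wording.

```python
# Per-role obligation checklist (labels paraphrase this article's summary).
OBLIGATIONS: dict[str, list[str]] = {
    "provider": [
        "risk management system",
        "training data quality",
        "technical documentation",
        "human oversight",
        "accuracy, robustness, cybersecurity",
        "EU database registration before market placement",
    ],
    "deployer": [
        "use per provider instructions",
        "human oversight",
        "risk monitoring",
        "log retention",
        "inform individuals of significant automated decisions",
    ],
    "importer_distributor": [
        "verify provider conformity assessment",
        "verify required documentation",
    ],
}

def obligations_for(role: str) -> list[str]:
    """Return the checklist for a role, or an empty list if unknown."""
    return OBLIGATIONS.get(role, [])
```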
General-Purpose AI Models
The AI Act includes specific provisions for general-purpose AI models, addressing the unique challenges posed by foundation models and large language models that can be adapted for multiple downstream applications.
- Transparency requirements. All GPAI model providers must prepare technical documentation, comply with EU copyright law, and publish training content summaries.
- Systemic risk obligations. GPAI models with high-impact capabilities face additional requirements including model evaluations, adversarial testing, incident reporting, and cybersecurity protections.
Key dates and milestones
The AI Act enters into force 20 days after Official Journal publication, with provisions becoming applicable in phases. Prohibited practices take effect after 6 months, GPAI obligations after 12 months, high-risk system requirements after 24 months, and certain obligations for embedded AI after 36 months. This graduated timeline provides organizations time to achieve compliance while ensuring that the highest-risk applications face prompt regulation.
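The phased offsets described above (6, 12, 24, and 36 months) lend themselves to a small date calculation. The entry-into-force date used in the example is a hypothetical placeholder; the actual date depends on Official Journal publication.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months to a date, clamping the day where needed."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    # Clamp to the last valid day of the target month (e.g. Jan 31 + 1 month).
    for day in (d.day, 30, 29, 28):
        try:
            return date(year, month, day)
        except ValueError:
            continue
    raise AssertionError("unreachable")

# Offsets from entry into force, per the phased timeline in this article.
PHASES = {
    "prohibited practices": 6,
    "GPAI obligations": 12,
    "high-risk requirements": 24,
    "embedded AI obligations": 36,
}

def applicability_dates(entry_into_force: date) -> dict[str, date]:
    """Compute when each group of provisions becomes applicable."""
    return {name: add_months(entry_into_force, m) for name, m in PHASES.items()}
```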
Compliance Preparation Steps
- AI inventory development. Catalog all AI systems in use or development, classifying each according to the AI Act risk tiers based on intended purpose and deployment context.
- Gap assessment. Compare current AI governance practices against applicable AI Act requirements, identifying areas requiring process development, documentation improvement, or technical modification.
- Governance structure establishment. Designate responsible personnel for AI Act compliance, establish oversight procedures, and integrate AI governance into existing compliance frameworks.
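The inventory and gap-assessment steps above can be sketched as a simple record type plus a reporting helper. All field names and the example gap labels are illustrative assumptions for this sketch, not prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an organization's AI inventory (illustrative fields)."""
    name: str
    intended_purpose: str
    deployment_context: str
    risk_tier: str                                   # e.g. "high", "limited", "minimal"
    gaps: list[str] = field(default_factory=list)    # open compliance gaps

def gap_report(inventory: list[AISystemRecord]) -> dict[str, list[str]]:
    """Surface open gaps on high-risk systems for governance review."""
    return {r.name: r.gaps
            for r in inventory
            if r.risk_tier == "high" and r.gaps}
```

Filtering the report to high-risk systems reflects the Act's graduated approach: remediation effort concentrates where the regulatory requirements are heaviest.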
Further reading
- Artificial Intelligence Act: Council gives final green light — Council of the European Union
- European approach to artificial intelligence — European Commission
- ISO 37301:2021 — Compliance Management Systems — International Organization for Standardization