Policy Briefing — EU AI Act Legislative Proposal
The European Commission’s draft Artificial Intelligence Act of 21 April 2021 sets out a risk-based regime for AI systems, introducing prohibitions, high-risk obligations, conformity assessments, and market surveillance structures across the EU single market.
Executive briefing: On 21 April 2021 the European Commission proposed the Artificial Intelligence Act (COM(2021) 206 final), the first comprehensive EU framework governing AI. The regulation applies extra-territorially to providers, importers, distributors, and users of AI systems placed on the EU market or whose outputs affect people in the Union. It categorises AI systems into prohibited, high-risk, limited-risk (transparency obligations), and minimal-risk tiers. Organisations must prepare for conformity assessments, technical documentation, post-market monitoring, and fundamental rights safeguards well ahead of the Act's application, expected two years after adoption.
Scope and definitions
The Act adopts a broad definition of AI (Annex I) encompassing machine learning, logic- and knowledge-based approaches, and statistical methods. It covers standalone software and embedded systems, including safety components of products regulated under sectoral legislation (e.g., Medical Devices Regulation, Machinery Regulation). Territorial scope mirrors the GDPR: providers established outside the EU fall within the regime when their systems’ outputs are used in the Union.
Risk categories
- Prohibited practices (Article 5). Includes manipulative techniques causing harm, exploitation of vulnerable groups, social scoring by public authorities, and real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, subject to narrowly defined exceptions.
- High-risk systems (Articles 6–51). Two groups qualify: (1) AI systems that are safety components of regulated products (listed in Annex II) and (2) standalone systems listed in Annex III, such as biometric identification, critical infrastructure management, education, employment, credit scoring, and public services. High-risk systems face stringent requirements.
- Limited-risk systems. Subject to transparency obligations (Article 52), such as informing people that they are interacting with a chatbot, that emotion recognition or biometric categorisation is in use, or that content is a deepfake.
- Minimal-risk systems. Subject to voluntary codes of conduct (Article 69). A classification sketch follows this list.
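The four tiers can be expressed as a decision cascade that an inventory tool runs over each system profile. A minimal sketch in Python follows; the profile keys (`article5_practice`, `annex_ii_safety_component`, and so on) are hypothetical names that a legal team would populate from its own analysis, not terms defined by the Act.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # Article 5 practices
    HIGH = "high"               # Annex II safety components or Annex III use cases
    LIMITED = "limited"         # Article 52 transparency obligations
    MINIMAL = "minimal"         # voluntary codes of conduct (Article 69)

def classify(system: dict) -> RiskTier:
    """Map a hypothetical system profile onto the proposal's four tiers.

    Example profile:
    {"article5_practice": False, "annex_iii_use_case": True,
     "annex_ii_safety_component": False, "interacts_with_humans": True}
    """
    if system.get("article5_practice"):
        return RiskTier.PROHIBITED
    if system.get("annex_ii_safety_component") or system.get("annex_iii_use_case"):
        return RiskTier.HIGH
    # Chatbots, emotion recognition, biometric categorisation, deepfakes.
    if system.get("interacts_with_humans") or system.get("generates_synthetic_content"):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

cv_screener = {"annex_iii_use_case": True}   # employment: Annex III, point 4
print(classify(cv_screener))                 # RiskTier.HIGH
```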
Obligations for high-risk systems
Providers must implement a risk management system (Article 9), ensure high-quality datasets (Article 10), maintain technical documentation (Annex IV), enable human oversight (Article 14), deliver robustness, accuracy, and cybersecurity (Article 15), and establish post-market monitoring and incident reporting (Articles 61–62). Before placing systems on the market, providers undergo conformity assessments—either self-assessment or third-party evaluation depending on the system.
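One way to keep these scattered obligations auditable is a machine-readable checklist tying each requirement to supporting evidence. The sketch below is illustrative only: the article citations come from the proposal, while the `Obligation` structure, evidence paths, and status logic are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Obligation:
    article: str          # citation in the proposal
    requirement: str      # what the provider must demonstrate
    evidence: list[str] = field(default_factory=list)  # links to artefacts

    @property
    def satisfied(self) -> bool:
        return bool(self.evidence)

CHECKLIST = [
    Obligation("Art. 9", "Risk management system"),
    Obligation("Art. 10", "Data and data governance"),
    Obligation("Annex IV", "Technical documentation"),
    Obligation("Art. 14", "Human oversight measures"),
    Obligation("Art. 15", "Accuracy, robustness, cybersecurity"),
    Obligation("Arts. 61-62", "Post-market monitoring and incident reporting"),
]

def gaps(checklist: list[Obligation]) -> list[str]:
    """Return the articles still lacking supporting evidence."""
    return [o.article for o in checklist if not o.satisfied]

CHECKLIST[0].evidence.append("docs/rms-v2.pdf")  # hypothetical artefact
print(gaps(CHECKLIST))  # everything except Art. 9
```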
Users (deployers) of high-risk AI must operate according to instructions for use, monitor operation, keep logs, and cooperate with providers. Importers and distributors must verify CE marking and documentation, maintaining traceability.
Conformity assessment and quality management
- Quality management system (QMS). Providers must maintain documented procedures covering design controls, data governance, supplier management, corrective actions, and technical documentation control. The QMS aligns with ISO 9001 principles and should integrate with ISO/IEC 23894 risk management for AI.
- Assessment routes. For most Annex III systems, providers self-assess under the internal control procedure against harmonised standards. For remote biometric identification systems, a notified body assessment is required unless harmonised standards are applied in full. Where harmonised standards are absent, providers must follow common specifications or justify technical solutions that are at least equivalent (see the routing sketch after this list).
- CE marking and registration. High-risk systems must bear the CE mark and be registered in the EU database. Providers must notify market surveillance authorities of serious incidents and malfunction trends.
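Expressed as code, the routing rule in Article 43(1) of the proposal reduces to a single branch. The sketch below assumes a simplified, hypothetical interface; a real classifier would also account for the sectoral procedures that apply to Annex II products.

```python
def assessment_route(annex_iii_point: int, harmonised_standards_applied: bool) -> str:
    """Pick the conformity assessment route for a standalone Annex III system.

    Mirrors Article 43(1) of the proposal: remote biometric identification
    systems (Annex III, point 1) may self-assess only when harmonised
    standards are applied in full; otherwise a notified body is involved.
    All other Annex III systems follow internal control.
    """
    if annex_iii_point == 1 and not harmonised_standards_applied:
        return "notified body assessment (Annex VII)"
    return "internal control self-assessment (Annex VI)"

print(assessment_route(annex_iii_point=1, harmonised_standards_applied=False))
print(assessment_route(annex_iii_point=4, harmonised_standards_applied=True))
```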
Data governance requirements
Training, validation, and testing datasets must be relevant, representative, free of errors, and complete. Providers need documented measures for data collection, annotation, bias mitigation, and privacy compliance under GDPR and ePrivacy rules. For biometric systems, consent and lawful bases must satisfy both GDPR and sectoral laws. Data governance also requires traceability of data provenance, versioning, and labelling to support audits.
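In practice, the Article 10(3) qualities translate into automated checks run before each training cycle. The following sketch is a deliberately small illustration, assuming a toy record format and a single protected attribute; production pipelines would apply far richer statistical tests for representativeness and error rates.

```python
def dataset_checks(records: list[dict], required_fields: tuple[str, ...],
                   protected_attr: str) -> dict:
    """Illustrative checks against the Article 10(3) dataset qualities.

    `records` is a hypothetical list of labelled training examples.
    """
    total = len(records)
    # Completeness: every required field present on every record.
    complete = sum(all(r.get(f) is not None for f in required_fields) for r in records)
    # Representativeness proxy: distribution across a protected attribute.
    groups: dict = {}
    for r in records:
        groups[r.get(protected_attr)] = groups.get(r.get(protected_attr), 0) + 1
    return {
        "rows": total,
        "complete_fraction": complete / total if total else 0.0,
        "group_counts": groups,  # flag under-represented groups for review
    }

sample = [{"feature": 1.0, "label": 0, "sex": "F"},
          {"feature": None, "label": 1, "sex": "M"}]
print(dataset_checks(sample, required_fields=("feature", "label"), protected_attr="sex"))
```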
Human oversight
High-risk AI must allow natural persons to understand capabilities and limitations, override or interrupt operations, and detect anomalies. Providers should define oversight strategies (human-in-the-loop, human-on-the-loop, human-in-command) tailored to use cases. Training programmes for operators must equip them to interpret model outputs, respond to alerts, and escalate incidents.
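A common implementation pattern is a gate that withholds high-impact actions until an operator approves, overrides, or halts them. The sketch below is a minimal human-in-the-loop example under assumed names (`Decision`, `confirm`); the Act requires effective oversight but does not prescribe this mechanism.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    model_score: float         # e.g. a credit risk score
    recommended_action: str

def human_in_the_loop(decision: Decision, confirm) -> str:
    """Gate a model recommendation behind an explicit operator choice.

    `confirm` is any callable returning 'approve', 'override', or 'halt';
    in production this would be a review UI rather than a function.
    """
    verdict = confirm(decision)
    if verdict == "approve":
        return decision.recommended_action
    if verdict == "override":
        return "manual review"   # operator substitutes their own judgement
    return "halted"              # interrupt: the system takes no action

demo = Decision("applicant-42", 0.91, "decline")
print(human_in_the_loop(demo, confirm=lambda d: "override"))  # manual review
```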
Transparency and information duties
Providers must supply detailed instructions for use, including system description, intended purpose, performance metrics, human oversight measures, and cybersecurity controls. Logs must be automatically generated and retained to support traceability. Limited-risk systems (chatbots, deepfakes, emotion recognition) must inform users they are interacting with AI or synthetic content.
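Automatic log generation can be as simple as appending structured records to an append-only store. The sketch below hash-chains each entry to its predecessor so tampering is evident at audit time; the chaining is an illustrative design choice, not a requirement of the Act.

```python
import hashlib
import json
import time

def log_event(path: str, event: dict) -> str:
    """Append a timestamped, hash-chained record to a JSON-lines log."""
    try:
        with open(path, "rb") as f:
            prev = f.read().splitlines()[-1]   # last record, as bytes
    except (FileNotFoundError, IndexError):
        prev = b""                             # first entry in the chain
    record = {
        "ts": time.time(),
        "prev_hash": hashlib.sha256(prev).hexdigest(),
        **event,
    }
    line = json.dumps(record, sort_keys=True)
    with open(path, "a") as f:
        f.write(line + "\n")
    return line

log_event("inference.log", {"model": "scorer-v3", "input_id": "req-7", "output": 0.42})
```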
Market surveillance and enforcement
Member States will designate national competent authorities and notifying authorities. A European Artificial Intelligence Board (EAIB) will facilitate coordination, mirroring the European Data Protection Board. Market surveillance authorities can order corrective actions, withdraw products, and impose administrative fines up to €30 million or 6% of global turnover, depending on the infringement (Article 71). Fines are tiered: breaches of prohibited practices carry the highest penalties, while supplying incorrect information to authorities carries fines of up to €10 million or 2% of turnover.
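Each ceiling is a "whichever is higher" comparison between a fixed amount and a share of total worldwide annual turnover. The sketch below encodes the three tiers of Article 71 of the proposal, including the intermediate €20 million / 4% tier for other non-compliance, which the paragraph above does not spell out.

```python
def max_fine(tier: str, worldwide_turnover_eur: float) -> float:
    """Upper bound of the administrative fine under Article 71 of the proposal.

    The ceiling is the higher of a fixed amount and a percentage of total
    worldwide annual turnover for the preceding financial year.
    """
    tiers = {
        "prohibited_or_data": (30_000_000, 0.06),    # Art. 5 / Art. 10 breaches
        "other_obligations": (20_000_000, 0.04),     # other non-compliance
        "incorrect_information": (10_000_000, 0.02), # misleading authorities
    }
    fixed, share = tiers[tier]
    return max(fixed, share * worldwide_turnover_eur)

# A provider with €2bn turnover faces up to €120m for a prohibited practice.
print(f"{max_fine('prohibited_or_data', 2_000_000_000):,.0f}")  # 120,000,000
```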
Relationship with existing legislation
The AI Act complements sector-specific frameworks: Medical Devices Regulation, In Vitro Diagnostic Regulation, General Product Safety Directive, NIS Directive, GDPR, and consumer law. Providers should perform regulatory mapping to determine overlapping conformity assessments and integrate AI requirements into existing quality systems. For example, medical device manufacturers must update technical documentation to address AI-specific risk controls in addition to MDR Annex II requirements.
Implementation planning for organisations
- Inventory AI systems. Classify applications against Annex III categories and assess whether systems qualify as safety components under sectoral legislation.
- Establish governance. Create cross-functional AI compliance committees involving legal, ethics, data science, cybersecurity, and product leadership. Define accountability for provider versus user obligations.
- Build documentation frameworks. Implement model cards, data sheets, algorithmic impact assessments, and traceability logs aligned with Annex IV requirements.
- Design monitoring. Deploy telemetry capturing performance, bias metrics, and incidents; integrate with incident response processes to meet the 15-day serious incident reporting deadline (see the deadline sketch after this list).
- Engage in standardisation. Participate in CEN-CENELEC JTC 21, ETSI, ISO/IEC JTC 1/SC 42, and industry consortia shaping harmonised standards, ensuring forthcoming specifications reflect practical constraints.
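For the incident-reporting deadline mentioned above, a small utility can compute the latest permissible notification time from the moment of awareness. The sketch treats the Article 62 window as 15 calendar days, which is this example's reading rather than legal advice.

```python
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(days=15)  # Article 62: no later than 15 days

def reporting_deadline(became_aware: datetime) -> datetime:
    """Latest permissible notification time for a serious incident.

    Article 62 of the proposal requires reporting immediately, and in any
    event within 15 days of the provider becoming aware of the incident.
    """
    return became_aware + REPORTING_WINDOW

aware = datetime(2021, 6, 1, 9, 30, tzinfo=timezone.utc)
print(reporting_deadline(aware).isoformat())  # 2021-06-16T09:30:00+00:00
```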
Timeline awareness
Legislative negotiations between the European Parliament and the Council continued through 2022–2023, with political agreement anticipated in 2023 and application roughly two years after adoption. Organisations must plan multi-year roadmaps covering design, procurement, and operational controls. Monitor delegated acts that will update Annex III and detail post-market monitoring templates.
Zeph Tech supports AI Act readiness by mapping portfolios to risk categories, building quality management systems, and preparing conformity assessment dossiers that withstand scrutiny from EU market surveillance authorities.