
AI · Credibility 53/100 · 7 min read

AI Governance Briefing — March 13, 2024

The European Parliament approved the EU AI Act, finalising risk-tier obligations, general-purpose AI transparency duties, and phased enforcement milestones for providers and deployers.

Executive briefing: On March 13, 2024 the European Parliament voted to adopt the Artificial Intelligence Act with 523 votes in favour, 46 against, and 49 abstentions. The regulation now heads to the Council for formal endorsement before publication in the EU Official Journal; it enters into force 20 days after publication and then begins phased application across the bloc.

Scope and definitions

  • Risk-based tiers. The law differentiates unacceptable-risk, high-risk, limited-risk, and minimal-risk systems, with obligations scaling based on potential harm to safety, fundamental rights, and the rule of law (see the sketch after this list).
  • General-purpose AI (GPAI). Foundation models and GPAI systems face transparency, technical documentation, and systemic risk mitigation duties, including energy and compute reporting.
  • Sectoral alignment. Annex I integrates the AI Act with existing EU product safety regimes, while Annex III lists high-risk use cases spanning biometrics, employment, critical infrastructure, and access to essential services.
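
To make the tiers concrete, here is a minimal Python sketch of how a compliance team might triage an internal AI inventory against them. The trigger sets are simplified placeholders of our own, not the regulation's actual tests under Articles 5 and 6.

    from enum import Enum

    class RiskTier(Enum):
        """The AI Act's four tiers, with simplified annotations."""
        UNACCEPTABLE = "prohibited"        # Article 5 practices
        HIGH = "high-risk"                 # Annex I products / Annex III uses
        LIMITED = "transparency-only"      # Article 50 disclosure duties
        MINIMAL = "no new obligations"

    def classify(use_case: str) -> RiskTier:
        # Placeholder trigger sets; real classification needs legal review.
        prohibited = {"social scoring", "untargeted face scraping"}
        high_risk = {"hiring screener", "credit scoring", "exam proctoring"}
        limited = {"customer chatbot", "synthetic media generator"}
        if use_case in prohibited:
            return RiskTier.UNACCEPTABLE
        if use_case in high_risk:
            return RiskTier.HIGH
        if use_case in limited:
            return RiskTier.LIMITED
        return RiskTier.MINIMAL

    print(classify("hiring screener").value)  # -> "high-risk"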

Timeline

  • April 21, 2021. The European Commission presented the original AI Act proposal alongside a coordinated plan update.
  • June 14, 2023. The European Parliament adopted its negotiating position, enabling trilogue talks with the Council and Commission.
  • December 8, 2023. Parliament and Council reached a political agreement on the final text after a 36-hour trilogue.
  • February 2, 2024. EU Member States' permanent representatives (Coreper) endorsed the compromise text, clearing the way for Parliament's plenary vote.
  • February 13, 2024. The Parliament's IMCO and LIBE committees adopted the compromise text, confirming the version sent to the March plenary.
  • May 21, 2024. The Council of the EU granted final approval, clearing the way for publication in the Official Journal on July 12, 2024.
  • July 12, 2024. The regulation was published in the Official Journal as Regulation (EU) 2024/1689, starting the 20-day countdown to entry into force.
  • August 1, 2024. The regulation entered into force 20 days after publication, activating the governance framework that links the European Commission's AI Office with national authorities.
  • February 2, 2025. Prohibitions on unacceptable-risk practices, such as social scoring and, with narrow exceptions, real-time remote biometric identification in publicly accessible spaces for law enforcement, become enforceable six months after entry into force.
  • May 2, 2025. Codes of practice and voluntary commitments for GPAI models must be ready nine months after entry into force to guide systemic risk mitigation.
  • August 2, 2025. GPAI providers must comply with documentation, transparency, and model governance duties 12 months after entry into force.
  • August 2, 2026. Standalone high-risk systems listed in Annex III must comply with risk management, data governance, and human oversight obligations 24 months after entry into force.
  • August 2, 2027. High-risk AI systems embedded in products regulated under Annex I product safety law must comply 36 months after entry into force, bringing AI requirements into existing CE-marking conformity assessments.

Pre-legislative groundwork

  • February 19, 2020. The European Commission issued its AI White Paper, launching consultations on a risk-based regulatory framework that shaped the proposal.
  • October 20, 2020. The European Parliament adopted resolutions on AI ethics, civil liability, and intellectual property, signalling support for tiered obligations and human oversight safeguards.
  • December 6, 2022. EU Member States meeting in the Council adopted their general approach, aligning positions on biometrics, general-purpose AI, and enforcement architecture before trilogue negotiations.

Member state preparation deadlines

  • By August 2, 2025. Member States must designate national competent authorities, including notifying authorities and market surveillance authorities, and communicate them to the European Commission (Article 70).
  • By August 2, 2026. Each Member State must stand up at least one AI regulatory sandbox and communicate participation rules to the European Commission (Article 57).

Implementation infrastructure

  • AI Office build-out. The European Commission established an AI Office in early 2024 to coordinate enforcement, oversee GPAI codes of practice, and support national competent authorities.
  • Harmonised standards. On May 22, 2023 the Commission issued a standardisation request tasking CEN and CENELEC with drafting AI Act standards on data governance, risk management, and quality management to underpin conformity assessments.
  • Member state readiness. During the transition period, Member States must designate notifying authorities and market surveillance authorities and prepare to collaborate through the AI Board and national supervisory structures.

Governance alignment

  • EU Digital Services Act. Coordinating AI transparency reporting with DSA due diligence helps platforms demonstrate oversight of recommender systems and content moderation AI.
  • NIST AI RMF. The framework's Govern, Map, Measure, and Manage functions map to AI Act requirements for risk management, data governance, human oversight, and logging (see the crosswalk sketch after this list).
  • ISO/IEC 42001. Organisations pursuing AI management system certification can use AI Act obligations to prioritise controls around lifecycle governance, change management, and incident response.
  • OECD AI Principles. Embedding proportionality, accountability, and robustness principles helps multinational deployments align with the many jurisdictions that reference the OECD framework.
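
A hypothetical crosswalk, reflecting our own reading rather than any official concordance, can be maintained as simple data so governance teams can trace one framework to the other:

    # Editorial crosswalk between NIST AI RMF functions and AI Act themes;
    # the pairings are our assumptions, not an official mapping.
    RMF_TO_AI_ACT = {
        "Govern": ["Art. 9 risk management", "Art. 17 quality management"],
        "Map": ["risk-tier classification", "Annex III scoping"],
        "Measure": ["Art. 15 accuracy, robustness, and cybersecurity testing"],
        "Manage": ["Art. 12 logging", "Art. 72 post-market monitoring"],
    }

    for function, duties in RMF_TO_AI_ACT.items():
        print(f"{function}: {', '.join(duties)}")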

Operational impacts

  • Conformity assessment. High-risk system providers must implement quality management systems, maintain technical documentation, and register in the EU database prior to market placement.
  • Data governance. Training, validation, and testing datasets require documented relevance, representativeness, and bias mitigation, with traceability for regulators.
  • Incident reporting. Providers must log serious incidents and corrective actions, while deployers need monitoring processes and cooperation with competent authorities; a record-keeping sketch follows this list.
  • Contractual obligations. GPAI providers will face customer requests for risk mitigation support, transparency artefacts, and downstream usage restrictions to demonstrate shared compliance.
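
As a sketch of the record-keeping this implies, a provider might capture serious incidents in a structure like the one below. The field names are illustrative assumptions, not a mandated reporting template.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class SeriousIncident:
        """Illustrative incident record; not an official template."""
        system_id: str
        description: str
        harm_category: str                  # e.g. "health", "fundamental rights"
        detected_at: datetime
        corrective_actions: list[str] = field(default_factory=list)
        reported_to_authority: bool = False

    incident = SeriousIncident(
        system_id="resume-screener-v3",
        description="Systematic down-ranking of applicants from one region",
        harm_category="fundamental rights",
        detected_at=datetime.now(timezone.utc),
    )
    incident.corrective_actions.append("Rolled back to v2 pending retraining")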

Implementation priorities

  • Classify AI portfolios against the EU risk tiers and document rationale, noting which use cases fall within Annex III categories or trigger GPAI duties.
  • Stand up cross-functional compliance programmes that integrate legal, privacy, cybersecurity, safety, and product owners to prepare conformity assessments and CE markings.
  • Update model lifecycle tooling to capture dataset provenance, evaluation metrics, and red-teaming outputs required for technical documentation (see the sketch after this list).
  • Negotiate updated contractual assurances from foundation model vendors covering access to documentation, incident escalation, and systemic risk mitigation commitments.
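
The lifecycle-tooling item can start as small as one structured record per model release; the schema below is a hypothetical sketch of the provenance, evaluation, and red-teaming fields that Annex IV documentation would draw on.

    import json

    # Hypothetical per-release documentation record; field names are
    # assumptions, not the Annex IV structure itself.
    model_record = {
        "model": "support-triage-classifier-v4",
        "datasets": [
            {"name": "tickets-2023", "source": "internal CRM export",
             "bias_checks": ["label balance", "language coverage"]},
        ],
        "evaluations": {"accuracy": 0.91, "false_positive_rate": 0.04},
        "red_teaming": ["prompt-injection sweep", "demographic stress test"],
    }

    print(json.dumps(model_record, indent=2))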

Enablement moves

  • Deliver executive briefings explaining phased enforcement so budget planning anticipates 6-, 9-, 12-, 24-, and 36-month milestones.
  • Embed AI Act logging, monitoring, and human oversight requirements into secure development lifecycles, product launch checklists, and risk committees.
  • Coordinate with EU representatives or competent national authorities to stay current on harmonised standards, implementing acts, and sectoral guidance as they are issued.

General-purpose AI compliance runway

  • August 2025 documentation. By August 2, 2025, GPAI providers must publish summaries of training data, maintain technical documentation, and supply usage instructions so deployers can meet transparency duties (a sketch of such a summary follows this list).
  • Systemic model safeguards. GPAI models that meet the systemic-risk thresholds must undergo model evaluations and adversarial testing, and their providers must share mitigation reports with the AI Office once the Commission adopts the supporting methodologies in 2025.
  • Serious incident reporting. GPAI providers need incident escalation and notification playbooks ready for August 2025 so they can inform the AI Office and national authorities without undue delay when safety or fundamental rights risks emerge.
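
A rough sketch of the public training-data summary artefact follows. The AI Office's template will govern the real format, so every key below is a placeholder assumption.

    # Placeholder structure for the public training-content summary a GPAI
    # provider must publish; the official template supersedes this sketch.
    training_data_summary = {
        "provider": "ExampleLab",          # hypothetical provider
        "model": "example-gpt-1",
        "data_categories": ["licensed corpora", "public web crawl",
                            "synthetic data"],
        "copyright_policy_url": "https://example.com/tdm-policy",
        "opt_outs_respected": True,
    }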

Article-level obligations to highlight

  • Article 9 — Risk management system. High-risk providers must maintain a documented, continuous risk management process covering design, testing, deployment, and post-market monitoring.
  • Article 10 — Data governance. Training, validation, and testing datasets must meet quality criteria for relevance, representativeness, and bias mitigation with supporting documentation.
  • Article 11 — Technical documentation. Providers need comprehensive technical files before market placement so authorities can assess conformity and registrants can complete EU database submissions.
  • Article 12 — Record-keeping. Automatic logging capabilities are required to support traceability, post-market surveillance, and incident response obligations (see the logging sketch after this list).
  • Article 13 — Transparency and instructions. Providers must supply deployers with clear usage instructions, capabilities, and limitations to support compliant operation.
  • Article 14 — Human oversight. Systems must be designed with effective human oversight measures to prevent or minimise risks to health, safety, and fundamental rights.
  • Article 15 — Accuracy, robustness, and cybersecurity. Providers must design and develop AI systems to achieve resilience against errors, faults, and malicious interference throughout the lifecycle.
  • Article 50 — Transparency for certain AI systems. Providers and deployers of AI systems that interact with people, perform emotion recognition, or generate deepfakes must disclose the AI's involvement and label synthetic content.
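
To illustrate Article 12 in practice, the sketch below appends a structured traceability record per automated decision. The record layout is our assumption; the regulation mandates logging capabilities rather than a schema.

    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("ai_act_audit")

    def log_decision(system_id: str, input_ref: str, outcome: str,
                     human_override: bool) -> None:
        """Emit one traceability record per automated decision."""
        audit_log.info(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "system": system_id,
            "input_ref": input_ref,    # a pointer, not raw personal data
            "outcome": outcome,
            "human_override": human_override,
        }))

    log_decision("loan-scorer-v1", "application/8812", "refer-to-analyst", False)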

Adjacent EU regulatory deadlines

  • February 17, 2024 — Digital Services Act (DSA) applies in full to all intermediaries (VLOPs/VLOSEs have been in scope since August 2023). Platforms already meeting DSA risk management and transparency duties can reuse governance artefacts for AI Act limited-risk compliance.
  • October 17, 2024 — NIS2 transposition deadline. Critical sectors must align cybersecurity governance and supply chain controls; AI risk controls should map to the same oversight committees.
  • January 17, 2025 — Digital Operational Resilience Act (DORA) application. Financial entities can fold AI Act model governance into ICT risk management, incident reporting, and third-party oversight under DORA.
  • September 12, 2025 — Data Act cloud switching obligations. Data Act portability and switching rules apply 20 months after entry into force, creating dependencies between AI deployment choices and contractual controls.

Annex III high-risk scope highlights

  • Biometric systems. Remote biometric identification, biometric categorisation, and emotion recognition systems fall within Annex III, requiring risk management, data quality, and human oversight controls; emotion recognition in workplaces and education is prohibited outright under Article 5.
  • Critical infrastructure. AI that manages transport, energy, water, or digital infrastructure is deemed high-risk because malfunction could endanger life or supply continuity.
  • Education and employment. Systems that evaluate students or make hiring, promotion, or termination decisions must address bias, documentation, and oversight requirements.
  • Essential services access. Credit scoring, insurance underwriting, migration, asylum, and social benefit eligibility tools trigger Annex III duties to protect fundamental rights.
  • Justice and democratic processes. AI used in policing, criminal risk assessment, border control, or voter influence is captured to safeguard due process and democratic integrity.

Supervision and coordination architecture

  • European AI Office. The Commission’s AI Office coordinates GPAI oversight, manages systemic risk investigations, and can issue guidance or request corrective actions from providers.
  • AI Board and Advisory Forum. National representatives meet through the AI Board, supported by an Advisory Forum of stakeholders and a Scientific Panel that feeds technical expertise into enforcement planning.
  • National competent authorities. Each Member State designates market surveillance authorities and notifying authorities; the latter appoint notified bodies for conformity assessment, while the former handle post-market monitoring and sanctions.
  • Cooperation mechanisms. The regulation's market surveillance provisions require information sharing, joint investigations, and coordinated risk assessments across Member States when systemic issues emerge.

Penalty structure

  • Prohibited practices. Marketing or using banned AI systems can draw fines up to €35 million or 7% of global annual turnover, whichever is higher (a worked example follows this list).
  • High-risk and GPAI obligations. Breaches of high-risk requirements or GPAI duties can incur penalties up to €15 million or 3% of global turnover.
  • Information duties. Providing incomplete, incorrect, or misleading information to authorities can lead to fines up to €7.5 million or 1.5% of global turnover.
  • SME considerations. The regulation allows proportional caps for small and medium-sized enterprises and start-ups to avoid disproportionate penalties.
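
The "whichever is higher" mechanics are easy to check with arithmetic. The caps below are the statutory maxima; the turnover figure is hypothetical.

    # For a hypothetical €2bn global annual turnover, the percentage cap
    # exceeds the fixed cap in every band.
    def max_fine(turnover_eur: float, fixed_cap: float, pct: float) -> float:
        return max(fixed_cap, pct * turnover_eur)

    turnover = 2_000_000_000
    print(max_fine(turnover, 35_000_000, 0.07))   # prohibited practices: 140000000.0
    print(max_fine(turnover, 15_000_000, 0.03))   # high-risk/GPAI: 60000000.0
    print(max_fine(turnover, 7_500_000, 0.015))   # information duties: 30000000.0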

Commission deliverables to monitor

  • Codes of practice. Within nine months of entry into force (by May 2, 2025), the Commission's AI Office will facilitate GPAI codes of practice that preview systemic risk mitigation expectations ahead of binding implementing acts.
  • Harmonised standards. Following the May 2023 standardisation request, CEN and CENELEC must propose standards that the Commission can cite in the Official Journal to confer a presumption of conformity.
  • Common specifications. If standards lag, Article 41 empowers the Commission to issue common specifications so high-risk providers have detailed technical requirements before 2026 obligations begin.
  • Templates and registries. Implementing acts will define EU database schema, declaration of conformity formats, and serious incident reporting templates during the transition period.

Global programme alignment tasks

  • Map overlapping obligations with Canada’s proposed Artificial Intelligence and Data Act (AIDA), the UK’s pro-innovation AI regulation framework, and the U.S. NIST AI Risk Management Framework so multinational deployments can reuse governance artefacts.
  • Reconcile EU AI Act transparency notices with GDPR Articles 13–15 and ePrivacy consent flows to avoid conflicting customer disclosures.
  • Update third-country vendor contracts to require timely delivery of Annex IV technical documentation, post-market monitoring data, and incident escalations.
  • Establish escalation paths between EU and non-EU security operations centres so systemic GPAI incidents reach the AI Office and national authorities within required timeframes.

Zeph Tech is building AI Act readiness playbooks that synchronise risk classification, vendor diligence, and documentation workflows across EU and multinational deployments.
