
EU AI Act Enforcement Timeline — July 12, 2024

Publication of the EU AI Act in the Official Journal triggered a cross-functional enforcement program, mapping prohibited practice shutdowns, GPAI documentation, and high-risk conformity workstreams to ISO/IEC 42001, NIST AI RMF, and sector regulations ahead of the 2025 and 2026 deadlines.

Verified for technical accuracy — Kodi C.


Publication of the Artificial Intelligence Act in the Official Journal on 12 July 2024 starts a tightly sequenced enforcement calendar that culminates in full high-risk compliance by August 2026. The governance office is using the 20-day entry-into-force trigger to back-plan program milestones, resource allocations, and audit evidence so model builders, compliance leads, and assurance teams hit every regulatory checkpoint with clean documentation.

The immediate priority is to translate the Regulation (EU) 2024/1689 text—Articles 5 through 113 and Annexes I to XIII—into operational guardrails. That means mapping prohibited practices, high-risk classification, general-purpose AI (GPAI) provider duties, and market surveillance expectations to the product portfolio.

We are refreshing inventories to tag every AI capability by Annex III risk category, identifying GPAI components that rely on third-party foundation models, and linking each use case to data-protection artifacts and Article 17 quality management procedures. This exercise is coordinated with privacy, security, legal, and procurement to avoid duplicating assessments and to ensure that delegated acts or harmonized standards issued later in 2024 can be absorbed without rework.

Timeline focus: Entry into force occurs on 1 August 2024, twenty days after publication. The key checkpoints are:

  • 2 February 2025 — Article 5 prohibitions, including untargeted facial recognition scraping, emotion inference in workplaces or education, and social scoring, must be decommissioned.
  • 2 May 2025 — codes of practice supporting GPAI transparency are due from the Commission and AI Office nine months after entry into force, positioning spring 2025 as the horizon for documentation pilots.
  • 2 August 2025 — GPAI providers must implement systemic-risk monitoring, incident notification, and technical documentation (Articles 51–56).
  • 2 August 2026 — high-risk system obligations (risk management, data governance, technical documentation, human oversight, and CE marking) become enforceable, with transitional relief through August 2027 for legacy systems lawfully placed on the market before that date.

We are also tracking related milestones such as the AI Liability Directive negotiations, European Artificial Intelligence Board guidance, and national supervisory authority staffing plans that may accelerate inspections.
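The back-planning described above can be sketched as a small date calculation. This is a minimal illustration, assuming the published application dates of Regulation (EU) 2024/1689; the milestone keys are our own labels, not terms from the Regulation.

```python
from datetime import date

# Entry into force: twenty days after the 12 July 2024 OJ publication.
ENTRY_INTO_FORCE = date(2024, 8, 1)

# Application dates per the Regulation (labels are illustrative).
MILESTONES = {
    "prohibited_practices_apply": date(2025, 2, 2),
    "gpai_codes_of_practice_ready": date(2025, 5, 2),
    "gpai_obligations_apply": date(2025, 8, 2),
    "high_risk_obligations_apply": date(2026, 8, 2),
}

def days_remaining(milestone: str, today: date) -> int:
    """Days left until a milestone; negative once it has passed."""
    return (MILESTONES[milestone] - today).days

# Lead time available for prohibited-practice shutdowns, counted
# from entry into force.
print(days_remaining("prohibited_practices_apply", ENTRY_INTO_FORCE))  # → 185
```

A program office can run the same arithmetic in reverse to place internal evidence-freeze and audit dates ahead of each checkpoint.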

Control framework alignment

We are building a consolidated control matrix that maps each AI Act obligation to existing enterprise frameworks:

  • ISO/IEC 42001:2023 alignment. Articles 9, 17, and 26 align tightly with clauses on AI risk management, quality management, and operational control. We are extending our 42001 management system documentation to include AI Act-specific evidence such as algorithmic impact assessments, data lineage logs, and human oversight playbooks.
  • NIST AI Risk Management Framework. Article 9’s risk management system is mapped to the Govern, Map, Measure, and Manage functions. Each risk scenario from Annex III is receiving associated risk treatments, control ownership, and monitoring metrics. We are ensuring that GPAI systemic-risk obligations feed directly into the RMF’s Measure function dashboards.
  • ISO/IEC 23894 and 27001 integration. Data governance and robustness clauses are linked to 23894’s life-cycle risk controls and ISO/IEC 27001 Annex A.5, A.8, and A.12 controls so that model pipelines inherit secure development, logging, and supply-chain protections.
  • GDPR and EU Charter obligations. Article 10 data governance requirements are cross-referenced with GDPR Article 35 data protection impact assessments, Article 24 accountability measures, and the Charter of Fundamental Rights to keep fairness and proportionality analyses auditable.

Each linkage is being codified in our GRC tooling so that auditors can trace requirements from Article text to implemented controls, associated evidence repositories, and periodic review cadences. Where harmonized standards drafted by CEN/CENELEC JTC 21 provide more granular requirements, we will integrate them as soon as the European Commission cites them in the Official Journal.
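The traceability just described can be sketched as a simple mapping structure. The control identifiers, evidence URIs, and field names below are illustrative assumptions, not our actual GRC schema.

```python
from dataclasses import dataclass

@dataclass
class ControlMapping:
    article: str                    # AI Act obligation, e.g. "Art. 9"
    requirement: str                # short description of the obligation
    framework_controls: list[str]   # mapped ISO/IEC 42001 / NIST AI RMF / 27001 refs
    evidence_repo: str              # where auditors find the artifacts
    review_cadence_days: int = 90   # periodic review interval

# Two illustrative rows; control IDs are placeholders.
MATRIX = [
    ControlMapping("Art. 9", "Risk management system",
                   ["ISO42001:8.2", "NIST-RMF:MANAGE"], "grc://risk/ai-act"),
    ControlMapping("Art. 10", "Data governance",
                   ["ISO42001:7.5", "ISO27001:A.8"], "grc://data/ai-act"),
]

def controls_for(article: str) -> list[str]:
    """Trace an Article reference to its mapped framework controls."""
    return [c for m in MATRIX if m.article == article
            for c in m.framework_controls]

print(controls_for("Art. 9"))  # → ['ISO42001:8.2', 'NIST-RMF:MANAGE']
```

The point of the structure is that an auditor's question ("show me the controls for Article 9") becomes a single lookup rather than a document hunt.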

Action plan and workstreams

We have broken delivery into three primary waves with clear accountability:

  1. Wave 1 — 30-day mobilization. Stand up the cross-functional AI Act program office chaired by the Chief AI Governance Officer. Deliverables include a refreshed system inventory, appointment of Article 22 authorized representatives for extraterritorial deployments, and a communication plan for internal teams and critical vendors. Security engineering is capturing model release pipelines, while legal finalizes template contract addenda to flow down GPAI obligations.
  2. Wave 2 — days 31–180. Focus on prohibited-practice off-boarding, GPAI documentation pilots, and data governance uplift. Engineering teams must document and, where necessary, disable biometric categorization, predictive policing, or emotion recognition functionality. GPAI teams are building transparency packs that include training data summaries, evaluation protocols, known limitations, and the copyright-policy disclosures required by Article 53. Data stewards are aligning with Article 10 by validating datasets for bias, completeness, and traceability. Procurement is assessing supplier attestations to confirm conformance with the AI Act and related sectoral rules such as the EU Data Act and DORA.
  3. Wave 3 — days 181–720. Prepare for full high-risk conformity assessments and post-market monitoring. Business units must embed human oversight procedures aligned to Article 14, develop model change control and redeployment playbooks, and ensure CE marking documentation is ready for notified bodies. Monitoring teams are configuring incident detection thresholds that align with Article 73 reporting timelines and national authority expectations. Internal audit schedules thematic reviews to confirm readiness before the 2026 go-live.

Each wave is supported by a resourcing plan, budget approvals, and defined success metrics so slippage is surfaced early. We are also pairing program milestones with board updates and risk committee reporting to maintain executive sponsorship.

Industry nuances

Our vertical leads are tailoring obligations to sector regulations:

  • Financial services. Alignment with the Digital Operational Resilience Act (DORA) and European Banking Authority guidelines ensures that AI credit scoring, anti-fraud, and customer service bots satisfy both AI Act and prudential expectations. Model risk management functions are updating SR 11-7 style inventories with AI Act attributes.
  • Healthcare and life sciences. Annex III medical device use cases require coordination with the Medical Device Regulation and in vitro diagnostic rules. Clinical safety officers are folding Article 9 risk management outputs into ISO 14971 safety files and validating post-market surveillance integration.
  • Critical infrastructure and public sector. Utilities and transport teams are aligning AI Act controls with NIS2, CER Directive obligations, and national procurement frameworks. Public-sector deployments must prepare for AI Office oversight and potential publication of algorithm registers, so documentation and explainability tooling are being prioritized.

We are also engaging the works council and labor relations teams where AI monitoring tools touch employees, ensuring Article 26 human oversight duties and Article 27 fundamental rights impact assessments address national labor law nuances.

Data, model, and tooling readiness

Technical squads are executing a detailed backlog to raise assurance maturity:

  • Data governance uplift. Establish dataset versioning, lineage, and access controls so that training and validation corpora can be fully reconstructed. Automate bias and drift testing to feed Article 72 post-market monitoring.
  • Model documentation and evaluation. Produce Annex IV technical documentation dossiers, including model cards, evaluation metrics, adversarial robustness results, and fail-safe design notes. GPAI providers are preparing downstream usage guidance and API usage constraints to support deployers.
  • Operational observability. Integrate AI system logs into the security operations center with alerting tuned to detect systemic-risk indicators, performance anomalies, and prohibited functionality reactivation.
  • Third-party assurance. Require supplier attestations that align with Article 25 value-chain responsibilities, including commitments to share evaluation artifacts, report serious incidents within 24 hours, and cooperate with market surveillance authorities.
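The dataset versioning and lineage uplift in the backlog above can be sketched with a content-addressed fingerprint, so any change to a training or validation corpus is detectable in the lineage log. This is a minimal sketch; the dataset name, source URI, and check labels are illustrative assumptions.

```python
import hashlib
import json

def dataset_fingerprint(records: list[dict]) -> str:
    """Deterministic hash over canonicalized records: key order is
    normalized, so the same corpus always yields the same fingerprint,
    and any edit to the data changes it."""
    canon = json.dumps(records, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canon.encode("utf-8")).hexdigest()

# Illustrative lineage-log entry for one dataset version.
lineage_entry = {
    "dataset": "credit-scoring-train",              # placeholder name
    "fingerprint": dataset_fingerprint([{"id": 1, "label": 0}]),
    "source_systems": ["warehouse://loans/2024-07"],  # placeholder URI
    "bias_checks": ["demographic_parity", "missingness"],
}
print(lineage_entry["fingerprint"][:12])
```

Storing the fingerprint alongside the model's technical file means a reviewer can later confirm the exact corpus a model was trained on, which is the reconstruction property the bullet asks for.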

We are expanding our model registry to include compliance metadata (Article references, harmonized standards, notified body interactions) so that due diligence, audits, and change advisory boards can reference a single system of record.
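A registry entry of the kind described above might carry compliance metadata like the following. The keys and values are assumptions about a generic model registry, not the schema of any particular product.

```python
# Illustrative registry record; all identifiers are placeholders.
registry_entry = {
    "model_id": "fraud-detector-v4",
    "annex_iii_category": "access to essential private services",
    "ai_act_articles": ["Art. 9", "Art. 10", "Art. 14", "Art. 15"],
    "harmonized_standards": [],   # populated once standards are cited in the OJ
    "notified_body": None,        # set after conformity assessment, if required
    "technical_file": "dms://annex-iv/fraud-detector-v4",
    "last_review": "2024-07-12",
}

def audit_gaps(entry: dict) -> list[str]:
    """Return the compliance fields an auditor would flag as incomplete."""
    return [k for k in ("harmonized_standards", "notified_body")
            if not entry[k]]

print(audit_gaps(registry_entry))  # → ['harmonized_standards', 'notified_body']
```

Keeping these fields in one record is what lets due diligence, audits, and change advisory boards query a single system of record instead of reconciling spreadsheets.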

Governance, reporting, and training

Program governance integrates legal, compliance, risk, and engineering to ensure decisions are logged and reproducible:

  • Decision records. Maintain Article 13 transparency logs and board briefings that justify model deployment decisions, exemption claims, and remediation timelines.
  • Training and competence. Deliver role-based training covering Article 4 AI literacy obligations, Article 15 robustness requirements, and AI Office escalation protocols. Completion tracking is embedded into ISO/IEC 42001 competency management.
  • Stakeholder engagement. Establish external communication plans for users, regulators, and partners, including template notifications for Article 73 serious incidents and Article 74 market surveillance cooperation.

Risk committees are receiving quarterly updates that summarize progress against each enforcement milestone, highlight outstanding delegated act clarifications, and capture dependencies on harmonized standards or European AI Office guidance.

Metrics and assurance

To evidence compliance, we are aligning metrics and assurance routines:

  • Key risk indicators. Track the number of Annex III systems without validated risk treatments, GPAI models lacking transparency packs, and outstanding supplier attestations.
  • Testing cadence. Schedule pre-release and post-release evaluations aligned with Article 15 robustness and Article 72 monitoring, recording reproducible results and issue remediation turnaround.
  • Audit readiness. Internal audit and third-line functions are preparing thematic audits that test traceability from Article obligations to implemented controls, including spot checks of human oversight logs, CE marking files, and incident response drills.
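The key risk indicators listed above can be computed directly from the same system inventory. This is a hedged sketch; the inventory field names are assumptions, not our actual schema.

```python
# Illustrative inventory records; field names are placeholders.
inventory = [
    {"name": "cv-screening", "annex_iii": True, "risk_treatment_validated": False},
    {"name": "chat-assistant", "gpai": True, "transparency_pack": False},
    {"name": "fraud-detector", "annex_iii": True, "risk_treatment_validated": True},
]

def kri_snapshot(systems: list[dict]) -> dict:
    """Count the two headline KRIs: Annex III systems lacking validated
    risk treatments, and GPAI models lacking transparency packs."""
    return {
        "annex_iii_without_treatment": sum(
            1 for s in systems
            if s.get("annex_iii") and not s.get("risk_treatment_validated")),
        "gpai_without_transparency_pack": sum(
            1 for s in systems
            if s.get("gpai") and not s.get("transparency_pack")),
    }

print(kri_snapshot(inventory))
# → {'annex_iii_without_treatment': 1, 'gpai_without_transparency_pack': 1}
```

Because the indicators are derived rather than self-reported, the monthly dashboard stays consistent with the inventory auditors will actually inspect.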

These metrics feed an executive dashboard that is reviewed monthly and distributed to compliance, legal, and engineering leadership. Findings with regulatory exposure are escalated through the enterprise risk management process.

Follow-up actions

Between now and the February 2025 prohibited-practice deadline we are prioritizing system decommissioning, GPAI documentation pilots, and supplier engagement. The ensuing twelve months are dedicated to full Article 9 and Annex IV compliance, including notified body engagement where required. By mid-2026, we expect to complete high-risk conformity assessments, operate mature post-market monitoring, and maintain auditable documentation packs ready for EU AI Office inspection. Continuous monitoring will ensure lessons learned inform later enforcement windows and related regulations such as the AI Liability Directive and Data Act interoperability mandates.


Cited sources

  1. Regulation (EU) 2024/1689 — Artificial Intelligence Act (Official Journal, July 12, 2024) — eur-lex.europa.eu
  2. European Commission — AI Act timeline and setup resources — digital-strategy.ec.europa.eu
  3. European Commission — EU AI Office mandate and coordination role — commission.europa.eu
