Compliance briefing

AI governance roadmap — EU AI Act high-risk QMS build-out

High-risk AI providers have less than a year to finalise quality management systems, conformity assessments, and post-market monitoring before the EU AI Act's core high-risk obligations take effect on 2 August 2026, making August 2025 the crunch point for programme mobilisation.

Executive briefing: Regulation (EU) 2024/1689 (the AI Act) entered into force on 1 August 2024. Most high-risk AI obligations, including risk management, data governance, technical documentation, logging, transparency, and human oversight, apply 24 months later, on 2 August 2026. That leaves August 2025 as the final window to operationalise quality management systems (Article 17) and conformity assessment pathways (Article 43). Providers of biometric identification, critical infrastructure monitoring, HR screening, credit scoring, and other high-risk systems need cross-functional programmes spanning ML engineering, legal, security, and product teams.

Regulatory checkpoints

  • Quality management. Article 17 requires documented procedures covering design controls, data governance, testing, corrective actions, and supplier oversight; gaps must be closed before notified-body audits or internal conformity assessments begin.
  • Technical documentation. Annex IV demands detailed model cards, training data provenance, performance metrics, and cybersecurity safeguards to support conformity assessments and market surveillance.
  • Post-market monitoring. Articles 72 and 73 oblige providers to implement post-market monitoring plans, incident logging, corrective action processes, and serious incident reporting to market surveillance authorities within 15 days of becoming aware of an incident.

Operational build

  • Stand up AI product lifecycle councils that align model risk governance with ISO/IEC 42001 and NIST AI RMF controls.
  • Create data governance workstreams that catalogue training datasets, bias mitigation, and synthetic data provenance consistent with Annex IV.
  • Integrate red-teaming, adversarial testing, and cybersecurity baselines (Article 15) into MLOps pipelines with automated evidence capture.
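Automated evidence capture, as in the last bullet, usually means persisting a tamper-evident record after each red-team or adversarial test run. A hypothetical Python sketch (the function name, schema, and metric names are illustrative assumptions):

```python
import hashlib
import json
from datetime import datetime, timezone

def capture_evidence(test_name: str, model_version: str, metrics: dict) -> dict:
    """Build an evidence record with a UTC timestamp and a payload hash."""
    payload = {
        "test_name": test_name,
        "model_version": model_version,
        "metrics": metrics,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
    # Canonical JSON (sorted keys) so the hash is reproducible from the record.
    canonical = json.dumps(payload, sort_keys=True).encode()
    payload["sha256"] = hashlib.sha256(canonical).hexdigest()
    return payload

# Hypothetical adversarial-testing run.
record = capture_evidence(
    "prompt_injection_suite",
    "fraud-model-2.3",
    {"attack_success_rate": 0.04, "runs": 500},
)
print(record["sha256"][:12])
```

Emitting a record like this from the CI stage that runs the tests gives conformity assessors a timestamped, hash-verifiable trail without manual evidence gathering.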

Enablement moves

  • Coordinate early with notified bodies to determine whether third-party conformity assessment is required and schedule availability.
  • Develop human oversight playbooks defining operator qualifications, override mechanisms, and safe fallback procedures per Article 14.
  • Build regulatory reporting workflows for serious incidents and substantial modifications so market surveillance authorities receive timely notifications.

Zeph Tech supports AI providers in meeting EU AI Act obligations—standing up quality management systems, documentation pipelines, and post-market monitoring operations.
