
AI Governance Briefing — April 14, 2025

General-purpose AI providers have only weeks left to finalise the EU AI Act codes of practice due by 2 May 2025, and Zeph Tech is locking documentation, testing, and systemic-risk disclosures into the supporting templates.


Executive briefing: Article 56 of Regulation (EU) 2024/1689 requires Commission-endorsed codes of practice for general-purpose AI (GPAI) models to be ready within nine months of the Act’s entry into force, and providers and deployers are expected to align with them. That window closes on 2 May 2025. Zeph Tech is coordinating legal, engineering, and policy teams to populate the mandatory annexes—training data summaries, energy usage, evaluation protocols, systemic-risk triggers, and downstream support plans—so GPAI attestations are ready when the European AI Office reviews submissions.

Regulatory checkpoints

  • Code adoption. Article 56(9) requires the codes of practice to be ready within nine months of entry into force; providers that do not adhere to an approved code must demonstrate alternative adequate means of meeting their Article 53 obligations.
  • Transparency pack. Codes demand disclosures on training datasets, compute usage, copyright safeguards, and risk mitigation per Annex XI templates.
  • Downstream support. Providers must furnish deployers and downstream integrators with documentation, evaluation tooling, and incident-reporting interfaces so downstream users can meet their Article 50 transparency duties.

Control alignment

  • NIST AI RMF (Measure/Manage). Embed stress testing, red teaming, and monitoring metrics referenced in the codes into RMF-aligned control libraries (a control-registry sketch follows this list).
  • ISO/IEC 42001:2023 clause 8.5. Treat code-of-practice commitments as AI management system controls with change-management, document control, and audit trails.
  • EU AI Pact commitments. Synchronise voluntary AI Pact dashboards with the formal code submissions to avoid conflicting disclosures.
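
Treating each code-of-practice commitment as a tracked control can start with a small registry object. The Python sketch below is a minimal, assumed layout: the control IDs, commitment wording, clause references, and evidence paths are placeholders for illustration, not fields from any official template.

    # Minimal sketch of a control registry that treats code-of-practice
    # commitments as managed controls. Identifiers and clause mappings
    # below are illustrative placeholders, not official template fields.
    from dataclasses import dataclass, field

    @dataclass
    class Control:
        control_id: str          # internal identifier
        commitment: str          # summarised code-of-practice commitment
        rmf_function: str        # NIST AI RMF function, e.g. "Measure" or "Manage"
        iso42001_clause: str     # ISO/IEC 42001 clause the control maps to
        evidence: list[str] = field(default_factory=list)  # links to reports and docs

    REGISTRY = [
        Control("GPAI-001", "Publish training-data summary", "Govern", "7.5",
                ["annex-xi/data-summary.pdf"]),
        Control("GPAI-014", "Adversarial red-team evaluation before release", "Measure", "8.5"),
        Control("GPAI-022", "Serious-incident reporting interface for deployers", "Manage", "8.5"),
    ]

    def controls_missing_evidence(registry: list[Control]) -> list[Control]:
        """Return controls that still lack audit-trail evidence."""
        return [c for c in registry if not c.evidence]

    if __name__ == "__main__":
        for control in controls_missing_evidence(REGISTRY):
            print(f"{control.control_id}: no evidence recorded for '{control.commitment}'")

Keeping the registry in version control gives the change-management and audit trail that clause 8.5 commitments call for, without introducing new tooling.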

Detection and response priorities

  • Flag model updates that change systemic-risk posture so documentation, safety testing, and downstream notices stay in sync (see the release-gate sketch after this list).
  • Monitor for copyright-protected content leakage or watermark bypass that must be reported under code-of-practice obligations.
  • Exercise incident runbooks with the AI Office contact points to prove 24/7 responsiveness.
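
One way to keep documentation and downstream notices in sync is a release gate that diffs model metadata against the previous version. The sketch below is illustrative: the metadata keys and evaluation-suite field are assumptions, while the 10^25 FLOP figure reflects the training-compute presumption threshold for systemic risk in Article 51(2).

    # Minimal sketch of a release-gate check that flags model updates whose
    # metadata suggests a change in systemic-risk posture. Metadata keys are
    # assumptions; 1e25 FLOP is the Article 51(2) presumption threshold.
    SYSTEMIC_RISK_COMPUTE_FLOP = 1e25

    def risk_posture_changed(previous: dict, candidate: dict) -> list[str]:
        """Return the reasons (if any) why documentation, safety testing and
        downstream notices need refreshing before release."""
        reasons = []
        if (previous["training_compute_flop"] < SYSTEMIC_RISK_COMPUTE_FLOP
                <= candidate["training_compute_flop"]):
            reasons.append("training compute crossed the systemic-risk presumption threshold")
        if set(candidate["modalities"]) - set(previous["modalities"]):
            reasons.append("new input/output modalities added")
        if candidate["evaluation_suite_version"] != previous["evaluation_suite_version"]:
            reasons.append("evaluation suite changed; safety results need re-running")
        return reasons

    if __name__ == "__main__":
        v1 = {"training_compute_flop": 8e24, "modalities": ["text"],
              "evaluation_suite_version": "2025.03"}
        v2 = {"training_compute_flop": 1.2e25, "modalities": ["text", "image"],
              "evaluation_suite_version": "2025.03"}
        for reason in risk_posture_changed(v1, v2):
            print("BLOCK RELEASE:", reason)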

Enablement moves

  • Run cross-functional sprints to complete Annex XI templates, compute lifecycle energy metrics, and systemic-risk assessments ahead of the May deadline (a training-energy estimate sketch follows this list).
  • Document downstream enablement packages—SDKs, evaluation harnesses, and risk advisories—and publish update cadences.
  • Align legal and commercial teams on how code commitments flow into contractual warranties and service descriptions.
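
For the energy line items, a first-pass estimate can be derived from accelerator count, board power, utilisation, and facility PUE. The sketch below shows one way to compute it; every input value is a placeholder to be replaced with measured figures from the training run.

    # Minimal sketch of the training-energy estimate that feeds the Annex XI
    # energy-consumption disclosure. All inputs below are placeholder
    # assumptions; substitute measured values where available.
    def training_energy_mwh(num_accelerators: int,
                            avg_board_power_w: float,
                            training_hours: float,
                            utilisation: float = 0.85,
                            datacentre_pue: float = 1.2) -> float:
        """Estimate total training energy in megawatt-hours."""
        it_energy_wh = num_accelerators * avg_board_power_w * utilisation * training_hours
        return it_energy_wh * datacentre_pue / 1e6  # Wh -> MWh

    if __name__ == "__main__":
        # Hypothetical run: 2,048 accelerators at 700 W for 30 days.
        estimate = training_energy_mwh(2048, 700.0, 30 * 24)
        print(f"Estimated training energy: {estimate:,.0f} MWh")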

