
Policy Briefing — EU AI Act general-purpose model duties apply from 2 August 2025

Twelve months after the EU AI Act entered into force, providers of general-purpose AI models must supply technical documentation, maintain systemic-risk mitigation plans, and make EU database disclosures, or face market-surveillance investigations.


Executive briefing: Regulation (EU) 2024/1689 (the EU AI Act) entered into force on 1 August 2024. Article 113(b) sets a 12-month application window for the general-purpose AI (GPAI) obligations in Chapter V, making 2 August 2025 the date from which they apply. Providers must deliver detailed technical documentation, summaries of training data, systemic-risk mitigation measures for high-impact models, and notifications to the EU database for GPAI models before placing them on the market.

Core obligations

  • Documentation packages. Article 53(1)–(3) requires GPAI providers to supply descriptions of model capabilities, training and evaluation data, performance limitations, and alignment techniques to downstream providers and competent authorities (see the checklist sketch after this list).
  • Systemic-risk controls. Article 55 imposes additional duties on GPAI models classified as having systemic risk under Article 51, including risk-management policies, serious-incident reporting within 15 days, and adversarial testing programmes.
  • Transparency disclosures. Article 53(1)(d) obliges providers to publish a sufficiently detailed summary of the content used for training, including copyrighted material, and to register their models in the EU database managed by the AI Office.
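
To make the documentation duty concrete, the minimal Python sketch below treats an Article 53 evidence record as a simple checklist. It is illustrative only: the class and field names are assumptions chosen to mirror the bullet above, not an official Commission template.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative checklist only; field names are assumptions mirroring the
# Article 53 documentation items above, not an official template.
@dataclass
class GpaiDocumentationPack:
    model_name: str
    capabilities_description: Optional[str] = None   # what the model can and cannot do
    training_data_summary: Optional[str] = None      # training and evaluation data
    performance_limitations: Optional[str] = None    # known limitations and evaluation results
    alignment_techniques: Optional[str] = None       # alignment and safety techniques applied

    def missing_items(self) -> list:
        """Return the names of documentation fields that are still empty."""
        return [name for name, value in vars(self).items()
                if name != "model_name" and not value]

pack = GpaiDocumentationPack(model_name="example-model-v1")
print(pack.missing_items())   # every field is still open for this new entry
```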

Program actions

  • Model inventory. Identify foundation models offered in the EU, classify whether they meet the systemic-risk thresholds under Article 51, and map existing documentation gaps (see the sketch after this list).
  • Evidence packs. Build reusable documentation kits covering training data governance, evaluation results, safeguards, and intended use restrictions to satisfy Article 53 templates.
  • Incident playbooks. Align global AI incident reporting, bias mitigation, and security response plans so EU notifications flow within the 15-day statutory window.
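
As a starting point for the model-inventory action above, the minimal sketch below flags candidate systemic-risk models against the 10^25 floating-point-operation training-compute presumption in Article 51(2). The record layout, helper names, and example models are illustrative assumptions; a real classification would also weigh the designation criteria and any delegated acts noted under Enablement moves.

```python
from dataclasses import dataclass

# Article 51(2) presumes high-impact capabilities above 1e25 training FLOP;
# delegated acts may adjust this figure, so treat it as configurable.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

@dataclass
class ModelInventoryEntry:
    name: str                      # hypothetical model identifier
    offered_in_eu: bool            # placed on the EU market?
    training_compute_flop: float   # best estimate of cumulative training compute
    documentation_complete: bool   # Article 53 evidence pack assembled?

    def presumed_systemic_risk(self) -> bool:
        """Flag models presumed high-impact under the Article 51(2) compute threshold."""
        return self.training_compute_flop > SYSTEMIC_RISK_FLOP_THRESHOLD

inventory = [
    ModelInventoryEntry("example-frontier-model", True, 3e25, False),
    ModelInventoryEntry("example-small-model", True, 5e23, True),
]

for entry in inventory:
    if not entry.offered_in_eu:
        continue
    if entry.presumed_systemic_risk():
        print(f"{entry.name}: review Article 55 systemic-risk duties")
    elif not entry.documentation_complete:
        print(f"{entry.name}: close Article 53 documentation gaps")
```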

Enablement moves

  • Coordinate with deployer customers on downstream transparency duties, including user disclosures and monitoring expectations.
  • Engage legal and public policy teams to monitor delegated acts defining systemic-risk thresholds and to plan conformity assessment pathways.

Sources

Two source publications, each with a publication timestamp and a credibility score, underpin this briefing.
