
Policy · Credibility 94/100 · 2 min read

Policy Briefing — September 10, 2025

The EU AI Act’s transparency, documentation, and copyright controls for general-purpose AI apply from August 2, 2025, and September spot checks are testing whether providers can evidence compliance for national authorities and the AI Office.

Executive briefing: Regulation (EU) 2024/1689—the EU AI Act—entered into force on August 1, 2024. Twelve months later, Articles 53 to 55 require providers of general-purpose AI (GPAI) models to publish detailed summaries of the content used for training, prepare technical documentation describing model capabilities and limitations, and implement policies for copyright compliance. Downstream deployers must receive information enabling them to assess residual risks. With these provisions applicable from August 2025, September is when national authorities and the European AI Office begin requesting evidence that GPAI governance programs are live.

Key compliance checkpoints

  • Training data transparency. Produce and publish a sufficiently detailed, intelligible summary of training datasets, including provenance, modalities, and known biases, while protecting trade secrets as allowed under Recital 102 (see the sketch after this list).
  • Technical documentation. Maintain dossiers that capture model architecture, evaluation benchmarks, red-teaming outcomes, energy consumption estimates, and post-deployment monitoring plans.
  • IP safeguards. Document measures taken to respect EU copyright law, including opt-out handling for text and data mining requests under Directive (EU) 2019/790.
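
To make the training data transparency checkpoint concrete, the sketch below shows one way a provider might structure per-dataset entries and serialise them for publication. It is a minimal Python illustration, not the AI Office's official summary template: the TrainingContentEntry fields and the JSON output format are assumptions drawn from the bullet above.

    from dataclasses import asdict, dataclass, field
    import json

    @dataclass
    class TrainingContentEntry:
        """One dataset segment in the public training-content summary (illustrative, not the official template)."""
        dataset_name: str                      # internal corpus identifier
        provenance: str                        # how the content was obtained: crawl, licence, user data, ...
        modality: str                          # text, image, audio, video, ...
        licence_basis: str                     # licence or legal basis relied on for training use
        known_biases: list[str] = field(default_factory=list)
        tdm_opt_outs_respected: bool = True    # Directive (EU) 2019/790 Art. 4(3) reservations honoured

    def render_summary(entries: list[TrainingContentEntry]) -> str:
        """Serialise the entries as JSON for publication alongside the model."""
        return json.dumps([asdict(e) for e in entries], indent=2)

    if __name__ == "__main__":
        example = TrainingContentEntry(
            dataset_name="web_crawl_2024_q4",
            provenance="public web crawl filtered against robots.txt and TDM reservations",
            modality="text",
            licence_basis="text and data mining exception with opt-outs honoured",
            known_biases=["over-represents English-language sources"],
        )
        print(render_summary([example]))

Keeping the record machine-readable makes it easier to regenerate the published summary whenever training corpora change.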

Operational priorities

  • Downstream enablement. Deliver deployment guidance, usage restrictions, and known limitations to distributors and deployers so they can meet their own obligations under Articles 27 and 52.
  • Risk governance. Establish cross-functional committees to review systemic risk indicators, severe incident logs, and mitigation plans for GPAI models designated under Article 51.
  • Audit readiness. Map documentation repositories to supervisory request templates and assign accountable owners for prompt responses to the AI Office.

Enablement moves

  • Implement data lineage tooling that tracks copyright status, consent, and geographic source for each dataset segment (see the lineage record sketch after this list).
  • Publish model cards summarizing intended use, evaluation metrics, and limitations to align with transparency best practices referenced in Annex XI.
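
As a rough illustration of the data lineage move above, the snippet below sketches a per-segment lineage record and a simple gate that flags segments for review before training. The field names and the flag_for_review rule are hypothetical choices, not requirements taken from the Act.

    from dataclasses import dataclass
    from enum import Enum

    class CopyrightStatus(Enum):
        LICENSED = "licensed"
        PUBLIC_DOMAIN = "public_domain"
        TDM_EXCEPTION = "tdm_exception"    # reliance on the text and data mining exception
        UNRESOLVED = "unresolved"

    @dataclass
    class LineageRecord:
        """Lineage metadata tracked for each dataset segment (illustrative field set)."""
        segment_id: str
        copyright_status: CopyrightStatus
        consent_documented: bool           # consent or other legal basis evidenced
        source_region: str                 # geographic origin, e.g. "EU", "US"
        opt_out_checked: bool              # TDM reservation check completed

    def flag_for_review(records: list[LineageRecord]) -> list[str]:
        """Return segment ids that should be held back from training until resolved."""
        return [
            r.segment_id
            for r in records
            if r.copyright_status is CopyrightStatus.UNRESOLVED or not r.opt_out_checked
        ]

A gate like this gives the audit trail a single place to show which segments were excluded from training and why.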

Zeph Tech inventories GPAI assets, automates training data transparency statements, and orchestrates AI Act documentation workflows.
