
Compliance · 2 min read

Compliance Briefing — September 12, 2025

Deployers of high-risk AI in the EU must document fundamental rights impact assessments before rollout, as Article 27 of the AI Act takes hold ahead of 2026 supervisory checks.

Executive briefing: Article 27 of Regulation (EU) 2024/1689 obliges deployers that are public authorities—or private entities providing public services—to conduct fundamental rights impact assessments (FRIAs) before putting high-risk AI systems into use, with the obligation applying from 2 August 2026. The assessment must cover the system's intended purpose, the categories of persons affected, foreseeable impacts on rights such as non-discrimination and data protection, human oversight and risk mitigation measures, and stakeholder consultation outcomes. Deployers must notify the market surveillance authority of the assessment's results, and good practice is to publish FRIA summaries unless security considerations limit disclosure. Teams launching projects in September should therefore complete FRIAs and assemble governance evidence now, ahead of supervisory reviews beginning in 2026.

Key compliance checkpoints

  • Scope confirmation. Identify high-risk AI systems listed in Annex III—such as credit scoring, educational admissions, employment screening, and essential public services—that trigger Article 27 obligations.
  • Rights analysis. Document how the system may affect equality, privacy, due process, accessibility, and consumer rights, referencing existing impact assessments under the GDPR or Digital Services Act where relevant.
  • Mitigation inventory. Catalogue safeguards—human review, appeal processes, transparency notices, and bias monitoring—and link each to specific risks surfaced in the FRIA.
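The mitigation inventory above can be sketched as a simple linkage check: every risk surfaced in the FRIA should map to at least one concrete safeguard. This is a minimal illustration, not AI Act terminology—the class names, field names, and risk labels are assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: link each FRIA risk finding to the safeguards
# that address it, and flag any finding left without a safeguard.

@dataclass
class RiskFinding:
    risk_id: str
    affected_right: str                         # e.g. "non-discrimination"
    safeguards: list = field(default_factory=list)  # e.g. ["human review"]

def unmitigated(findings: list) -> list:
    """Return the IDs of findings with no safeguard linked to them."""
    return [f.risk_id for f in findings if not f.safeguards]

findings = [
    RiskFinding("R1", "non-discrimination", ["bias monitoring", "human review"]),
    RiskFinding("R2", "data protection"),   # no safeguard linked yet
]
print(unmitigated(findings))  # ['R2']
```

Running a check like this at each review gate keeps the FRIA and the safeguard catalogue from drifting apart.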

Operational priorities

  • Stakeholder engagement. Build consultation plans with civil society groups, worker councils, or consumer advocates where the AI Act or national law requires input.
  • Publication workflow. Set approval paths for releasing FRIA summaries online while redacting sensitive information where security or confidentiality requires.
  • Audit trail. Store signed assessments, meeting minutes, and mitigation acceptance decisions to respond quickly to AI Office or national authority requests.

Enablement moves

  • Integrate FRIA templates into AI lifecycle management platforms so assessments occur before procurement or deployment gates.
  • Align FRIA outputs with GDPR data protection impact assessments to avoid duplicative work and ensure holistic risk coverage.
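A lifecycle-gate integration like the one described above can be sketched as a pre-deployment check that blocks rollout until FRIA evidence is in place. The record fields and gate rules here are illustrative assumptions, not a prescribed schema.

```python
from datetime import date

# Hypothetical pre-deployment gate: a high-risk system may proceed only
# if a signed FRIA exists, a summary is published (or a redaction
# exception is recorded), and no unmitigated risks remain.

def fria_gate(record: dict):
    """Return (may_deploy, blocking_issues) for one system record."""
    issues = []
    if not record.get("fria_signed_on"):
        issues.append("FRIA not signed")
    if not record.get("summary_published") and not record.get("redaction_exception"):
        issues.append("no published summary or recorded redaction exception")
    if record.get("unmitigated_risks", 0) > 0:
        issues.append("unmitigated risks remain")
    return (not issues, issues)

ok, why = fria_gate({
    "fria_signed_on": date(2025, 9, 1),
    "summary_published": True,
    "unmitigated_risks": 0,
})
print(ok)  # True
```

Wiring this into procurement and deployment approval steps makes the FRIA a hard gate rather than a parallel paperwork exercise.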

Zeph Tech embeds FRIA workflows, links safeguards to risk findings, and publishes transparency summaries for AI Act oversight bodies.
