AI Governance Briefing — January 13, 2025
As the EU AI Act’s Article 5 prohibitions take effect on 2 February 2025, organisations must finish offboarding prohibited AI, reconcile universal opt-outs, and assemble regulator-ready evidence demonstrating shutdowns and data stewardship.
Executive briefing: With Article 5 of Regulation (EU) 2024/1689 banning unacceptable-risk AI systems from 2 February 2025, organisations are in the final stretch of offboarding prohibited use cases. Boards must confirm that inventories are complete, shutdown steps are executed, and documentation demonstrates compliance for both internal deployments and vendor-supplied services. Universal opt-out governance is central: individuals whose data was used in banned systems must have their rights respected during archival, deletion, or transition processes. Zeph Tech’s governance playbook coordinates portfolio reviews, evidence vaults, opt-out handling, and regulator-ready narratives that prove unacceptable-risk systems are gone for good.
Portfolio sweep methodology
A comprehensive offboarding programme starts with an end-to-end inventory. Use a risk-tiered questionnaire covering system purpose, user base, decision impact, data sources, and linkage to biometric or behavioural analytics. Flag any system that infers sensitive traits from biometrics, scrapes facial images indiscriminately, scores individuals to control access to public services, or manipulates behaviour by exploiting vulnerabilities. Include shadow AI projects, prototypes, proofs of concept, and vendor APIs integrated into products or back-office functions.
Once flagged, assign ownership to system sponsors, legal counsel, data protection officers, and product managers. Establish offboarding workstreams with defined milestones: access revocation, model retirement, dataset sanitisation, opt-out fulfilment, communication, and evidence collation. Maintain a central tracker that records status, blockers, and anticipated completion dates. Boards should review the tracker weekly during January 2025.
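The central tracker described above can be sketched as a simple data structure. The milestone names mirror the workstreams listed in this section, but the field names, statuses, and rollup shape are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Milestones taken from the workstreams above; names are illustrative.
MILESTONES = (
    "access_revocation", "model_retirement", "dataset_sanitisation",
    "opt_out_fulfilment", "communication", "evidence_collation",
)

@dataclass
class OffboardingRecord:
    """One row in the central offboarding tracker."""
    system_name: str
    owner: str
    target_date: date
    status: dict = field(default_factory=lambda: {m: "pending" for m in MILESTONES})
    blockers: list = field(default_factory=list)

    def complete(self, milestone: str) -> None:
        if milestone not in self.status:
            raise KeyError(f"unknown milestone: {milestone}")
        self.status[milestone] = "done"

    def is_closed(self) -> bool:
        return all(s == "done" for s in self.status.values())

def weekly_summary(tracker: list) -> dict:
    """Board-level rollup: open vs. closed systems and outstanding blockers."""
    closed = sum(r.is_closed() for r in tracker)
    return {
        "systems": len(tracker),
        "closed": closed,
        "open": len(tracker) - closed,
        "open_blockers": sum(len(r.blockers) for r in tracker),
    }
```

A weekly board pack can then be generated directly from `weekly_summary`, keeping status, blockers, and completion dates in one auditable place.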
Technical shutdown execution
- Model retirement. Archive model artefacts (weights, architecture, training scripts) in tamper-evident repositories. Generate cryptographic hashes and record storage locations. Disable deployment endpoints, remove container images, and revoke API credentials. If systems operated within third-party platforms, obtain written confirmation of shutdown and log export.
- Data sanitisation. Identify all datasets feeding the prohibited system, including raw captures, derived features, and intermediate outputs. Execute deletion or anonymisation consistent with retention schedules, legal obligations, and opt-out requests. Document destruction certificates, data lineage diagrams, and validation checks showing no residual references in downstream systems.
- Access control. Update identity and access management policies to revoke permissions for staff and vendors associated with the retired system. Implement monitoring to detect attempts to restore access or to create clones.
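The cryptographic hashing step in model retirement can be sketched as follows. This is a minimal illustration using SHA-256 over an archive directory; the directory layout and manifest fields are assumptions, not a mandated format.

```python
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def hash_artefact(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 digest of one model artefact (weights, scripts, etc.)."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(artefact_dir: Path) -> dict:
    """Record file names, digests, and an archival timestamp for the evidence vault."""
    return {
        "archived_at": datetime.now(timezone.utc).isoformat(),
        "artefacts": {
            p.name: hash_artefact(p)
            for p in sorted(artefact_dir.iterdir()) if p.is_file()
        },
    }
```

Storing the manifest alongside the archive lets auditors re-hash artefacts later and confirm nothing in the tamper-evident repository has changed.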
Universal opt-out orchestration
Respecting data subject rights is critical when retiring prohibited systems:
- Opt-out reconciliation. Cross-reference all data subjects captured in the prohibited system with consent and opt-out registries. Ensure withdrawals of consent, objections to processing, and Global Privacy Control signals are honoured in both archival and transition processes. Document correspondence with individuals confirming fulfilment.
- Notification and transparency. Where legally required, inform affected individuals about the discontinuation of the system, how their data was handled, and how opt-out preferences were applied. Provide accessible channels (web, mobile, call centre, in-person) and support for vulnerable groups. Maintain logs of notifications, responses, and escalations.
- Vendor compliance. Vendors supplying biometric or behavioural analytics must provide proof that they destroyed or returned data linked to opted-out individuals. Include clauses requiring confirmation of opt-out handling during contract termination. Audit evidence should include deletion certificates, system logs, and third-party attestations.
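The reconciliation step above reduces to set operations over subject identifiers. The sketch below assumes simple string IDs; a real registry would use stable pseudonymous identifiers and carry consent state, Global Privacy Control signals, and timestamps per record.

```python
def reconcile_opt_outs(system_subjects: set,
                       opt_out_registry: set,
                       fulfilled: set) -> dict:
    """Cross-reference data subjects in the retired system against opt-out records.

    Returns who is affected (opted out and present in the system) and who is
    still outstanding (opted out but not yet handled in deletion/archival).
    """
    affected = system_subjects & opt_out_registry
    outstanding = affected - fulfilled
    return {"affected": affected, "outstanding": outstanding}
```

An empty `outstanding` set is the condition to evidence before closing the opt-out workstream; anything remaining should surface as a blocker in the central tracker.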
Evidence vault and documentation
Market surveillance authorities under the EU AI Act can request documentation long after systems are retired. Build an evidence vault with the following artefacts:
- Technical files. Capture system descriptions, intended purpose, training data specifications, performance metrics, risk assessments, and change logs. Annotate files to explain why the system was classified as unacceptable and how offboarding occurred.
- Decision records. Store board minutes, AI ethics committee notes, and approval workflows authorising shutdown. Highlight how universal opt-out compliance influenced decisions—for example, when high opt-out rates made continued operation untenable.
- Audit trails. Retain logs of system disablement, data deletion, access revocation, and control tests verifying success. Include time stamps, responsible personnel, and validation steps.
- Regulator correspondence. Archive communications with data protection authorities, market surveillance bodies, or sectoral regulators. Document requests received, responses provided, deadlines met, and follow-up actions.
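Audit trails in the evidence vault should themselves resist tampering. One common approach, sketched here as an assumption rather than a mandated control, is a hash chain: each entry includes the hash of its predecessor, so any edited or deleted entry breaks verification.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(log: list, event: str, actor: str, timestamp: str) -> None:
    """Append an audit entry chained to the previous entry's hash."""
    prev_hash = log[-1]["entry_hash"] if log else GENESIS
    body = {"event": event, "actor": actor, "timestamp": timestamp,
            "prev_hash": prev_hash}
    body["entry_hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append(body)

def verify_chain(log: list) -> bool:
    """Recompute every hash; returns False if any entry was altered or removed."""
    prev = GENESIS
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["prev_hash"] != prev:
            return False
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if expected != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True
```

Running `verify_chain` during mock inspections demonstrates to regulators that disablement and deletion logs have not been retrofitted.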
Integration with governance frameworks
Offboarding should align with enterprise governance standards:
- NIST AI RMF. Map offboarding tasks to Govern, Map, Measure, and Manage functions. Show how opt-out registries and evidence vaults support governance outcomes.
- ISO/IEC 42001:2023. Update AI management system procedures to include unacceptable-risk retirement protocols. Ensure operational planning and control (clause 8.1), performance monitoring (clause 9.1), and improvement (clause 10) reflect lessons learned.
- Data protection governance. Align with GDPR Article 17 (right to erasure), Article 21 (right to object), and national implementations. Update Records of Processing Activities (ROPAs) and Data Protection Impact Assessments (DPIAs) to note that the system has been withdrawn.
Legal and contractual considerations
Shutting down prohibited systems intersects with contract law and liability management:
- Supplier agreements. Review termination clauses, intellectual property rights, and obligations to destroy or return data. Ensure suppliers indemnify the organisation for breaches of Article 5 obligations that occurred during their service.
- Customer commitments. If the prohibited system supported customer-facing services, assess contractual obligations for service continuity. Provide alternative compliant solutions or compensation. Document communications and opt-out handling.
- Litigation readiness. Prepare for potential claims from individuals alleging harm. Assemble incident files, opt-out records, and expert reports demonstrating remedial actions taken during offboarding.
People and change management
Employees and contractors need clear guidance as systems are retired:
- Training. Provide targeted training on Article 5 requirements, offboarding procedures, opt-out handling, and evidence retention. Track completion and include assessments to confirm understanding.
- Role reassignment. Redeploy teams previously supporting prohibited systems to compliant initiatives. Document new responsibilities, access rights, and training needs.
- Speak-up channels. Encourage staff to report residual deployments or attempted circumvention. Maintain whistleblowing confidentiality, honour anonymity requests, and record investigative outcomes.
Regulator engagement playbook
Preparedness for supervisory inquiries is essential:
- Pre-briefing packages. Assemble concise dossiers summarising inventories, shutdown status, opt-out compliance metrics, and evidence availability. Include contact points and escalation paths.
- Mock inspections. Conduct dry runs simulating regulator requests. Time how long it takes to retrieve evidence, demonstrate opt-out fulfilment, and explain governance structures.
- Notification criteria. Define thresholds for self-reporting residual use or incidents, including failures to honour opt-outs or discovery of duplicate systems. Ensure legal counsel reviews disclosures before submission.
Board oversight dashboard
Boards should review a dashboard tracking:
- Number of systems classified as unacceptable risk and percentage decommissioned.
- Opt-out reconciliation status, including any unresolved requests or complaints.
- Evidence vault completeness, broken down by technical files, governance records, and audit logs.
- Regulator interactions and upcoming deadlines.
- Resource allocation for remediation and redevelopment of compliant alternatives.
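The headline figures on such a dashboard are straightforward to compute. The sketch below shows the first two metrics; the field names and rounding are illustrative choices, not a required reporting format.

```python
def dashboard_metrics(total_flagged: int, decommissioned: int,
                      opt_outs_received: int, opt_outs_resolved: int) -> dict:
    """Headline board metrics: decommissioning progress and unresolved opt-outs."""
    if total_flagged:
        pct = round(100 * decommissioned / total_flagged, 1)
    else:
        pct = 100.0  # nothing flagged means nothing left to decommission
    return {
        "flagged": total_flagged,
        "decommissioned_pct": pct,
        "opt_outs_unresolved": opt_outs_received - opt_outs_resolved,
    }
```

Boards should expect `decommissioned_pct` to reach 100 and `opt_outs_unresolved` to reach zero before the 2 February 2025 deadline.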
Directors should challenge management to confirm there are no residual deployments, demand independent validation where necessary, and record their oversight activities in board minutes.
Transition to compliant alternatives
Retiring prohibited systems often leaves capability gaps. Organisations should develop compliant replacements while reinforcing universal opt-out commitments:
- Design principles. Embed privacy-by-design, human oversight, and opt-out default settings into replacement systems. Conduct DPIAs and Algorithmic Impact Assessments early in the design process.
- Pilot governance. Test replacements under controlled conditions with transparent communication to users. Honour opt-out preferences and track feedback to iterate before scaling.
- Documentation continuity. Ensure technical files for new systems reference the retirement of prohibited predecessors and explain how lessons learned informed design choices.
Action checklist for January 2025
- Complete final validation of the unacceptable-risk inventory, including vendor attestations and opt-out reconciliations.
- Conduct an evidence-room walkthrough with legal, compliance, and audit teams to ensure retrieval processes are efficient.
- Publish stakeholder updates summarising system retirement progress, opt-out protections, and pathways for raising concerns.
- Schedule post-mortem reviews to capture lessons learned and feed them into high-risk and general-purpose AI compliance programmes due later in 2025.
By the time Article 5 enforcement begins, organisations should have zero tolerance for prohibited AI within their estates. Thorough offboarding, universal opt-out stewardship, and regulator-ready evidence safeguard people, protect reputations, and set the stage for compliant innovation under the EU AI Act’s broader regime.