Policy · 7 min read · Credibility 94/100

EU AI Act unacceptable-risk ban takes effect on 2 February 2025

The EU AI Act's prohibited-practice provisions take effect on 2 February 2025. Social scoring, untargeted facial scraping, and manipulative AI systems must be shut down, and you need documentation proving it.

Editorially reviewed for factual accuracy


The EU AI Act’s Article 5 prohibitions on unacceptable-risk AI systems become enforceable on 2 February 2025, closing the six-month transition window provided by Article 113(a). Providers and deployers must prove that banned practices, including biometric categorization that infers sensitive traits, untargeted facial image scraping, social scoring by public or private actors, and manipulative systems targeting vulnerable populations, have been decommissioned or redesigned with explicit legal justification. Boards should treat January 2025 as the final mobilization month: inventorying AI systems, documenting shutdown playbooks, integrating universal opt-out signals into monitoring, and assembling evidence for market surveillance authorities.

Countdown milestones

Key deadlines underpinning the unacceptable-risk ban include the Regulation’s entry into force on 1 August 2024, the 2 February 2025 prohibition date, and the obligation to cooperate with market surveillance authorities immediately thereafter. Providers must keep technical documentation available for 10 years, even for systems that have been withdrawn.

National competent authorities will expect to see inventories, risk assessments, and decommissioning evidence during 2025 inspections. Companies operating across Member States should also monitor guidance from the European Commission’s AI Office, the European Data Protection Board (EDPB), and national data protection authorities, which are coordinating on enforcement tactics.

Teams should structure the final month into weekly sprints: Week 1 focuses on inventory validation; Week 2 on execution of shutdown or redesign; Week 3 on evidence consolidation and opt-out testing; Week 4 on governance sign-off and readiness drills. Boards must receive status reports summarizing residual risks, unresolved dependencies, and regulatory engagement plans.

Risk inventory and classification

  • System scoping. Catalog all AI systems, models, and automation workflows operating in the EU or impacting EU residents. Include experimental pilots, vendor-provided tools, and legacy models embedded in business processes. Tag each system with purpose, user base, data sources, model type, deployment status, and owner.
  • Article 5 screening. Use a structured questionnaire to test whether any system infers sensitive attributes from biometrics, scrapes facial images without targeted consent, evaluates individuals for social scoring outcomes, or manipulates behavior by exploiting vulnerabilities (for example, children, disabled persons). Record rationale for classification decisions, referencing Article 5 clauses, recitals, and any relevant guidance.
  • Exception assessment. Law enforcement bodies may rely on narrow derogations for remote biometric identification in public spaces, subject to judicial or administrative authorization. Document the legal basis, proportionality analysis, and safeguards for each instance. Without rigorous evidence, deployers risk immediate enforcement action.
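The screening step above can be sketched as a structured record that maps questionnaire answers to Article 5 clauses. This is a minimal illustration, not an official checklist: the field names and the clause-to-question mapping are assumptions, and legal review of each classification remains essential.

```python
from dataclasses import dataclass, field

@dataclass
class Article5Screen:
    """Illustrative Article 5 screening record; fields are assumptions."""
    system_id: str
    infers_sensitive_biometrics: bool = False  # biometric categorisation of sensitive traits
    scrapes_facial_images: bool = False        # untargeted facial image scraping
    performs_social_scoring: bool = False      # social scoring of natural persons
    exploits_vulnerabilities: bool = False     # manipulation / exploiting vulnerable groups
    rationale: list = field(default_factory=list)

    def classify(self) -> str:
        # Map each positive answer to the clause it implicates,
        # recording the rationale for the risk inventory.
        flags = {
            "Art. 5(1)(g) biometric categorisation": self.infers_sensitive_biometrics,
            "Art. 5(1)(e) untargeted facial scraping": self.scrapes_facial_images,
            "Art. 5(1)(c) social scoring": self.performs_social_scoring,
            "Art. 5(1)(a)-(b) manipulation/exploitation": self.exploits_vulnerabilities,
        }
        hits = [clause for clause, answered_yes in flags.items() if answered_yes]
        self.rationale.extend(hits)
        return "unacceptable-risk" if hits else "requires further assessment"

screen = Article5Screen("cust-routing-ml")
print(screen.classify())  # requires further assessment
```

A "requires further assessment" result is not a clean bill of health; it simply routes the system onward to high-risk and transparency-tier screening.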

Shutdown and redesign execution

Once unacceptable-risk systems are identified, providers and deployers must execute structured shutdown programs:

  • Technical decommissioning. Disable data pipelines feeding prohibited models, revoke API keys, and archive code repositories. Retain hash values of model artifacts and dataset manifests to prove provenance. Ensure third-party vendors confirm the destruction or isolation of related assets.
  • Operational transition. Replace prohibited functionality with compliant alternatives. For example, substitute emotion recognition used to assess staff in customer service operations (prohibited in workplace settings) with rule-based routing informed by consented data. Document business impacts, mitigation strategies, and communication plans for affected users.
  • Residual risk monitoring. Deploy anomaly detection to identify attempts to restart banned models or to replicate prohibited features in new releases. Set up alerts for unusual access patterns, surges in API calls, or reappearance of banned datasets.
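The provenance and monitoring steps above can be combined in a simple manifest check: retained hashes of decommissioned artifacts are verified on a schedule, flagging both tampering and reappearance of deleted assets. A minimal sketch, assuming a flat directory of artifacts and a manifest that marks destroyed assets with a sentinel value; both are illustrative choices, not a prescribed format.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 to avoid loading large model artifacts in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest: dict, root: Path) -> list:
    """Return findings for artifacts that changed, vanished from the archive,
    or reappeared after their deletion was recorded ('DELETED' sentinel)."""
    findings = []
    for name, recorded in manifest.items():
        p = root / name
        if recorded == "DELETED":
            if p.exists():
                findings.append(f"{name}: reappeared after recorded deletion")
        elif not p.exists():
            findings.append(f"{name}: archived artifact missing")
        elif sha256_of(p) != recorded:
            findings.append(f"{name}: hash mismatch against retained provenance record")
    return findings
```

Running this check from the anomaly-detection pipeline turns the retained hashes into an active control rather than a static record.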

Universal opt-out and data governance

Although Article 5 targets specific prohibited practices, compliance depends on respecting data subject rights and universal opt-out signals across AI lifecycles. Boards should ensure:

  • Opt-out registries integrated with AI controls. Maintain a central registry capturing GDPR objections to processing, withdrawals of consent, Global Privacy Control signals, and Member State-specific opt-outs. Link the registry to AI deployment pipelines so models automatically exclude opted-out individuals from training, testing, and inference datasets. This is crucial when verifying that redesigns of prohibited systems rely solely on consented or anonymized data.
  • Vendor enforcement. Contracts with AI vendors and data brokers must require honoring opt-out signals and documenting compliance. Include audit rights to inspect logs demonstrating that opted-out records were removed. Boards should receive quarterly vendor compliance attestations.
  • Transparency and accessibility. Provide multilingual notices explaining how universal opt-out mechanisms interact with AI services. Ensure children, elderly citizens, and people with disabilities can exercise opt-out rights through low-friction channels. Document support interactions and resolutions, feeding insights into the risk management file.
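Linking the registry to deployment pipelines, as the first bullet describes, reduces in code to filtering every dataset against the registry before use and logging how many records were excluded. The sketch below assumes records carry a stable subject identifier; the field and function names are illustrative.

```python
def apply_opt_outs(records, registry):
    """Drop any record whose subject appears in the opt-out registry,
    and report the exclusion count for the compliance log."""
    kept = [r for r in records if r["subject_id"] not in registry]
    excluded = len(records) - len(kept)
    return kept, excluded

# Registry consolidating GDPR objections, consent withdrawals,
# and Global Privacy Control signals (identifiers are hypothetical).
registry = {"u-102", "u-417"}
records = [{"subject_id": "u-101"}, {"subject_id": "u-102"}, {"subject_id": "u-333"}]

training_set, excluded = apply_opt_outs(records, registry)
print(excluded)  # 1
```

The same filter should run before training, testing, and inference, so the exclusion count in each pipeline run becomes auditable evidence that redesigned systems rely only on consented or anonymized data.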

Evidence and documentation

Market surveillance authorities can request information at any time. Providers and deployers need full evidence packages:

  • Technical documentation. Maintain model cards, training data descriptions, performance metrics, and risk assessments for each system classified as unacceptable-risk. When systems are retired, update technical files with decommissioning steps, data disposal actions, and references to opt-out compliance checks.
  • Governance records. Archive meeting minutes from AI ethics committees, risk committees, and board sessions where Article 5 compliance was outlined. Include attendance, decisions, dissenting opinions, and follow-up actions. These records show that leadership exercised oversight.
  • Incident logs. Record any breaches, complaints, or regulator interactions related to prohibited practices. Document remediation steps, user notifications, opt-out handling, and verification of closure.

Integration with risk management frameworks

The EU AI Act expects providers and deployers to embed compliance into broader risk management systems. Teams can use existing frameworks:

  • NIST AI RMF. Use the Govern and Map functions to catalog systems, identify unacceptable-risk characteristics, and assign accountability. Document how opt-out governance fits within the Manage function’s risk treatment plans.
  • ISO/IEC 42001:2023. Align Article 5 controls with AI management system requirements on policy, risk assessment, data governance, and continuous improvement. Ensure clause 8.4 change management includes testing for prohibited features.
  • EU data protection governance. Coordinate with Data Protection Officers to integrate AI inventories with Records of Processing Activities, Data Protection Impact Assessments (DPIAs), and legitimate interest balancing tests. Highlight how opt-out registries inform both AI and GDPR compliance.

Regulator engagement strategy

Early communication with regulators can reduce enforcement risk:

  • Self-reporting playbooks. Prepare templates for notifying national authorities if residual deployments or vendor systems violate Article 5 after 2 February. Include contact lists, escalation thresholds, and scripts referencing remediation actions and opt-out protections.
  • Supervisory dialogs. For sectors such as finance, healthcare, and transportation, regulators may request assurance that AI controls align with sector-specific rules. Schedule briefings detailing inventory outcomes, decommissioning status, and evidence repositories.
  • Cross-border coordination. Multinationals should map responsibilities across EU Member States, designate local leads, and harmonize messaging. Document how opt-out requirements differ (for example, France’s CNIL guidance on emotion recognition, Spain’s AEPD focus on biometric data) and tailor responses as needed.

People, process, and culture

Successful compliance depends on informed teams:

  • Training. Update mandatory training modules to cover Article 5 prohibitions, opt-out handling, and evidence retention. Track completion rates and comprehension scores. Provide specialized workshops for AI developers, product managers, and risk officers.
  • Roles and responsibilities. Clarify accountability across product, legal, compliance, and data protection functions. Establish an AI control tower that consolidates status updates, metrics, and escalation paths.
  • Whistleblowing and speak-up channels. Encourage employees and contractors to report suspected prohibited practices. Ensure channels honor anonymity preferences and universal opt-out commitments. Record investigations and outcomes.

Board dashboard essentials

Boards should receive a January 2025 dashboard highlighting:

  • Inventory summary: number of AI systems, classification results, and status of decommissioning actions.
  • Opt-out metrics: volume of opt-out signals received, percentage applied to AI datasets, vendor compliance rates.
  • Evidence readiness: completion percentage for technical documentation, governance records, and incident logs.
  • Regulatory engagement: upcoming deadlines, supervisory meetings, and outstanding information requests.
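The dashboard figures above can be derived directly from the risk inventory and opt-out registry, so they stay reconcilable with the underlying evidence. A hedged sketch, assuming an inventory of classification/status records; the schema and function names are illustrative, not a prescribed reporting format.

```python
def dashboard(inventory, opt_out_signals, opt_outs_applied, docs_complete, docs_total):
    """Compute board dashboard metrics from inventory and registry counts."""
    prohibited = [s for s in inventory if s["classification"] == "unacceptable-risk"]
    decommissioned = [s for s in prohibited if s["status"] == "decommissioned"]
    # max(..., 1) guards against division by zero when a category is empty.
    return {
        "systems_total": len(inventory),
        "prohibited": len(prohibited),
        "decommissioned_pct": round(100 * len(decommissioned) / max(len(prohibited), 1), 1),
        "opt_out_applied_pct": round(100 * opt_outs_applied / max(opt_out_signals, 1), 1),
        "evidence_ready_pct": round(100 * docs_complete / max(docs_total, 1), 1),
    }

inventory = [
    {"classification": "unacceptable-risk", "status": "decommissioned"},
    {"classification": "unacceptable-risk", "status": "in-progress"},
    {"classification": "minimal", "status": "live"},
]
print(dashboard(inventory, opt_out_signals=200, opt_outs_applied=190,
                docs_complete=9, docs_total=12))
```

Anything below 100% on the decommissioning or opt-out lines in January should trigger the escalation and residual-risk reporting described above.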

Directors should challenge management on any gaps, request independent validation where needed, and document their oversight in board minutes.

Final month action checklist

  • Complete a red-team exercise simulating regulator inspection, including opt-out verification and document retrieval drills.
  • Lock down code repositories and deploy monitoring to detect reactivation attempts for prohibited models.
  • Issue stakeholder communications explaining the discontinuation or redesign of services, highlighting opt-out rights and complaint channels.
  • Validate contracts with vendors and partners to confirm they have removed prohibited capabilities and honored opt-out obligations.

By 2 February 2025, teams must be ready to evidence the absence of unacceptable-risk AI. Full governance, universal opt-out orchestration, and meticulous documentation will protect teams, preserve trust, and reduce enforcement exposure as the EU AI Act’s first hard deadline arrives.


Documentation

  1. Regulation (EU) 2024/1689 (AI Act) — European Union
  2. EU AI Act — Questions and Answers — European Commission
  3. ISO 31000:2018 — Risk Management Guidelines — International Organization for Standardization
