
EU AI Act

February 2, 2025 marked the enforcement date for the EU AI Act's prohibited practices. Social scoring, untargeted scraping for facial recognition databases, and emotion recognition in workplaces and schools are now illegal in the EU. If you are running any AI system that could be interpreted as falling into these categories, you need to shut it down or prove you are exempt.



Article 5 of the EU AI Act now bans several categories of unacceptable-risk AI systems, including manipulative behavioral techniques that distort free will, exploitation of vulnerabilities due to age or disability, untargeted scraping of facial images for recognition databases, biometric categorization that infers sensitive traits, and social scoring by public or private actors. Having retired legacy prototypes ahead of the 2 February 2025 enforcement date, the company must now embed long-term safeguards to keep prohibited practices from reappearing in experimentation, vendor integrations, or acquisitions. This guide explains the governance model, universal opt-out architecture, and evidence expectations that product, engineering, and compliance teams must uphold across the 2025 roadmap.

Embedding Article 5 into product governance

The Responsible AI Policy has been amended to include a “Prohibited Practices Gate” within the product lifecycle. Before any AI initiative progresses from concept to prototype, teams complete an Article 5 screening questionnaire covering target users, behavioral influence techniques, biometric data usage, and potential impacts on minors or vulnerable groups. The questionnaire feeds into the enterprise GRC platform, where risk officers assign one of three outcomes:

  • No Article 5 risk: Project proceeds with standard responsible AI controls, but must document reasoning.
  • Conditional approval: Project may proceed if design changes remove prohibited characteristics, with follow-up validation before launch.
  • Rejected: Project cannot continue because the intended functionality would violate Article 5 or related national laws.

The gate is mandatory for internal development, joint ventures, procurement of AI solutions, and integration of open-source or general-purpose AI components. Audit trails record reviewers, challenge comments, and final determinations.
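As an illustration of how the gate's triage could be encoded, the sketch below models the screening questionnaire and the three outcomes in Python. The field names, outcome labels, and routing rule are assumptions for illustration, not the company's actual GRC schema.

```python
from dataclasses import dataclass
from enum import Enum


class GateOutcome(Enum):
    """The three determinations a risk officer can record for a project."""
    NO_ARTICLE_5_RISK = "no_article_5_risk"
    CONDITIONAL_APPROVAL = "conditional_approval"
    REJECTED = "rejected"


@dataclass
class Article5Screening:
    """Illustrative questionnaire payload captured before the prototype stage."""
    project_id: str
    uses_behavioral_influence: bool
    uses_biometric_data: bool
    may_affect_minors_or_vulnerable: bool
    rationale: str  # reasoning is documented even for "no risk" outcomes


def triage_screening(s: Article5Screening) -> GateOutcome:
    """Hypothetical triage rule: any flagged answer routes the project to a
    risk officer as a conditional case rather than auto-approving. Outright
    rejection requires a documented human determination, so it is never
    returned automatically here."""
    if (s.uses_behavioral_influence
            or s.uses_biometric_data
            or s.may_affect_minors_or_vulnerable):
        return GateOutcome.CONDITIONAL_APPROVAL
    return GateOutcome.NO_ARTICLE_5_RISK
```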

Universal opt-out stewardship across the AI estate

Although Article 5 focuses on outright bans, this brief treats universal opt-out obligations as a parallel safeguard. The company operates a preference orchestration platform that centralizes objections received through GDPR Article 21 requests, Global Privacy Control (GPC) signals recognized under the CPRA, US state-level universal opt-out registries, and the company's own privacy portal. The platform now includes Article 5-specific controls:

  • Historical opt-out alignment: Reconcile existing opt-outs with data archives used for model training, ensuring prohibited-system data is not repurposed.
  • Future-proofing: When teams explore new AI capabilities, the platform verifies that training datasets exclude records where individuals exercised universal opt-outs or where consent was withdrawn.
  • Transparent communications: Updated privacy notices explain how universal opt-outs interact with prohibited practices, clarifying that the company does not deploy AI that manipulates behavior or infers sensitive traits and that opt-out preferences extend to any experimentation environments.

Privacy engineering runs weekly reconciliations to ensure opt-out preferences in the central registry match downstream systems, including feature stores, analytics sandboxes, and third-party tooling. Exceptions automatically generate tickets requiring legal review and remediation plans.
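A minimal sketch of such a reconciliation pass appears below, assuming the central registry and each downstream system expose opt-out flags keyed by subject ID. The data shapes and field names are illustrative, not the platform's real interfaces.

```python
def reconcile_opt_outs(registry: dict[str, bool],
                       downstream: dict[str, dict[str, bool]]) -> list[dict]:
    """Compare the central opt-out registry against each downstream system
    and return one exception record per mismatch. Keys are subject IDs and
    True means the subject has opted out."""
    exceptions = []
    for system_name, prefs in downstream.items():
        for subject_id, opted_out in registry.items():
            if prefs.get(subject_id) != opted_out:
                exceptions.append({
                    "system": system_name,
                    "subject_id": subject_id,
                    "expected": opted_out,
                    "observed": prefs.get(subject_id),
                })
    return exceptions


# Example: the feature store is missing one opt-out; the exception record
# would become a ticket for legal review and remediation.
registry = {"u1": True, "u2": False}
downstream = {"feature_store": {"u1": False, "u2": False}}
assert reconcile_opt_outs(registry, downstream) == [
    {"system": "feature_store", "subject_id": "u1",
     "expected": True, "observed": False}
]
```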

Evidence standards for ongoing assurance

Supervisory authorities expect clear documentation demonstrating that prohibited practices are absent from live systems and research environments. The company maintains layered evidence collections:

  • Policy library: Version-controlled documents covering the Responsible AI Policy, Article 5 gate procedures, and universal opt-out governance.
  • Design reviews: Records of ethical impact assessments, DPIAs, and threat modeling sessions. Meeting minutes capture how teams evaluated Article 5 risks and the mitigation steps chosen.
  • Technical artifacts: Model cards, data lineage diagrams, and explainability outputs that show what inputs, features, and control mechanisms were used.
  • Monitoring reports: Logs from MLOps tools that confirm prohibited features remain disabled, as well as alerts on anomalies that could indicate their reintroduction.
  • Universal opt-out audits: Reports demonstrating opt-out propagation across systems, including timestamps, responsible owners, and exception handling notes.

Evidence resides in an immutable vault compliant with ISO/IEC 27001 and SOC 2 controls. Access is limited to authorized governance, legal, and audit personnel. Any download triggers an automatic audit log entry and approval workflow.
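One common way to make such a vault's audit trail tamper-evident is hash chaining, where each log entry commits to the hash of its predecessor. The sketch below illustrates the idea; the field names are assumptions, not the vault's real schema.

```python
import hashlib
import json
import time


def append_audit_entry(log: list[dict], actor: str, action: str,
                       artifact_id: str) -> dict:
    """Append a hash-chained entry: altering any earlier record invalidates
    every hash that follows, making tampering detectable."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "timestamp": time.time(),
        "actor": actor,
        "action": action,          # e.g. "download", "approve"
        "artifact_id": artifact_id,
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry


audit_log: list[dict] = []
append_audit_entry(audit_log, actor="jdoe", action="download",
                   artifact_id="dpia-2025-014")
```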

Vendor and partnership governance

Article 5 obligations apply to both providers and deployers, so the company evaluates third-party AI services for prohibited features. Procurement has embedded Article 5 clauses into master services agreements:

  • Vendors must attest annually that their products do not include prohibited practices and that they respect the company's universal opt-out directives.
  • The company reserves audit rights to inspect relevant source materials, testing results, and governance documentation.
  • Contracts require immediate notification if a vendor faces regulatory action related to prohibited practices or fails to honor opt-out requirements.

Vendor risk assessments use questionnaires mapped to EU AI Act annexes. High-risk suppliers undergo improved due diligence, including review of training data provenance, fairness testing, and user interface controls. Findings feed into the enterprise risk register and inform renewal decisions.

Employee training and accountability

All product, engineering, marketing, and customer-facing teams must complete annual training on Article 5 prohibitions, universal opt-out obligations, and evidence handling. The 2025 curriculum includes scenario-based modules demonstrating how manipulative design patterns can emerge unintentionally, how to respond when a customer suspects prohibited functionality, and how to escalate potential breaches. Managers certify completion and integrate Article 5 awareness into performance goals for teams building AI features.

The code of conduct now references Article 5 explicitly, reminding employees that attempting to deploy prohibited functionality—even in pilot environments—could trigger disciplinary action. Whistleblowing channels support anonymous reporting, and the Ethics Office commits to investigating all claims within 10 business days.

Monitoring, testing, and continuous improvement

Responsible AI operations monitor systems for drift toward prohibited behaviors. Controls include:

  • Automated scanning: Static analysis rules detect code patterns associated with biometric scraping or behavioral targeting. Pipelines block merges until reviews confirm compliance.
  • Runtime checks: Observability platforms track feature usage, user feedback, and complaints. Alerts fire if features begin to mimic prohibited practices (for example, using sensitive attributes for segmentation; a sketch of this check follows the list).
  • Shadow testing governance: Any experimentation environment must document hypotheses, guardrails, and universal opt-out handling before collecting user data.
  • Independent challenge: Internal audit and the Ethics Review Board perform quarterly challenge sessions, reviewing random samples of AI projects for Article 5 compliance.
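To make the runtime-check item concrete, here is a minimal sketch of a deny-list check that flags sensitive attributes used as segmentation features. The attribute list and the exact-match rule are illustrative assumptions, not the production control.

```python
# Sensitive attributes that must never drive segmentation; this list and the
# simple name-matching rule are illustrative, not the production deny-list.
SENSITIVE_ATTRIBUTES = {"ethnicity", "religion", "sexual_orientation",
                        "political_opinion", "health_status"}


def check_segmentation_features(feature_names: set[str]) -> list[str]:
    """Return any sensitive attributes found among a model's segmentation
    features; a non-empty result should fire an alert and block rollout."""
    return sorted(SENSITIVE_ATTRIBUTES & {name.lower() for name in feature_names})


violations = check_segmentation_features({"age_band", "Ethnicity", "region"})
assert violations == ["ethnicity"]  # would page the responsible AI on-call
```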

Findings from monitoring feed into retrospectives that update policies, training, and technical safeguards. The company also tracks regulatory guidance from the European AI Office and national authorities, adjusting interpretations of prohibited practices as new clarifications emerge.

Customer transparency and civil-society engagement

Trust depends on transparent communication about how the company avoids prohibited practices. The trust center now hosts:

  • Article 5 compliance statement: Explains governance structures, universal opt-out services, and evidence assurance.
  • FAQs: Addresses how customers can confirm that the company's services do not manipulate behavior or perform biometric scraping, and outlines steps to request audits.
  • Reporting channels: Provides direct contact points for rights-holders, regulators, and civil-society teams to raise concerns.

The company participates in multi-stakeholder forums organized by the European Commission and industry associations to share good practices and learn from evolving enforcement trends. Feedback from NGOs and accessibility advocates influences product design reviews and opt-out communication materials.

Forward roadmap for 2025

To maintain compliance throughout 2025, the roadmap prioritizes:

  • Integration with high-risk AI preparations: Align Article 5 controls with conformity assessment workstreams so high-risk projects maintain clear separation from prohibited functionality.
  • Universal opt-out expansion: Extend preference orchestration to cover conversational agents, AR/VR experiences, and physical kiosks.
  • Evidence automation: Deploy tooling that automatically captures design decisions, approvals, and opt-out reconciliations as structured data (a sketch follows this list).
  • Independent assurance engagements: Schedule external assessments under ISAE 3000 or certification against ISO/IEC 42001 to validate program effectiveness.
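As a sketch of what "structured data" capture might look like for the evidence-automation item above, the example below defines one evidence event as a typed record ready for vault ingestion. All field names and values are hypothetical.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class EvidenceRecord:
    """One structured evidence event; field names are illustrative."""
    event_type: str    # "design_decision" | "approval" | "opt_out_reconciliation"
    project_id: str
    summary: str
    approver: str
    recorded_at: str   # ISO 8601 timestamp


record = EvidenceRecord(
    event_type="approval",
    project_id="recsys-v3",
    summary="Article 5 gate: conditional approval, dark patterns removed",
    approver="risk-officer-2",
    recorded_at="2025-02-02T09:00:00Z",
)
print(json.dumps(asdict(record)))  # ready for ingestion into the evidence vault
```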

Through disciplined governance, preventive universal opt-out stewardship, and rigorous evidence management, the company ensures that prohibited AI practices remain absent from its portfolio while enabling responsible innovation.


Cited sources

  1. Regulation (EU) 2024/1689 (EU AI Act) — eur-lex.europa.eu
  2. European Commission AI Office launch — artificial-intelligence.ec.europa.eu
  3. CNIL AI guidance on biometric systems — cnil.fr
