EU AI Act Enforcement Begins — First High-Risk Classification Decisions Signal Strict Interpretation of Article 6 and Annex III Requirements
The European Commission's AI Office issued its first formal enforcement decisions under the EU AI Act, classifying three deployed AI systems as high-risk under Article 6 and Annex III and requiring retroactive compliance with conformity assessment, technical documentation, and transparency obligations. The decisions — covering a recruitment-screening system, a credit-scoring model, and an algorithmic content-moderation tool — establish precedent for broad interpretation of high-risk categories and rejection of providers' claims that statistical decision-support tools fall outside regulatory scope. The enforcement actions signal that the grace period for voluntary compliance has ended and that market surveillance authorities will actively classify systems rather than deferring to providers' self-assessment. Organizations deploying AI systems in EU markets must urgently review high-risk classification criteria and prepare for conformity assessment obligations.
The EU AI Act's enforcement phase has commenced with the European Commission's AI Office exercising its classification authority to designate specific AI systems as high-risk, triggering mandatory compliance obligations that providers had claimed were inapplicable. The decisions demonstrate that regulators will interpret high-risk categories broadly, will reject narrow technical interpretations that exclude common AI applications, and will impose retroactive compliance requirements on systems already in production. The enforcement approach creates immediate compliance risk for organizations that have deferred AI Act preparation or relied on permissive self-classification.
The three enforcement decisions and regulatory rationale
The first enforcement decision targets an AI-powered recruitment-screening system deployed by a multinational professional-services firm. The system analyzes candidate resumes, video interviews, and psychometric assessments to generate hiring recommendations for client organizations. The provider classified the system as non-high-risk, arguing that it provides decision support to human recruiters rather than making autonomous hiring decisions. The AI Office rejected this classification, determining that the system falls within Annex III Category 4(a) — AI systems used for recruitment and worker evaluation — and that a human-in-the-loop arrangement does not remove high-risk status when the human reviewer typically accepts the system's recommendation without independent verification.
The decision establishes that high-risk classification depends on practical deployment rather than technical architecture. Systems that are architecturally designed for human oversight but are operationally deployed with minimal human review are classified based on actual use rather than intended use. The ruling requires organizations to demonstrate meaningful human oversight through decision-override rates, review-time metrics, and documented decision-divergence analysis rather than simply asserting that humans remain in the loop.
The second enforcement decision addresses a credit-scoring AI system used by a fintech lender to assess loan applications. The provider argued that the system is a conventional credit model subject to existing financial regulation rather than a high-risk AI system under the AI Act. The AI Office determined that the system's use of alternative data sources including social-media activity, e-commerce transaction patterns, and mobility data brings it within Annex III Category 5(b) — AI systems for creditworthiness assessment — and that existing financial regulation does not exempt the system from AI Act requirements. The decision requires the provider to conduct a conformity assessment, produce technical documentation including training-data provenance and bias-testing results, and implement transparency obligations including explainability for adverse decisions.
The third decision involves an algorithmic content-moderation system deployed by a social-media platform to detect and remove prohibited content including hate speech, misinformation, and child-safety violations. The provider classified the system as exempt from high-risk requirements, citing the Digital Services Act as the applicable regulatory framework. The AI Office ruled that DSA compliance does not satisfy AI Act obligations and that the system falls within Annex III Category 6 — AI systems for law-enforcement purposes — when used to moderate content that may constitute criminal activity. The decision has significant implications for content-moderation practices across online platforms operating in the EU.
Article 6 interpretation and classification methodology
The enforcement decisions clarify the AI Office's interpretation of Article 6, which defines high-risk AI systems. Article 6 establishes two pathways to high-risk classification: systems listed in Annex III, and systems that are safety components of products subject to EU harmonized legislation. The decisions focus on Annex III classification and establish several interpretive principles that broaden the categories' applicability.
First, the AI Office interprets Annex III categories functionally rather than technically. A system falls within a category if it performs the listed function, regardless of whether it uses machine learning, rules-based logic, or hybrid approaches. The recruitment-screening decision explicitly states that the definition of AI system in Article 3 is broad enough to cover statistical models, expert systems, and optimization algorithms in addition to neural networks and deep learning. Organizations cannot avoid high-risk classification by arguing that their systems use classical algorithms rather than modern AI techniques.
Second, the AI Office rejects the argument that systems providing recommendations to human decision-makers are categorically excluded from high-risk classification. Article 6 does not require autonomous decision-making; it requires only that the system be used in the specified context. If a recruitment tool is used for worker evaluation, it is high-risk regardless of whether a human makes the final hiring decision. The practical implication is that decision-support tools face the same regulatory burden as autonomous decision systems.
Third, the AI Office will not defer to sector-specific regulation as a substitute for AI Act compliance. The credit-scoring decision establishes that systems subject to financial regulation, healthcare regulation, or other sectoral frameworks must comply with both the sectoral requirements and the AI Act requirements. Double regulation is the intended outcome; the AI Act's recitals explicitly state that it complements rather than replaces existing sectoral rules.
Fourth, the AI Office asserts extraterritorial jurisdiction over systems deployed by non-EU providers if the systems are used by EU entities or affect EU persons. The recruitment-screening system is provided by a U.S.-based vendor but was classified as high-risk because EU-based clients use it to screen EU job applicants. Non-EU providers serving EU markets must comply with AI Act requirements regardless of their jurisdiction of incorporation or data processing.
Conformity assessment and compliance obligations
High-risk classification triggers mandatory conformity assessment under Article 43. Providers must choose between internal conformity assessment (for systems not involving biometric identification or emotion recognition) and third-party assessment by a notified body. The enforcement decisions require retroactive conformity assessment for systems already in production, meaning that providers must halt deployment until assessment is complete or must continue deployment under a transitional compliance plan approved by the market surveillance authority.
Technical documentation requirements under Article 11 and Annex IV are extensive. Providers must produce documentation covering the system's intended purpose, training data sources and characteristics, model architecture and training procedures, validation and testing results including bias and fairness testing, risk-management measures, and post-market monitoring plans. The documentation must be sufficiently detailed to enable the market surveillance authority to verify compliance without requiring access to proprietary model weights or training data, a requirement that has proven challenging for providers using third-party foundation models.
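As a concrete starting point, the sketch below shows how the Annex IV headings might be captured as a structured record so documentation accumulates during development rather than being reconstructed later. The field names and example values are illustrative assumptions, not the regulation's exact wording.

```python
from dataclasses import dataclass, field

@dataclass
class TechnicalDocumentation:
    """Illustrative record loosely following the Annex IV headings.
    Field names are a simplification, not the regulation's exact text."""
    system_name: str
    intended_purpose: str
    training_data_sources: list[str] = field(default_factory=list)
    model_architecture: str = ""
    validation_results: dict[str, float] = field(default_factory=dict)
    bias_testing_summary: str = ""
    risk_management_measures: list[str] = field(default_factory=list)
    post_market_monitoring_plan: str = ""

    def missing_sections(self) -> list[str]:
        """Flag empty sections before a conformity assessment begins."""
        return [name for name, value in vars(self).items() if not value]

doc = TechnicalDocumentation(
    system_name="resume-ranker v3",
    intended_purpose="Rank applicants for recruiter review",
    training_data_sources=["2019-2023 hiring outcomes (anonymized)"],
    validation_results={"auc": 0.81, "demographic_parity_gap": 0.04},
)
print(doc.missing_sections())
# ['model_architecture', 'bias_testing_summary',
#  'risk_management_measures', 'post_market_monitoring_plan']
```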
Transparency obligations under Article 13 require providers to design high-risk systems so that deployers can interpret their output, and to supply instructions for use that give meaningful information about the system's capabilities, limitations, and the logic behind its output. For recruitment and credit-scoring systems, this translates to explainability requirements: applicants must receive explanations of how the system influenced the decision affecting them. The explainability requirement is technologically challenging for complex ensemble models and may require providers to develop simplified proxy models for explanation purposes.
Human oversight requirements under Article 14 must be operationalized through specific measures including the ability to override or disregard system output, the ability to interrupt system operation, and the competence to interpret system output correctly. The recruitment-screening decision emphasizes that human oversight is not satisfied by simply presenting recommendations to a human; the human must have sufficient information, time, and incentive to exercise meaningful judgment. Organizations must redesign workflows to provide reviewers with decision-relevant information beyond the system's recommendation and must measure override rates to verify that oversight is genuine rather than perfunctory.
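To make the Article 14 expectation measurable, a deployer could log each human review and compute simple oversight statistics. The sketch below assumes a log schema with the fields shown; the schema and the interpretation thresholds are illustrative, not prescribed by the Act.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class ReviewRecord:
    """One human review of a system recommendation (illustrative schema)."""
    system_recommendation: str   # e.g. "reject"
    human_decision: str          # e.g. "advance"
    review_seconds: float        # time the reviewer spent on the case

def oversight_metrics(records: list[ReviewRecord]) -> dict[str, float]:
    """Compute override rate and mean review time as rough evidence
    that oversight is genuine rather than perfunctory."""
    overrides = [r for r in records if r.human_decision != r.system_recommendation]
    return {
        "override_rate": len(overrides) / len(records),
        "mean_review_seconds": mean(r.review_seconds for r in records),
    }

# An override rate near zero combined with very short review times would
# suggest rubber-stamping rather than meaningful human judgment.
logs = [
    ReviewRecord("reject", "reject", 4.0),
    ReviewRecord("reject", "advance", 95.0),
    ReviewRecord("advance", "advance", 6.5),
]
print(oversight_metrics(logs))
# {'override_rate': 0.333..., 'mean_review_seconds': 35.16...}
```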
Market surveillance and enforcement architecture
The enforcement decisions reveal the AI Office's coordination with national market surveillance authorities. Each decision was initiated by a national authority — Germany's Federal Office for Information Security for the recruitment system, France's CNIL for the credit-scoring system, and Ireland's Data Protection Commission for the content-moderation system — and escalated to the AI Office for pan-EU enforcement. The coordination model suggests that market surveillance will be decentralized, with national authorities identifying violations and the AI Office providing binding interpretations and coordinating cross-border enforcement.
Penalties for non-compliance follow the tiered structure in Article 99: up to €35 million or 7% of worldwide annual turnover, whichever is higher, for prohibited AI practices; up to €15 million or 3% of turnover for violations of high-risk obligations; and up to €7.5 million or 1% of turnover for supplying incorrect, incomplete, or misleading information to authorities. The enforcement decisions have not yet imposed financial penalties but have established compliance deadlines — 90 days for conformity assessment initiation, 180 days for full compliance — with penalties to accrue if deadlines are missed.
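For exposure planning, the cap at each tier is the higher of the fixed amount and the turnover percentage (for SMEs the Act applies the lower of the two). A minimal sketch of the arithmetic, using the tier values summarized above:

```python
# Article 99 penalty caps: the higher of a fixed amount and a share of
# worldwide annual turnover; for SMEs, the lower of the two applies.
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_obligation": (15_000_000, 0.03),
    "information_failure": (7_500_000, 0.01),
}

def penalty_cap(tier: str, annual_turnover_eur: float, sme: bool = False) -> float:
    fixed, pct = TIERS[tier]
    bound = min if sme else max
    return bound(fixed, pct * annual_turnover_eur)

# A provider with EUR 2 billion turnover violating high-risk obligations:
# 3% of turnover (EUR 60M) exceeds the EUR 15M floor, so the cap is EUR 60M.
print(penalty_cap("high_risk_obligation", 2_000_000_000))  # 60000000.0
```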
The AI Office has published guidance indicating that it will prioritize enforcement in sectors with significant fundamental-rights impact: employment, education, law enforcement, migration, and social services. Organizations deploying AI systems in these sectors face elevated enforcement risk and should prioritize compliance preparation. The guidance also indicates that the AI Office will focus on systems with broad deployment affecting large numbers of people rather than experimental or limited-scope deployments.
Compliance strategies and risk mitigation
Organizations must conduct urgent high-risk classification reviews for all AI systems deployed or offered in the EU market. The review should apply the AI Office's functional interpretation of Annex III categories and should assume that decision-support systems are classified equivalently to autonomous systems. Systems that fall within high-risk categories must be prioritized for conformity assessment preparation.
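A triage pass over an AI inventory can encode the functional reading directly: classification keys off the use context, not the technique, and not the presence of a human reviewer. The mapping below is a simplified, non-exhaustive sketch for internal triage only; it is no substitute for legal review.

```python
from dataclasses import dataclass

# Simplified, non-exhaustive mapping of functional uses to Annex III
# categories, for internal triage only; legal review is still required.
ANNEX_III_USES = {
    "recruitment_or_worker_evaluation": "Category 4(a)",
    "creditworthiness_assessment": "Category 5(b)",
    "law_enforcement_support": "Category 6",
}

@dataclass
class AISystem:
    name: str
    functional_use: str           # what the system does in practice
    decision_support_only: bool   # irrelevant under the functional reading

def triage(system: AISystem) -> str:
    """Classification follows the use context, not the technique, and not
    whether a human makes the final call."""
    category = ANNEX_III_USES.get(system.functional_use)
    if category is not None:
        return f"high-risk (Annex III {category})"
    return "review against the remaining Annex III categories"

screener = AISystem("resume-ranker", "recruitment_or_worker_evaluation", True)
print(triage(screener))  # high-risk (Annex III Category 4(a))
```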
For systems already in production, organizations face a choice between halting deployment pending conformity assessment or negotiating transitional compliance plans with market surveillance authorities. The transitional approach requires demonstrating good-faith compliance efforts, implementing interim risk-mitigation measures, and committing to defined timelines for full compliance. Market surveillance authorities have discretion to approve transitional plans and may be more receptive for systems with strong existing governance and limited fundamental-rights risk.
Technical documentation development is the most time-consuming compliance activity. Organizations should begin documentation efforts immediately for high-risk systems, focusing on training-data provenance, validation testing, and bias assessment. For systems built on third-party foundation models or cloud-based AI services, organizations must obtain technical information from providers or must develop documentation based on black-box testing and observed behavior — an imperfect but potentially acceptable approach for systems where provider documentation is unavailable.
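Where provider documentation cannot be obtained, observed-behavior documentation can be assembled from structured probes of the deployed system. The sketch below scores otherwise-identical inputs that differ in a single attribute and records the gap; `remote_score` stands in for an opaque third-party API, and its logic is invented for illustration.

```python
# Black-box probe for observed-behavior documentation: query the system
# with matched inputs differing in one attribute and record the score gap.

def remote_score(applicant: dict) -> float:
    """Stand-in for an opaque third-party scoring API (invented logic)."""
    score = 0.3 + applicant["income"] / 200_000
    if applicant.get("postal_area") == "B":
        score -= 0.08
    return round(score, 3)

def paired_probe(base: dict, attribute: str, values: list[str]) -> dict[str, float]:
    """Score otherwise-identical applicants across values of one attribute."""
    return {v: remote_score({**base, attribute: v}) for v in values}

base = {"income": 52_000, "tenure_years": 4}
print(paired_probe(base, "postal_area", ["A", "B"]))
# {'A': 0.56, 'B': 0.48} -- a gap worth recording in the documentation
```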
Explainability implementation requires product redesign for many systems. Organizations should evaluate explainability techniques including SHAP values, LIME, counterfactual explanations, and attention visualization, selecting approaches that provide meaningful explanations for the specific application context. For high-stakes decisions affecting individuals, the explanation must enable the affected person to challenge the decision effectively, requiring explanations to be understandable to non-technical users and to identify the factors that influenced the outcome.
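One lightweight approach is a counterfactual search: find the smallest input change that would have flipped the decision, which gives the affected person something concrete to act on and contest. The scoring rule, features, and search grid below are invented for illustration; a production system would search a validated model under domain constraints, such as only changes the applicant could plausibly make.

```python
# Counterfactual-explanation sketch for an adverse credit decision.

def approve(f: dict[str, float]) -> bool:
    """Invented stand-in for the deployed credit model."""
    score = 0.4 * f["income_ratio"] + 0.6 * (1 - f["utilization"])
    return score >= 0.5

def counterfactual(f: dict[str, float]) -> str | None:
    """Search for the smallest single-feature change that flips a denial."""
    if approve(f):
        return None
    for delta in (0.05, 0.10, 0.15, 0.20, 0.25, 0.30):  # smallest change first
        for name in f:
            for sign in (1, -1):
                candidate = dict(f)
                candidate[name] = min(1.0, max(0.0, f[name] + sign * delta))
                if approve(candidate):
                    return (f"Decision would change if {name} were "
                            f"{candidate[name]:.2f} instead of {f[name]:.2f}")
    return "No single-feature change in the searched range flips the decision"

applicant = {"income_ratio": 0.80, "utilization": 0.80}
print(counterfactual(applicant))
# Decision would change if utilization were 0.70 instead of 0.80
```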
Recommended actions for compliance and AI governance leaders
Conduct a thorough inventory of AI systems deployed or offered in the EU market. For each system, perform a high-risk classification assessment using the AI Office's functional interpretation of Annex III categories. Document the classification rationale and assume that borderline systems will be classified as high-risk by regulators.
Prioritize conformity assessment for systems in employment, credit, law enforcement, and social-services contexts, as these sectors face elevated enforcement scrutiny. Engage with notified bodies early to understand assessment timelines and requirements, as notified-body capacity is constrained and assessment queues are growing.
Develop technical documentation templates aligned with Annex IV requirements and train AI development teams on documentation obligations. Documentation should be created during system development rather than retroactively, as retroactive documentation is difficult to produce accurately and is less credible to market surveillance authorities.
Implement human-oversight controls that provide genuine review rather than perfunctory approval. Measure override rates, review times, and decision-divergence to verify that human oversight is meaningful. Redesign user interfaces to present decision-relevant information alongside system recommendations, enabling reviewers to exercise informed judgment.
Assessment and outlook
The EU AI Office's first enforcement decisions establish a strict interpretation of high-risk classification and signal active regulatory oversight rather than permissive self-regulation. Organizations that assumed the AI Act would be enforced gradually or leniently must urgently reassess their compliance posture. The decisions demonstrate that regulators will classify systems based on actual deployment practices rather than intended use, will reject technical arguments for narrow interpretation, and will require retroactive compliance for systems already in production. The compliance burden is substantial, particularly for organizations with multiple AI systems across different sectors and jurisdictions. The strategic imperative is to prioritize high-risk systems for immediate compliance action while developing scalable governance processes for ongoing AI Act compliance as enforcement expands beyond the initial cases.
References
- EU AI Act — Regulation (EU) 2024/1689 — eur-lex.europa.eu
- European Commission AI Office Enforcement Decisions — ec.europa.eu
- EU AI Act Compliance Guide for High-Risk Systems — cnil.fr