
OECD Publishes Systemic Approach to Classifying AI Systems — June 8, 2022

The OECD released its systemic AI classification framework on 8 June 2022, giving organisations a seven-dimension taxonomy that catalogues context, autonomy, data, human oversight, and impact so governance and assurance teams can align with converging regulation.


Executive briefing: On 8 June 2022 the Organisation for Economic Co-operation and Development (OECD) released the working paper A Systemic Approach to the Classification of AI Systems, establishing a common, multi-dimensional taxonomy that links the OECD AI Principles to real-world deployments. The framework catalogues systems by sectoral context, use case, learning technique, autonomy, human oversight, data characteristics, and potential impact. It gives policy makers, regulators, assurance teams, and developers a shared vocabulary for comparing risk, selecting safeguards, and preparing for converging rule sets such as the EU AI Act, NIST's AI Risk Management Framework (AI RMF), and ISO/IEC 42001.

The publication formalises work led by the OECD's Network of Experts on AI (ONE AI) after a global consultation in late 2021. It complements national AI strategies by recommending that organisations maintain registries of AI systems and continually update classifications as models evolve. Because the taxonomy captures both lifecycle stage and socio-technical context, it enables oversight committees to prioritise audits where autonomy is high, affected stakeholders are vulnerable, or datasets involve biometric, financial, or safety-critical attributes.

Understanding the OECD classification dimensions

The OECD model relies on seven interacting dimensions:

  • Context captures the economic sector (e.g., healthcare, transport, public safety) and domain-specific conditions such as critical infrastructure.
  • Use case documents the task the AI performs, including decision support, classification, forecasting, or autonomous control.
  • Techniques and capabilities distinguish among symbolic approaches, statistical learning, knowledge-based methods, perception systems, and language understanding.
  • Human and machine involvement clarifies who designs, operates, or supervises the system, the level of autonomy granted, and how humans can intervene.
  • Data and inputs describe training and inference data types, provenance, quality, and whether personal or sensitive information is processed.
  • Outputs and decision stakes examine potential effects on individuals, organisations, or the environment, noting whether harm could be systemic or irreversible.
  • Lifecycle and deployment records where the system sits, from research to pilot to production, and how monitoring, feedback, and retirement occur.
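
A lightweight way to make these dimensions operational is to require that every entry in the AI inventory carries a value for each one. The sketch below shows one possible record structure in Python; the field names and permitted values are illustrative assumptions, not an official OECD schema.

```python
from dataclasses import dataclass
from enum import Enum

class Autonomy(Enum):
    HUMAN_IN_THE_LOOP = "human_in_the_loop"   # human approves each decision
    HUMAN_ON_THE_LOOP = "human_on_the_loop"   # human monitors and can intervene
    FULL_AUTONOMY = "full_autonomy"           # no routine human intervention

@dataclass
class AIClassification:
    """One inventory entry covering the seven OECD dimensions (illustrative schema)."""
    system_name: str
    context: str                  # sector and domain, e.g. "healthcare", "public_safety"
    use_case: str                 # task performed, e.g. "decision_support", "forecasting"
    techniques: list[str]         # e.g. ["statistical_learning", "perception"]
    autonomy: Autonomy            # human and machine involvement
    data_inputs: list[str]        # e.g. ["biometric", "personal", "geospatial"]
    decision_stakes: str          # e.g. "minimal", "high", "irreversible"
    lifecycle_stage: str          # e.g. "research", "pilot", "production"
    responsible_executive: str = "unassigned"
```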

By requiring answers across these dimensions, the framework surfaces risks such as bias, opacity, robustness gaps, and dual-use concerns. For instance, facial recognition used for mobile device authentication would be tagged as a consumer security application with constrained autonomy, whereas a live facial recognition platform for law enforcement would trigger public safety context flags, high stakes, and governance requirements for algorithmic accountability, appeals, and third-party oversight.
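
Expressed against the record sketched above, the two facial recognition deployments diverge on context, stakes, and oversight; the specific values here are illustrative.

```python
device_unlock = AIClassification(
    system_name="on-device face unlock",
    context="consumer_electronics",
    use_case="authentication",
    techniques=["perception"],
    autonomy=Autonomy.FULL_AUTONOMY,
    data_inputs=["biometric"],
    decision_stakes="minimal",        # constrained scope, local processing, PIN fallback
    lifecycle_stage="production",
)

live_frt = AIClassification(
    system_name="live facial recognition for law enforcement",
    context="public_safety",
    use_case="identification",
    techniques=["perception", "statistical_learning"],
    autonomy=Autonomy.HUMAN_ON_THE_LOOP,
    data_inputs=["biometric", "personal"],
    decision_stakes="high",           # triggers accountability, appeals, third-party oversight
    lifecycle_stage="production",
)
```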

Governance actions for boards and risk committees

Boards should direct management to embed the OECD classification in the organisation's AI governance charter. First, confirm that charter language references the OECD AI Principles—particularly human-centred values, transparency, robustness, security, accountability, and inclusive growth. Second, require the chief data officer (CDO) and chief risk officer (CRO) to maintain an authoritative inventory of AI systems, each tagged using the seven OECD dimensions and linked to responsible executives. Third, align board reporting cycles with the taxonomy: quarterly updates should highlight systems in high-risk contexts, shifts in autonomy levels, changes in data provenance, and mitigation plans.
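
Quarterly board reporting can then be generated directly from the tagged inventory. The filter below is a minimal sketch; the contexts and thresholds it flags are assumptions an organisation would calibrate to its own risk appetite.

```python
HIGH_RISK_CONTEXTS = {"healthcare", "public_safety", "critical_infrastructure",
                      "employment", "credit"}

def quarterly_highlights(inventory: list[AIClassification]) -> list[AIClassification]:
    """Systems the board report should surface: high-stakes contexts, high autonomy,
    or sensitive data without a named accountable executive."""
    return [
        s for s in inventory
        if s.context in HIGH_RISK_CONTEXTS
        or s.decision_stakes in {"high", "irreversible"}
        or s.autonomy is Autonomy.FULL_AUTONOMY
        or ("biometric" in s.data_inputs and s.responsible_executive == "unassigned")
    ]
```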

Audit and ethics committees should update oversight calendars to include deep dives into high-stakes use cases—such as healthcare diagnostics, employment screening, credit scoring, and critical infrastructure automation. These committees must ensure internal policies reference national regimes like Canada's proposed Artificial Intelligence and Data Act, Singapore's Model AI Governance Framework, and sectoral requirements (e.g., U.S. Federal Reserve SR 11-7 for model risk). The OECD framework provides the scaffolding for comparing those regimes and identifying gaps in controls, documentation, and accountability structures.

Operational priorities for the next 90 days

  • Inventory and labelling. Stand up a cross-functional task force spanning data science, compliance, IT, legal, procurement, and business operations. Catalogue every AI-enabled workflow—including vendor-provided tools and embedded analytics—and label them using the OECD dimensions. Include metadata such as datasets, training pipelines, deployment environments, and monitoring owners.
  • Risk tiering. Using the taxonomy, assign risk tiers (e.g., minimal, limited, high, unacceptable) that mirror the EU AI Act and other regulatory frameworks, as sketched after this list. Document triggers for enhanced review, such as vulnerable populations, biometric identification, safety-critical decisioning, or high autonomy.
  • Control mapping. Map existing controls—model validation, bias testing, explainability reviews, incident reporting, rollback procedures—to each dimension. Identify gaps, especially around post-deployment monitoring, data governance, and human-in-the-loop escalation.
  • Policy updates. Refresh AI development standards to require classification at project initiation, gating approvals on evidence that risks and mitigations fit the assigned profile. Embed requirements into software development life cycle (SDLC) templates, MLOps pipelines, and risk acceptance forms.
  • Training. Develop training modules for product managers, data scientists, and compliance staff explaining how to use the framework, interpret each dimension, and maintain documentation. Include scenario-based exercises to test judgments on borderline cases.
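
Risk tiering, referenced in the list above, can start from simple rules over the same classification record. The function below loosely mirrors the EU AI Act's tiers but is only a sketch; the actual criteria must follow the final legal text and internal policy.

```python
def assign_tier(s: AIClassification) -> str:
    """Illustrative tiering rules; not a substitute for legal analysis."""
    if s.use_case == "social_scoring":
        return "unacceptable"
    if (("biometric" in s.data_inputs and s.context == "public_safety")
            or s.context in {"critical_infrastructure", "employment", "credit", "education"}
            or s.decision_stakes in {"high", "irreversible"}):
        return "high"
    if s.use_case in {"chatbot", "content_generation"}:
        return "limited"      # transparency obligations rather than full conformity assessment
    return "minimal"
```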

Integration with global regulatory agendas

The OECD taxonomy serves as a Rosetta Stone for aligning with converging regulatory schemes. Under the draft EU AI Act, providers of high-risk systems must complete conformity assessments, maintain technical documentation, and implement risk management, data governance, logging, transparency, and human oversight controls. The OECD dimensions highlight which systems fall into the EU Act's Annex III categories (e.g., biometric identification, critical infrastructure, education, employment). Similarly, the U.S. NIST AI RMF emphasises mapping, measuring, and managing risks across the AI lifecycle; the OECD framework provides the mapping backbone, ensuring organisations capture context and stakeholder impacts before measurement.

In financial services, the Basel Committee and national supervisors increasingly demand AI model inventories, scenario testing, and senior management accountability. Classifications that document autonomy, data lineage, and decision stakes allow firms to demonstrate compliance with model risk management (MRM) expectations. In healthcare, alignment with the OECD framework helps organisations integrate U.S. FDA SaMD Pre-Cert requirements, EU Medical Device Regulation obligations, and ISO/IEC 81001-5-1 cybersecurity standards.

Sourcing and third-party management

Procurement teams must extend the classification to vendors. Update request for proposal (RFP) templates to ask suppliers for OECD-aligned profiles of AI components, including datasets used, model update cadence, retraining triggers, and human oversight design. Contract clauses should mandate disclosure when context or autonomy changes, require access to training data summaries, and enable on-site audits. Leverage industry resources such as the Global Partnership on AI (GPAI) use case library, the EU's forthcoming AI Act conformity assessment guidance, and sector-specific registries to validate vendor claims.
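
An RFP annex can capture the same dimensions from suppliers. The structure below is a hypothetical disclosure template that reuses the classification record sketched earlier, not a standardised industry schema.

```python
from dataclasses import dataclass

@dataclass
class VendorAIDisclosure:
    """Supplier-completed profile requested at RFP stage (illustrative fields)."""
    component_name: str
    oecd_profile: AIClassification       # supplier's classification of the component
    training_data_summary: str           # sources, licensing, consent basis
    update_cadence: str                  # e.g. "quarterly retraining"
    retraining_triggers: list[str]       # e.g. ["drift threshold breached"]
    human_oversight_design: str
    change_notification_clause: bool     # contractual duty to disclose context/autonomy changes
    audit_rights: bool                   # on-site or remote audit permitted
```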

For open-source and cloud AI services, assess whether providers publish model cards, data sheets, or system cards that align with the OECD dimensions. If not, establish internal processes to derive missing metadata—examining documentation, contacting maintainers, and conducting independent testing. Maintain a central repository of third-party classifications, linking to risk assessments, compliance reviews, and exit strategies.

Data governance and technical enablement

Effective classification requires high-quality metadata. Expand data catalogues to capture dataset provenance, consent status, representativeness, and refresh cadence. Embed data quality checks into ingestion pipelines and log how datasets map to OECD categories (e.g., personal data, biometric identifiers, geospatial data). Implement monitoring dashboards that track drift, performance, fairness metrics, and incident reports; link alerts to the classification so stakeholders immediately understand context and potential impact.
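
In practice this means catalogue entries carry classification-relevant tags, and monitoring alerts are enriched with that context before they reach stakeholders. The snippet below is a minimal sketch; the catalogue fields and alert keys are assumptions.

```python
catalog_entry = {
    "dataset": "claims_2024_q1",                    # hypothetical dataset name
    "provenance": "internal_claims_system",
    "consent_status": "contractual_basis",
    "data_categories": ["personal", "financial"],   # maps to the data-and-inputs dimension
    "refresh_cadence_days": 90,
}

def enrich_alert(alert: dict, classification: AIClassification) -> dict:
    """Attach classification context so a drift or fairness alert is immediately
    interpretable by the people who receive it."""
    return {
        **alert,
        "context": classification.context,
        "decision_stakes": classification.decision_stakes,
        "responsible_executive": classification.responsible_executive,
    }
```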

Adopt documentation tooling—such as automated model lineage capture, Jupyter notebook checkpoints, and API change logs—to preserve evidence for regulators. Ensure reproducibility by versioning training code, configuration, and hyperparameters. For high-autonomy systems, implement safe fallback modes, explainability tooling (SHAP, LIME, counterfactual explanations), and red-teaming exercises targeting adversarial robustness and misuse scenarios.
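
Lineage capture can be as simple as hashing the training code and configuration alongside the repository commit. The sketch below assumes the project lives in a git repository and the configuration is JSON-serialisable.

```python
import hashlib
import json
import subprocess

def lineage_record(config: dict, training_script: str) -> dict:
    """Capture enough provenance to reproduce a training run (minimal sketch)."""
    with open(training_script, "rb") as f:
        code_hash = hashlib.sha256(f.read()).hexdigest()
    commit = subprocess.run(["git", "rev-parse", "HEAD"],
                            capture_output=True, text=True).stdout.strip()
    return {
        "git_commit": commit,
        "training_script_sha256": code_hash,
        "config_sha256": hashlib.sha256(
            json.dumps(config, sort_keys=True).encode()).hexdigest(),
        "hyperparameters": config,
    }
```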

Metrics, reporting, and assurance

Define key risk indicators (KRIs) tied to the classification dimensions. Examples include percentage of AI systems with assigned human oversight roles, share of high-risk systems completing annual third-party audits, time to remediate identified biases, and frequency of incident reports by sector. Establish a scorecard that management presents to the board, showing risk tier distribution, emerging trends (e.g., increased autonomy), and progress on mitigation actions.
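
Several of these KRIs fall out of the classified inventory directly. The computation below is an illustrative sketch; the audited-systems input and the metric names are assumptions.

```python
def kri_scorecard(inventory: list[AIClassification],
                  audited_system_names: set[str]) -> dict:
    """Board-level KRIs derived from the tagged inventory (illustrative)."""
    high_risk = [s for s in inventory if s.decision_stakes in {"high", "irreversible"}]
    return {
        "pct_systems_with_named_owner": 100 * sum(
            s.responsible_executive != "unassigned" for s in inventory) / max(len(inventory), 1),
        "pct_high_risk_audited": 100 * sum(
            s.system_name in audited_system_names for s in high_risk) / max(len(high_risk), 1),
        "high_autonomy_count": sum(
            s.autonomy is Autonomy.FULL_AUTONOMY for s in inventory),
    }
```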

Internal audit should schedule thematic reviews assessing compliance with classification procedures, data governance, and monitoring controls. Where critical systems exist, commission external assurance engagements aligned with SOC 2, ISO/IEC 27001, or forthcoming AI assurance standards. Document lessons learned from incidents and feed them back into classification updates.

Timeline and next steps

  • Immediate (0–30 days): Approve the governance charter updates, appoint classification owners, and begin inventory discovery workshops.
  • Near term (30–90 days): Complete baseline classifications, tier systems by risk, remediate gaps in documentation, and integrate taxonomy checkpoints into change management.
  • Medium term (90–180 days): Conduct scenario exercises simulating regulatory inquiries, refresh classifications after major model updates, and align with forthcoming policy instruments (EU AI Act trilogue outcomes, U.S. EO 14110 implementation guidance).

By institutionalising the OECD classification, organisations improve transparency, accelerate compliance with emerging AI laws, and build trust with customers, employees, regulators, and society.
