AI Governance Retrospective Briefing — December 20, 2021
Risk-based AI regulation accelerated between 2020 and 2021—from the EU’s white paper and AI Act proposal to U.S. executive mandates, the National AI Initiative, and NIST’s AI Risk Management Framework consultations—establishing today’s governance baselines.
Executive briefing: Regulators in Europe and the United States spent 2020–2021 constructing the policy scaffolding that now defines enterprise AI governance. The European Commission’s white paper and subsequent Artificial Intelligence Act proposal introduced risk-based obligations, while U.S. Executive Order 13960, the National AI Initiative Act, and NIST’s AI Risk Management Framework consultations set expectations for trustworthy AI across federal agencies and suppliers.[1][2][3][4][5][6] Organisations deploying AI should treat these milestones as the foundational playbook for compliance, assurance, and investment decisions.
Timeline of key developments
February 2020 — EU AI white paper. The European Commission proposed a risk-based regulatory approach with mandatory requirements for “high-risk” AI (safety, critical infrastructure, employment, and rights-sensitive applications) and voluntary labelling for lower-risk systems.[1] It paired regulatory proposals with investment pillars—data spaces, testing facilities, and skills programmes.
December 2020 — U.S. Executive Order 13960. The order directed federal agencies to catalogue AI use cases, appoint responsible officials, implement risk management practices aligned with NIST guidance, and ensure transparency for AI impacting rights or safety.[2] It emphasised values such as lawfulness, accuracy, and traceability.
January 2021 — National AI Initiative Act. Enacted as part of the FY2021 defence authorisation, the Act created the National AI Initiative Office, authorised interagency coordination bodies (the National AI Research Resource Task Force and the National AI Advisory Committee), and funded research, education, and standards activities across NIST, NSF, DOE, and other agencies.[3]
April 2021 — EU Artificial Intelligence Act proposal. The Commission translated the white paper into binding legislation: providers of high-risk AI must implement risk management systems, data governance, technical documentation, human oversight, and conformity assessments before CE-marking products.[4] The proposal also introduced transparency requirements for certain AI (chatbots, deepfakes) and prohibited specific practices (social scoring by authorities, manipulative AI affecting vulnerable groups).
July 2021 — NIST AI Risk Management Framework RFI. NIST solicited industry input on organising functions (map, measure, manage, govern), risk metrics, and socio-technical considerations to guide the forthcoming AI RMF and associated playbooks.[5]
December 2021 — OSTP/NSF National AI Research Resource RFI. The request explored governance, privacy safeguards, and shared infrastructure for a national AI research resource that would democratise access to computing power and high-quality datasets while embedding responsible AI principles.[6]
Cross-cutting governance expectations
- Risk classification. Both EU and U.S. initiatives emphasise categorising AI use cases by impact on safety, fundamental rights, and mission-critical processes, triggering proportionate controls.[1][4][5] A simplified classification sketch follows this list.
- Documentation and transparency. The AI Act proposal mandates detailed technical documentation, logging, and user information; Executive Order 13960 requires agencies to publish use-case inventories and explain AI decisions affecting individuals.[2][4]
- Human oversight. EU proposals require human oversight functions to intervene, review, and interpret AI outputs, while U.S. guidance emphasises human accountability for agency AI deployments.[2][4]
- Data governance and robustness. High-risk AI must employ high-quality, bias-controlled datasets, resilience testing, and continuous monitoring. NIST’s RMF consultations highlight socio-technical validation, robustness metrics, and documentation of limitations.[1][4][5]
- Institutional coordination. The National AI Initiative formalises coordination bodies and advisory committees to sustain standards work, education, and international collaboration.[3]
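To make the shared risk-classification expectation concrete, the sketch below encodes a proportionate-control tiering in Python. The attribute names and decision rules are a deliberate simplification for illustration, not the AI Act’s legal tests, which turn on the proposal’s annexes and prohibited-practice definitions.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Illustrative tiers mirroring the AI Act proposal's categories."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class UseCase:
    """Hypothetical attributes an internal triage questionnaire might capture."""
    name: str
    affects_safety: bool = False
    affects_fundamental_rights: bool = False
    interacts_with_humans: bool = False   # triggers transparency duties
    is_prohibited_practice: bool = False  # e.g. social scoring by authorities


def classify(use_case: UseCase) -> RiskTier:
    """Map use-case attributes to a proportionate-control tier."""
    if use_case.is_prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if use_case.affects_safety or use_case.affects_fundamental_rights:
        return RiskTier.HIGH
    if use_case.interacts_with_humans:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


if __name__ == "__main__":
    cv_screening = UseCase("cv_screening", affects_fundamental_rights=True)
    print(classify(cv_screening))  # RiskTier.HIGH
```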
Implementation roadmap for enterprises
- Map AI portfolios. Inventory AI systems, categorising them by EU AI Act risk tiers (unacceptable, high, limited, minimal) and aligning with internal criticality scales. Identify systems subject to U.S. federal procurement oversight. See the inventory sketch after this list for one way to record these attributes.
- Establish governance roles. Appoint accountable AI officers, cross-functional ethics committees, and liaison roles for regulatory engagement (EU notified bodies, U.S. agency customers).
- Design risk management lifecycle. Implement processes for dataset curation, model validation, bias testing, robustness evaluation, and post-deployment monitoring consistent with AI Act Annex IV documentation and NIST RMF functions.[4][5]
- Document transparency artefacts. Create user-facing disclosures, datasheets, model cards, and incident response plans. Maintain logs required for EU market surveillance and U.S. agency audits.[2][4]
- Align with national initiatives. Engage with National AI Initiative programmes, contribute to AI RMF workshops, and prepare to access the National AI Research Resource once operational.[3][5][6]
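A minimal sketch of one way to structure an inventory record, assuming hypothetical field names (system_id, risk_tier, and so on); a production inventory would sit in a system of record and extend these fields with procurement metadata, lifecycle status, and links to documentation artefacts.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class AISystemRecord:
    """One row in an AI portfolio inventory (illustrative fields only)."""
    system_id: str
    owner: str                     # accountable AI officer
    risk_tier: str                 # "unacceptable" | "high" | "limited" | "minimal"
    us_federal_customer: bool      # in scope for EO 13960 expectations?
    conformity_assessed: bool = False
    technical_docs_complete: bool = False
    last_oversight_review: Optional[date] = None


def needs_attention(record: AISystemRecord) -> bool:
    """Flag high-risk systems missing conformity assessment or documentation."""
    return record.risk_tier == "high" and not (
        record.conformity_assessed and record.technical_docs_complete
    )


if __name__ == "__main__":
    record = AISystemRecord("credit-score-v2", "jdoe", "high", us_federal_customer=False)
    print(needs_attention(record))  # True: no conformity assessment or docs yet
```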
Metrics and assurance
- Key risk indicators. Number of high-risk AI systems without completed conformity assessments or impact analyses; unresolved fairness issues; and incidents requiring regulatory notification. The sketch after this list shows how these indicators can be computed from an inventory.
- Key performance indicators. Completion rates of technical documentation, frequency of human oversight reviews, and adherence to model retraining cadences.
- Audit coverage. Align internal audit with AI governance controls—data governance, algorithm change management, access controls, and monitoring. Prepare for external assessments by EU notified bodies or agency inspectors.
- Stakeholder engagement. Track participation in EU public consultations, NIST RMF workshops, and National AI Initiative committees to influence evolving requirements.
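The key risk indicators above lend themselves to automated computation over the portfolio inventory. The sketch below uses plain dictionaries with hypothetical field names so it stands alone; in practice the rows would be pulled from the inventory system of record.

```python
from typing import Iterable, Mapping

# Hypothetical inventory rows; field names match the inventory sketch above.
INVENTORY = [
    {"system_id": "credit-score-v2", "risk_tier": "high",
     "conformity_assessed": False, "open_fairness_issues": 2,
     "reportable_incidents": 0},
    {"system_id": "chat-router", "risk_tier": "limited",
     "conformity_assessed": True, "open_fairness_issues": 0,
     "reportable_incidents": 1},
]


def key_risk_indicators(rows: Iterable[Mapping]) -> dict:
    """Compute the three KRIs named above from inventory rows."""
    rows = list(rows)
    return {
        "high_risk_without_conformity": sum(
            1 for r in rows
            if r["risk_tier"] == "high" and not r["conformity_assessed"]
        ),
        "unresolved_fairness_issues": sum(r["open_fairness_issues"] for r in rows),
        "incidents_requiring_notification": sum(r["reportable_incidents"] for r in rows),
    }


if __name__ == "__main__":
    print(key_risk_indicators(INVENTORY))
    # {'high_risk_without_conformity': 1, 'unresolved_fairness_issues': 2,
    #  'incidents_requiring_notification': 1}
```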
Sector-specific takeaways
- Healthcare. High-risk classification applies to medical device AI; ensure conformity with EU Medical Device Regulation and AI Act requirements, including post-market surveillance.
- Financial services. Align credit scoring and fraud detection AI with fairness, explainability, and governance expectations. Prepare for potential classification as high-risk under the AI Act when affecting creditworthiness or access to services.
- Public sector suppliers. Vendors delivering AI solutions to U.S. agencies must satisfy Executive Order 13960 principles, provide documentation aligned with NIST RMF, and support agency transparency obligations.[2][5]
- Industrial and infrastructure. High-risk designations extend to AI managing critical infrastructure, requiring resilience testing, incident reporting, and human oversight.
Programme risks and mitigations
- Regulatory divergence. Mitigation: create a harmonisation matrix mapping EU and U.S. requirements, identify equivalence opportunities, and reuse artefacts across jurisdictions.
- Documentation burden. Mitigation: automate model documentation via MLOps platforms, integrate compliance checkpoints into CI/CD pipelines (see the checkpoint sketch after this list), and maintain central repositories.
- Talent gaps. Mitigation: train data scientists and compliance teams on regulatory expectations, participate in National AI Initiative education programmes, and partner with external experts.
- Monitoring complexity. Mitigation: deploy monitoring tools that track model drift, bias, and performance; integrate alerts into risk registers; and rehearse incident response for AI failures.
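As one way to reduce the documentation burden, a CI/CD checkpoint can fail a release when required documentation fields are missing. The field list below is loosely inspired by the AI Act proposal’s Annex IV documentation headings, but it is an illustrative selection, not a legal checklist.

```python
import sys

# Fields this hypothetical pipeline requires before a model may be released.
REQUIRED_FIELDS = (
    "intended_purpose",
    "training_data_description",
    "accuracy_metrics",
    "human_oversight_measures",
    "known_limitations",
)


def check_model_card(card: dict) -> list[str]:
    """Return the required documentation fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not card.get(f)]


if __name__ == "__main__":
    # In CI this dict would be loaded from the model card file in the repository.
    card = {"intended_purpose": "credit scoring", "accuracy_metrics": "AUC 0.81"}
    missing = check_model_card(card)
    if missing:
        print(f"Blocking release; missing documentation: {missing}")
        sys.exit(1)
    print("Documentation checkpoint passed.")
```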
Forward look
The EU AI Act will progress through trilogue negotiations, potentially expanding high-risk categories and enforcement powers. NIST will publish the AI RMF (expected 2023) informed by the 2021 consultations, and the National AI Initiative will coordinate research resource implementation.[3][4][5][6] Organisations that embed 2020–2021 governance foundations—risk classification, documentation, oversight, and coordination—will be better positioned to comply with future updates, including sector-specific regulations and global interoperability frameworks.
Sources
- [1] European Commission, White Paper on Artificial Intelligence: A European Approach to Excellence and Trust (February 2020).
- [2] Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government (December 2020).
- [3] National Artificial Intelligence Initiative Act of 2020 (enacted January 2021).
- [4] European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) (April 2021).
- [5] NIST, Request for Information on an Artificial Intelligence Risk Management Framework (July 2021).
- [6] OSTP/NSF, Request for Information on an Implementation Plan for a National Artificial Intelligence Research Resource (2021).
Zeph Tech applies these AI governance milestones to design risk classification matrices, documentation toolkits, and assurance routines for responsible AI programmes.