
AI Governance Retrospective

Risk-based AI regulation accelerated between 2020 and 2021—from the EU’s white paper and AI Act proposal to U.S. executive mandates, the National AI Initiative, and NIST’s AI Risk Management Framework consultations—establishing today’s governance baselines.



Regulators in Europe and the United States spent 2020–2021 constructing the policy scaffolding that now defines enterprise AI governance. The European Commission’s white paper and subsequent Artificial Intelligence Act proposal introduced risk-based obligations, while U.S. Executive Order 13960, the National AI Initiative Act, and NIST’s AI Risk Management Framework consultations set expectations for trustworthy AI across federal agencies and suppliers [1–6]. Teams deploying AI should treat these milestones as the foundational playbook for compliance, assurance, and investment decisions.

Timeline of key developments

February 2020 — EU AI white paper. The European Commission proposed a risk-based regulatory approach with mandatory requirements for “high-risk” AI (safety, critical infrastructure, employment, and rights-sensitive applications) and voluntary labeling for lower-risk systems [1]. It paired regulatory proposals with investment pillars—data spaces, testing facilities, and skills programs.

December 2020 — U.S. Executive Order 13960. The order directed federal agencies to catalog AI use cases, appoint responsible officials, implement risk management practices aligned with NIST guidance, and ensure transparency for AI impacting rights or safety [2]. It emphasized values such as lawfulness, accuracy, and traceability.

January 2021 — National AI Initiative Act. The Act created the National AI Initiative Office, authorized interagency coordination bodies (AI Research Resource Task Force, National AI Advisory Committee), and funded research, education, and standards activities across NIST, NSF, DOE, and other agencies [3].

April 2021 — EU Artificial Intelligence Act proposal. The Commission translated the white paper into binding legislation: providers of high-risk AI must implement risk management systems, data governance, technical documentation, human oversight, and conformity assessments before CE-marking products [4]. The proposal also introduced transparency requirements for certain AI (chatbots, deepfakes) and prohibited specific practices (social scoring by authorities, manipulative AI affecting vulnerable groups).

July 2021 — NIST AI Risk Management Framework RFI. NIST solicited industry input on organizing functions (map, measure, manage, govern), risk metrics, and socio-technical considerations to guide the forthcoming AI RMF and associated playbooks [5].

December 2021 — OSTP/NSF National AI Research Resource RFI. The request explored governance, privacy safeguards, and shared infrastructure for a national AI research resource that would democratize access to computing power and high-quality datasets while embedding responsible AI principles [6].

Cross-cutting governance expectations

  • Risk classification. Both EU and U.S. initiatives emphasize categorizing AI use cases by impact on safety, fundamental rights, and mission-critical processes, triggering proportionate controls [1][4][5].
  • Documentation and transparency. The AI Act proposal mandates detailed technical documentation, logging, and user information; Executive Order 13960 requires agencies to publish use-case inventories and explain AI decisions affecting individuals [2][4].
  • Human oversight. EU proposals require human oversight functions to intervene, review, and interpret AI outputs, while U.S. guidance emphasizes human accountability for agency AI deployments [2][4].
  • Data governance and robustness. High-risk AI must employ high-quality, bias-controlled datasets, resilience testing, and continuous monitoring. NIST’s RMF consultations highlight socio-technical validation, robustness metrics, and documentation of limitations [1][4][5].
  • Institutional coordination. The National AI Initiative formalizes coordination bodies and advisory committees to sustain standards work, education, and international collaboration [3].
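
The risk-classification expectation above can be sketched as a tier-to-controls lookup. This is a minimal illustration, not the AI Act’s legal text: the tier names follow the proposal’s four categories, but the control lists are assumed examples of what an enterprise baseline might assign.

```python
from enum import Enum


class RiskTier(Enum):
    """Risk tiers from the EU AI Act proposal."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Assumed baseline controls per tier -- illustrative, not a legal mapping.
BASELINE_CONTROLS = {
    RiskTier.UNACCEPTABLE: ["do not deploy (prohibited practice)"],
    RiskTier.HIGH: [
        "risk management system",
        "data governance",
        "technical documentation",
        "human oversight",
        "conformity assessment",
    ],
    RiskTier.LIMITED: ["transparency disclosure to users"],
    RiskTier.MINIMAL: [],
}


def controls_for(tier: RiskTier) -> list[str]:
    """Return the proportionate baseline controls for a risk tier."""
    return BASELINE_CONTROLS[tier]
```

A lookup like this keeps "proportionate controls" auditable: every system in the inventory can be checked against the control set implied by its tier.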

Implementation roadmap for enterprises

  1. Map AI portfolios. Inventory AI systems, categorizing them by EU AI Act risk tiers (unacceptable, high, limited, minimal) and aligning with internal criticality scales. Identify systems subject to U.S. federal procurement oversight.
  2. Establish governance roles. Appoint accountable AI officers, cross-functional ethics committees, and liaison roles for regulatory engagement (EU notified bodies, U.S. agency customers).
  3. Design a risk management lifecycle. Implement processes for dataset curation, model validation, bias testing, robustness evaluation, and post-deployment monitoring consistent with AI Act Annex IV documentation and NIST RMF functions [4][5].
  4. Document transparency artifacts. Create user-facing disclosures, datasheets, model cards, and incident response plans. Maintain logs required for EU market surveillance and U.S. agency audits [2][4].
  5. Align with national initiatives. Engage with National AI Initiative programs, contribute to AI RMF workshops, and prepare to access the National AI Research Resource once operational [3][5][6].
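
Steps 1 and 4 above combine naturally into a portfolio gap check: inventory each system, record which documentation artifacts exist, and flag high-risk systems with missing items. The `AISystem` shape and the artifact names below are hypothetical stand-ins for whatever an organization’s Annex IV-aligned checklist actually contains.

```python
from dataclasses import dataclass, field

# Assumed artifact checklist for high-risk systems -- adapt to your own checklist.
REQUIRED_HIGH_RISK_ARTIFACTS = {
    "technical documentation",
    "conformity assessment",
    "human oversight plan",
    "post-market monitoring plan",
}


@dataclass
class AISystem:
    name: str
    risk_tier: str                      # "unacceptable" | "high" | "limited" | "minimal"
    artifacts: set = field(default_factory=set)


def documentation_gaps(systems: list[AISystem]) -> dict[str, list[str]]:
    """Map each high-risk system to its missing artifacts (systems with no gaps are omitted)."""
    return {
        s.name: sorted(REQUIRED_HIGH_RISK_ARTIFACTS - s.artifacts)
        for s in systems
        if s.risk_tier == "high" and REQUIRED_HIGH_RISK_ARTIFACTS - s.artifacts
    }


portfolio = [
    AISystem("credit-scoring", "high", {"technical documentation"}),
    AISystem("support-chatbot", "limited", {"transparency disclosure"}),
]
gaps = documentation_gaps(portfolio)
```

Running the gap report on every release candidate turns the roadmap’s inventory step into a repeatable control rather than a one-off spreadsheet exercise.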

Metrics and assurance

  • Key risk indicators. Number of high-risk AI systems without completed conformity assessments or impact analyses; unresolved fairness issues; and incidents requiring regulatory notification.
  • Key performance indicators. Completion rates of technical documentation, frequency of human oversight reviews, and adherence to model retraining cadences.
  • Audit coverage. Align internal audit with AI governance controls—data governance, algorithm change management, access controls, and monitoring. Prepare for external assessments by EU notified bodies or agency inspectors.
  • Stakeholder engagement. Track participation in EU public consultations, NIST RMF workshops, and National AI Initiative committees to influence evolving requirements.
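
The indicators above reduce to simple counts over the system inventory. A sketch, assuming a hypothetical record shape with `tier`, `conformity_assessed`, and `documented` fields:

```python
def conformity_kri(systems: list[dict]) -> int:
    """KRI: count of high-risk systems lacking a completed conformity assessment."""
    return sum(
        1 for s in systems
        if s["tier"] == "high" and not s["conformity_assessed"]
    )


def documentation_kpi(systems: list[dict]) -> float:
    """KPI: fraction of systems with completed technical documentation."""
    if not systems:
        return 1.0
    return sum(1 for s in systems if s["documented"]) / len(systems)


fleet = [
    {"tier": "high", "conformity_assessed": False, "documented": True},
    {"tier": "high", "conformity_assessed": True, "documented": True},
    {"tier": "limited", "conformity_assessed": False, "documented": False},
]
```

Feeding these numbers into the risk register on a fixed cadence gives internal audit a trend line rather than a point-in-time snapshot.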

Sector-specific takeaways

  • Healthcare. High-risk classification applies to medical device AI; ensure conformity with EU Medical Device Regulation and AI Act requirements, including post-market surveillance.
  • Financial services. Align credit scoring and fraud detection AI with fairness, explainability, and governance expectations. Prepare for potential classification as high-risk under the AI Act when affecting creditworthiness or access to services.
  • Public sector suppliers. Vendors delivering AI solutions to U.S. agencies must satisfy Executive Order 13960 principles, provide documentation aligned with NIST RMF, and support agency transparency obligations [2][5].
  • Industrial and infrastructure. High-risk designations extend to AI managing critical infrastructure, requiring resilience testing, incident reporting, and human oversight.

Program risks and mitigations

  • Regulatory divergence. Mitigation: create a harmonization matrix mapping EU and U.S. requirements, identify equivalence opportunities, and reuse artifacts across jurisdictions.
  • Documentation burden. Mitigation: automate model documentation via MLOps platforms, integrate compliance checkpoints into CI/CD pipelines, and maintain central repositories.
  • Talent gaps. Mitigation: train data scientists and compliance teams on regulatory expectations, participate in National AI Initiative education programs, and partner with external experts.
  • Monitoring complexity. Mitigation: deploy monitoring tools that track model drift, bias, and performance; integrate alerts into risk registers; and rehearse incident response for AI failures.
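
For the monitoring-complexity mitigation, one widely used drift signal is the population stability index (PSI) computed over binned feature or score distributions. A common rule of thumb treats PSI above roughly 0.2 as significant drift, though the threshold is a tunable assumption, not a regulatory requirement:

```python
import math


def population_stability_index(expected: list[float], actual: list[float],
                               eps: float = 1e-6) -> float:
    """PSI between two binned distributions (bin fractions summing to ~1).

    eps floors empty bins so the logarithm stays defined.
    """
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, eps)
        a = max(a, eps)
        psi += (a - e) * math.log(a / e)
    return psi


def drift_alert(expected: list[float], actual: list[float],
                threshold: float = 0.2) -> bool:
    """Raise a drift alert when PSI exceeds the (assumed) 0.2 threshold."""
    return population_stability_index(expected, actual) > threshold
```

Wiring `drift_alert` into the risk register closes the loop between statistical monitoring and the incident-response rehearsals the mitigation calls for.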

Looking ahead

The EU AI Act will progress through trilogue negotiations, potentially expanding high-risk categories and enforcement powers. NIST will publish the AI RMF (expected 2023) informed by the 2021 consultations, and the National AI Initiative will coordinate research resource setup [3][4][5][6]. Teams that embed 2020–2021 governance foundations—risk classification, documentation, oversight, and coordination—will be better positioned to comply with future updates, including sector-specific regulations and global interoperability frameworks.

Apply these AI governance milestones to design risk classification matrices, documentation toolkits, and assurance routines for responsible AI programs.


Documentation

  1. White Paper on Artificial Intelligence - A European approach to excellence and trust — European Commission
  2. Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government — Federal Register
  3. National Artificial Intelligence Initiative Act of 2020 (Division E, Public Law 116-283) — U.S. Congress
  4. Proposal for a Regulation laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) — European Commission
  5. Artificial Intelligence Risk Management Framework — National Institute of Standards and Technology
  6. Request for Information on an Implementation Plan for a National Artificial Intelligence Research Resource — Office of Science and Technology Policy
