
AI · Credibility 92/100 · 4 min read

AI Retrospective Briefing — October 4, 2022

U.S. and EU policymakers stood up AI governance offices, risk frameworks, and liability tools between 2020 and 2022, creating the compliance bedrock that Zeph Tech operators now map against newer mandates.

Why it matters: From late 2020 through 2022, the United States and European Union put the statutory and regulatory scaffolding for trustworthy AI into place. Agencies created permanent governance offices, codified risk management functions, and paired the EU AI Act with liability reforms that now anchor Zeph Tech client playbooks.

December 2020 — Executive Order 13960 on trustworthy AI. The White House ordered civilian agencies to catalogue AI systems, embed risk management, and favour procurements that follow trustworthy AI principles. The Federal Register notice established governance structures and standards coordination requirements still mirrored in agency AI inventories.

  • Chief data officers were tasked with annual AI use-case reporting and impact assessments before production deployment.
  • Agencies had to align with NIST risk frameworks, publish waivers, and document safeguards for algorithmic bias, transparency, and oversight.

January 2021 — National AI Initiative Act of 2020 (Public Law 116-283). Congress created a whole-of-government AI coordination office, multi-agency research institutes, and advisory committees to guide trustworthy deployment. The statute permanently authorised the National AI Initiative Office inside the White House Office of Science and Technology Policy.

  • The law directs OSTP to coordinate AI research, standards engagement, and workforce development across NIST, NSF, DOE, and other agencies.
  • It funds National AI Research Institutes and requires strategic roadmaps that balance innovation with civil-rights safeguards.

April 2021 — EU Artificial Intelligence Act proposal. The European Commission’s COM(2021) 206 draft formalised prohibitions on unacceptable AI, tiered compliance obligations for high-risk systems, and post-market monitoring. The proposal text triggered conformity assessment planning, notified-body accreditation, and harmonised standards work across the bloc.

  • The draft requires high-risk providers to implement quality management systems, risk management, and automatic event logging; fundamental rights impact assessments for deployers entered the text only through later negotiating positions, not the original proposal.
  • General-purpose AI suppliers were expected to face transparency disclosures and documentation duties as the Council and Parliament shaped amendments ahead of the trilogue process.

December 2021 — NIST AI Risk Management Framework concept paper. NIST’s concept paper introduced the Map-Measure-Manage functions and sought comment ahead of the second AI RMF workshop. The document outlined governance and documentation guardrails for organisations scaling AI programmes.

  • NIST asked operators to validate risk characteristics, terminology, and use-case categories to ensure the final framework reflects sector realities.
  • The paper previewed companion playbook guidance and requested feedback by January 2022 to shape the initial draft released later that spring.

September 2022 — EU Artificial Intelligence Liability Directive proposal. The Commission proposed harmonised liability rules that complement the AI Act by lowering evidentiary barriers for harmed parties. The COM(2022) 496 draft introduced disclosure duties for high-risk AI evidence and rebuttable presumptions of causality when providers breach risk-management obligations.

  • National courts can compel providers to preserve logs and technical documentation for high-risk AI systems.
  • The directive eases the burden of proof for victims when non-compliance with AI Act obligations contributed to the damage.

October 2022 — Blueprint for an AI Bill of Rights. The U.S. Office of Science and Technology Policy articulated five rights-based safeguards: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives. The non-binding blueprint calls on sector regulators to translate those principles into enforceable rules and procurement guidance.

  • OSTP urged agencies to expand testing, evaluation, and independent validation before deployment, mirroring risk tiers emerging from the EU AI Act.
  • Regulators such as HHS, CFPB, and EEOC were encouraged to align enforcement priorities with the blueprint, setting expectations for human review and redress channels.

Action for operators: Use these statutes, proposals, and frameworks as the baseline crosswalk for AI inventory controls, risk tiering, documentation retention, and human oversight when aligning Zeph Tech playbooks with 2023–2024 rulemaking.
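One way to operationalise that crosswalk is a simple lookup from each instrument to the control domains it anchors, queried during risk tiering. The sketch below is purely illustrative: the instrument names summarise the items above, but the domain labels and the `instruments_for` helper are hypothetical, not Zeph Tech's actual schema.

```python
# Hypothetical crosswalk: map each 2020-2022 instrument covered in this
# briefing to illustrative control domains, then query the mapping when
# documenting coverage for a given control. Labels are assumptions.

CROSSWALK = {
    "EO 13960 (Dec 2020)": {"ai_inventory", "risk_management", "human_oversight"},
    "National AI Initiative Act (Jan 2021)": {"governance", "standards_engagement"},
    "EU AI Act proposal (Apr 2021)": {"risk_tiering", "documentation_retention", "incident_logging"},
    "NIST AI RMF concept paper (Dec 2021)": {"risk_management", "documentation_retention"},
    "EU AI Liability Directive proposal (Sep 2022)": {"documentation_retention", "incident_logging"},
    "Blueprint for an AI Bill of Rights (Oct 2022)": {"human_oversight", "notice_and_explanation"},
}

def instruments_for(control: str) -> list[str]:
    """Return, sorted, the instruments that anchor a given control domain."""
    return sorted(name for name, domains in CROSSWALK.items() if control in domains)
```

For example, querying `instruments_for("documentation_retention")` surfaces the EU AI Act proposal, the NIST concept paper, and the liability directive as the instruments an operator would cite when justifying log-retention controls.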

  • Artificial Intelligence
  • Regulation
  • Risk Management