
AI Governance Retrospective 2020–2022 — Regulatory Baseline and Controls

Between 2020 and 2022, EU and U.S. authorities built the policy scaffolding for trustworthy AI, drafting risk-tiered laws, fairness guidance, and testing frameworks that now demand inventories, lifecycle controls, and outcome-focused assurance from AI producers and deployers.


Executive briefing: Between 2020 and 2022, governments on both sides of the Atlantic established the scaffolding for modern AI governance. The European Commission released its draft Artificial Intelligence Act, Canada advanced the Artificial Intelligence and Data Act (AIDA), and the United States launched the National AI Initiative Office, the National Institute of Standards and Technology (NIST) AI Risk Management Framework, and the White House Blueprint for an AI Bill of Rights. Sectoral regulators—from financial supervisors to healthcare authorities—published expectations for testing, transparency, and accountability. Organizations now face a layered compliance environment that blends voluntary frameworks with binding obligations poised to enter force in 2024–2025.

Regulatory milestones (2020–2022)

2020: The European Commission issued its White Paper on Artificial Intelligence, outlining a risk-based approach to trustworthy AI. The U.S. Congress passed the National AI Initiative Act of 2020 (enacted on 1 January 2021 within the FY2021 National Defense Authorization Act), creating the National AI Initiative Office and the National AI Advisory Committee to coordinate research, standards, and policy. NIST began foundational trustworthy-AI work that would feed its AI Risk Management Framework, while the OECD launched its AI Policy Observatory to support cross-border policy analysis.

2021: The European Commission proposed the AI Act, introducing obligations for providers, deployers, importers, and distributors of high-risk AI systems. The act requires conformity assessments, technical documentation, data governance, human oversight, robustness testing, and post-market monitoring. The U.S. Federal Trade Commission warned companies that biased algorithms and deceptive AI marketing claims could constitute unfair or deceptive acts or practices. The U.S. Equal Employment Opportunity Commission (EEOC) launched its Artificial Intelligence and Algorithmic Fairness Initiative, laying the groundwork for later guidance on algorithmic disability discrimination. Meanwhile, China's Cyberspace Administration published draft algorithmic recommendation rules (finalized in early 2022), signaling global momentum.

2022: The European Council and Parliament advanced AI Act negotiations, while the Commission proposed the AI Liability Directive to harmonize civil remedies. The EEOC and the Department of Justice issued technical assistance cautioning employers that algorithmic hiring tools can violate the Americans with Disabilities Act. The U.S. Office of Science and Technology Policy (OSTP) released the Blueprint for an AI Bill of Rights, highlighting rights to safe systems, algorithmic discrimination protections, data privacy, notice, and human alternatives. NIST published AI RMF drafts and a companion playbook for operationalizing trustworthy AI. Sectoral regulators, including the U.S. Consumer Financial Protection Bureau (CFPB) and the UK Information Commissioner's Office (ICO), issued supervisory expectations for automated decision-making, and the Securities and Exchange Commission (SEC) signaled growing scrutiny of AI-driven practices.

Core governance themes

The 2020–2022 policy wave solidified foundational concepts for AI governance:

  • Risk-tiering: Laws such as the EU AI Act categorize systems by risk (unacceptable, high, limited, minimal), with high-risk systems requiring risk management, technical documentation, logging, and human oversight. Organizations must map AI use cases across these tiers; a minimal classification sketch follows this list.
  • Transparency and documentation: Regulators expect data lineage records, model documentation (model cards, factsheets), impact assessments, and disclosure to affected individuals. The EU AI Act requires technical documentation for conformity assessments; the AI Bill of Rights emphasizes plain-language notices.
  • Testing and monitoring: Continuous evaluation for accuracy, robustness, privacy, and fairness is now table stakes. NIST's AI RMF organizes this work into four functions (Govern, Map, Measure, Manage) that align with ongoing testing cycles.
  • Human oversight: Policies mandate human-in-the-loop controls capable of intervening, overriding, or auditing automated decisions. Workforce training and escalation protocols are critical.
  • Accountability: Boards and senior management must oversee AI risk. The EU AI Act assigns obligations across the supply chain, while U.S. regulators tie AI compliance to existing consumer protection and anti-discrimination statutes.
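To make risk-tiering operational, many teams encode the taxonomy in their tooling so that every new use case lands in a tier by default. The Python sketch below is a minimal illustration under assumed internal use-case labels; the mapping is hypothetical, and real tier assignments require legal review rather than a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers modeled on the EU AI Act's four categories."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices, e.g. social scoring
    HIGH = "high"                   # e.g. credit scoring, hiring tools
    LIMITED = "limited"             # transparency duties, e.g. chatbots
    MINIMAL = "minimal"             # e.g. spam filters

# Hypothetical internal use-case labels mapped to tiers; real classification
# requires legal review, not a lookup table.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_underwriting": RiskTier.HIGH,
    "resume_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    # Default unknown use cases to HIGH so they are reviewed, never skipped.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(classify("resume_screening").value)  # -> high
```

Defaulting unknown use cases to the high-risk tier is a deliberate fail-safe choice: it forces review rather than letting unclassified systems slip through.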

Control design considerations

To align with the emerging regulatory landscape, organizations should institute a comprehensive AI governance framework:

  • AI inventory and classification: Maintain a centralized registry of AI and automated decision systems, documenting purpose, data sources, model architecture, deployment context, risk tier, and regulatory touchpoints. Include third-party tools and vendor models; a sample record schema appears after this list.
  • Policy architecture: Adopt or update AI ethics principles, acceptable use policies, and human oversight protocols. Ensure policies reference applicable laws—EU AI Act, General Data Protection Regulation (GDPR) provisions on automated decision-making, U.S. Fair Credit Reporting Act (FCRA), Equal Credit Opportunity Act (ECOA), and sector guidance.
  • Risk assessments: Conduct AI impact assessments before deployment, covering bias, privacy, security, explainability, and safety. Align with frameworks like Canada’s Algorithmic Impact Assessment, the EU’s fundamental rights impact assessments, and the UK ICO’s AI risk toolkit.
  • Model lifecycle controls: Integrate governance into model development: data quality checks, feature selection transparency, algorithm testing, validation, monitoring, and retirement criteria. Implement version control and reproducibility practices, ensuring audit trails.
  • Third-party governance: Require vendors to provide model documentation, testing results, and evidence of compliance with applicable regulations. Establish contractual clauses covering bias mitigation, security controls, and incident notification.
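A centralized inventory can begin as one typed record per system before graduating to a GRC platform. The sketch below (Python 3.10+) shows one plausible schema; every field name and value is illustrative, not a mandated format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One inventory row per AI or automated decision system."""
    system_id: str
    purpose: str
    owner: str
    risk_tier: str                                         # e.g. "high"
    data_sources: list[str] = field(default_factory=list)
    model_architecture: str = ""
    deployment_context: str = ""
    regulations: list[str] = field(default_factory=list)   # regulatory touchpoints
    vendor: str | None = None                              # third-party provenance
    last_reviewed: date | None = None

# Hypothetical entry for a credit model.
record = AISystemRecord(
    system_id="mdl-0042",
    purpose="consumer credit underwriting",
    owner="retail-lending",
    risk_tier="high",
    data_sources=["bureau_tradelines", "application_form"],
    model_architecture="gradient-boosted trees",
    deployment_context="pre-approval decisioning",
    regulations=["EU AI Act (proposed)", "ECOA", "FCRA"],
    last_reviewed=date(2022, 11, 1),
)
```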

Outcome testing and assurance

Compliance expectations emphasize demonstrable outcomes. Organizations should develop testing suites that examine:

  • Fairness and bias metrics: Monitor disparate impact ratios, equal opportunity differences, calibration curves, and subgroup performance. Align thresholds with regulatory guidance (for example, the four-fifths rule from the EEOC's Uniform Guidelines on Employee Selection Procedures) and document remediation steps; see the sketch after this list.
  • Robustness and security: Test resistance to adversarial examples, data poisoning, and model drift. Implement red-teaming exercises, as recommended by NIST and ENISA’s Threat Landscape for AI.
  • Explainability: Evaluate whether explanations meet stakeholder needs. Use SHAP, LIME, or counterfactual methods and assess comprehension via user studies, especially for high-risk decisions affecting employment, credit, or healthcare.
  • Privacy protections: Validate de-identification, differential privacy, and secure multiparty computation where applicable. Ensure data minimization aligns with GDPR Article 5 and sector privacy rules (HIPAA, GLBA).
  • Operational KPIs: Track incident volumes, model downtime, override rates, and user appeals, tying results to continuous improvement plans.
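To illustrate the four-fifths screen referenced above: compute each group's selection rate, divide by the reference (most-favored) group's rate, and flag ratios below 0.8. A minimal sketch, assuming hypothetical screening-model outcomes:

```python
def disparate_impact_ratios(selected: dict[str, int], totals: dict[str, int],
                            reference: str) -> dict[str, float]:
    """Selection-rate ratio of each group relative to the reference group."""
    rates = {g: selected[g] / totals[g] for g in totals}
    return {g: rate / rates[reference] for g, rate in rates.items()}

# Hypothetical outcomes from a resume-screening model.
selected = {"group_a": 90, "group_b": 48}    # positive decisions per group
totals   = {"group_a": 200, "group_b": 150}  # applicants per group

for group, ratio in disparate_impact_ratios(selected, totals, "group_a").items():
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths screening threshold
    print(f"{group}: ratio={ratio:.2f} [{flag}]")
# group_a: ratio=1.00 [ok]
# group_b: ratio=0.71 [REVIEW]
```

The 0.8 threshold is a screening heuristic, not a safe harbor; flagged ratios should trigger documented investigation and remediation, not automatic pass/fail decisions.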

Internal audit should develop AI-specific audit programs that evaluate governance structures, control design, testing evidence, and regulatory compliance. External assurance may become necessary for high-risk systems once the AI Act enters force; organizations should engage notified bodies early to understand conformity assessment expectations.

Cross-functional operating model

Effective AI governance requires collaboration across legal, compliance, technology, data science, risk, and ethics teams. Many organizations have established AI risk committees or councils that review high-risk proposals, approve deployment gates, and monitor incident reports. RACI matrices should clarify roles for model owners, validators, legal reviewers, and business sponsors. Training programs must cover technical topics (bias mitigation, secure coding) and policy obligations (GDPR Article 22, equal employment laws, consumer protection statutes).
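A RACI matrix is easiest to enforce when it is captured as data that deployment gates can query. The sketch below is illustrative only; the gates, roles, and assignments are assumptions rather than a prescribed standard.

```python
# Illustrative RACI assignments for model-lifecycle gates; gates, roles, and
# letters (Responsible/Accountable/Consulted/Informed) are assumptions.
RACI = {
    "impact_assessment": {
        "model_owner": "R", "ai_risk_committee": "A",
        "legal": "C", "business_sponsor": "I",
    },
    "pre_deployment_validation": {
        "independent_validator": "R", "ai_risk_committee": "A",
        "model_owner": "C", "business_sponsor": "I",
    },
    "production_monitoring": {
        "model_owner": "R", "ai_risk_committee": "A",
        "data_science": "C", "legal": "I",
    },
}

def accountable_for(gate: str) -> list[str]:
    """Roles tagged Accountable for a given gate (should be exactly one)."""
    return [role for role, code in RACI[gate].items() if code == "A"]

print(accountable_for("pre_deployment_validation"))  # ['ai_risk_committee']
```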

Communication strategies are equally important. Customer-facing teams need scripts explaining how automated decisions are made, what appeal rights exist, and what data is collected. Investor relations should be prepared to discuss AI governance in ESG disclosures; disclosure frameworks such as the Sustainability Accounting Standards Board (SASB) standards increasingly expect commentary on technology governance risks.

Regulatory horizon and next steps

Looking ahead, organizations should prepare for AI-specific supervisory examinations. The EU AI Act is expected to be finalized in 2024 with a two-year transition period; under the Commission's proposal, penalties could reach €30 million or 6 percent of global annual turnover, whichever is higher. Canada's AIDA may introduce administrative monetary penalties and ministerial orders. The U.S. Consumer Financial Protection Bureau and Federal Reserve have signaled interest in scrutinizing AI underwriting, while the Department of Labor monitors AI use in hiring. International standards bodies, including ISO/IEC JTC 1/SC 42, are developing management system standards (for example, ISO/IEC 42001) to codify AI governance, which will influence audit expectations.

Organizations should maintain a regulatory heat map, track consultations, and participate in industry associations such as the Data & Trust Alliance, Partnership on AI, and IEEE to stay ahead of best practices. Aligning with voluntary frameworks now will ease compliance when binding regulations arrive.

Zeph Tech’s Responsible AI Council is reconciling inventory, risk assessment, and monitoring controls against the EU AI Act, NIST AI RMF, and OSTP Blueprint requirements, embedding quarterly fairness testing and executive reporting into its digital governance program.

