
OMB M-21-06 AI Regulation Guidance — November 17, 2020

OMB finalized AI regulatory principles for U.S. agencies, requiring risk assessments, transparency, and stakeholder engagement still cited in AI governance work.

Fact-checked and reviewed — Kodi C.


OMB M-21-06 Overview

On 17 November 2020, the Office of Management and Budget (OMB) released Memorandum M-21-06 to translate the Trump Administration’s AI regulatory principles into binding guidance for federal agencies. Rooted in Executive Order 13859, the memorandum instructs regulators to encourage trustworthy artificial intelligence by aligning rulemaking with risk management, transparency, and evidence-driven standards. The guidance applies to both regulatory and non-regulatory actions and emphasizes that federal oversight should promote innovation while safeguarding the public.

Agencies are directed to interpret M-21-06 alongside existing statutes, the Administrative Procedure Act, and OMB Circular A-4. The memo stresses that AI-specific requirements must be proportional to the risk, enable flexibility for fast-changing technology, and minimize barriers to market entry. It highlights the dual responsibility to advance U.S. leadership in AI and to protect civil rights, privacy, safety, and economic fairness.

Regulatory principles

M-21-06 consolidates ten regulatory principles that agencies must evaluate before imposing new requirements on AI applications. Each principle is meant to be applied case by case, recognizing the diversity of AI use cases and the potential for unintended consequences when rules are overly prescriptive.

  • Public trust in AI. Regulators should design oversight to earn and maintain public trust, acknowledging that trust is strengthened when agencies provide clear rationales for their actions.
  • Public participation. Agencies should use public comment, listening sessions, and pilot programs to gather early input on proposed AI rules, especially from communities that may be disproportionately affected.
  • Scientific integrity and information quality. Rulemaking should rely on valid data, reproducible methods, and peer-reviewed evidence to avoid biases in regulatory assumptions.
  • Risk assessment and management. Agencies must document risks and benefits of AI applications, distinguishing between context-specific harms (such as safety or discrimination) and systemic risks that may arise from scale.
  • Benefits and costs. Consistent with Circular A-4, regulators should quantify anticipated benefits and costs, considering whether lighter-touch tools (for example, guidance, voluntary consensus standards) can achieve comparable outcomes.
  • Flexibility. Because AI techniques evolve rapidly, agencies should avoid static technical specifications and instead allow performance-based approaches that can adapt to new models, datasets, and deployment contexts.
  • Fairness and non-discrimination. The memo underscores obligations under existing civil rights laws and encourages agencies to assess disparate impact risks, data quality, and model governance practices.
  • Disclosure and transparency. Agencies should consider requiring disclosures that help affected parties understand when AI is used, how it influences decisions, and what recourse is available for erroneous outcomes.
  • Safety and security. Regulators should evaluate resilience to adversarial manipulation, robustness across operating conditions, and secure data handling.
  • Interagency coordination. OMB directs agencies to collaborate through the Chief Information Officers Council, the National AI Initiative Office, and other interagency bodies to avoid duplicative or conflicting requirements.

The memo positions these principles as an analytical framework rather than a rigid checklist. Agencies are expected to explain how each principle was considered when proposing or finalizing rules, particularly when regulations could constrain innovation or create compliance burdens for smaller entities.

Agency responsibilities

OMB M-21-06 outlines concrete steps for agencies that regulate AI-enabled products or use AI in mission delivery. Key responsibilities include the following:

  • Assess regulatory options. Before imposing new mandates, agencies should evaluate whether existing laws already address the identified risk and whether non-regulatory tools—such as voluntary technical standards, best-practice guidance, or sandbox programs—would suffice.
  • Use performance-based approaches. Agencies should articulate measurable outcomes (for example, accuracy, robustness, bias metrics) rather than prescribing specific algorithms or architectures, enabling market competition and innovation.
  • Document risk-benefit analyses. The guidance expects agencies to prepare written analyses showing how expected benefits justify regulatory costs, and to revisit those analyses as new evidence emerges. This documentation should be made available for public comment when feasible.
  • Protect privacy and civil liberties. Agencies should review how AI systems process personal data and stay compliant with the Privacy Act, Section 208 of the E-Government Act, and sector-specific privacy rules. Civil liberties reviews are recommended when AI supports law enforcement or national security missions.
  • Support standards development. M-21-06 encourages participation in voluntary consensus standards bodies (such as NIST-led efforts) to harmonize terminology, risk management practices, and testing protocols that can be incorporated into regulation by reference.
  • Coordinate with OIRA. Significant regulatory actions involving AI remain subject to review by the Office of Information and Regulatory Affairs. Agencies must be prepared to show how their proposals align with the memorandum’s principles during the review process.
  • Conduct periodic review. After regulations are issued, agencies should monitor whether AI performance, market conditions, or risk profiles have shifted. If so, they should consider modifying guidance or updating rules to avoid obsolescence.
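
The performance-based approach described above can be sketched as a simple outcome check: measure accuracy and a group selection-rate ratio against stated thresholds, rather than mandating a particular algorithm or architecture. The thresholds, metric names, and the four-fifths-style impact ratio below are illustrative assumptions, not figures from M-21-06.

```python
# Hypothetical sketch of a performance-based evaluation of the kind
# M-21-06 favors over prescriptive technical mandates. Thresholds and
# metric choices are illustrative assumptions, not from the memo.

def accuracy(y_true, y_pred):
    """Fraction of predictions matching the ground-truth labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def selection_rate(y_pred, groups, group):
    """Share of a group's cases receiving the favorable outcome (1)."""
    picks = [p for p, g in zip(y_pred, groups) if g == group]
    return sum(picks) / len(picks)

def evaluate(y_true, y_pred, groups, min_accuracy=0.90, min_impact_ratio=0.80):
    """Check outcomes against thresholds; return (passes, report)."""
    acc = accuracy(y_true, y_pred)
    rates = {g: selection_rate(y_pred, groups, g) for g in set(groups)}
    # Adverse-impact ratio: lowest group selection rate over the highest.
    impact_ratio = min(rates.values()) / max(rates.values())
    report = {"accuracy": acc, "impact_ratio": impact_ratio}
    return acc >= min_accuracy and impact_ratio >= min_impact_ratio, report
```

A regulator articulating requirements this way constrains measurable outcomes while leaving model choice to the regulated entity, which is the flexibility the memo calls for.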

The memorandum also addresses the federal government's own use of AI. While the primary focus is external regulation, agencies should apply similar risk-management logic to procurement, grants, and operational systems to ensure that government use reflects the same trust, transparency, and safety goals expected of the private sector.

Coverage intelligence

  • Coverage pillar: AI
  • Source credibility: 92/100 — high confidence
  • Topics: OMB · AI Regulation · Risk Management
  • Sources cited: 2 (whitehouse.gov, federalregister.gov)
  • Reading time: 5 min

Source material

  1. M-21-06: Guidance for Regulation of Artificial Intelligence Applications — Office of Management and Budget
  2. Executive Order 13859: Maintaining American Leadership in Artificial Intelligence — Federal Register
