OMB M-21-06 AI Regulation Guidance — November 17, 2020
OMB finalized AI regulatory principles for U.S. agencies, requiring risk assessments, transparency, and stakeholder engagement that Zeph Tech still cites in AI governance work.
On 17 November 2020, the Office of Management and Budget (OMB) released Memorandum M-21-06 to translate the Trump Administration’s AI regulatory principles into binding guidance for federal agencies. Rooted in Executive Order 13859, the memorandum instructs regulators to encourage trustworthy artificial intelligence by aligning rulemaking with risk management, transparency, and evidence-driven standards. The guidance applies to both regulatory and non-regulatory actions and emphasizes that federal oversight should promote innovation while safeguarding the public.
Agencies are directed to interpret M-21-06 alongside existing statutes, the Administrative Procedure Act, and OMB Circular A-4. The memo stresses that AI-specific requirements must be proportional to the risk, enable flexibility for rapidly evolving technology, and minimize barriers to market entry. It highlights the dual responsibility to advance U.S. leadership in AI and to protect civil rights, privacy, safety, and economic fairness.
Regulatory principles
M-21-06 consolidates ten regulatory principles that agencies must evaluate before imposing new requirements on AI applications. Each principle is intended to be applied in a case-by-case manner, recognizing the diversity of AI use cases and the potential for unintended consequences when rules are overly prescriptive.
- Public trust in AI. Regulators should design oversight to earn and maintain public trust, acknowledging that trust is strengthened when agencies provide clear rationales for their actions.
- Public participation. Agencies are encouraged to use public comment, listening sessions, and pilot programs to gather early input on proposed AI rules, especially from communities that may be disproportionately affected.
- Scientific integrity and information quality. Rulemaking should rely on valid data, reproducible methods, and peer-reviewed evidence to avoid biases in regulatory assumptions.
- Risk assessment and management. Agencies must document risks and benefits of AI applications, distinguishing between context-specific harms (such as safety or discrimination) and systemic risks that may arise from scale.
- Benefits and costs. Consistent with Circular A-4, regulators should quantify anticipated benefits and costs, considering whether lighter-touch tools (e.g., guidance, voluntary consensus standards) can achieve comparable outcomes; a worked example follows this list.
- Flexibility. Because AI techniques evolve rapidly, agencies should avoid static technical specifications and instead allow performance-based approaches that can adapt to new models, datasets, and deployment contexts.
- Fairness and non-discrimination. The memo underscores obligations under existing civil rights laws and encourages agencies to assess disparate impact risks, data quality, and model governance practices.
- Disclosure and transparency. Agencies should consider requiring disclosures that help affected parties understand when AI is used, how it influences decisions, and what recourse is available for erroneous outcomes.
- Safety and security. Regulators are advised to evaluate resilience to adversarial manipulation, robustness across operating conditions, and secure data handling.
- Interagency coordination. OMB directs agencies to collaborate through the Chief Information Officers Council and other interagency bodies to avoid duplicative or conflicting requirements.
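To illustrate the benefits-and-costs principle, the sketch below shows how an analyst might screen a proposed AI rule by discounting benefit and cost streams at Circular A-4's standard 3 and 7 percent annual rates. All figures are hypothetical assumptions for the example, not drawn from the memorandum.

```python
# Illustrative benefit-cost screen in the spirit of OMB Circular A-4.
# All dollar figures are hypothetical; Circular A-4 directs agencies to
# present net benefits at both 3% and 7% annual discount rates.

def npv(flows, rate):
    """Net present value of a list of annual cash flows, year 0 first."""
    return sum(flow / (1 + rate) ** year for year, flow in enumerate(flows))

# Hypothetical 5-year stream: benefits from avoided AI-related harms,
# costs from compliance and reporting (millions of dollars).
annual_benefits = [0.0, 12.0, 14.0, 15.0, 15.0]
annual_costs = [8.0, 4.0, 4.0, 4.0, 4.0]

for rate in (0.03, 0.07):  # Circular A-4's standard discount rates
    net = npv(annual_benefits, rate) - npv(annual_costs, rate)
    print(f"Discount rate {rate:.0%}: net benefits ${net:.1f}M")
```

Presenting the screen at both rates, as Circular A-4 requires, also signals whether a rule's case depends heavily on how far in the future its benefits arrive.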
The memo positions these principles as an analytical framework rather than a rigid checklist. Agencies are expected to explain how each principle was considered when proposing or finalizing rules, particularly when regulations could constrain innovation or create compliance burdens for smaller entities.
Agency responsibilities
OMB M-21-06 outlines concrete steps for agencies that regulate AI-enabled products or use AI in mission delivery. Key responsibilities include the following:
- Assess regulatory options. Before imposing new mandates, agencies should evaluate whether existing laws already address the identified risk and whether non-regulatory tools—such as voluntary technical standards, best-practice guidance, or sandbox programs—would suffice.
- Use performance-based approaches. Agencies are encouraged to articulate measurable outcomes (e.g., accuracy, robustness, bias metrics) rather than prescribing specific algorithms or architectures, enabling market competition and innovation; a minimal sketch follows this list.
- Document risk-benefit analyses. The guidance expects agencies to prepare written analyses showing how expected benefits justify regulatory costs, and to revisit those analyses as new evidence emerges. This documentation should be made available for public comment when feasible; a small record sketch appears at the end of this section.
- Protect privacy and civil liberties. Agencies should review how AI systems process personal data and ensure compliance with the Privacy Act, Section 208 of the E-Government Act, and sector-specific privacy rules. Civil liberties reviews are recommended when AI supports law enforcement or national security missions.
- Support standards development. M-21-06 encourages participation in voluntary consensus standards bodies (such as NIST-led efforts) to harmonize terminology, risk management practices, and testing protocols that can be incorporated into regulation by reference.
- Coordinate with OIRA. Significant regulatory actions involving AI remain subject to review by the Office of Information and Regulatory Affairs. Agencies must be prepared to show how their proposals align with the memorandum’s principles during the review process.
- Conduct periodic review. After regulations are issued, agencies should monitor whether AI performance, market conditions, or risk profiles have shifted. If so, they should consider modifying guidance or updating rules to avoid obsolescence.
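To make the performance-based idea concrete, here is a minimal sketch of checking stated outcome targets, accuracy and a demographic-parity gap, without prescribing any particular model. The thresholds and toy data are assumptions for illustration, not values from M-21-06.

```python
# Minimal sketch of a performance-based compliance check, assuming an
# agency articulated outcome thresholds rather than mandating a specific
# algorithm. Thresholds and data here are hypothetical.

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def selection_rate(preds, groups, group):
    hits = [p for p, g in zip(preds, groups) if g == group]
    return sum(hits) / len(hits)

def meets_targets(preds, labels, groups, min_accuracy=0.90, max_parity_gap=0.10):
    """True if accuracy and the demographic-parity gap clear the targets."""
    acc = accuracy(preds, labels)
    rates = {g: selection_rate(preds, groups, g) for g in set(groups)}
    parity_gap = max(rates.values()) - min(rates.values())
    return acc >= min_accuracy and parity_gap <= max_parity_gap

# Toy evaluation set: binary predictions, ground truth, and group labels.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 1, 0, 1, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(meets_targets(preds, labels, groups))  # False: both targets missed
```

Because the rule states outcomes rather than methods, a vendor could satisfy the same check with any architecture, which is precisely the flexibility the memorandum recommends.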
The memorandum also acknowledges AI used internally by the federal government. While the primary focus is external regulation, agencies are advised to apply similar risk-management logic to procurement, grants, and operational systems to ensure that government use reflects the same trust, transparency, and safety goals expected of the private sector.
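The documentation responsibility above can also be made concrete with a small, versioned record that is easy to revisit as evidence changes. The structure below is a hypothetical sketch; its field names and values are assumptions, not an OMB template.

```python
# Hypothetical, versioned risk-benefit record; not an OMB template.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class RiskBenefitAnalysis:
    rule_id: str
    version: int
    assessed_on: date
    identified_risks: tuple[str, ...]
    expected_benefits: tuple[str, ...]
    net_benefits_musd: float      # net benefits in millions of USD
    next_review: date             # trigger for revisiting the analysis

record = RiskBenefitAnalysis(
    rule_id="DOT-AI-2021-0001 (hypothetical)",
    version=1,
    assessed_on=date(2021, 3, 1),
    identified_risks=("sensor failure in low light", "disparate error rates"),
    expected_benefits=("fewer collisions", "faster inspections"),
    net_benefits_musd=18.4,
    next_review=date(2022, 3, 1),
)
print(f"{record.rule_id} v{record.version}: review by {record.next_review}")
```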
Implementation timeline
OMB framed M-21-06 as immediately applicable to actions initiated after publication, while encouraging agencies to integrate its analysis into ongoing rulemakings where practicable. A realistic implementation timeline for agencies typically unfolds in phases:
- Preparation (Weeks 1–8). Agencies inventory AI-related regulatory activities, assign policy leads, and compare existing rules against the memorandum’s principles. Legal, privacy, and civil rights offices identify where AI-specific guidance may be needed.
- Analytical integration (Months 3–6). Program offices and economists apply the principles to pre-rule and notice stages, refining problem statements, risk assessments, and benefit-cost estimates. Agencies engage with standards bodies to align draft requirements with emerging benchmarks.
- Public engagement (Months 6–9). Draft rules or guidance documents incorporate transparency, disclosure, and performance-based requirements. Agencies run listening sessions or issue requests for information to collect targeted feedback from industry, academia, civil society, and affected communities.
- Finalization and review (Months 9–12). Responses to comments document how each principle was addressed and whether alternative approaches were considered. Agencies coordinate with OIRA to ensure consistency with cross-agency policy and to minimize duplicative mandates.
- Retrospective review (Year 2 and beyond). Agencies measure real-world outcomes, track unintended consequences (such as barriers to entry or disparate impacts), and revise guidance to reflect technological advances and new empirical evidence.
This phased sequence aligns with Administrative Procedure Act milestones while preserving flexibility for agencies with different regulatory calendars. It also underscores that continuous learning is necessary because AI models and data ecosystems change rapidly.
Stakeholder engagement and transparency
M-21-06 repeatedly emphasizes that meaningful participation improves AI governance. Agencies are encouraged to use targeted outreach to small businesses, researchers, consumer advocates, and civil rights groups to surface context-specific risks. When technical details are sensitive or proprietary, the memo suggests relying on third-party testing, secure data enclaves, or red-team exercises to validate claims without compromising intellectual property or security.
Transparency is framed as both a regulatory tool and a safeguard. The memorandum notes that disclosures should be tailored to audience needs: consumers may need plain-language explanations of automated decisions, while regulators and auditors require technical documentation such as training data provenance, model limitations, and monitoring plans. Agencies are also advised to clarify avenues for contestability and human oversight, particularly in high-stakes uses like credit, healthcare, transportation, and public benefits.
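One way to operationalize that tiered transparency is a single disclosure record serving both audiences. The sketch below is a hypothetical schema, with all field names and values invented for illustration rather than taken from any federal requirement, pairing a plain-language summary with the technical fields the memorandum mentions.

```python
# Illustrative disclosure record for the tiered transparency M-21-06
# contemplates. Field names are assumptions, not a required schema.
from dataclasses import dataclass

@dataclass
class AISystemDisclosure:
    system_name: str
    plain_language_summary: str      # consumer-facing explanation
    training_data_provenance: str    # sources and collection dates
    known_limitations: list[str]     # documented failure modes
    monitoring_plan: str             # drift and performance checks
    contestability_contact: str      # where to appeal a decision
    human_oversight: str = "Staff review of all adverse decisions"

disclosure = AISystemDisclosure(
    system_name="Benefits eligibility screener (hypothetical)",
    plain_language_summary="Flags applications for manual review; "
                           "it does not deny benefits on its own.",
    training_data_provenance="Agency case records, 2015-2019",
    known_limitations=["Lower precision for incomplete applications"],
    monitoring_plan="Quarterly accuracy and disparity reporting",
    contestability_contact="appeals@agency.example.gov",
)
print(disclosure.plain_language_summary)
```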
Relationship to subsequent policy
Although M-21-06 was superseded by OMB Memorandum M-24-10 in March 2024, the earlier guidance remains an important reference point for understanding how federal regulators initially balanced innovation and risk. Many of its principles—especially performance-based regulation, risk-proportional safeguards, and coordination with standards bodies—continue to influence agency rulemaking under newer authorities such as Executive Order 14110 on Safe, Secure, and Trustworthy Artificial Intelligence.
For organizations working with federal partners, familiarity with M-21-06 provides historical context for current compliance expectations. It reveals how agencies justified early AI oversight decisions, which helps stakeholders anticipate continuity or shifts in policy as updated frameworks introduce stricter inventory, evaluation, and waiver requirements.
References
- OMB Memorandum M-21-06: Guidance for Regulation of Artificial Intelligence Applications — Office of Management and Budget; the primary source outlining the ten regulatory principles and implementation expectations for federal agencies.
- Federal Register Notice: Guidance for Regulation of Artificial Intelligence Applications — Public notice that announced the memorandum’s availability and effective date, reinforcing its applicability to regulatory and non-regulatory actions.