OMB M-24-10 Safety Controls Implementation
U.S. civilian agencies faced a March 28, 2025 deadline to implement OMB M-24-10's risk-management practices for AI that impacts safety or rights. Here's what you need to know about inventories, impact assessments, independent evaluations, human fallback controls, and waiver processes.
Fact-checked and reviewed — Kodi C.
The Office of Management and Budget's Memorandum M-24-10—issued on March 28, 2024 to implement President Biden's Executive Order 14110 and the AI in Government Act—sets a one-year clock for U.S. civilian agencies to operationalize a full risk-management framework for artificial intelligence that could affect the rights or safety of the public. Under the memo, Chief AI Officers (CAIOs) must build inventories of AI use cases, certify that systems are classified by risk, complete independent evaluations and impact assessments, and establish human fallback and incident-reporting procedures. Failure to implement the minimum practices outlined in Section 5(c) by December 1, 2024 means agencies must stop using those AI systems. Agencies must certify compliance by March 28, 2025.
Overview of M-24-10
The memorandum lays out OMB's strategy for advancing AI governance and innovation while managing risks to rights and safety. It requires each agency to designate a Chief AI Officer to lead implementation, coordinate with privacy and civil-rights officials, oversee inventories of AI use cases, and develop an enterprise strategy for responsible AI adoption. Agencies must share models, data, and code where possible, reduce barriers to responsible AI adoption, and align procurement practices with risk-management guidelines.
The memo integrates existing laws and frameworks such as the AI in Government Act, the Advancing American AI Act and Executive Order 14110, and emphasizes that CAIOs must collaborate across data, cybersecurity, civil rights and customer-experience functions. This integrated approach ensures that AI governance is not siloed but embedded across organizational functions that affect how AI systems are developed, deployed, and monitored.
Minimum practices for safety-impacting and rights-impacting AI
Section 5(c) establishes a baseline set of practices that agencies must apply to AI systems that have the potential to significantly affect human life, critical infrastructure, or public rights. Agencies must adopt these practices by December 1, 2024 or cease using the AI until compliance is achieved. The memo defines safety-impacting AI as any system whose output produces an action or serves as a principal basis for a decision that can significantly impact human life or well-being, the environment, critical infrastructure, or strategic assets.
Rights-impacting AI covers systems used to adjudicate legal rights or control speech, law-enforcement risk assessments, biometric identification or access to social benefits. The appendix provides examples ranging from controlling dam gates and autonomous vehicles to allocating medical care and conducting criminal-risk assessments. This broad scope ensures that AI systems with significant consequences for individuals or communities receive appropriate oversight.
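The tier-screening step this definition implies can be sketched as a simple rule-based classifier. A minimal sketch, assuming illustrative category tags loosely paraphrased from the memo's appendix examples; the function name and tag strings are our own, and real screening requires legal and CAIO review, not a lookup table.

```python
# Illustrative sketch: screening an AI use case against M-24-10's
# presumptive risk tiers. Category tags paraphrase appendix examples.

SAFETY_IMPACTING = {
    "dam_gate_control",
    "autonomous_vehicle_operation",
    "medical_care_allocation",
    "critical_infrastructure_control",
}

RIGHTS_IMPACTING = {
    "criminal_risk_assessment",
    "biometric_identification",
    "benefits_eligibility",
    "speech_adjudication",
}

def classify_use_case(purpose: str) -> str:
    """Return the presumptive M-24-10 risk tier for a use-case tag."""
    if purpose in SAFETY_IMPACTING:
        return "safety-impacting"
    if purpose in RIGHTS_IMPACTING:
        return "rights-impacting"
    # Deviations from the presumptive tiers must be justified in writing.
    return "neither (document the justification)"

print(classify_use_case("biometric_identification"))  # rights-impacting
```

A use case that matches neither set still needs a documented justification, mirroring the memo's requirement to record why a system falls outside the presumptive categories.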
The minimum practices require agencies to conduct and publish impact assessments that identify potential harms, benefits, bias and equity risks, and describe mitigation strategies. Agencies must perform independent evaluations of AI systems before deployment and after significant modifications to verify that they meet functional, safety and ethical requirements. Governance boards must review safety-impacting and rights-impacting AI and approve deployment and changes.
Human oversight and incident response
M-24-10 requires agencies to implement real-time monitoring and human fallback controls to ensure that humans can intervene or end operations if the system malfunctions or produces harmful outcomes. This human-in-the-loop requirement ensures that AI systems do not operate autonomously in high-stakes contexts without appropriate human oversight and the ability to override automated decisions.
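The human-fallback pattern can be illustrated as a wrapper that escalates low-confidence automated decisions to a human operator. This is a minimal sketch under our own assumptions (a confidence threshold as the escalation trigger, hypothetical `Decision` type and function names); the memo does not prescribe a specific mechanism.

```python
# Sketch of a human-in-the-loop fallback: the model's output is only
# acted on when a confidence check passes; otherwise the case routes
# to a human operator who can override the automated decision.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str
    confidence: float
    decided_by: str  # "model" or "human"

def with_human_fallback(
    model_decide: Callable[[dict], Decision],
    human_decide: Callable[[dict], Decision],
    min_confidence: float = 0.9,
) -> Callable[[dict], Decision]:
    """Wrap a model so low-confidence outputs escalate to a human."""
    def decide(case: dict) -> Decision:
        result = model_decide(case)
        if result.confidence < min_confidence:
            return human_decide(case)  # human override path
        return result
    return decide
```

The same wrapper shape accommodates a hard kill switch: replacing `human_decide` with a function that halts processing gives operators the "end operations" control the memo describes.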
Agencies must document incident-response procedures and report incidents to OMB, affected communities and oversight bodies within specified timelines. These reporting requirements ensure that problems with AI systems are identified quickly and that lessons learned can be shared across government to prevent similar issues elsewhere.
Documentation requirements include model cards, test plans, evaluation results, data provenance, and decision logs, with unclassified portions made available to the public. This transparency enables external stakeholders to understand how AI systems work and to raise concerns about potential risks or biases.
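One way to organize that documentation bundle is a single record with a releasability flag. A minimal sketch with illustrative field names of our own choosing, not the memo's schema:

```python
# Sketch of the documentation bundle the memo describes: model card,
# test plan, evaluation results, data provenance, and decision logs,
# with a flag governing which portions are publicly releasable.

from dataclasses import dataclass

@dataclass
class AIDocumentationBundle:
    system_name: str
    model_card: dict
    test_plan: dict
    evaluation_results: dict
    data_provenance: list
    decision_log_location: str
    classified: bool = False  # only unclassified portions are published

    def public_view(self) -> dict:
        """Return only the fields releasable to the public."""
        if self.classified:
            return {"system_name": self.system_name,
                    "note": "details withheld (classified)"}
        return {"system_name": self.system_name,
                "model_card": self.model_card,
                "evaluation_results": self.evaluation_results}
```

Keeping the public projection in one method makes it harder for classified details to leak into a transparency report by accident.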
Waivers, extensions and reporting
While the memo emphasizes that agencies must meet the minimum practices by December 1, 2024, it recognizes that certain systems cannot feasibly comply in time. CAIOs may request a one-year extension from OMB, accompanied by a detailed justification and mitigation plan. Extensions cannot be renewed, so agencies must use the additional time to address deficiencies and achieve full compliance.
The memo also offers two narrower escape valves. A CAIO may waive specific minimum practices for a particular AI use case after documenting in writing that compliance would increase risks to safety or rights overall, or would unacceptably impede critical agency operations. Separately, an agency may determine that a particular system does not fall within the definitions of safety-impacting or rights-impacting AI. Both waivers and determinations must be documented, centrally tracked, and reassessed if conditions change. This flexibility allows agencies to focus resources on the highest-risk systems while maintaining accountability for classification decisions.
Section 5(d) sets out quarterly attestation requirements: CAIOs must report compliance status, outstanding risks and corrective actions to OMB, which may share summaries with Congress. This ongoing reporting ensures continuous visibility into agency progress and creates accountability for sustained compliance.
Inventories and transparency
Section 5(a) requires agencies to expand their AI use-case inventories to identify which systems are safety-impacting and rights-impacting, the datasets and models used, and the associated risks. These inventories must be published, updated quarterly, and certified by the CAIO. Agencies must also assess whether any AI system is presumptively safety-impacting or rights-impacting, and document the justification for any deviation.
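An inventory entry carrying those fields, plus a staleness check supporting the quarterly cadence, might look like the following. The field names and the ~91-day quarter approximation are our own illustrative choices:

```python
# Sketch of an AI use-case inventory entry with the fields Section 5(a)
# calls for, plus a recertification check for the quarterly cadence.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class InventoryEntry:
    use_case: str
    risk_tier: str      # "safety-impacting" | "rights-impacting" | "neither"
    datasets: list
    models: list
    risks: list
    justification: str  # required when deviating from the presumptive tier
    last_certified: date

    def needs_recertification(self, today: date) -> bool:
        """Quarterly cadence: flag entries stale for more than ~91 days."""
        return today - self.last_certified > timedelta(days=91)
```

Running the staleness check across the whole inventory before each CAIO certification gives a simple worklist of entries that need review.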
An annual public transparency report must summarize the agency's AI portfolio, highlight rights and safety risks, and explain how minimum practices have been applied. This transparency requirement ensures public accountability and enables civil society organizations to monitor government AI use. The combination of internal inventories and public reporting creates multiple layers of accountability.
Integration with existing frameworks
M-24-10 encourages agencies to align their implementations with the NIST AI Risk Management Framework, the Blueprint for an AI Bill of Rights, ISO/IEC 23894:2023 on AI risk management, and ISO/IEC 42001 on AI management systems. This alignment lets agencies build on established practices and standards rather than developing entirely new approaches.
Each minimum practice is severable and does not supersede other legal obligations. Agencies should map the memorandum's practices to the NIST AI RMF functions—Govern, Map, Measure and Manage—and integrate them into lifecycle processes. For example, AI development teams should incorporate fairness and bias metrics into model evaluations, while operations teams should monitor real-world performance and trigger retraining or rollback when anomalies are detected.
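A crosswalk from minimum practices to the four RMF functions can be kept as a simple lookup that governance teams maintain alongside the inventory. The groupings below are our own illustrative reading, not an official OMB or NIST mapping:

```python
# Illustrative crosswalk from M-24-10 minimum practices to the NIST
# AI RMF functions (Govern, Map, Measure, Manage).

RMF_CROSSWALK = {
    "Govern": ["CAIO designation", "governance board review",
               "waiver and extension tracking"],
    "Map": ["use-case inventory", "risk-tier classification",
            "impact assessment"],
    "Measure": ["independent evaluation", "bias and fairness metrics",
                "ongoing performance monitoring"],
    "Manage": ["human fallback controls", "incident response",
               "retraining or rollback on anomalies"],
}

def rmf_function_for(practice: str) -> str:
    """Look up which RMF function a given practice maps to."""
    for function, practices in RMF_CROSSWALK.items():
        if practice in practices:
            return function
    raise KeyError(f"unmapped practice: {practice}")
```

Keeping the mapping in data rather than prose makes it easy to audit for gaps: any minimum practice missing from every list is unassigned.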
Procurement officers should embed minimum practice criteria into contracts so vendors deliver documentation, testing artifacts and mitigation plans. This procurement integration ensures that AI systems acquired from vendors meet the same standards as internally developed systems.
Implementation timeline and milestones
The memo's deadlines are aggressive: agencies had 60 days from March 28, 2024 to designate CAIOs and begin building inventories. By December 1, 2024, agencies must implement minimum practices for safety-impacting and rights-impacting AI, or cease operation of non-compliant systems. By March 28, 2025, CAIOs must certify, in consultation with inspectors general, that their agencies have applied the minimum practices, addressed outstanding risks and established governance structures.
Quarterly thereafter, agencies must attest to OMB on the status of compliance, including any extensions or waivers granted. The memo also directs OMB to work with the National AI Advisory Committee to review implementation and recommend updates. This ongoing review process ensures that requirements evolve as AI technology and governance best practices advance.
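The milestone dates above can be encoded once and used to derive the quarterly attestation calendar. A minimal sketch, assuming a ~91-day quarter; OMB, not this approximation, sets the actual reporting cadence:

```python
# Sketch of a quarterly attestation schedule seeded from the memo's
# published milestone dates.

from datetime import date, timedelta

MEMO_ISSUED = date(2024, 3, 28)
MIN_PRACTICES_DEADLINE = date(2024, 12, 1)
CERTIFICATION_DEADLINE = date(2025, 3, 28)

def attestation_dates(start: date, count: int) -> list:
    """Generate `count` quarterly attestation dates after `start`."""
    return [start + timedelta(days=91 * i) for i in range(1, count + 1)]
```

Seeding the schedule from the certification deadline yields the first four post-certification attestation dates for planning purposes.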
Implications for agencies and contractors
For agencies, M-24-10 represents a shift from ad hoc AI governance to a structured, enforceable regime. Agencies must devote resources to inventorying AI systems, conducting risk and impact assessments, building governance boards and monitoring capabilities, and ensuring that contractors and third-party vendors adhere to the same standards. Failing to meet the deadlines could force critical systems offline, delaying services or endangering mission objectives.
The memo's public transparency provisions will also invite scrutiny from civil-society groups, regulators and Congress, increasing reputational and legal risks for agencies that lag. For technology vendors, M-24-10 signals that contracts with federal agencies will soon require detailed AI assurance artifacts, privacy and bias mitigations, and rapid incident reporting. Early engagement with agency CAIOs and integration of NIST AI RMF principles into product roadmaps will be essential.
Analysis and recommendations
We view M-24-10 as a watershed moment for public-sector AI governance. The memorandum establishes clear accountability through CAIO designation and board-level oversight, comprehensive documentation through inventories and impact assessments, and ongoing monitoring through quarterly attestations and incident reporting. These elements create a governance framework that other jurisdictions may emulate.
We recommend that agencies and contractors establish dedicated AI governance teams led by the CAIO and involving legal, privacy, civil-rights, cybersecurity, and mission owners. Teams should develop charters that define decision rights and escalation paths. Building and maintaining comprehensive AI use-case inventories, classification schemes, crosswalks to data systems, and human oversight plans is essential.
If you are affected, develop standard operating procedures for impact assessments, independent evaluations, testing and validation, using frameworks like NIST AI RMF and ISO/IEC 42001. Implementing strong monitoring tools that track model performance, drift, bias and compliance with fairness, transparency and interpretability requirements will support ongoing compliance.
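The drift-monitoring piece of that tooling can be as simple as comparing a live outcome rate against a baseline and alerting when the gap exceeds a tolerance. A minimal sketch; the metric (positive-decision rate) and the 10% tolerance are illustrative assumptions, and production monitoring would track multiple metrics per protected group:

```python
# Minimal drift monitor: compare live positive-decision rates against
# a baseline window and flag drift that should trigger review,
# retraining, or rollback.

def positive_rate(outcomes: list) -> float:
    """Fraction of positive (truthy) outcomes in a window."""
    return sum(1 for o in outcomes if o) / len(outcomes)

def drift_alert(baseline: list, live: list, tolerance: float = 0.10) -> bool:
    """True when the live rate drifts beyond tolerance from baseline."""
    return abs(positive_rate(live) - positive_rate(baseline)) > tolerance
```

Wiring the alert to the governance board's escalation path closes the loop between monitoring and the human fallback controls the memo requires.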
Designing incident-response plans that meet OMB's reporting timelines and coordinate with communications and public affairs offices will help manage stakeholder expectations during incidents. Integrating minimum practice criteria into acquisition templates and MLOps pipelines ensures future AI projects inherit compliant controls automatically.
By acting now, agencies can turn the M-24-10 mandate from a compliance headache into a catalyst for building accountable, transparent and trustworthy AI programs that benefit the public.
Source material
- OMB Memorandum M-24-10 — Executive Office of the President
- ISO/IEC 42001:2023 — Artificial Intelligence Management System — International Organization for Standardization
- NIST AI Risk Management Framework — National Institute of Standards and Technology