
OMB Issues Draft AI Risk Management Guidance for Federal Agencies — August 29, 2023

OMB’s 29 August 2023 draft memorandum on safe and secure AI directs U.S. agencies to appoint Chief AI Officers, charter governance boards, conduct impact assessments, and commission independent evaluations—setting expectations contractors must meet while aligning DSAR, privacy, and algorithmic accountability obligations.

On 29 August 2023, the White House Office of Management and Budget (OMB) released draft guidance titled Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. The proposal—which OMB committed to finalising after a public comment period—establishes minimum requirements for U.S. federal agencies deploying AI systems. It mandates the appointment of Chief AI Officers (CAIOs), formation of AI Governance Boards, inventories of AI use cases, impact assessments, real-world testing, independent evaluation, and reporting on compliance with the Blueprint for an AI Bill of Rights and NIST AI Risk Management Framework. Agencies must implement safeguards for “safety-impacting” and “rights-impacting” AI applications—systems that can affect critical infrastructure, health, safety, or constitutional rights. Although directed at federal entities, the guidance shapes expectations for contractors and vendors supplying AI services, and signals how privacy and DSAR obligations should interface with algorithmic accountability.

The draft divides requirements into governance structures, risk management practices, and external reporting. Agencies must designate a CAIO with authority to coordinate AI governance, risk assessment, acquisition, and compliance. They must also establish an AI Governance Board chaired by the CAIO and including the Chief Information Officer, Chief Data Officer, Chief Information Security Officer, Senior Agency Official for Privacy, and other mission leaders. The Board oversees adherence to AI policy, reviews high-risk use cases, approves impact assessments, and monitors mitigation plans. Agencies must maintain AI use case inventories, publish annual updates to Performance.gov, and release open-source code and data whenever feasible.

Governance implications for federal leaders and contractors

Agency heads are accountable for implementing the memo and should ensure charters and delegations of authority explicitly empower the CAIO and Governance Board. Inspectors General and agency risk officers should integrate AI compliance into audit plans, focusing on whether agencies identify safety- and rights-impacting uses, enforce testing requirements, and provide transparency to affected individuals. Privacy Officers must coordinate with CAIOs to align DSAR workflows—federal agencies receive Privacy Act and Freedom of Information Act (FOIA) requests that function similarly to DSARs—and ensure algorithmic outputs can be explained and corrected.

Contractors supplying AI tools or services should expect solicitations to reference OMB’s requirements. Requests for proposals will likely demand documentation of training data provenance, model evaluation results, bias testing, cybersecurity controls, and privacy protections. Vendors should prepare compliance packages demonstrating compatibility with NIST AI RMF functions (Govern, Map, Measure, Manage), as well as how their systems support federal DSAR obligations (e.g., enabling record access and correction). Suppliers may also be asked to assist with agency reporting, providing metrics on system performance, incident response, and risk mitigation.
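To make the RMF mapping concrete, the sketch below shows one way a vendor might index a compliance package by the four NIST AI RMF functions. The dataclass, artefact names, and completeness check are illustrative assumptions, not structures defined by OMB or NIST.

```python
# Hypothetical sketch: a vendor compliance package indexed by the four
# NIST AI RMF functions. Artefact names and fields are illustrative.
from dataclasses import dataclass, field

RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class CompliancePackage:
    system_name: str
    # Evidence artefacts grouped under each RMF function.
    evidence: dict[str, list[str]] = field(default_factory=dict)

    def missing_functions(self) -> list[str]:
        """Return RMF functions with no supporting artefacts yet."""
        return [f for f in RMF_FUNCTIONS if not self.evidence.get(f)]

package = CompliancePackage(
    system_name="claims-triage-model",
    evidence={
        "Govern": ["ai-policy.pdf", "roles-and-escalation.md"],
        "Map": ["training-data-provenance.csv", "intended-use.md"],
        "Measure": ["bias-test-report.pdf", "eval-metrics.json"],
        # "Manage" artefacts (incident playbooks, DSAR support docs) pending.
    },
)
print(package.missing_functions())  # ['Manage']
```

A structure like this also makes gap analyses against solicitations mechanical: anything returned by missing_functions() is evidence still owed.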

Implementation roadmap for agencies

Phase 1: Organise governance. Agencies must appoint a CAIO within 60 days of final guidance, establish the AI Governance Board, and define decision-making protocols. They should update internal directives to require CAIO review of AI acquisitions and align with existing Chief Data Officer and Chief Privacy Officer responsibilities. Policies should specify triggers for Board review, such as new AI use cases in critical services or changes to models already deployed.
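One way to make Board-review triggers auditable is to express them as code. The sketch below is a hypothetical rule check covering the triggers this phase names (new use cases, critical services, changes to deployed models); the field names are assumptions, not an OMB-defined vocabulary.

```python
# Illustrative rule check for "triggers for Board review". The trigger
# conditions are assumptions drawn from the phase description above.
def requires_board_review(use_case: dict) -> bool:
    """Flag AI use cases that should go to the Governance Board."""
    return (
        use_case.get("is_new", False)                      # new AI use case
        or use_case.get("serves_critical_service", False)  # critical services
        or use_case.get("model_changed", False)            # change to deployed model
    )

print(requires_board_review({"is_new": True}))          # True
print(requires_board_review({"model_changed": False}))  # False
```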

Phase 2: Inventory and classification. Agencies must catalogue all AI use cases, including systems developed in-house, procured from vendors, and embedded in cloud services. Each entry must note purpose, responsible office, training data sources, model lifecycle status, metrics, and whether the system is safety- or rights-impacting. For rights-impacting systems, agencies must document the legal authorities authorising use, the populations affected, and how individuals can contest outputs or request human review.
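A single inventory entry might carry the fields Phase 2 enumerates. The following dataclass is a minimal sketch; its field names, lifecycle vocabulary, and validation rules are assumptions rather than an OMB-prescribed schema.

```python
# Minimal sketch of one AI use case inventory entry.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AIUseCaseRecord:
    purpose: str
    responsible_office: str
    training_data_sources: list[str]
    lifecycle_status: str                  # e.g. "pilot", "deployed", "retired"
    metrics: dict[str, float]
    safety_impacting: bool
    rights_impacting: bool
    # Required only for rights-impacting systems:
    legal_authorities: list[str] = field(default_factory=list)
    affected_populations: list[str] = field(default_factory=list)
    contest_channel: Optional[str] = None  # how individuals request human review

    def validate(self) -> list[str]:
        """Return gaps that should block inventory publication."""
        gaps = []
        if self.rights_impacting:
            if not self.legal_authorities:
                gaps.append("missing legal authorities")
            if not self.contest_channel:
                gaps.append("missing contest/human-review channel")
        return gaps
```

Encoding the rights-impacting documentation rules in validate() keeps the extra obligations from being skipped when entries are bulk-imported from vendor questionnaires.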

Phase 3: Risk assessments and testing. The draft requires comprehensive impact assessments and real-world testing for safety- and rights-impacting AI. Agencies should adapt or expand Privacy Impact Assessments (PIAs) to incorporate AI-specific considerations, including data minimisation, accuracy, explainability, and fairness. Testing must include evaluation in operational conditions, detection of performance degradation, and red-team exercises exploring adversarial attacks. Independent evaluation—conducted by an internal team separate from developers or an external party—must verify compliance before deployment and annually thereafter.
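The "detection of performance degradation" requirement can be illustrated with a simple baseline comparison. The sketch below assumes a single accuracy metric and an arbitrary tolerance; the draft prescribes neither, and real monitoring would track several metrics per population segment.

```python
# Hedged sketch: compare a rolling operational metric against the
# pre-deployment baseline and alert when it slips past a tolerance.
def degradation_alert(baseline: float, recent: list[float],
                      tolerance: float = 0.05) -> bool:
    """True when mean recent accuracy falls > tolerance below baseline."""
    if not recent:
        return False
    return (baseline - sum(recent) / len(recent)) > tolerance

baseline_accuracy = 0.91                    # from pre-deployment evaluation
weekly_accuracy = [0.88, 0.86, 0.84, 0.83]  # operational monitoring samples
print(degradation_alert(baseline_accuracy, weekly_accuracy))  # True
```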

Phase 4: Operational safeguards. Agencies must implement guardrails such as monitoring, incident response playbooks, human oversight procedures, and documentation accessible to affected individuals. They must publish use case summaries, risk assessments, and contact channels for inquiries or complaints. This transparency intersects with DSAR obligations: agencies need to ensure that individuals can request records related to AI decisions, understand how data was used, and receive timely corrections.

DSAR and privacy considerations

The draft memo emphasises alignment with privacy law. Agencies must ensure AI systems handling personal data comply with the Privacy Act, E-Government Act, and agency-specific statutes. DSAR-like requests—Privacy Act access, amendment, and accounting of disclosures—must remain functional even when decisions are automated. Therefore, agencies should design AI architectures with audit trails, data lineage documentation, and mechanisms to trace outputs back to source data. Consent or notice statements must clearly explain AI usage, and agencies should provide accessible explanations of algorithmic logic in plain language.
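As an illustration of the audit-trail and data-lineage point, the sketch below logs each automated decision together with the identifiers of the source records that fed it, so a later Privacy Act access or amendment request can be traced back. The record layout and function name are assumptions, not an OMB schema.

```python
# Sketch: log one decision with lineage back to its source records.
import json
from datetime import datetime, timezone

def log_decision(model_id: str, model_version: str,
                 input_record_ids: list[str], output: str) -> str:
    """Serialise one decision with lineage to its source records."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "input_record_ids": input_record_ids,  # ties output to source data
        "output": output,
    }
    return json.dumps(entry)

# A DSAR handler can later filter the log by record ID to explain
# which personal data fed a given automated decision.
print(log_decision("benefits-triage", "2.3.1", ["rec-4471"], "refer-to-human"))
```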

OMB directs agencies to implement processes for addressing harms. Individuals must have clear channels to lodge complaints, request human review, or seek redress. Agencies should integrate DSAR intake systems with AI governance workflows so that when a request identifies potential bias or error, the Governance Board reviews whether mitigation or model retraining is necessary. Agencies must also maintain contact with civil rights offices to ensure that algorithmic impacts on protected classes are monitored and reported.

  • Data minimisation and retention. Agencies should revisit retention schedules to ensure AI training data containing personal information is minimised and disposed of according to National Archives and Records Administration (NARA) guidance. DSAR responses should note retention policies and provide deletion confirmations when authorised.
  • Transparency artefacts. Prepare “AI Fact Sheets” or model cards summarising purpose, training data, performance metrics, fairness evaluations, and contact information (a minimal sketch follows this list). These documents support DSAR responses and public transparency obligations.
  • Security integration. Coordinate with FedRAMP and FISMA compliance programmes to ensure AI systems meet security baselines. Incident response plans should include notification pathways for privacy officers when AI-driven breaches or misuse occur.
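A minimal sketch of the “AI Fact Sheet” from the transparency bullet above, with placeholder keys and values covering the fields it lists; none of the content below comes from the OMB draft.

```python
# Illustrative AI Fact Sheet; all values are placeholders.
fact_sheet = {
    "system": "benefits-triage",
    "purpose": "Prioritise incoming claims for human caseworkers.",
    "training_data": "Historical claims 2018-2022, PII minimised per NARA schedule.",
    "performance": {"accuracy": 0.91, "false_positive_rate": 0.04},
    "fairness": "Disparate-impact testing across protected classes, reviewed annually.",
    "contact": "ai-inquiries@agency.example.gov",  # placeholder address
}

# Rendered as plain text, the same structure can back both a public
# transparency page and an attachment to a DSAR response.
for key, value in fact_sheet.items():
    print(f"{key}: {value}")
```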

Procurement and vendor management

OMB’s draft requires agencies to embed AI risk controls into acquisition processes. Contracting officers must consult CAIOs, CIOs, and privacy officials before awarding AI-related contracts. Solicitations should require vendors to provide documentation on data sources, intellectual property rights, evaluation metrics, and privacy protections. Post-award, agencies must monitor contractor performance, including adherence to DSAR obligations and support for incident reporting. Vendors must notify agencies of model updates, data changes, or security incidents, enabling CAIOs to reassess risk.

Agencies should develop standard contract clauses referencing the AI Bill of Rights, NIST AI RMF, and privacy requirements. For example, contracts can mandate that vendors support export of data necessary for DSAR responses, provide access logs, and participate in independent audits. Agencies may also require algorithmic impact assessments to be co-developed with vendors and submitted to the Governance Board.
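As an illustration of that DSAR-support clause, a vendor-side export might bundle the subject’s records with the access-log entries that touched them. Everything below (store names, function signature, log format) is hypothetical; a production system would query real databases and authenticate the requester.

```python
# Hypothetical vendor-side DSAR export supporting the contract clause.
import json

# Stand-in stores; a real vendor system would query its databases.
SUBJECT_RECORDS = {"rec-4471": {"name": "J. Doe", "claim": "2023-0042"}}
ACCESS_LOG = [{"record": "rec-4471", "actor": "model:benefits-triage",
               "action": "read", "at": "2023-08-30T14:02:11Z"}]

def export_dsar_package(record_ids: list[str]) -> str:
    """Bundle subject records plus the access-log entries touching them."""
    package = {
        "records": {rid: SUBJECT_RECORDS[rid] for rid in record_ids
                    if rid in SUBJECT_RECORDS},
        "access_log": [e for e in ACCESS_LOG if e["record"] in record_ids],
    }
    return json.dumps(package, indent=2)

print(export_dsar_package(["rec-4471"]))
```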

Reporting and oversight

The draft memo obliges agencies to publish annual reports summarising AI use, safeguards, incidents, and compliance. Agencies must submit these reports to OMB and make public versions available. OMB will aggregate metrics across government, providing visibility into adoption and risk trends. Agencies must also report “meaningful access” accommodations—such as alternative formats or multilingual support—to ensure individuals can understand AI-driven services and exercise DSAR rights. Congress and the Government Accountability Office are likely to scrutinise these reports, so agencies should maintain detailed evidence.
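A rollup of those report figures could be as simple as the sketch below; the metric names and record fields are illustrative assumptions, not OMB’s reporting schema.

```python
# Illustrative annual-report rollup over a hypothetical inventory.
use_cases = [
    {"name": "benefits-triage", "rights_impacting": True, "incidents": 1},
    {"name": "doc-ocr", "rights_impacting": False, "incidents": 0},
]

report = {
    "total_use_cases": len(use_cases),
    "rights_impacting": sum(u["rights_impacting"] for u in use_cases),
    "incidents": sum(u["incidents"] for u in use_cases),
}
print(report)  # {'total_use_cases': 2, 'rights_impacting': 1, 'incidents': 1}
```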

OMB indicates it will update guidance as AI technology evolves. Agencies should therefore build adaptive governance processes capable of incorporating new standards, such as emerging NIST profiles for generative AI. Continuous improvement should include stakeholder engagement, feedback loops from DSAR systems, and scenario planning for high-risk AI deployments.

Next steps for stakeholders

Agencies and contractors should take immediate action despite the guidance being in draft form. Key steps include:

  1. Conducting gap analyses comparing current AI governance frameworks to OMB’s requirements.
  2. Drafting or updating policies for AI inventories, impact assessments, DSAR integration, and transparency artefacts.
  3. Engaging privacy, civil rights, and ethics offices to align on procedures for handling complaints and rights requests.
  4. Preparing public comment submissions to OMB, particularly on implementation challenges such as resource constraints or conflicts with existing statutory mandates.

By organising governance, documenting risk controls, and integrating DSAR operations into AI oversight, agencies can comply with OMB’s vision and demonstrate trustworthy AI deployment. Vendors that align their products with these expectations will be better positioned to win federal work and support agencies’ accountability commitments.
