Executive Order 13960 Promotes Trustworthy AI in Federal Government — December 3, 2020

Executive Order 13960 directed U.S. federal agencies to advance the use of trustworthy artificial intelligence consistent with constitutional values, civil rights, and privacy protections.

Executive Order (EO) 13960, signed on December 3, 2020, directs U.S. federal agencies to adopt artificial intelligence in a manner that is lawful, purposeful, trustworthy, and aligned with constitutional values. The order sits within a broader AI governance trajectory that includes the 2019 American AI Initiative, the National AI Research and Development Strategic Plan, and subsequent agency-specific policies. Its central goals are to accelerate mission use of AI, strengthen oversight so that systems remain reliable and accountable, and increase public transparency about where and how AI affects services, benefits, and enforcement.

The order’s scope is deliberately broad: it applies to any government AI application, whether developed in-house or purchased from vendors, including predictive analytics, computer vision, natural-language processing, decision-support tools, and robotic process automation. Agencies are instructed to institutionalize governance, designate responsible officials, and ensure that AI does not displace human judgment in ways that would undermine due process or civil rights. EO 13960 also requires regular reporting to the Office of Management and Budget (OMB) and the White House Office of Science and Technology Policy (OSTP) to support consistent interpretation and cross-agency learning.

Trustworthy AI under EO 13960 is framed around several characteristics: the system must have a clearly defined purpose; be accurate, reliable, and effective; include safeguards that ensure safety and security; and be understandable to users and affected parties. Agencies must assess legal compliance early, including privacy laws, records management requirements, procurement regulations, accessibility standards, and civil-rights protections. The order emphasizes that automation should not obscure how eligibility, enforcement, or adjudication decisions are made and requires meaningful human oversight where individuals’ rights or safety could be affected.

Federal principles and governance expectations

EO 13960 articulates nine principles that agencies must translate into policy and procedure. AI used by the federal government must be:

  • lawful and respectful of the Nation’s values;
  • purposeful and performance-driven;
  • accurate, reliable, and effective, validated through testing;
  • safe, secure, and resilient, including cybersecurity and supply-chain risk management;
  • understandable to users and affected parties;
  • responsible and traceable, with standards and performance assurances embedded in procurement;
  • regularly monitored and audited to detect drift or misuse;
  • transparent, with regular public reporting where practicable;
  • accountable through designated officials and documentation.

The order directs agencies to integrate these principles into strategic plans, acquisition templates, budget justifications, and workforce development programs.

To ensure oversight, each agency must designate a responsible official for AI (a role later formalized as the Chief AI Officer under subsequent OMB guidance) to inventory AI use cases, approve pilot-to-production transitions, and coordinate training. Governance boards are encouraged to review risk assessments before operational deployment, particularly when AI informs eligibility determinations, enforcement prioritization, or public-facing services. Agencies are also expected to maintain documentation such as model cards, data provenance records, test and evaluation (T&E) artifacts, and continuous-monitoring logs so that independent evaluators and inspectors general can verify performance.
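
As a concrete illustration, documentation of this kind can be kept as structured records that auditors can consume programmatically. The sketch below is a minimal model-card record; the field names and values are hypothetical, since EO 13960 prescribes the documentation obligation but no particular schema.

```python
from dataclasses import dataclass, field, asdict
from typing import List
import json

@dataclass
class ModelCard:
    """Minimal model-card record; fields are illustrative, not mandated."""
    system_name: str
    intended_purpose: str          # clearly defined purpose, per the order
    responsible_official: str      # accountability designee
    training_data_sources: List[str] = field(default_factory=list)
    evaluation_metrics: dict = field(default_factory=dict)  # T&E results
    known_limitations: List[str] = field(default_factory=list)
    last_reviewed: str = ""        # ISO date of most recent governance review

card = ModelCard(
    system_name="permit-backlog-triage",
    intended_purpose="Rank permit applications for manual review ordering",
    responsible_official="Office of the CIO, AI Governance Board",
    training_data_sources=["internal permit records, FY2018-FY2020"],
    evaluation_metrics={"accuracy": 0.91, "false_positive_rate": 0.06},
    known_limitations=["Not validated on paper-filed applications"],
    last_reviewed="2021-06-30",
)

# Serialize for audit trails or inclusion in T&E artifacts.
print(json.dumps(asdict(card), indent=2))
```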

The order requires agencies to publish inventories of AI use cases, updated at least annually, with descriptions of the system’s purpose, responsible organization, datasets used, and risk mitigation practices. Sensitive cases may be summarized with security or law-enforcement details withheld, but agencies must still describe the general function and safeguards. Publication is meant to foster cross-agency reuse, reduce duplicative spending, and allow civil society to understand where automation affects public services.
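
The requirement to publish while withholding sensitive details can be handled mechanically. The sketch below strips restricted fields from an internal inventory record before publication; the field names and values are hypothetical, since the order mandates the disclosure, not the mechanism or format.

```python
# Sketch: derive a publishable AI use case inventory entry from an
# internal record, withholding sensitive law-enforcement details per
# the order's allowance. All fields shown are illustrative.

SENSITIVE_FIELDS = {"enforcement_criteria", "data_source_systems"}

internal_entry = {
    "use_case": "Anomaly detection for customs declarations",
    "purpose": "Prioritize declarations for manual inspection",
    "responsible_org": "Bureau of Trade Compliance (hypothetical)",
    "datasets": "Historical declaration records, 2015-2020",
    "risk_mitigations": "Human review of all flagged declarations",
    "enforcement_criteria": "<internally defined thresholds>",
    "data_source_systems": "<internal system identifiers>",
}

def to_public_entry(entry: dict) -> dict:
    """Return a publishable copy: sensitive fields dropped but disclosed."""
    public = {k: v for k, v in entry.items() if k not in SENSITIVE_FIELDS}
    public["withheld_fields"] = sorted(SENSITIVE_FIELDS & entry.keys())
    return public

print(to_public_entry(internal_entry))
```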

Implementation milestones and operational guidance

EO 13960 sets explicit timelines. Within 60 days of the order, the Federal Chief Information Officers Council was tasked with issuing guidance on the criteria, format, and mechanisms for agency AI use case inventories. Within 180 days of that guidance, agencies had to complete inventories of their current and planned AI applications, identify responsible officials, and document how they were embedding the order’s principles into lifecycle processes; inventories must then be reviewed and updated annually.

Implementation guidance directs agencies to tailor AI governance to the system’s impact level, leveraging existing frameworks such as the Risk Management Framework for information systems, privacy impact assessments, and Federal Information Security Modernization Act (FISMA) controls. High-impact systems (those influencing civil liberties, safety, or significant resource allocations) must undergo more rigorous test and evaluation, independent validation where feasible, and ongoing monitoring with defined performance thresholds. Agencies should also implement fallback or human-in-the-loop mechanisms that maintain service continuity and accountability if the AI output is suspect or degraded.
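
A human-in-the-loop fallback of this kind often reduces to a confidence gate. The sketch below is a minimal illustration; the threshold values and routing labels are assumptions, not figures from the order or OMB guidance, and agencies would set them per system from T&E results.

```python
# Sketch of a confidence-gated, human-in-the-loop decision path.
# Thresholds below are hypothetical examples.

CONFIDENCE_FLOOR = 0.90   # below this, defer to a human reviewer
DRIFT_ALERT = 0.75        # below this, suspend automation entirely

def route_decision(prediction: str, confidence: float) -> str:
    """Return an action: auto-apply, human review, or suspend automation."""
    if confidence < DRIFT_ALERT:
        # Output is suspect or degraded: fall back to manual processing
        # and alert the responsible official.
        return "suspend_automation_and_alert"
    if confidence < CONFIDENCE_FLOOR:
        # Maintain human judgment over the final determination.
        return f"queue_for_human_review:{prediction}"
    return f"auto_apply:{prediction}"

print(route_decision("eligible", 0.97))   # auto_apply:eligible
print(route_decision("eligible", 0.82))   # queue_for_human_review:eligible
print(route_decision("eligible", 0.60))   # suspend_automation_and_alert
```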

Data management is another area of emphasis. Agencies must evaluate data quality, representativeness, and potential bias before training or fine-tuning AI models. Recommended practices include standardized data documentation, lineage tracking, and adherence to the Federal Data Strategy for governance and stewardship. When using commercial or third-party data, agencies should document licensing, privacy protections, and any restrictions that could affect reproducibility or transparency. These practices are intended to prevent hidden biases from propagating into mission-critical analytics.
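
Lineage tracking can be as simple as recording, for each dataset version, where it came from, its license terms, and what checks it passed. A minimal sketch follows; the fields are hypothetical, as the Federal Data Strategy does not mandate this exact structure.

```python
import hashlib
import json

# Sketch: a dataset lineage record tying a training set to its source,
# license, and pre-training quality checks. Fields are illustrative.

def lineage_record(data: bytes, source: str, license_terms: str, checks: dict) -> dict:
    return {
        "sha256": hashlib.sha256(data).hexdigest(),  # pins the exact data version
        "source": source,
        "license": license_terms,    # documents third-party restrictions
        "quality_checks": checks,    # e.g. representativeness and bias screens
    }

record = lineage_record(
    data=b"...training records...",  # placeholder content
    source="Agency case-management export, 2021-03 (hypothetical)",
    license_terms="Government data; no redistribution restrictions",
    checks={"missing_values_pct": 0.4, "subgroup_coverage_ok": True},
)
print(json.dumps(record, indent=2))
```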

Workforce readiness is required alongside technical controls. Agencies must develop training that covers AI fundamentals, ethical considerations, privacy and civil-rights protections, and acquisition practices. Contracting officers and program managers should be able to evaluate vendor claims about model performance, interpretability, and robustness. Technical teams are encouraged to adopt secure software development practices, adversarial testing, and red-teaming to uncover failure modes before production deployment.
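
Adversarial testing can start simply, for example by checking that predictions remain stable under small input perturbations before moving to targeted red-teaming. The sketch below uses a stand-in threshold rule as the "model"; a real test would wrap the deployed system’s inference call.

```python
import random

# Sketch: a perturbation-stability check, a simple precursor to full
# adversarial testing. The stand-in classifier below is hypothetical.

def model(features: list[float]) -> int:
    return int(sum(features) > 1.0)     # stand-in for a deployed model

def stability_rate(features: list[float], noise: float = 0.05, trials: int = 1000) -> float:
    """Fraction of small random perturbations that leave the prediction unchanged."""
    baseline = model(features)
    rng = random.Random(0)              # fixed seed for reproducible T&E artifacts
    stable = sum(
        model([x + rng.uniform(-noise, noise) for x in features]) == baseline
        for _ in range(trials)
    )
    return stable / trials

print(stability_rate([0.50, 0.52]))  # near the decision boundary: some flips
print(stability_rate([0.90, 0.90]))  # far from the boundary: expect ~1.0
```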

Impact on contractors and acquisition processes

Because EO 13960 applies to both internally built and externally procured systems, it significantly affects contractors. Acquisition teams are instructed to embed trustworthy AI principles into requests for proposals, statements of work, and evaluation criteria. Vendors may be asked to provide model documentation, training and evaluation datasets, explainability techniques, and post-deployment monitoring plans. Agencies may also require software bills of materials (SBOMs), supply-chain risk assessments, and attestations regarding data rights and privacy safeguards.

Performance metrics and acceptance testing must align with mission needs and the order’s standards. For example, a computer vision system used for infrastructure inspection should demonstrate accuracy across diverse environmental conditions and include confidence thresholds that trigger human review. Predictive models informing benefits eligibility or enforcement prioritization should be tested for disparate impact, calibrated for fairness where applicable, and designed to support auditability. Vendors are expected to cooperate with independent verification and validation activities and to provide timely patches or model updates when drift or vulnerabilities are detected.
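
Disparate-impact testing is often screened first with the four-fifths rule from employment-selection practice: a group whose selection rate falls below 80% of the highest group’s rate is flagged for closer review. The sketch below applies that screen to made-up audit counts; the rule is a common convention, not a test EO 13960 itself prescribes.

```python
# Sketch: four-fifths rule screen for disparate impact.
# Counts are fabricated for illustration only.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map group -> selected/total rate. outcomes[group] = (selected, total)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_flags(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag groups whose selection rate is below 80% of the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best) < 0.8 for g, r in rates.items()}

# Hypothetical audit counts: (approved, total applications) per group.
audit = {"group_a": (450, 1000), "group_b": (330, 1000), "group_c": (430, 1000)}
print(four_fifths_flags(audit))
# {'group_a': False, 'group_b': True, 'group_c': False}
```

A flag from a screen like this would not by itself establish disparate impact; it marks the model for the deeper fairness testing and auditability review described above.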

The order also highlights transparency obligations. Contractors providing public-facing AI tools may be required to support notices that inform users when AI is involved, offer accessible explanations of how outputs are generated, and document how individuals can seek redress or human review. Agencies may use modular contracting to pilot AI capabilities in low-risk environments before scaling, reducing the chance of lock-in and enabling iterative evaluation of vendor performance.

Relationship to civil rights, privacy, and safety

EO 13960 reinforces that AI must uphold existing legal protections. Agencies must ensure compliance with civil-rights statutes such as Title VI and Title VII of the Civil Rights Act, the Equal Credit Opportunity Act where applicable, the Rehabilitation Act’s Section 508 for accessibility, and constitutional due-process standards. Privacy protections include adherence to the Privacy Act of 1974, the E-Government Act’s privacy impact assessments, and sector-specific statutes such as the Health Insurance Portability and Accountability Act (HIPAA) when applicable. The order’s emphasis on understandability and documentation is designed to make it easier for affected individuals to obtain meaningful explanations and appeal decisions.

Safety and security are intertwined with these protections. Agencies must address adversarial machine-learning risks, data poisoning, and model theft through robust cybersecurity practices, configuration management, and continuous monitoring. Systems should be resilient to drift in operational data, with retraining protocols that include validation and rollback procedures. For physical systems such as robotics or unmanned vehicles, agencies must meet safety standards and conduct scenario-based testing to verify performance in realistic environments before deployment.
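
Drift monitoring of this kind is commonly implemented with a distribution-shift statistic such as the population stability index (PSI), with re-validation or rollback triggered past a threshold. A minimal sketch follows; the 0.2 alert level is a conventional rule of thumb, not a value from the order.

```python
import math

# Sketch: population stability index (PSI) between a reference (training)
# distribution and live operational data, a common drift statistic.

def psi(expected: list[float], actual: list[float], eps: float = 1e-6) -> float:
    """PSI over pre-binned proportions; both lists should each sum to ~1."""
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

reference = [0.25, 0.25, 0.25, 0.25]   # bin shares at validation time
live      = [0.10, 0.20, 0.30, 0.40]   # bin shares observed in operation

score = psi(reference, live)
if score > 0.2:
    # Past the alert threshold: trigger re-validation, and roll back to
    # the last validated model if performance checks fail.
    print(f"PSI={score:.3f}: drift alert, initiate re-validation/rollback")
else:
    print(f"PSI={score:.3f}: within tolerance")
```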

Measuring success and sustaining accountability

To maintain accountability, agencies are encouraged to establish performance dashboards that track AI system health, bias indicators, and user feedback. Inspectors general, privacy officers, and civil rights offices should have access to documentation and logs necessary to audit compliance with EO 13960 and OMB guidance. Lessons learned from incidents or near-misses should feed back into acquisition checklists, model governance playbooks, and training curricula. Public transparency reports and open data, when consistent with security and privacy requirements, can build public trust and foster innovation by enabling researchers to replicate evaluations and propose improvements.

EO 13960 also intersects with later federal actions. For example, OMB Memorandum M-24-10 on Advancing Governance, Innovation, and Risk Management for Agency Use of AI (March 2024) builds on the use case inventories and test and evaluation practices established under EO 13960 and introduces formal risk categories for safety-impacting and rights-impacting systems. Agencies that implemented the order effectively are better positioned to meet newer mandates, such as appointing Chief AI Officers, developing AI strategy implementation plans, and conducting independent evaluations for safety-impacting systems. The order thus serves as a foundational baseline for the federal government’s maturing AI governance architecture.

Overall, EO 13960 compels federal agencies to operationalize trustworthy AI principles through concrete governance structures, rigorous testing, transparent inventories, and contractor requirements that embed accountability into every stage of the AI lifecycle. By aligning mission-driven innovation with legal safeguards and oversight, the order aims to enable responsible adoption of AI technologies that improve public services without compromising civil liberties.
