
U.S. Executive Order 14110 on Safe, Secure, and Trustworthy AI

Biden's AI executive order is the most aggressive federal AI action yet. Foundation model developers now have to report to the government when they start training runs above certain compute thresholds, share safety test results, and follow red-teaming standards NIST is developing. If you are building large language models, these obligations apply to you directly. If you are deploying them, expect procurement requirements and sectoral guidance to arrive quickly.

Reviewed for accuracy by Kodi C.


On 30 October 2023 the White House issued Executive Order 14110, the most expansive U.S. federal directive on artificial intelligence to date. The order requires developers of dual-use foundation models to share safety test results with the federal government under the Defense Production Act, tasks NIST with advancing red-teaming and watermarking standards, and instructs DHS and other agencies to issue sectoral guidance.

It also orders the development of privacy-preserving techniques, directs agencies to combat algorithmic discrimination, and sets timelines for guidance on critical infrastructure, healthcare, education, and labor impacts. Organizations building or deploying advanced models should track forthcoming reporting obligations, security benchmarks, and procurement conditions shaped by the order.

Defense Production Act Reporting

The order invokes Defense Production Act authorities requiring developers of dual-use foundation models to notify the federal government when starting training runs exceeding specified computational thresholds. Notifications must include information about model capabilities, safety assessments, and red-team findings. This is the first use of DPA reporting requirements for AI development activities.

The computational threshold targets models requiring significant computing resources, initially focused on systems trained using more than 10^26 floating-point operations. Organizations operating large-scale training infrastructure should assess whether planned development activities trigger reporting obligations under forthcoming Commerce Department implementing regulations.
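For a rough sense of whether a planned run approaches the threshold, total training compute is often approximated as 6 × parameters × tokens. That heuristic is our assumption for illustration; the order and forthcoming Commerce regulations do not prescribe an estimation method. A minimal sketch:

```python
# Rough screen for the EO 14110 reporting threshold of 1e26 floating-point
# operations. The 6 * params * tokens estimate is a common heuristic
# (our assumption), not a method prescribed by the order.

EO_FLOP_THRESHOLD = 1e26

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs as 6 * parameters * tokens."""
    return 6.0 * n_params * n_tokens

def may_trigger_reporting(n_params: float, n_tokens: float) -> bool:
    """True if the estimate meets or exceeds the 1e26 FLOP threshold."""
    return estimated_training_flops(n_params, n_tokens) >= EO_FLOP_THRESHOLD

# Example: a 70B-parameter model trained on 2T tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"{flops:.2e}")                      # 8.40e+23 — well below 1e26
print(may_trigger_reporting(70e9, 2e12))   # False
```

Under this heuristic, today's largest published training runs sit one to two orders of magnitude below the threshold, which is why the reporting requirement initially targets only frontier-scale development.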

NIST Standards Development

The order directs NIST to develop standards, guidelines, and good practices for AI safety and security. Priority areas include red-teaming methodologies enabling systematic identification of model vulnerabilities, watermarking techniques for AI-generated content authentication, and testing protocols for evaluating safety characteristics.

NIST must establish consensus processes engaging industry, civil society, and academic stakeholders. Standards development timelines create urgency for organizations seeking to influence outcomes. Active participation in NIST processes positions organizations favorably as requirements translate into procurement conditions and regulatory expectations.

Sectoral Guidance Requirements

Cabinet departments receive directives to issue guidance addressing AI risks within their jurisdictions. HHS must address healthcare AI applications including clinical decision support and administrative automation. DOE focuses on critical infrastructure protection. DHS addresses transportation, telecommunications, and cybersecurity applications.

Education, labor, housing, and financial services agencies also receive specific mandates. Organizations operating in regulated industries should monitor sectoral guidance development and engage in the agency consultation processes that will shape implementation.

Federal Procurement Implications

The order directs OMB to establish requirements for AI risk management in federal acquisition. Contractors providing AI systems to government agencies will face obligations around testing, documentation, and ongoing monitoring. These requirements affect existing contracts as well as new procurements.

Organizations selling AI products or services to the federal government should prepare for heightened scrutiny of AI governance practices. Procurement requirements frequently establish de facto industry standards as commercial customers adopt government-driven expectations.

Privacy and Civil Rights Protections

The order addresses privacy concerns through directives on privacy-preserving machine learning techniques and restrictions on AI-enabled surveillance. Agencies must evaluate AI systems for potential discrimination impacts before deployment. Civil rights considerations extend to housing, employment, credit, and public services applications.

Impact assessment requirements apply to federal agencies and influence expectations for private sector organizations through procurement conditions and sectoral guidance. If you are affected, develop assessment methodologies evaluating disparate impact risks across protected characteristics.
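One widely used screen for disparate impact is the "four-fifths rule" from EEOC selection guidelines: a group is flagged when its selection rate falls below 80% of the highest group's rate. The order does not mandate this specific test, and the group names and data below are illustrative; a minimal sketch:

```python
# Disparate impact screen using the four-fifths rule (EEOC selection
# guidelines): flag any group whose selection rate is below 80% of the
# highest group's rate. Group names and figures are illustrative only.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total); returns selection rates."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def four_fifths_flags(outcomes: dict[str, tuple[int, int]],
                      threshold: float = 0.8) -> dict[str, bool]:
    """True for each group whose rate falls below threshold * best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate < threshold * best for g, rate in rates.items()}

outcomes = {"group_a": (90, 200), "group_b": (30, 100)}
print(four_fifths_flags(outcomes))  # {'group_a': False, 'group_b': True}
```

Here group_b's 30% selection rate is below 80% of group_a's 45% rate (0.36), so it is flagged for further review; a flag is a trigger for deeper statistical analysis, not a legal conclusion on its own.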

Workforce and Economic Impacts

The order acknowledges AI impacts on employment, directing studies on automation effects and workforce development needs. Labor-related guidance addresses surveillance in the workplace, algorithmic management, and worker notification requirements. These provisions signal forthcoming federal attention to employment-related AI applications.

Schedule and deadlines

Agency actions proceed under aggressive timelines, with many deliverables required within 90-180 days. If you are affected, monitor agency activities and engage with consultation opportunities as they emerge. The compressed implementation schedule creates compliance urgency for organizations subject to forthcoming requirements.
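Since the order's deadlines are expressed as day counts from the 30 October 2023 signing date, translating them to calendar dates is simple arithmetic. The mapping below is our calculation for planning purposes, not an official compliance calendar, and the specific deliverables tied to each date are defined in the order itself:

```python
from datetime import date, timedelta

# EO 14110 was signed on 30 October 2023; many agency deliverables are due
# 90-180 days later. Convert those day counts into calendar dates.
SIGNED = date(2023, 10, 30)

deadlines = {days: SIGNED + timedelta(days=days) for days in (90, 180)}
for days, due in deadlines.items():
    print(f"{days}-day deliverables due by {due.isoformat()}")
# 90-day deliverables due by 2024-01-28
# 180-day deliverables due by 2024-04-27
```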


Coverage pillar: Policy
Source credibility: 71/100 — medium confidence
Topics: Executive Order · Model Safety · Regulation · United States
Sources cited: 2 (iso.org, crsreports.congress.gov)
Reading time: 5 min

References

  1. Industry Standards and Best Practices — International Organization for Standardization
  2. Congressional Research Service Analysis
