
White House AI Executive Order

Biden signed the most sweeping AI executive order in U.S. history. Red-teaming for powerful models, content authentication, federal hiring push, and immigration streamlining for AI talent. This sets the agenda.

Reviewed for accuracy by Kodi C.


President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (EO 14110) introduces a first-of-its-kind reporting regime for dual-use foundation models. Developers who train models using computing power above 10²⁶ floating-point operations—or who produce models with capabilities that could pose serious national or economic security risks—must notify the Department of Commerce, share safety test results, and describe measures taken to prevent malicious use. The order also instructs Commerce to require disclosures from operators that acquire large clusters of AI chips, sets timelines for watermarking guidance, and calls for supply-chain risk assessments tied to semiconductor manufacturing. These obligations, coupled with parallel requirements for federal procurement, consumer protection, and critical infrastructure, mean AI governance cannot remain aspirational. Boards must oversee full compliance programs, delivery teams must automate reporting pipelines and safety evaluations, and privacy leaders must ensure training-data transparency and DSAR readiness as new disclosures surface sensitive information.

What the reporting regime requires

EO 14110’s Section 4 compels developers of “dual-use foundation models” to file reports with Commerce both before training begins and after significant milestones. Pre-training notifications must describe model purpose, the amount and type of compute used, data sources, safety testing plans, and the protections in place to prevent model weights from being stolen or misused. Post-training reports must deliver safety testing results, red-team outcomes, and updates on mitigation measures. The order directs Commerce to define when a model qualifies as “dual-use” based on risk profiles, and sets a compute threshold—training runs requiring more than 10²⁶ floating-point operations on infrastructure located in or associated with the United States—to capture frontier models even if developers do not consider them high risk.

Beyond model developers, EO 14110 instructs Commerce to require cloud providers to report foreign clients training models above the compute threshold, and to collect information from entities acquiring large compute clusters (the order’s interim criteria cover clusters connected by networking above 100 Gbit/s with a theoretical maximum capacity of 10²⁰ operations per second). The Department must issue regulations within 180 days, after consulting with intelligence and national security agencies. To comply with these reporting obligations, developers will need to implement safeguards such as multi-factor authentication, source-code protection, and insider-risk mitigation.
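The thresholds above can be sketched as a simple pre-flight check. This is an illustrative reading of the order’s interim technical conditions, not official tooling; the function and parameter names are assumptions, and the biological-data threshold (10²³ operations) reflects the order’s lower bar for models trained primarily on biological sequence data.

```python
# Illustrative check against EO 14110's interim reporting thresholds.
# Constant values reflect the order's Section 4.2 conditions; function
# and field names are hypothetical, not from any official tooling.

MODEL_FLOP_THRESHOLD = 1e26        # total training compute (operations)
BIO_MODEL_FLOP_THRESHOLD = 1e23    # models trained primarily on biological sequence data
CLUSTER_FLOPS_THRESHOLD = 1e20     # cluster theoretical max, operations per second
CLUSTER_NETWORK_GBITS = 100        # data-center networking, Gbit/s

def training_run_reportable(total_flop: float, biological_data: bool = False) -> bool:
    """Return True if a planned training run crosses the compute threshold."""
    threshold = BIO_MODEL_FLOP_THRESHOLD if biological_data else MODEL_FLOP_THRESHOLD
    return total_flop >= threshold

def cluster_reportable(peak_flops: float, network_gbits: float) -> bool:
    """Return True if a compute cluster meets the acquisition-reporting conditions."""
    return peak_flops >= CLUSTER_FLOPS_THRESHOLD and network_gbits > CLUSTER_NETWORK_GBITS

print(training_run_reportable(3e26))   # True: above the 1e26 threshold
print(cluster_reportable(5e19, 400))   # False: below the capacity threshold
```

Note that capability-based triggers (models posing serious security risks regardless of compute) cannot be captured by a numeric check like this and require a separate risk-assessment workflow.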

What this means for governance

Boards and executive leadership must treat the EO as a binding compliance regime. Governance committees should task management with creating a foundation model compliance manual that documents reporting triggers, approval workflows, and escalation paths. Appoint an executive responsible officer—often the chief AI officer or chief information security officer—to certify that reporting submissions are accurate, complete, and timely. Audit committees must require periodic internal audits or third-party attestations covering safety testing, compute metering, and access controls around model artifacts.

The order’s emphasis on supply-chain integrity means governance bodies must oversee procurement and infrastructure strategies. Directors should review hardware acquisition plans, chip leasing arrangements, and foreign partnerships to confirm they align with export-control obligations and Commerce’s forthcoming reporting requirements. Establish policies for collaborating with academic partners or start-ups: joint development agreements should mandate compliance with EO 14110 reporting, clarify data-sharing rules, and allocate responsibility for responding to government inquiries.

Given the potential for enforcement to evolve into licensing regimes, governance should monitor policy developments and maintain relationships with Commerce and the White House Office of Science and Technology Policy (OSTP). Scenario planning—examining how the company would respond if compute thresholds drop or if mandatory licensing is introduced—helps ensure resilience.

How to implement this

Delivery teams need to operationalize reporting, safety testing, and supply-chain controls. Key steps include:

  • Instrument compute tracking. Implement telemetry that measures floating-point operations, GPU hours, and hardware configurations for each training run. Ensure metrics distinguish between U.S.-located infrastructure and foreign sites, as the EO captures compute “significantly owned or controlled” by U.S. entities. Build dashboards that flag when upcoming runs approach reporting thresholds.
  • Automate reporting dossiers. Create templates for Commerce notifications that pull data from experiment tracking systems (for example, MLflow, Weights & Biases), security tooling, and policy repositories. Include descriptions of data sources, safety testing plans, and mitigation measures. Implement workflow approvals involving legal, security, and policy teams before submission.
  • Expand safety testing. Develop red-team playbooks covering biological weaponisation, cyber intrusion, critical infrastructure disruption, and misinformation scenarios—areas explicitly highlighted in EO 14110. Maintain version-controlled test scripts, capture metrics (success rate of jailbreak attempts, time to detection), and remediate weaknesses before deployment.
  • Secure model artifacts. Enforce hardware security modules for weight storage, implement least-privilege access with hardware-backed attestation, and monitor for anomalous downloads. Align controls with NIST’s forthcoming secure development guidance and adopt tamper-evident logging.
  • Prepare for cloud-provider coordination. If using hyperscaler platforms, negotiate contractual clauses that specify data flows, reporting responsibilities, and breach-notification procedures. Validate that provider telemetry can support Commerce reporting and DSAR requirements.
  • Document supply-chain risk management. Map semiconductor procurement, firmware supply, and model hosting dependencies. Capture attestations for Trusted Foundry participation, secure boot processes, and vulnerability management on accelerator clusters.
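The compute-tracking step above can be sketched as a small telemetry aggregator that estimates total training FLOPs and flags runs approaching the reporting threshold. The hardware figures, the 80% warning margin, and the class fields are assumptions for illustration; real pipelines would pull these values from experiment-tracking and cluster telemetry.

```python
# Sketch of compute tracking: estimate training FLOPs from GPU telemetry
# and flag runs approaching the 10^26-operation reporting threshold.
# Hardware specs and the 80% warning margin are assumptions.

from dataclasses import dataclass

REPORTING_THRESHOLD = 1e26
WARNING_FRACTION = 0.8   # flag runs at 80% of the threshold

@dataclass
class TrainingRun:
    run_id: str
    gpu_count: int
    hours: float
    peak_flops_per_gpu: float   # e.g. ~1e15 for a modern accelerator
    utilization: float          # measured model-FLOPs utilization (0-1)
    us_located: bool            # the EO distinguishes U.S.-located infrastructure

    def estimated_flop(self) -> float:
        return (self.gpu_count * self.hours * 3600
                * self.peak_flops_per_gpu * self.utilization)

def flag_runs(runs):
    """Return IDs of runs whose estimated compute exceeds the warning margin."""
    return [r.run_id for r in runs
            if r.estimated_flop() >= WARNING_FRACTION * REPORTING_THRESHOLD]

runs = [
    TrainingRun("frontier-v2", 20000, 3000, 1e15, 0.4, True),
    TrainingRun("small-ft", 64, 48, 1e15, 0.35, True),
]
print(flag_runs(runs))   # ['frontier-v2']
```

A dashboard built on this kind of estimate gives legal and policy teams lead time to prepare the pre-training notification before a run actually crosses the line.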

Program managers should integrate these actions into product roadmaps, gating model releases on completion of reporting and safety tasks. Establish training programs for engineers and researchers explaining the EO’s requirements, expected documentation, and escalation points if they observe anomalies.

Interaction with other EO initiatives

Section 4 interacts with parallel EO provisions. Commerce, NIST, and the Department of Energy must propose watermarking and content authentication guidance within 180 days; developers should plan for provenance metadata in outputs.
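Planning for provenance metadata might start with a minimal record binding each generated output to its generator. The field names below are assumptions for illustration and do not follow a published standard such as C2PA; the forthcoming guidance will define the actual format.

```python
# Illustrative provenance stub for AI-generated content, anticipating the
# watermarking and content-authentication guidance the order requests.
# Field names are assumptions, not a published standard.

import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: bytes, model_id: str) -> dict:
    """Build a minimal provenance record binding output to its generator."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "synthetic": True,
    }

record = provenance_record(b"example model output", "acme-fm-7b")
print(json.dumps(record, indent=2))
```

Hashing the content rather than embedding metadata inside it keeps the record verifiable even when outputs are stored or transmitted separately, though robust watermarking will likely require in-band techniques as well.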

The Department of Justice and Federal Trade Commission are instructed to coordinate on AI-related consumer protection enforcement, meaning reporting missteps could trigger broader investigations. Meanwhile, the EO encourages the launch of an AI Safety and Security Board under DHS, which may request additional information about reported models when evaluating sector risks. Implementation teams should maintain a single repository of submissions, safety test records, and mitigation plans to satisfy multiple inquiries without inconsistent messaging.

DSAR and privacy operations

Reporting obligations increase transparency around training data, evaluation datasets, and model usage—raising the likelihood of DSARs from individuals whose data influenced model behavior. Privacy teams must maintain full data inventories documenting data provenance, consent mechanisms, retention schedules, and deletion workflows. When reporting to Commerce, ensure summaries of training data categories avoid exposing personal data; where detailed descriptions are necessary, coordinate with legal counsel to apply anonymization or aggregated statistics.

DSAR playbooks should include procedures for frontier AI systems: identify whether training datasets contain personal data, determine if synthetic data was generated from personal information, and explain how safety testing impacts individuals. Provide DSAR responses that outline the organization’s adherence to EO 14110 safeguards, human oversight, and redress channels. Because cloud providers may also receive DSARs related to hosted training runs, establish joint response agreements to avoid conflicting communications.
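The triage step above can be sketched as a scoping query over a training-data inventory. The inventory structure and field names here are hypothetical; the point is that datasets containing personal data, and synthetic datasets derived from personal data, both fall within a DSAR’s scope.

```python
# Minimal sketch of DSAR triage over a (hypothetical) training-data
# inventory. Field names are assumptions for illustration.

datasets = [
    {"name": "web-crawl-2023", "contains_personal_data": True,
     "synthetic_from_personal": False, "retention_days": 730},
    {"name": "synthetic-dialogues", "contains_personal_data": False,
     "synthetic_from_personal": True, "retention_days": 365},
    {"name": "licensed-code", "contains_personal_data": False,
     "synthetic_from_personal": False, "retention_days": 1095},
]

def dsar_scope(inventory):
    """Datasets in scope: personal data, or synthetic data derived from it."""
    return [d["name"] for d in inventory
            if d["contains_personal_data"] or d["synthetic_from_personal"]]

print(dsar_scope(datasets))   # ['web-crawl-2023', 'synthetic-dialogues']
```

Keeping this inventory machine-readable means the same records can feed both DSAR responses and the training-data descriptions in Commerce notifications, reducing the risk of inconsistent disclosures.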

The EO’s emphasis on security logging and incident reporting requires privacy teams to balance regulatory transparency with confidentiality obligations. Maintain role-based access control on reporting repositories, log all access, and build audit trails demonstrating that DSAR disclosures did not compromise security. Update privacy notices and AI transparency statements to reflect new reporting practices, referencing Commerce submissions and international data-transfer safeguards as applicable.

By embedding the EO’s reporting obligations into governance, delivery, and DSAR operations, teams can show responsible stewardship of foundation models, reduce regulatory risk, and sustain public trust as AI oversight accelerates.


