← Back to all briefings
Cybersecurity 6 min read Published Updated Credibility 91/100

Executive Order 14110 on Safe, Secure, and Trustworthy AI — October 30, 2023

Executive Order 14110 directs NIST, DHS, and sector risk agencies to build secure-by-design AI guidance, critical infrastructure risk frameworks, and red-teaming regimes—demanding governance to oversee compliance roadmaps, engineers to align delivery lifecycles, and privacy teams to prep DSAR-ready logging for mandated reporting.

Horizontal bar chart of credibility scores per cited source.
Credibility scores for every source cited in this briefing. Source data (JSON)

Executive briefing: President Biden’s 30 October 2023 Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (EO 14110) establishes a sweeping federal agenda to harden AI development and deployment. Beyond headline requirements for model reporting, the order directs the National Institute of Standards and Technology (NIST), the Department of Homeland Security (DHS), and every Sector Risk Management Agency (SRMA) to build secure-by-design practices, critical infrastructure risk frameworks, and rigorous red-teaming protocols. NIST must publish guidance for red-teaming foundation models within 270 days, integrate AI security into the Secure Software Development Framework (SSDF), and launch a companion to the AI Risk Management Framework. DHS, through CISA, must produce an AI safety and security board, evaluate AI-enabled threats to critical infrastructure, and issue sector-specific mitigation roadmaps within one year. SRMAs must inventory AI use, assess vulnerabilities, and coordinate with the newly established AI Safety and Security Board. These directives demand board-level oversight of compliance plans, program-level execution of new engineering controls, and privacy teams capable of delivering DSAR-grade logs and explanations for federal reporting and public accountability.

Governance implications

Boards and executive leadership must embed EO 14110 obligations into corporate AI governance. Directors should request a cross-functional compliance roadmap that maps each executive-order requirement to accountable owners, deadlines, and dependencies. Risk committees must ensure that AI and cybersecurity risk appetite statements reflect the order’s secure-by-design expectations, including commitments to adversarial testing, vulnerability remediation, and coordinated disclosure. Audit committees should mandate periodic maturity assessments referencing NIST’s AI RMF companion and SSDF updates, ensuring that internal controls evolve as federal guidance is released.

Because EO 14110 elevates SRMAs’ oversight, governance teams in regulated sectors—energy, financial services, healthcare, transportation—must plan for sector-specific directives. Boards should require management to monitor SRMA guidance, participate in industry consultations, and allocate budget for compliance. Establishing an AI governance council that reports quarterly to the board can align technology, security, privacy, legal, and public policy teams. Finally, governance should ensure that companies engaging with federal agencies have documented strategies for data sharing, incident reporting, and cooperation with the AI Safety and Security Board, including clear escalation paths when high-risk issues arise.

Implementation roadmap

Engineering, product, and security leaders must operationalise EO 14110 mandates across the AI lifecycle. Key steps include:

  • Integrate NIST guidance. Monitor NIST’s forthcoming AI RMF companion resources and SSDF updates, incorporating them into secure development lifecycles. Update architecture review checklists to require threat modelling, adversarial robustness testing, dataset provenance validation, and secure deployment configurations for AI components.
  • Build red-teaming capabilities. Establish internal or third-party red teams that stress-test foundation models and AI-enabled applications. Document methodologies aligned with NIST’s forthcoming guidance, including testing for prompt injection, model inversion, hallucinations affecting critical decisions, and misuse scenarios relevant to your sector.
  • Enhance vulnerability management. Extend vulnerability disclosure programmes to cover AI systems. Provide channels for security researchers, customers, and regulators to report AI flaws, and define remediation SLAs that reflect the order’s secure-by-design principles. Integrate AI vulnerabilities into enterprise risk registers.
  • Coordinate with critical infrastructure roadmaps. As DHS and SRMAs publish sector-specific mitigation plans, map their requirements to existing control frameworks (e.g., NERC CIP, FFIEC, HIPAA). Update control libraries, playbooks, and audit checklists accordingly.
  • Strengthen supply-chain assurance. EO 14110 emphasises secure model supply chains. Require third-party AI providers to attest to NIST-aligned practices, review their red-teaming results, and ensure contractual rights to audit or obtain supporting evidence.
  • Update incident response. Incorporate AI misuse scenarios into incident response playbooks. Define triggers for notifying DHS or SRMAs, coordinate with legal counsel on privilege considerations, and rehearse communications plans addressing both regulators and customers.

Implementation teams should build a regulatory watch function to track when NIST, DHS, and SRMAs release guidance. Use agile governance: create backlog items for each upcoming requirement, assign cross-functional squads, and run sprints to update documentation, tooling, and training. Invest in education programmes that teach developers secure coding for AI, emphasising data minimisation, model monitoring, and explainability. Ensure enterprise risk management integrates AI security metrics—such as percentage of models covered by adversarial testing or time to remediate AI-specific vulnerabilities—so that leadership can monitor progress.

Sector-specific considerations

Financial services. EO 14110 instructs the Treasury Department to report on AI-related financial stability risks. Banks and fintechs should expect supervisory expectations tying AI governance to model risk management (SR 11-7), third-party risk, and cyber resilience. Prepare to supply regulators with inventories of AI models, red-teaming evidence, and customer-impact assessments.

Healthcare and life sciences. The order directs HHS to establish an AI task force that will draft safety and privacy guidance for healthcare AI, including quality-management standards for medical AI and clarification of HIPAA obligations. Providers and health-tech companies must prepare to integrate clinical risk management, algorithmic bias testing, and patient-consent management into EO 14110 compliance.

Critical manufacturing and energy. DOE and DHS will evaluate AI-enabled physical security and cyber-physical risks across energy infrastructure, potentially updating NERC standards or DOE cybersecurity directives. Operators should align predictive maintenance, grid optimisation, and robotics AI deployments with forthcoming resilience benchmarks and ensure workforce training keeps pace.

Public-sector contractors. The order requires the Federal Acquisition Regulatory Council to propose rules addressing government procurement of AI, including safety testing evidence and supply-chain assurances. Contractors must ready documentation packages demonstrating compliance with NIST guidance, data provenance controls, and privacy safeguards for AI services sold to federal agencies.

Program management and metrics

Chief information security officers and programme management offices should translate EO 14110 into measurable objectives. Define key results such as: “100 percent of production AI systems mapped to a risk tier and subjected to adversarial testing,” “Mean time to remediate AI vulnerabilities reduced to 30 days,” and “All high-risk AI deployments reviewed by privacy, legal, and ethics leaders before launch.” Implement dashboards that integrate these metrics with DSAR response times and incident reports so leadership can monitor progress holistically.

Budgeting processes must account for new tooling—model monitoring platforms, secure ML operations pipelines, and AI-specific logging. Document capital and operating expenses tied to EO 14110 compliance to support board oversight and potential cost recovery through regulatory rate cases or contract negotiations.

DSAR and privacy operations

The executive order’s emphasis on transparency, testing, and incident reporting intersects with data subject rights. Organisations must maintain detailed logs of model inputs, outputs, training datasets, evaluation results, and access controls to satisfy potential federal audits. Privacy teams should integrate these logs into DSAR workflows, ensuring that individuals who request information about AI-driven decisions receive clear explanations aligned with EO 14110 transparency goals and applicable privacy laws. For automated decision-making affecting consumers, provide model context, data sources, and human-review options in DSAR responses.

Because EO 14110 encourages sharing safety test results and incident data with government partners, privacy officers need protocols that reconcile regulatory disclosures with personal data protections. Establish review committees that evaluate whether incident reports contain personal data, determine anonymisation requirements, and record legal bases for disclosure. Update privacy notices to reflect enhanced monitoring and testing, and ensure retention schedules cover the additional logs and evaluation artefacts produced under NIST and DHS guidance.

Finally, DSAR teams should align with security and compliance on consent and opt-out management. If secure-by-design controls require expanded telemetry, confirm that telemetry collection complies with state privacy laws and that individuals can exercise rights without undermining security obligations. Document how AI governance, security, and privacy teams coordinate when DSARs reference AI systems, and include EO 14110-driven obligations in tabletop exercises. By embedding DSAR readiness into EO 14110 compliance, organisations can deliver secure AI while maintaining trust.

Executing against EO 14110’s secure AI agenda will require sustained governance oversight, disciplined implementation, and privacy-aware operations—but organisations that align early will be better positioned for forthcoming regulations and federal partnerships.

Horizontal bar chart of credibility scores per cited source.
Credibility scores for every source cited in this briefing. Source data (JSON)

Continue in the Cybersecurity pillar

Return to the hub for curated research and deep-dive guides.

Visit pillar hub

Latest guides

  • United States
  • Artificial intelligence
  • Critical infrastructure
  • Secure by design
Back to curated briefings