Compliance Briefing — October 30, 2023
U.S. Executive Order 14110 directs federal agencies, and through them critical-infrastructure operators, to institute AI safety governance, phased implementation, and DSAR-aligned transparency around model evaluations, data usage, and rights to contest automated decisions.
Executive briefing: On October 30, 2023, President Joe Biden signed Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order directs more than a dozen federal agencies to develop standards, guidance, and reporting mechanisms covering AI safety testing, national security, critical infrastructure, privacy, labor, and consumer protection. Key mandates include new rules for dual-use foundation models, incident reporting to the Department of Homeland Security (DHS), safety test sharing with the Department of Commerce, evaluations by the National Institute of Standards and Technology (NIST), and privacy-enhancing technology pilots. Organizations building or deploying AI must align governance structures, implementation roadmaps, and DSAR-ready transparency to satisfy current and forthcoming requirements derived from the order.
Governance expectations and oversight structures
The order requires agencies to designate Chief AI Officers, establish AI governance boards, and integrate AI risk management with privacy and civil rights protections. Private-sector entities serving federal customers or operating in critical infrastructure sectors should mirror this governance model by forming cross-functional AI oversight councils that include security, privacy, legal, human resources, and DSAR leads. Boards should request quarterly briefings on Executive Order 14110 milestones, including agency rulemaking timelines, compliance obligations, and potential procurement clauses.
Governance frameworks must document accountability for AI lifecycle stages—data collection, training, evaluation, deployment, and monitoring. The order references NIST’s AI Risk Management Framework and directs the agency to develop guidelines for generative AI, red-team testing, and synthetic content watermarking. Organizations should map these guidelines to board-level risk appetites, ensuring oversight committees understand the privacy implications, DSAR exposure, and public reporting expectations associated with AI deployments.
Implementation roadmap for compliance
Companies should plan for a multi-phase implementation program aligned with key deadlines in the order:
- Immediate actions (0–90 days): Inventory AI systems, identify those that meet the order’s definition of dual-use foundation models (on an interim basis, models trained with more than 10^26 integer or floating-point operations, or 10^23 operations for models trained primarily on biological sequence data, with thresholds subject to updating by the Secretary of Commerce), and assess exposure to federal reporting requirements. Establish incident reporting workflows that can escalate AI safety events to DHS’s Cybersecurity and Infrastructure Security Agency (CISA) within the prescribed timeframe. Update DSAR registers to note which AI systems process personal data or make automated decisions; the first sketch after this list shows one way to record both flags.
- Mid-term (90–270 days): Implement AI red-teaming protocols consistent with NIST guidance, document evaluation results, and prepare to share them with the Department of Commerce under forthcoming rules. Participate in privacy-preserving data sharing pilots, adopting technologies such as secure multiparty computation, homomorphic encryption, or differential privacy (the second sketch after this list illustrates the latter). Align workforce impact assessments with Department of Labor guidance and plan notices to employees affected by AI-driven monitoring or scheduling.
- Long-term (270–540 days and beyond): Integrate AI governance into enterprise risk management, incorporate AI metrics into Securities and Exchange Commission (SEC) disclosures where material, and update contracts with government agencies to reflect Executive Order obligations. Monitor agency rulemaking, including Federal Trade Commission (FTC) guidance on unfair or deceptive AI practices, Department of Justice (DOJ) civil rights enforcement, and Consumer Financial Protection Bureau (CFPB) expectations for explainability in credit decisions. Ensure DSAR processes can deliver detailed information about data sources, model logic, and contestation pathways.
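To make the inventory and register steps concrete, here is a minimal Python sketch of one way an AI system record might carry the dual-use and DSAR flags described above. The schema, field names, and helper methods are illustrative assumptions, not a prescribed format; the interim thresholds come from section 4.2(b) of the order and remain subject to revision by the Secretary of Commerce.

```python
from dataclasses import dataclass

# Interim reporting thresholds from section 4.2(b) of the order; the
# Secretary of Commerce may revise them, so treat them as configuration.
DUAL_USE_OPS_THRESHOLD = 1e26        # total training compute, in operations
BIO_SEQUENCE_OPS_THRESHOLD = 1e23    # models trained mainly on biological sequence data

@dataclass
class AISystemRecord:
    """One row in the AI system inventory (illustrative schema)."""
    name: str
    owner: str
    training_compute_ops: float          # estimated total training operations
    primarily_bio_sequence: bool = False
    processes_personal_data: bool = False
    automated_decisions: bool = False

    def may_be_dual_use(self) -> bool:
        """Flag systems that may meet the order's dual-use definition."""
        threshold = (BIO_SEQUENCE_OPS_THRESHOLD if self.primarily_bio_sequence
                     else DUAL_USE_OPS_THRESHOLD)
        return self.training_compute_ops >= threshold

    def dsar_relevant(self) -> bool:
        """Mark systems that belong in the DSAR register."""
        return self.processes_personal_data or self.automated_decisions

# Usage: split the inventory into the two registers the roadmap calls for.
inventory = [
    AISystemRecord("support-chat-llm", "CX Engineering", 3e24,
                   processes_personal_data=True, automated_decisions=True),
]
reportable = [s.name for s in inventory if s.may_be_dual_use()]
dsar_register = [s.name for s in inventory if s.dsar_relevant()]
```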
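The privacy-enhancing technologies named in the mid-term phase involve very different machinery; as one narrow illustration, the second sketch below releases an aggregate count under differential privacy using the Laplace mechanism. The epsilon value and the count are placeholders, and a production deployment should rely on a vetted differential privacy library rather than this hand-rolled call.

```python
import numpy as np

def dp_count(true_count: int, epsilon: float = 0.5) -> float:
    """Release a count with epsilon-differential privacy (Laplace mechanism).

    A count query has L1 sensitivity 1: adding or removing one individual
    changes the result by at most 1, so noise scaled to 1/epsilon suffices.
    """
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Usage: publish how many employees an AI scheduling tool affected,
# without revealing whether any one individual is in the underlying data.
print(dp_count(true_count=1342, epsilon=0.5))
```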
Safety testing, reporting, and documentation
The order directs the Secretary of Commerce, through the Bureau of Industry and Security (BIS), to define thresholds for reporting and to develop guidance on content authentication and provenance. Organizations should maintain detailed documentation of model architectures, training data sources, compute usage, evaluation protocols, and safety mitigations. Establish centralized repositories where red-team reports, alignment test results, and incident response playbooks are stored with access controls and retention policies aligned with privacy requirements. When DSARs request information about automated decisions, the repository should support rapid retrieval of relevant test results and impact assessments.
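One hedged way to structure such a repository is sketched below: each artifact records the model it covers, an access tier, and a retention deadline, so DSAR handlers can retrieve unexpired evidence quickly. The field names, tiers, and in-memory store are assumptions for illustration, not a mandated schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class EvidenceArtifact:
    """A red-team report, alignment test result, or incident playbook."""
    artifact_id: str
    model_name: str
    kind: str                # e.g. "red-team-report", "impact-assessment"
    created: date
    access_tier: str         # e.g. "governance-board", "dsar-team"
    retention_days: int = 365 * 7

    @property
    def expires(self) -> date:
        return self.created + timedelta(days=self.retention_days)

def dsar_evidence(store: list[EvidenceArtifact], model_name: str) -> list[EvidenceArtifact]:
    """Retrieve unexpired artifacts the DSAR team may cite for one model."""
    today = date.today()
    return [a for a in store
            if a.model_name == model_name
            and a.access_tier == "dsar-team"
            and a.expires >= today]
```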
For critical infrastructure operators, the order directs DHS to convene an Artificial Intelligence Safety and Security Board to advise on safe AI deployment in critical infrastructure. Companies should map the resulting review processes to existing sector-specific plans and ensure DSAR teams can articulate how AI-related incidents are recorded, investigated, and communicated to affected individuals. Implement logging that captures model inputs, outputs, and decision rationales, providing evidence for DSAR responses and regulatory inquiries.
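A minimal pattern for that logging is structured, timestamped JSON records capturing inputs, outputs, and rationale per decision, as sketched below. The field set and the pseudonymous subject key are assumptions; a real deployment would add integrity controls and route records to a retention-managed store.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai.decisions")
logging.basicConfig(level=logging.INFO)

def log_decision(model: str, subject_id: str, inputs: dict,
                 output: str, rationale: str) -> None:
    """Emit one structured decision record for DSAR and regulator evidence."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "subject_id": subject_id,   # pseudonymous key, resolvable by the DSAR team
        "inputs": inputs,
        "output": output,
        "rationale": rationale,     # human-readable basis for the decision
    }
    logger.info(json.dumps(record))

# Usage: record why a scheduling model assigned a shift.
log_decision("shift-scheduler-v2", "emp-8831",
             {"availability": "weekdays", "seniority_band": 3},
             "assigned: Tue 06:00-14:00",
             "highest availability match within seniority band")
```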
Privacy, civil rights, and DSAR readiness
Executive Order 14110 emphasizes privacy-preserving technologies and enforcement of civil rights protections. The order directs the Federal Privacy Council to issue guidance on managing privacy risks and encourages agencies to implement privacy-enhancing technologies. Organizations should update privacy impact assessments (PIAs) for AI systems, referencing DSAR processes for individuals seeking access to their data or explanations of automated decisions. Ensure DSAR workflows can interface with AI governance tools to provide comprehensive responses that include data provenance, model purpose, risk mitigation steps, and avenues to contest outcomes.
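As a sketch of that interface, the function below assembles the AI portion of a DSAR response from governance metadata. Every key name is hypothetical, standing in for whatever fields an organization's governance tooling actually exposes.

```python
def build_ai_dsar_section(governance: dict, decision_log: list[dict]) -> dict:
    """Assemble the AI portion of a DSAR response from governance metadata.

    `governance` is assumed to expose provenance, purpose, and mitigations
    for one model; `decision_log` holds that subject's decision records.
    """
    return {
        "model_purpose": governance["purpose"],
        "data_provenance": governance["data_sources"],
        "risk_mitigations": governance["mitigations"],
        "decisions_about_you": decision_log,
        "how_to_contest": governance["contest_channel"],
    }
```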
The order instructs the Department of Health and Human Services (HHS) to develop a safety program for AI in healthcare and to evaluate algorithmic bias. Healthcare providers and vendors should integrate DSAR processes with patient access rights under HIPAA and emerging state privacy laws. Financial institutions must align with CFPB guidance on fair lending and provide DSAR transparency about AI-driven credit decisions. Employers deploying AI for hiring or surveillance should prepare to deliver DSAR responses that address equal employment opportunity considerations and demonstrate compliance with Department of Labor guidelines.
Workforce, innovation, and international coordination
The order calls for programs to support workers affected by AI, promote responsible innovation, and coordinate internationally through forums such as the G7, OECD, and Global Partnership on AI. Organizations should document workforce impact assessments, reskilling initiatives, and DSAR processes for employees querying algorithmic decisions. Maintain records of international data transfers supporting AI development, ensuring compliance with cross-border data protection regimes and DSAR reciprocity agreements.
Innovation initiatives, including the establishment of a National AI Research Resource (NAIRR) pilot, will require governance over research data and compute. Research institutions should implement data use agreements, ethical review boards, and DSAR channels for participants contributing data to AI research. For collaborations with international partners, align privacy and DSAR practices with foreign laws, documenting safeguards and consent mechanisms.
Metrics, assurance, and reporting
Develop KPIs to track AI governance maturity: percentage of AI systems inventoried, number of models subject to red-teaming, DSAR response times involving AI, frequency of incidents reported to DHS, and adoption of privacy-enhancing technologies. Present metrics to executive leadership and include highlights in ESG or sustainability reports. Prepare for external assurance—through internal audit or third-party assessments—to validate AI controls, documentation, and DSAR responsiveness.
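These KPIs reduce to ratios and durations that are simple to compute; the sketch below derives a few of them for an executive dashboard, again with assumed field names and placeholder figures.

```python
from datetime import timedelta

def governance_kpis(systems_total: int, systems_inventoried: int,
                    red_teamed: int, dsar_durations: list[timedelta]) -> dict:
    """Compute a few EO 14110 governance KPIs for leadership reporting."""
    avg_dsar = (sum(dsar_durations, timedelta()) / len(dsar_durations)
                if dsar_durations else timedelta())
    return {
        "pct_inventoried": 100.0 * systems_inventoried / max(systems_total, 1),
        "models_red_teamed": red_teamed,
        "avg_ai_dsar_response_days": avg_dsar.days,
    }

# Usage with placeholder figures.
print(governance_kpis(40, 37, 12, [timedelta(days=18), timedelta(days=25)]))
```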
Agencies will publish public dashboards and reports on AI implementation progress. Organizations should align their transparency reporting with federal expectations, providing structured data on AI use cases, risk mitigations, and DSAR handling. Engage with agency rulemaking by submitting comments, sharing best practices, and requesting clarifications where necessary.
Next steps
Immediately form an Executive Order 14110 response team, update AI inventories, and brief the board on strategic implications. Within six months, complete red-team evaluations for high-impact models, integrate DSAR processes with AI governance tools, and align privacy-enhancing technology pilots with agency programs. Over the next year, monitor rulemaking from Commerce, DHS, NIST, FTC, and other agencies, adjusting controls and documentation accordingly. Proactive governance, disciplined implementation, and rights-respecting DSAR practices will position organizations to meet Executive Order 14110 requirements and build public trust in responsible AI.