
White House Secures Voluntary AI Safety Commitments — July 21, 2023

The White House’s 21 July 2023 voluntary AI commitments from seven frontier developers raise governance expectations for safety testing, watermarking, and data stewardship, prompting enterprises to harden implementation playbooks and DSAR disclosures.


On 21 July 2023, the White House announced that seven leading AI companies—Amazon, Anthropic, Google, Inflection AI, Meta, Microsoft, and OpenAI—signed voluntary commitments to advance safe, secure, and trustworthy AI. The commitments, negotiated by the Biden-Harris Administration, include obligations to conduct internal and external security testing before releasing advanced models, invest in cybersecurity and insider-threat safeguards, report discovered vulnerabilities, watermark AI-generated content, and publicly disclose limitations and societal risks. Although non-binding, the commitments establish a de facto governance benchmark for the wider industry and foreshadow potential regulation. Boards and executives at organisations deploying or procuring AI systems must align their governance, implementation roadmaps, and DSAR processes with these expectations to mitigate legal, reputational, and contractual risk.

The commitments cover three pillars: security, responsibility, and trust. Under security, companies pledged to perform internal red-teaming and external testing prior to release, share best practices with peers and governments, and ensure strong physical and cybersecurity controls for model weights. Responsibility commitments include prioritising research on societal risks such as bias and discrimination, facilitating third-party discovery and reporting of vulnerabilities, and developing privacy safeguards. Trust commitments focus on transparency—watermarking AI-generated audio and visual content, reporting on capabilities and limitations, and prioritising solutions for societal challenges.

Governance actions

Boards should incorporate the White House commitments into AI oversight frameworks. Even if an organisation was not among the original signatories, investors, customers, and regulators may expect similar standards. Directors should request an assessment of current AI governance policies against the commitments, covering model inventory, risk classification, testing protocols, and disclosure processes. Audit and risk committees should mandate regular reporting on AI incidents, third-party assessments, and compliance with forthcoming regulations such as the EU AI Act, the UK AI assurance framework, and US sectoral rules (e.g., financial services model risk management guidance). Governance policies must articulate roles and responsibilities—identifying accountable executives for AI safety, privacy, and DSAR compliance—and ensure conflict-of-interest management for research teams who may face commercial pressure to deploy models quickly.

Procurement and vendor-management functions should update due-diligence questionnaires to confirm whether AI suppliers align with the White House commitments. Contracts should require disclosure of testing methodologies, incident reporting timelines, watermarking capabilities, and cooperation with DSAR requests involving AI-generated content. For enterprises building custom models using cloud platforms, service-level agreements should include access to safety evaluation results, model cards, and red-team findings. Boards should also oversee participation in industry alliances—such as the Frontier Model Forum—to stay abreast of evolving best practices.

Implementation roadmap

Operational teams should translate the commitments into actionable controls. Security testing requires establishing red-team programmes that simulate misuse scenarios (prompt injection, data exfiltration, adversarial attacks) and documenting outcomes. Organisations should integrate these tests into change-management processes, gating model releases until remediation is verified. Implementation should leverage frameworks such as NIST's AI Risk Management Framework and the OWASP Top 10 for Large Language Model Applications to structure test cases. Collaboration agreements with third-party researchers must outline vulnerability disclosure timelines, safe harbour provisions, and DSAR coordination when test data involves personal information.
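To make the release-gating step concrete, the sketch below shows how red-team findings could block a model release inside a change-management workflow. The finding fields, severity scale, and blocking threshold are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch: gate a model release on verified remediation of red-team
# findings. Field names, severity scale, and the blocking threshold are
# illustrative assumptions, not a standard schema.
from dataclasses import dataclass
from enum import IntEnum


class Severity(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4


@dataclass
class RedTeamFinding:
    finding_id: str
    scenario: str              # e.g. "prompt injection", "data exfiltration"
    severity: Severity
    remediated: bool
    remediation_verified: bool


def release_gate(findings: list[RedTeamFinding],
                 block_at: Severity = Severity.MEDIUM) -> tuple[bool, list[str]]:
    """Return (approved, blocking finding IDs) for a candidate release."""
    blocking = [
        f.finding_id for f in findings
        if f.severity >= block_at and not (f.remediated and f.remediation_verified)
    ]
    return (not blocking, blocking)


findings = [
    RedTeamFinding("RT-001", "prompt injection", Severity.HIGH, True, True),
    RedTeamFinding("RT-002", "data exfiltration", Severity.CRITICAL, True, False),
]
approved, blockers = release_gate(findings)
print(f"release approved: {approved}; blocked by: {blockers}")
# release approved: False; blocked by: ['RT-002']
```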

Watermarking and provenance commitments demand technical investments. Companies should evaluate watermarking standards such as the Coalition for Content Provenance and Authenticity (C2PA) and integrate provenance metadata into content management systems. Governance should require periodic verification that watermarks survive typical transformations and are documented for DSAR responses when individuals contest the origin of media. For textual outputs, firms can implement cryptographic signing or metadata logging to demonstrate provenance.
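For the textual-output case, one possible approach is a signed provenance record logged alongside each generation. The sketch below uses an HMAC over a metadata record; the field names, key handling, and signing scheme are assumptions and are not part of the C2PA specification, which targets media assets.

```python
# Minimal sketch: log and verify a signed provenance record for a text output.
# Key handling, field names, and the HMAC scheme are illustrative assumptions;
# a production system would source the key from a KMS and follow an agreed spec.
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"replace-with-a-managed-secret"


def provenance_record(output_text: str, model_version: str,
                      safety_filters: list[str]) -> dict:
    """Build and sign a provenance record for one generated output."""
    record = {
        "output_sha256": hashlib.sha256(output_text.encode()).hexdigest(),
        "model_version": model_version,
        "safety_filters": safety_filters,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_record(record: dict) -> bool:
    """Recompute the HMAC over the unsigned fields and compare signatures."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])


rec = provenance_record("Generated summary text", "model-v1.3", ["toxicity-filter"])
print(verify_record(rec))  # True
```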

To deliver transparency reporting, organisations must build documentation pipelines: model cards detailing intended use, training data sources, evaluation results, and known limitations; system cards describing deployment contexts; and impact assessments quantifying potential harms. These documents should be version-controlled and linked to DSAR knowledge bases so privacy teams can respond to individuals requesting insight into automated decision-making. Implementation teams should also establish release notes and risk registers that capture post-deployment incidents and mitigation actions.
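A model card can be held as a structured, version-controlled record that the DSAR knowledge base references by identifier. The sketch below illustrates one possible shape; the field names mirror the elements listed above (intended use, training data sources, evaluation results, limitations), but the exact schema and example values are assumptions.

```python
# Minimal sketch: a model card as a structured record that can be serialised,
# version-controlled, and referenced from a DSAR knowledge base. Field names
# and example values are illustrative assumptions.
from dataclasses import asdict, dataclass, field
import json


@dataclass
class ModelCard:
    model_id: str
    version: str
    intended_use: str
    training_data_sources: list[str]
    evaluation_results: dict[str, float]
    known_limitations: list[str]
    deployment_contexts: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialise for storage in a documentation repository."""
        return json.dumps(asdict(self), indent=2, sort_keys=True)


card = ModelCard(
    model_id="support-summariser",
    version="1.3.0",
    intended_use="Summarising customer support tickets for internal triage",
    training_data_sources=["licensed-support-corpus-v2"],
    evaluation_results={"rouge_l": 0.41, "toxicity_rate": 0.002},
    known_limitations=["Not evaluated on non-English tickets"],
    deployment_contexts=["internal support console"],
)
print(card.to_json())
```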

Data governance is critical. The commitments emphasise privacy and security safeguards for training data and model weights. Enterprises should map data lineage, implement differential privacy or federated learning where feasible, and enforce role-based access controls for sensitive datasets. DSAR procedures must encompass AI training corpora and fine-tuning data, clarifying whether personal data can be located, anonymised, or deleted. Where models ingest public web data, organisations should maintain records of legal bases for processing and opt-out mechanisms, referencing the commitments’ focus on protecting privacy.
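One way to make training-data lineage actionable for DSARs is a per-dataset lineage record capturing source, legal basis, and the pseudonymous subjects it contains, so a request can be scoped quickly. The record shape and in-memory lookup below are illustrative assumptions; real deployments would back this with a data catalogue.

```python
# Minimal sketch: data lineage records used to scope a DSAR across training
# and fine-tuning corpora. Fields and lookup are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class LineageRecord:
    dataset_id: str
    source: str               # e.g. "public web crawl", "CRM export"
    legal_basis: str          # e.g. "legitimate interest", "consent"
    contains_personal_data: bool
    subject_ids: set[str]     # pseudonymous identifiers of data subjects
    opt_out_honoured: bool


def locate_subject(records: list[LineageRecord], subject_id: str) -> list[str]:
    """Return dataset IDs containing the subject, to scope location or erasure."""
    return [
        r.dataset_id for r in records
        if r.contains_personal_data and subject_id in r.subject_ids
    ]


records = [
    LineageRecord("crm-export-2023", "CRM export", "consent", True, {"subj-42"}, True),
    LineageRecord("web-crawl-2022", "public web crawl", "legitimate interest", True, set(), True),
]
print(locate_subject(records, "subj-42"))  # ['crm-export-2023']
```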

DSAR and transparency obligations

The commitments’ emphasis on transparency and user trust increases pressure to provide meaningful responses to DSARs involving AI-generated content or automated decisions. Privacy teams should update DSAR playbooks to include steps for retrieving prompts, outputs, and audit logs associated with a requester. They should coordinate with model engineering teams to supply explanations or contestation options when individuals believe AI-driven outcomes are inaccurate or discriminatory. Where watermarking enables provenance tracking, organisations should capture metadata (timestamp, model version, safety filters applied) in DSAR archives. Legal counsel must ensure responses balance transparency with protection of trade secrets and security-sensitive information, documenting redaction rationales.
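A DSAR response bundle for AI interactions could be assembled from interaction audit logs keyed to the requester, with redactions applied and their rationale recorded. The log schema and redaction handling in the sketch below are illustrative assumptions.

```python
# Minimal sketch: assemble a DSAR bundle of a requester's AI interactions,
# applying documented redactions for security-sensitive fragments. The audit
# log schema and redaction rules are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class InteractionLog:
    requester_id: str
    timestamp: datetime
    prompt: str
    output: str
    model_version: str
    safety_filters: list[str]


def build_dsar_bundle(logs: list[InteractionLog], requester_id: str,
                      redact_terms: tuple[str, ...] = ()) -> list[dict]:
    """Collect one requester's interactions with provenance metadata."""
    bundle = []
    for log in logs:
        if log.requester_id != requester_id:
            continue
        prompt, redactions = log.prompt, []
        for term in redact_terms:  # e.g. proprietary system-prompt fragments
            if term in prompt:
                prompt = prompt.replace(term, "[REDACTED]")
                redactions.append("security-sensitive fragment withheld")
        bundle.append({
            "timestamp": log.timestamp.isoformat(),
            "prompt": prompt,
            "output": log.output,
            "model_version": log.model_version,
            "safety_filters": log.safety_filters,
            "redaction_rationales": redactions,
        })
    return bundle


logs = [InteractionLog("user-7", datetime.now(timezone.utc),
                       "SYSTEM-RULES Summarise my account notes", "Summary...",
                       "model-v1.3", ["pii-filter"])]
print(build_dsar_bundle(logs, "user-7", redact_terms=("SYSTEM-RULES",)))
```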

Organisations operating in multiple jurisdictions should harmonise DSAR approaches to satisfy GDPR, CCPA, Canada’s CPPA proposals, and other privacy regimes. Automated tools can assist by tagging personal data within training datasets and by orchestrating deletion or suppression requests. Enterprises should also prepare customer-facing transparency reports summarising AI incidents, red-team exercises, and progress against the White House commitments—mirroring the periodic updates the administration expects from signatories.
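For the automated tagging mentioned above, a simple pattern-based pass can flag obvious personal-data categories in training records before jurisdiction-specific handling. The patterns below are deliberately simplistic assumptions; production tooling would rely on dedicated PII-detection libraries and locale-aware rules.

```python
# Minimal sketch: pattern-based tagging of obvious personal-data categories in
# training records, to support location and suppression requests. The regexes
# are simplistic assumptions; dedicated PII-detection tooling does far more.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}


def tag_record(text: str) -> list[str]:
    """Return the personal-data categories detected in one training record."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(text)]


print(tag_record("Contact jane.doe@example.com or +44 20 7946 0958"))
# ['email', 'phone']
```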

Monitoring and assurance

To sustain compliance, internal audit and risk functions should establish AI assurance programmes. Auditors can evaluate whether red-team findings are tracked through remediation, whether watermarking controls operate effectively, and whether DSAR responses referencing AI systems meet statutory deadlines. Metrics should include the number of models covered by the commitments, percentage of models with completed risk assessments, time to resolve identified vulnerabilities, and DSAR volumes involving AI. Organisations should conduct tabletop exercises simulating coordinated disclosures—combining vulnerability reporting, regulatory engagement, and DSAR fulfilment—to test crisis management.
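The metrics listed above can be computed from simple tracking records and reported to audit and risk committees. The dictionary keys in the sketch below are illustrative assumptions about how models, vulnerabilities, and DSARs are logged.

```python
# Minimal sketch: compute the assurance KPIs named above from tracking records.
# The record keys are illustrative assumptions.
from datetime import date


def assurance_metrics(models: list[dict], vulns: list[dict], dsars: list[dict]) -> dict:
    """Summarise AI assurance KPIs for audit and risk reporting."""
    covered = [m for m in models if m["in_scope_for_commitments"]]
    assessed = [m for m in covered if m["risk_assessment_complete"]]
    resolved = [v for v in vulns if v["resolved_on"] is not None]
    days = [(v["resolved_on"] - v["reported_on"]).days for v in resolved]
    return {
        "models_covered": len(covered),
        "pct_with_risk_assessment": round(100 * len(assessed) / len(covered), 1) if covered else 0.0,
        "mean_days_to_resolve": round(sum(days) / len(days), 1) if days else None,
        "dsar_volume_involving_ai": sum(1 for d in dsars if d["involves_ai_system"]),
    }


metrics = assurance_metrics(
    models=[{"in_scope_for_commitments": True, "risk_assessment_complete": True},
            {"in_scope_for_commitments": True, "risk_assessment_complete": False}],
    vulns=[{"reported_on": date(2023, 8, 1), "resolved_on": date(2023, 8, 15)}],
    dsars=[{"involves_ai_system": True}, {"involves_ai_system": False}],
)
print(metrics)
```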

External assurance may soon be expected; companies can engage independent assessors to review AI safety controls, similar to SOC 2 or ISO/IEC 42001 audits. Investor relations and ESG teams should align disclosures with sustainability reporting frameworks that increasingly reference AI governance. By implementing robust controls now, enterprises can demonstrate readiness for forthcoming legislation and build trust with stakeholders who view the White House commitments as a baseline for responsible AI conduct.

