White House Secures Voluntary AI Safety Commitments — July 21, 2023
Seven leading AI companies agreed to voluntary commitments covering security testing, transparency, and responsible deployment of advanced AI systems.
Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI pledged to conduct internal and external red-teaming, invest in cybersecurity and insider-threat safeguards for unreleased model weights, share best practices across the industry, and watermark AI-generated content. They also committed to facilitating third-party discovery and reporting of vulnerabilities and to publicly reporting their systems' capabilities and limitations.
- Security testing. Companies will test models before release and share risk information with governments and researchers.
- Transparency. Commitments include developing watermarking systems so users know when content is AI-generated, publicly reporting system capabilities and limitations, and flagging appropriate and inappropriate uses.
- Responsible deployment. Firms pledged to prioritize societal benefit, mitigate harmful bias and discrimination, and protect privacy throughout the AI lifecycle.
Enterprise governance teams should map these voluntary commitments to supplier due-diligence questions and contract language; one possible mapping is sketched below.
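As a rough illustration of that mapping, the Python sketch below encodes the three commitment areas as a due-diligence checklist and flags the questions a supplier has not yet answered. The area names, question wording, and the ExampleVendor supplier are assumptions made for illustration, not official text from the commitments.

```python
"""Hypothetical sketch: mapping the July 2023 voluntary AI commitments to
supplier due-diligence checks. Labels and question wording are illustrative."""

from dataclasses import dataclass, field

# Illustrative mapping: commitment area -> due-diligence questions for an AI supplier.
COMMITMENT_CHECKLIST: dict[str, list[str]] = {
    "security_testing": [
        "Does the supplier perform internal and external red-teaming before release?",
        "Is risk information shared with governments or independent researchers?",
    ],
    "transparency": [
        "Is AI-generated content watermarked or otherwise labeled?",
        "Are model capabilities and limitations publicly documented?",
    ],
    "responsible_deployment": [
        "Are bias and privacy assessments performed across the AI lifecycle?",
    ],
}


@dataclass
class SupplierAssessment:
    """Tracks which due-diligence questions a supplier has answered affirmatively."""
    supplier: str
    satisfied: set[str] = field(default_factory=set)

    def gaps(self) -> dict[str, list[str]]:
        """Return unanswered questions grouped by commitment area."""
        return {
            area: [q for q in questions if q not in self.satisfied]
            for area, questions in COMMITMENT_CHECKLIST.items()
            if any(q not in self.satisfied for q in questions)
        }


if __name__ == "__main__":
    # Hypothetical supplier that has only confirmed the watermarking question.
    assessment = SupplierAssessment(
        supplier="ExampleVendor",
        satisfied={COMMITMENT_CHECKLIST["transparency"][0]},
    )
    for area, open_questions in assessment.gaps().items():
        print(area)
        for question in open_questions:
            print(f"  - {question}")
```

In practice the checklist would be maintained alongside procurement templates, so that each open question can be turned directly into a contract clause or a request for supplier evidence.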