United States and United Kingdom Sign AI Safety Cooperation MoU — April 1, 2024
The U.S. and U.K. governments signed a memorandum of understanding to collaborate on frontier AI safety research, testing, and standards through their national AI safety institutes.
The agreement enables the U.S. AI Safety Institute at NIST and the U.K. AI Safety Institute to share testing methodologies, conduct joint evaluations, and exchange staff. It also establishes a framework for developing interoperable safety benchmarks to manage cross-border AI risks.
- Joint evaluations. The institutes will align red-teaming, sandboxing, and capability assessments for advanced AI models.
- Standards collaboration. Both governments committed to co-develop international safety guidance and measurement tools.
- Information sharing. The MoU supports reciprocal secondees and structured exchanges on emerging AI hazards.
Vendors preparing for multinational safety expectations should monitor these workstreams to anticipate converging audit-evidence requirements across the two jurisdictions.