
United States and United Kingdom Sign AI Safety Cooperation MoU — April 1, 2024

The U.S. and U.K. governments signed a memorandum of understanding to collaborate on frontier AI safety research, testing, and standards through their national AI safety institutes.


The agreement enables the U.S. AI Safety Institute at NIST and the U.K. AI Safety Institute to share testing methodologies, conduct joint evaluations, and exchange staff. It also establishes a framework for developing interoperable safety benchmarks to manage cross-border AI risks.

  • Joint evaluations. The institutes will align red-teaming, sandboxing, and capability assessments for advanced AI models.
  • Standards collaboration. Both governments committed to co-develop international safety guidance and measurement tools.
  • Information sharing. The MoU supports reciprocal staff secondments and structured exchanges on emerging AI hazards.

Vendors preparing for multinational safety expectations should monitor these workstreams to anticipate converging audit-evidence requirements across both jurisdictions.

