
United States and United Kingdom Sign AI Safety Cooperation MoU — April 1, 2024

The US and UK signed an AI safety MoU in April 2024, formalizing cooperation between their AI safety institutes. Under the agreement, the institutes will share research, align on evaluation methods, and run joint testing of frontier AI models. It is the beginning of what could become a coordinated international approach to AI safety, or at least a US-UK-aligned one.


The U.S. and U.K. governments signed a memorandum of understanding on 1 April 2024 to collaborate on frontier AI safety research, testing, and standards through their national AI safety institutes. The agreement enables the U.S. AI Safety Institute at NIST and the U.K. AI Safety Institute to share testing methodologies, conduct joint evaluations, and exchange staff. It also sets up a framework for developing interoperable safety benchmarks to manage cross-border AI risks. This bilateral partnership represents the most significant international AI safety cooperation agreement to date, building upon commitments made at the UK AI Safety Summit at Bletchley Park in November 2023 and reflecting shared concerns about advanced AI system risks.

Strategic Context and Objectives

Both the United States and United Kingdom have established dedicated AI safety institutes tasked with understanding and mitigating risks from frontier AI systems—advanced models whose capabilities approach or exceed human performance in economically valuable tasks. The U.S. AI Safety Institute operates within the National Institute of Standards and Technology, building upon NIST's expertise in measurement science, standards development, and risk assessment.

The U.K. AI Safety Institute was established following the Bletchley Park summit with a mandate to evaluate frontier AI systems and develop safety science. The MoU formalizes cooperation that enables both institutes to draw on each other's capabilities, avoid duplicative effort, and develop consistent approaches to AI safety evaluation that companies operating internationally can satisfy without navigating divergent national requirements.

Joint Evaluation Framework

The institutes will align red-teaming, sandboxing, and capability assessments for advanced AI models developed by leading laboratories. Joint evaluations enable pooling of expertise, testing infrastructure, and analytical resources that neither country could deploy independently at the same scale.

Evaluation methodologies will address dangerous capabilities including autonomous operation, cyber offense, biological weapons development assistance, and deception or manipulation capabilities. Sandboxing arrangements enable testing advanced models in controlled environments where potential harms can be contained while researchers probe system boundaries. Harmonized evaluation criteria reduce burden on AI developers while ensuring consistent safety standards across key markets.
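As a purely illustrative sketch, and not anything specified in the MoU, the value of harmonized criteria is easiest to see as a shared, machine-readable evaluation record that either institute could produce and the other could consume without re-running tests. In the minimal Python sketch below, every field name, category, and value is hypothetical.

    from dataclasses import dataclass, field
    from datetime import date
    from enum import Enum

    class RiskDomain(Enum):
        # Dangerous-capability domains named in the briefing; the labels themselves are illustrative.
        AUTONOMOUS_OPERATION = "autonomous_operation"
        CYBER_OFFENSE = "cyber_offense"
        BIOLOGICAL_ASSISTANCE = "biological_assistance"
        DECEPTION_MANIPULATION = "deception_manipulation"

    @dataclass
    class EvaluationRecord:
        # Hypothetical shared record for a joint frontier-model evaluation.
        model_id: str                 # developer-assigned identifier for the evaluated model
        evaluating_institute: str     # e.g. "US AISI" or "UK AISI"
        domain: RiskDomain            # capability area under test
        methodology_version: str      # version of the harmonized test suite used
        sandboxed: bool               # whether the run occurred in a contained environment
        score: float                  # normalized 0-1 score on the shared benchmark
        evaluated_on: date
        notes: list[str] = field(default_factory=list)

    # Either institute could exchange records like this instead of duplicating test runs.
    record = EvaluationRecord(
        model_id="frontier-model-x",
        evaluating_institute="UK AISI",
        domain=RiskDomain.CYBER_OFFENSE,
        methodology_version="joint-eval-0.1",
        sandboxed=True,
        score=0.42,
        evaluated_on=date(2024, 6, 1),
        notes=["Illustrative data only."],
    )
    print(record.domain.value, record.score)

A record format along these lines is one plausible way to make "consistent approaches" operational; the actual joint methodologies have not been published in this form.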

Standards Development Cooperation

Both governments committed to co-develop international safety guidance and measurement tools applicable to frontier AI systems. Standards collaboration addresses measurement methodologies for AI capabilities and risks, safety benchmarks enabling comparison across models and developers, good practices for responsible development and deployment, and incident reporting and information sharing frameworks. This cooperation positions U.S. and U.K. approaches as influential inputs to international standards development in bodies like ISO, IEC, and ITU. Harmonized national standards reduce market fragmentation while maintaining rigorous safety expectations.
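To illustrate why interoperable benchmarks matter for comparison across models and developers, the minimal sketch below places hypothetical results on a common scale and flags anything exceeding an illustrative review threshold. All model names, scores, and thresholds are invented; real thresholds would come from the published guidance.

    # Hypothetical results on a shared safety benchmark; every figure here is invented.
    shared_results = {
        "developer-a/model-1": {"cyber_offense": 0.31, "autonomous_operation": 0.18},
        "developer-b/model-2": {"cyber_offense": 0.47, "autonomous_operation": 0.22},
    }

    # Illustrative review thresholds, one per capability domain.
    review_thresholds = {"cyber_offense": 0.40, "autonomous_operation": 0.50}

    # Because the scores share one scale, either institute can flag models for deeper review.
    for model, scores in shared_results.items():
        flagged = [d for d, s in scores.items() if s >= review_thresholds[d]]
        if flagged:
            print(f"{model}: further review suggested for {', '.join(flagged)}")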

Information Sharing and Personnel Exchange

The MoU supports reciprocal secondments and structured exchanges on emerging AI hazards between the two institutes. Personnel exchanges enable knowledge transfer on evaluation methodologies, institutional practices, and emerging research findings. Information sharing arrangements address the treatment of confidential business information obtained through voluntary or mandatory evaluations. Structured communication channels enable rapid exchange of information on newly identified risks, vulnerabilities, or concerning capabilities discovered during evaluations, as sketched below. Coordination on emerging hazards helps both governments respond effectively to fast-changing AI capabilities.
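A minimal sketch of what a structured channel could carry, assuming a hypothetical shared notification format that the MoU does not actually publish, is shown below; every field name and value is illustrative.

    import json
    from datetime import datetime, timezone

    # Hypothetical emerging-hazard alert exchanged between the institutes.
    hazard_alert = {
        "alert_id": "haz-2024-0001",
        "reported_by": "US AISI",
        "shared_with": ["UK AISI"],
        "capability_of_concern": "autonomous_operation",
        "severity": "elevated",                        # e.g. informational | elevated | urgent
        "summary": "Unexpected tool-use behaviour observed during a sandboxed evaluation.",
        "contains_confidential_business_info": True,   # governs handling under the information-sharing terms
        "reported_at": datetime.now(timezone.utc).isoformat(),
    }

    # A machine-readable format lets the receiving institute parse, triage, and route alerts quickly.
    print(json.dumps(hazard_alert, indent=2))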

Industry Implications

Organizations developing or deploying advanced AI systems should monitor developments in bilateral safety cooperation and prepare for potential evolution of evaluation requirements. Joint U.S.-U.K. approaches to AI safety evaluation may become de facto international standards that other jurisdictions reference or adopt. Companies seeking access to both markets should engage constructively with both safety institutes, understanding that cooperation reduces the burden of satisfying divergent national requirements. Track harmonization efforts between NIST and U.K. AI safety frameworks, positioning compliance programs to satisfy both jurisdictions efficiently through unified evaluation and documentation approaches.
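One pragmatic way to track that harmonization is a simple crosswalk from internal controls to each jurisdiction's published guidance. The sketch below is purely hypothetical: the control names, framework references, and file names are placeholders, not identifiers either institute has published.

    # Hypothetical crosswalk from internal AI-safety controls to US and UK framework references.
    control_crosswalk = {
        "pre-deployment-red-team": {
            "us_reference": "NIST AI RMF - MEASURE (placeholder clause)",
            "uk_reference": "UK AISI evaluation guidance (placeholder clause)",
            "evidence": ["red_team_report.pdf"],
        },
        "incident-reporting-process": {
            "us_reference": "NIST AI RMF - MANAGE (placeholder clause)",
            "uk_reference": "UK AISI incident guidance (placeholder clause)",
            "evidence": [],
        },
    }

    # Flag controls that still lack evidence for either jurisdiction.
    for control, mapping in control_crosswalk.items():
        if not mapping["evidence"]:
            print(f"{control}: evidence still needed")

Maintaining a single evidence set mapped to both sets of references is one practical reading of "unified evaluation and documentation approaches" for a compliance team.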

Broader International Implications

The U.S.-U.K. partnership sets up a template for bilateral AI safety cooperation that other aligned democracies may seek to join or emulate. The G7 Hiroshima AI Process and subsequent developments show interest from Japan, Canada, France, Germany, and Italy in coordinated approaches to AI governance. The EU AI Act establishes a complementary regulatory framework that U.S.-U.K. safety cooperation may inform or align with over time. Organizations operating globally should monitor how bilateral arrangements evolve into broader multilateral frameworks that could eventually establish formal international AI safety standards.

Future Outlook and Monitoring

Affected organizations should monitor joint initiatives, track published safety frameworks, and assess how collaborative research outputs apply to their AI safety programs. Industry engagement through working groups, standards bodies, and peer networks provides early insight into emerging expectations and good practices. Active participation can influence outcomes and helps ensure organizational interests are considered as bilateral and multilateral AI safety frameworks develop.



Further reading

  1. UK and US sign Memorandum of Understanding to strengthen partnership on AI safety — UK Department for Science, Innovation and Technology
  2. Fact Sheet: United States and United Kingdom Announce Intent to Establish Bilateral Partnership on AI Safety — Office of Science and Technology Policy
  3. ISO/IEC 42001:2023 — Artificial Intelligence Management System — International Organization for Standardization