United States and United Kingdom Sign AI Safety Cooperation MoU — April 1, 2024
The U.S. and U.K. governments signed a memorandum of understanding to collaborate on frontier AI safety research, testing, and standards through their national AI safety institutes.
The agreement enables the U.S. AI Safety Institute at NIST and the U.K. AI Safety Institute to share testing methodologies, conduct joint evaluations, and exchange staff. It also establishes a framework for developing interoperable safety benchmarks to manage cross-border AI risks.
- Joint evaluations. The institutes will align red-teaming, sandboxing, and capability assessments for advanced AI models.
- Standards collaboration. Both governments committed to co-develop international safety guidance and measurement tools.
- Information sharing. The MoU supports reciprocal secondees and structured exchanges on emerging AI hazards.
Vendors preparing for multinational safety expectations should monitor these workstreams to anticipate converging audit-evidence requirements across both jurisdictions.