
AI Governance Briefing — August 19, 2024

U.S. and UK AI Safety Institutes formalize a testing partnership, aligning evaluation protocols and establishing compute sharing for high-risk models.

[Figure: timeline of source publication timestamps, sized by credibility.]

Executive briefing: On August 19, 2024 the U.S. Department of Commerce and the UK Department for Science, Innovation and Technology signed a memorandum of understanding linking the U.S. AI Safety Institute and the UK AI Safety Institute to co-develop model evaluations and share testbed infrastructure.

Key governance signals

  • Joint testing protocols. The institutes agreed to publish interoperable evaluation suites for frontier model robustness, including red-team playbooks and interpretability benchmarks.
  • Compute cooperation. The MOU commits both governments to provide reciprocal access to secure compute clusters for third-party safety researchers vetted by the institutes.
  • Industry participation. Frontier model developers (Anthropic, Google, Microsoft/OpenAI) will pilot the shared test protocols ahead of the 2025 AI Seoul Summit progress report.

Control alignment

  • NIST AI RMF 1.0. Update Measure and Manage functions to reflect the joint institute metrics, ensuring evaluation coverage and reporting align with cross-border expectations.
  • ISO/IEC 42001. Incorporate institute-issued evaluation guidance into AI management system controls for risk assessment, model release, and incident escalation.

Detection and response priorities

  • Map existing red-team pipelines to the institutes’ evaluation templates; prioritize gap remediation where safety test coverage diverges.
  • Establish data-handling agreements for any compute sharing with the institutes, ensuring export-control compliance and logging of test artifacts.
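The mapping exercise above can be sketched as a simple coverage diff. This is a minimal illustration only: the category names and the template taxonomy below are hypothetical assumptions for demonstration, not the institutes' published evaluation schema.

```python
# Hypothetical gap check: compare internal red-team coverage against an
# illustrative evaluation-template category list. All category names are
# assumptions for demonstration, not the institutes' actual taxonomy.
TEMPLATE_CATEGORIES = {
    "robustness", "jailbreak-resistance", "interpretability",
    "dangerous-capability", "autonomy",
}

# Categories your internal red-team pipelines already exercise (example data).
internal_coverage = {
    "robustness", "jailbreak-resistance", "interpretability",
}

# Set difference yields the categories with no internal test coverage.
gaps = sorted(TEMPLATE_CATEGORIES - internal_coverage)
for category in gaps:
    print(f"GAP: no internal red-team pipeline for '{category}'")
```

In practice the template categories would come from the institutes' published evaluation suites rather than a hard-coded set, and each gap would feed a remediation ticket rather than a print statement.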

Enablement moves

  • Nominate internal model evaluation leads to participate in institute workshops and contribute feedback on shared benchmarks.
  • Coordinate with legal and policy teams to align transparency disclosures with future joint progress reports.

Zeph Tech aligns enterprise AI governance with cross-border institute benchmarks to keep safety evaluation programmes regulator-ready.
