Bletchley Declaration on AI Safety — November 1, 2023
The Bletchley Declaration united 28 countries and the European Union around a shared understanding of frontier AI risk. For enterprises, it means governance teams must coordinate international safety commitments, operations leaders must build evaluation and reporting pipelines, and privacy offices must manage DSAR expectations as transparency pledges expand.
Executive briefing: On 1 November 2023 governments from 28 countries and the European Union signed the Bletchley Declaration at the UK AI Safety Summit. Signatories—including the United States, United Kingdom, China, Canada, France, Germany, Italy, Japan, the Republic of Korea, Singapore, Australia, India, and Gulf states—agreed on a common definition of frontier AI risk, committed to scientific cooperation on safety research, and endorsed iterative policy development anchored in evidence-based evaluations. The declaration recognises that advanced AI can pose “catastrophic” risks if misused or uncontrolled, emphasises the need for public-sector capability to evaluate models before and after deployment, and commits to follow-up summits in the Republic of Korea (mid‑2024) and France (late 2024). For enterprises, the declaration signals the contours of emerging international regulation: governance teams must align AI strategies with multilateral risk-sharing frameworks, delivery teams need to prepare evaluation pipelines that can support cross-border oversight, and privacy officers should brace for expanded transparency obligations that intersect with DSAR workflows.
Key commitments in the declaration
The Bletchley Declaration outlines five pillars:
- Shared risk understanding. Governments will develop collective definitions of frontier AI systems and the risks they pose, referencing capabilities that could enable cyber warfare, biological misuse, or erosion of democratic processes.
- Scientific collaboration. Public and private research institutions are encouraged to share evaluation methodologies, safety tools, and best practices, while respecting intellectual property and national security constraints.
- Evidence-based policy. Signatories commit to iterative regulation informed by empirical testing, encouraging developers to provide access for external evaluations and to publish safety reports.
- International coordination. Future summits and working groups will align safety standards, with Korea and France hosting follow-up events to maintain momentum.
- Inclusive growth. The declaration highlights the importance of ensuring AI benefits are distributed globally, including support for developing nations to build safety capacity.
While non-binding, the declaration sets expectations that companies will cooperate with government testing regimes, share safety metrics, and maintain incident response channels that function across borders. The inclusion of China and the EU alongside Western allies underscores that global supply chains and AI markets must prepare for harmonised—or at least interoperable—requirements.
Governance implications
Boards and executive committees should incorporate the Bletchley commitments into AI governance frameworks. Directors should require management to map how existing policies address international cooperation, safety evaluations, and transparency obligations. Establish cross-border governance forums that include legal, public policy, security, and product leaders to monitor developments from upcoming Korean and French summits. These forums should maintain inventories of government partnerships, model evaluation requests, and incident-sharing obligations to ensure consistent oversight.
Governance teams must also prepare for potential treaty-like arrangements. The declaration foreshadows harmonised risk classifications and evaluation protocols; boards should encourage participation in industry consortia (e.g., the Frontier Model Forum, AI Alliance) to influence standards and share best practices. Risk committees ought to update enterprise risk appetite statements to address frontier AI, specifying thresholds for acceptable external testing, conditions for sharing model weights, and processes for suspending deployments if regulators raise safety concerns.
Implementation roadmap
Operations and engineering leaders should translate the declaration’s themes into actionable programmes:
- Safety evaluation infrastructure. Build testing environments that allow governments or accredited third parties to assess models securely. Implement sandboxing, secure API gateways, and logging that preserves audit trails without exposing proprietary IP; a minimal logging sketch follows this list.
- Cross-border reporting workflows. Develop playbooks for responding to evaluation requests or incident inquiries from multiple jurisdictions. Ensure responses are coordinated across legal, security, and communications teams to avoid conflicting disclosures.
- Capability classification. Adopt internal taxonomies that map model capabilities to risk tiers aligned with Bletchley definitions. Use these classifications to trigger governance approvals, human oversight requirements, and communication plans (see the classification sketch after this list).
- Research collaboration readiness. Establish data-sharing agreements and IP frameworks that enable participation in joint safety research without compromising trade secrets. Document the criteria for providing model access, including security vetting and usage restrictions.
- Resilience exercises. Run tabletop exercises simulating cross-border incidents—such as a safety flaw discovered by an overseas regulator—to rehearse decision-making, escalation, and communication.
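On evaluation infrastructure, here is a minimal Python sketch of the audit-trail logging described above: it proxies an evaluator's request to a model endpoint and records only content digests, so access stays traceable without retaining proprietary prompts or outputs. The `run_model` callable, the log path, and the field names are illustrative assumptions, not any particular product's API.

```python
"""Append-only audit logging for external model evaluations.

A minimal sketch: `run_model` stands in for whatever internal call serves
the model, and the JSONL path is illustrative. Hashing request/response
content keeps the trail verifiable without storing proprietary material.
"""
import hashlib
import json
import time
from pathlib import Path

AUDIT_LOG = Path("eval_audit.jsonl")  # ship to append-only/WORM storage in practice


def _digest(text: str) -> str:
    """Record what was exchanged without retaining the raw content."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


def audited_eval(evaluator_id: str, model_version: str, prompt: str, run_model) -> str:
    """Proxy one evaluation request and append a tamper-evident audit entry."""
    response = run_model(prompt)  # hypothetical callable wrapping the model endpoint
    entry = {
        "ts": time.time(),
        "evaluator": evaluator_id,
        "model": model_version,
        "prompt_sha256": _digest(prompt),
        "response_sha256": _digest(response),
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")
    return response


# Dry run with a stub model in place of a real endpoint.
print(audited_eval("evaluator-001", "assistant-v2", "test prompt", lambda p: "stub"))
```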
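And for capability classification, a hedged sketch of how an internal taxonomy might map capabilities to risk tiers and derive the controls a deployment must clear. The capability names, tier labels, and control lists are placeholders, not Bletchley-defined categories; real taxonomies would come from internal policy and whatever definitions future summits harmonise.

```python
"""Capability-to-risk-tier classification that triggers governance controls.

A sketch only: capability names, tier labels, and required controls are
placeholders for an internal taxonomy, not Bletchley-defined categories.
"""
from dataclasses import dataclass, field

TIER_CONTROLS = {
    "frontier": ["board approval", "external evaluation", "cross-border incident channel"],
    "high": ["risk committee sign-off", "human oversight plan"],
    "standard": ["model card", "routine monitoring"],
}

# Hypothetical rules mapping capabilities to tiers.
CAPABILITY_TIERS = {
    "autonomous code execution": "frontier",
    "biological design assistance": "frontier",
    "persuasive content generation": "high",
    "text summarisation": "standard",
}


@dataclass
class ModelProfile:
    name: str
    capabilities: list[str] = field(default_factory=list)

    def risk_tier(self) -> str:
        """The highest tier across capabilities governs the whole model."""
        order = ["standard", "high", "frontier"]
        tiers = [CAPABILITY_TIERS.get(cap, "standard") for cap in self.capabilities]
        return max(tiers, key=order.index, default="standard")

    def required_controls(self) -> list[str]:
        """Controls that must be cleared before deployment."""
        return TIER_CONTROLS[self.risk_tier()]


profile = ModelProfile("assistant-v2", ["text summarisation", "autonomous code execution"])
print(profile.risk_tier())          # frontier
print(profile.required_controls())  # ['board approval', 'external evaluation', ...]
```

The highest-tier capability governs the whole model, so adding a single frontier-class capability escalates every downstream approval.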
Implementation should prioritise interoperability with emerging regional regulations. Align testing artefacts with EU AI Act conformity assessment expectations, U.S. NIST AI RMF documentation, UK AI assurance frameworks, and Singapore’s Model AI Governance Framework. Maintain multilingual documentation and ensure regional teams understand local regulatory nuances while operating under a unified global policy.
Implications for compliance programmes
The declaration elevates AI safety to a geopolitical coordination problem. Compliance leaders should integrate Bletchley commitments into enterprise control frameworks such as ISO/IEC 42001 (AI management systems) and SOC 2 trust criteria. Map each declaration pillar to specific controls—risk assessments, third-party assurance, safety reporting—and evaluate maturity levels. Establish compliance roadmaps that account for potential future requirements, such as mandatory safety case submissions or licensing of high-capability models.
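One way to operationalise that mapping is a simple crosswalk from declaration pillars to internal controls with maturity scores, queried for gaps against a target level. A sketch under stated assumptions: the control identifiers and the five-level maturity scale are illustrative conventions, not ISO/IEC 42001 clause numbers or SOC 2 criteria references.

```python
"""Crosswalk from Bletchley pillars to internal controls with maturity scores.

An illustrative sketch: control IDs and the 1-5 maturity scale are assumed
conventions, not ISO/IEC 42001 clause numbers or SOC 2 criteria references.
"""
PILLAR_CONTROLS = {
    "shared risk understanding": [("RISK-01", "frontier AI risk assessment", 3)],
    "scientific collaboration": [("TPA-02", "third-party evaluation agreements", 2)],
    "evidence-based policy": [("RPT-03", "safety reporting pipeline", 1)],
    "international coordination": [("GOV-04", "cross-border escalation protocol", 2)],
    "inclusive growth": [("STK-05", "stakeholder consultation records", 1)],
}


def maturity_gaps(target: int = 3) -> list[str]:
    """Flag controls below the target maturity (1 = initial, 5 = optimised)."""
    gaps = []
    for pillar, controls in PILLAR_CONTROLS.items():
        for control_id, name, level in controls:
            if level < target:
                gaps.append(f"{pillar}: {control_id} ({name}) at level {level}")
    return gaps


for gap in maturity_gaps():
    print(gap)  # feeds the compliance roadmap for remediation planning
```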
Organisations should also enhance stakeholder engagement. The declaration emphasises inclusive growth, suggesting regulators may expect evidence of community consultation, workforce impact assessments, and accessibility considerations when evaluating AI deployments. Compliance teams should document outreach activities, capture feedback, and show how concerns influenced design decisions. This documentation will also support DSAR responses that question fairness or discrimination.
DSAR and privacy operations
Expanded transparency commitments raise privacy stakes. Organisations participating in frontier AI evaluations must track what personal data enters testing environments and ensure appropriate legal bases. Privacy officers should update records of processing to cover cross-border safety collaborations, specifying retention periods and data-transfer safeguards (e.g., Standard Contractual Clauses, UK IDTA, APEC CBPR). Implement consent or notice mechanisms for datasets used in external evaluations, and document anonymisation techniques when sharing logs or outputs.
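A minimal sketch of what such a record-of-processing entry might look like, with fields for legal basis, transfer safeguards, and retention. The schema is an assumption modelled on common GDPR Article 30 practice, and the example values are illustrative, not a regulator-issued template.

```python
"""Record-of-processing entry for a cross-border safety evaluation.

A minimal sketch: field names follow common GDPR Article 30 practice, but
the schema and example values are assumptions, not a regulator template.
"""
from dataclasses import dataclass


@dataclass
class ProcessingRecord:
    activity: str               # what the processing is for
    legal_basis: str            # lawful basis relied upon
    data_categories: list[str]  # personal data entering the test environment
    recipients: list[str]       # regulators or accredited evaluation partners
    transfer_safeguard: str     # e.g. EU SCCs (2021/914) or the UK IDTA
    retention: str              # how long evaluation data is kept
    anonymisation: str          # technique applied before sharing outputs


record = ProcessingRecord(
    activity="joint safety evaluation with an overseas institute",
    legal_basis="legitimate interests (frontier AI safety testing)",
    data_categories=["user prompts", "model outputs"],
    recipients=["accredited national evaluation body"],
    transfer_safeguard="EU SCCs (2021/914) plus supplementary measures",
    retention="evaluation logs deleted 12 months after report sign-off",
    anonymisation="prompt redaction and pseudonymous user identifiers",
)
```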
DSAR workflows need to account for international cooperation. If regulators or research partners access model outputs containing personal data, DSAR responses must include these disclosures. Build tooling that can trace when specific individuals’ data appears in evaluation datasets or logs, and establish joint response protocols with partners to ensure consistent messaging. When relying on exemptions (e.g., national security, regulatory privilege), document legal justifications and provide partial disclosures or summaries where possible.
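A sketch of that tracing tooling, assuming each external evaluation keeps a manifest of the pseudonymous subject IDs shared with it; the register structure, evaluation names, and IDs are all illustrative.

```python
"""Trace whether a data subject's records entered any external evaluation.

A sketch assuming each evaluation keeps a manifest of the pseudonymous
subject IDs shared with it; the register, names, and IDs are illustrative.
"""
# Evaluation name -> pseudonymous subject IDs included in the shared dataset.
EVALUATION_REGISTER: dict[str, set[str]] = {
    "2024-redteam-exercise": {"subj-0142", "subj-0907"},
    "2024-joint-safety-eval": {"subj-0907"},
}


def dsar_disclosures(subject_id: str) -> list[str]:
    """Return the evaluations a DSAR response must disclose for this subject."""
    return [
        name
        for name, subjects in EVALUATION_REGISTER.items()
        if subject_id in subjects
    ]


print(dsar_disclosures("subj-0907"))  # ['2024-redteam-exercise', '2024-joint-safety-eval']
print(dsar_disclosures("subj-0001"))  # [] -> no external disclosure to report
```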
Privacy leads should synchronise with security and legal teams on information-sharing agreements. Include clauses that specify DSAR handling responsibilities, data minimisation expectations, and breach-notification procedures. Maintain registers of all external evaluations and associated datasets so that DSAR teams can rapidly identify relevant records.
Finally, integrate privacy considerations into crisis communications. Should an international evaluation surface a safety issue involving personal data, DSAR teams must coordinate with incident response to deliver timely, accurate information to affected individuals while meeting regulatory notification deadlines in multiple jurisdictions. Maintaining clear audit trails and multilingual templates will support compliance and preserve trust.
By aligning governance structures, implementation plans, compliance programmes, and DSAR operations with the Bletchley Declaration, organisations can demonstrate leadership in global AI safety cooperation and anticipate the regulatory architecture that will follow.