
G7 Leaders Release Hiroshima AI Process Guiding Principles and Code of Conduct — October 30, 2023

G7 leaders published the Hiroshima AI Process guiding principles and a voluntary code of conduct for advanced AI developers. Governance committees should benchmark corporate policies against the 11 principles, programme teams should operationalise the code's risk and transparency controls, and privacy offices should integrate DSAR readiness with cross-border AI disclosures.


Executive briefing: On 30 October 2023 the G7’s Hiroshima AI Process released International Guiding Principles and an accompanying Code of Conduct for organisations that design, develop, deploy, and operate advanced artificial intelligence systems. The documents—endorsed by Canada, France, Germany, Italy, Japan, the United Kingdom, the United States, and the European Union—set 11 high-level principles covering risk management, transparency, security, fairness, accountability, and incident reporting. They also articulate 11 voluntary commitments for frontier model developers, including rigorous adversarial testing, supply-chain security, responsible deployment, and mechanisms for addressing misuse. Although non-binding, the guidance signals how future regulation may evolve and is already influencing domestic policymaking, such as the EU AI Act’s trilogue negotiations and the U.S. Executive Order 14110 implementation plans. Governance teams must benchmark corporate AI frameworks against the Hiroshima principles, delivery teams must adapt engineering and product processes to satisfy the code of conduct, and privacy officers must ensure DSAR and disclosure practices accommodate cross-border transparency expectations.

Understanding the principles and code of conduct

The guiding principles call on organisations to implement risk-based approaches, design AI to be safe throughout its lifecycle, and maintain security—including supply-chain integrity and vulnerability management. They highlight the need for transparency (documenting capabilities and limitations), accountability (assigning responsibility and ensuring human oversight), fairness and respect for human rights, and sustainable development. Importantly, the principles emphasise incident sharing and redress mechanisms, urging organisations to collaborate with governments and stakeholders to mitigate harmful outcomes. The code of conduct translates these themes into concrete developer obligations: identify and manage risks before deployment; test models for misuse, bias, and safety; protect model weights and infrastructure; establish supply-chain due diligence; monitor and report significant incidents; and provide transparency to users and downstream developers.

G7 members positioned the code as a baseline for responsible frontier AI practices while acknowledging the need to support innovation. The documents align with the OECD AI Principles and complement existing frameworks such as the NIST AI Risk Management Framework (AI RMF) and the NIST Secure Software Development Framework (SSDF). The Hiroshima AI Process anticipates iterative updates, with Korea hosting a follow-up ministerial in 2024 and France convening stakeholders in 2025, making ongoing governance engagement crucial.

Governance implications

Boards and executive risk committees must integrate the Hiroshima principles into enterprise AI policies. Start by mapping current policies, AI charters, and oversight structures against each principle; identify gaps in areas like incident reporting, supply-chain due diligence, or sustainability metrics. Governance bodies should assign accountability for meeting each principle, typically spanning the chief data officer, chief information security officer, chief privacy officer, and business unit leaders. Update board reporting templates to include Hiroshima-aligned metrics—number of model risk assessments completed, coverage of adversarial testing, status of third-party assurance, and DSAR volumes involving AI outputs.
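
To make that reporting concrete, the sketch below shows one way a quarterly board metrics snapshot could be structured. The field names and example values are illustrative assumptions, not metrics mandated by the Hiroshima documents.

```python
from dataclasses import dataclass

@dataclass
class HiroshimaBoardMetrics:
    """Quarterly board-report snapshot; fields mirror the metrics named above (illustrative)."""
    model_risk_assessments_completed: int   # assessments closed this quarter
    adversarial_test_coverage_pct: float    # share of production models red-teamed
    third_party_assurance_status: str       # e.g. "scoping", "in progress", "attested"
    ai_related_dsar_volume: int             # DSARs involving AI outputs

    def summary_row(self) -> str:
        return (
            f"risk assessments: {self.model_risk_assessments_completed} | "
            f"adversarial coverage: {self.adversarial_test_coverage_pct:.0%} | "
            f"assurance: {self.third_party_assurance_status} | "
            f"AI DSARs: {self.ai_related_dsar_volume}"
        )

# Example quarterly snapshot for the board pack (values are illustrative)
print(HiroshimaBoardMetrics(12, 0.85, "attested", 37).summary_row())
```

Keeping the field definitions stable quarter over quarter makes the trends comparable in board packs and governance dashboards.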

The code of conduct references multi-stakeholder engagement and redress, signalling that governance should formalise stakeholder advisory councils or ethics committees. These groups can provide independent challenge to AI deployment plans, review fairness assessments, and evaluate human-rights impacts. Directors should also ensure that enterprise risk appetite statements explicitly address frontier AI development, specifying thresholds for allowable autonomous decision-making, acceptable model opacity, and requirements for human-in-the-loop review. Where organisations rely on third-party models or platforms, governance must extend oversight to vendor management—demanding attestations that suppliers adhere to the Hiroshima code or equivalent standards.
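
As an illustration, a risk appetite statement can be encoded as configuration so that delivery teams and vendor managers apply the same thresholds. The tier names, controls, and attestation wording below are hypothetical; organisations should derive their own from the board-approved statement.

```python
# Hypothetical risk-appetite configuration; tiers and values are illustrative
# assumptions, not thresholds prescribed by the G7 documents.
RISK_APPETITE = {
    "high": {                       # e.g. decisions with legal or safety effects on people
        "autonomous_decisions_allowed": False,
        "human_in_the_loop": "mandatory pre-decision review",
        "model_opacity": "explainable model or documented post-hoc explanations",
        "vendor_attestation": "Hiroshima code or equivalent, reviewed annually",
    },
    "medium": {
        "autonomous_decisions_allowed": True,
        "human_in_the_loop": "sampled post-decision review",
        "model_opacity": "model card with documented limitations",
        "vendor_attestation": "contractual commitment to the Hiroshima code",
    },
    "low": {                        # e.g. internal drafting assistants
        "autonomous_decisions_allowed": True,
        "human_in_the_loop": "user discretion",
        "model_opacity": "usage guidance only",
        "vendor_attestation": "standard supplier terms",
    },
}
```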

Implementation roadmap

Programme managers and engineering leaders need to embed the code of conduct into development workflows. Key implementation steps include:

  • Establish cross-functional risk triage. Create an AI risk council that includes product owners, security, privacy, legal, and compliance personnel. The council should evaluate new AI initiatives, assign risk tiers, and determine testing depth, aligning decisions with the Hiroshima principles and existing frameworks like the NIST AI RMF or ISO/IEC 23894; a minimal tiering sketch follows this list.
  • Operationalise adversarial and safety testing. Build testing pipelines that cover robustness to prompt injection, model inversion, data poisoning, and jailbreak attempts. Document test results, remediation plans, and sign-offs before deployment. Leverage third-party red teams or government-provided evaluation suites where appropriate.
  • Implement supply-chain security controls. Maintain inventories of datasets, model components, libraries, and hardware used in AI systems. Apply software bill of materials (SBOM) practices, vulnerability scanning, and provenance checks for training data. Establish contractual clauses requiring suppliers and cloud providers to adhere to the code’s expectations.
  • Enhance transparency artefacts. Produce model cards, system cards, and datasheets that describe capabilities, limitations, training data provenance, evaluation metrics, and intended users. Share redress mechanisms and contact points with downstream developers and customers.
  • Strengthen monitoring and incident response. Implement telemetry that tracks model performance, drift, and anomalous usage. Define thresholds that trigger incident investigations, involve legal counsel, and notify regulators or partners as required by the code’s incident-sharing expectations.
  • Embed human oversight and redress. Design user interfaces and workflows that enable humans to review AI recommendations, override outputs, and log rationales. Create escalation channels for users or affected individuals to contest AI-driven decisions.
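
The sketch below illustrates the risk-triage step referenced in the first bullet: intake questions map an initiative to a tier, and the tier drives the depth of adversarial and safety testing. The intake fields, tier names, and test menus are assumptions for illustration, not definitions taken from the Hiroshima code, the NIST AI RMF, or ISO/IEC 23894.

```python
from dataclasses import dataclass

@dataclass
class AIInitiative:
    """Intake record reviewed by the AI risk council (fields are illustrative)."""
    name: str
    affects_individuals: bool      # outputs influence decisions about people
    frontier_model: bool           # uses or fine-tunes a frontier/general-purpose model
    external_facing: bool          # exposed to customers or the public

def assign_risk_tier(initiative: AIInitiative) -> str:
    """Map intake answers to a tier that drives testing depth and sign-off."""
    if initiative.affects_individuals and initiative.frontier_model:
        return "high"       # full adversarial testing plus ethics and privacy sign-off
    if initiative.external_facing or initiative.frontier_model:
        return "medium"     # targeted red-teaming, model card required
    return "low"            # standard QA and documentation

TESTING_DEPTH = {
    "high": ["prompt injection", "model inversion", "data poisoning", "jailbreaks", "bias audit"],
    "medium": ["prompt injection", "jailbreaks", "bias audit"],
    "low": ["functional tests", "basic misuse screening"],
}

chatbot = AIInitiative("customer support assistant", affects_individuals=True,
                       frontier_model=True, external_facing=True)
tier = assign_risk_tier(chatbot)
print(tier, TESTING_DEPTH[tier])
```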

Implementation should also consider organisational change. Provide training modules that explain the Hiroshima principles, illustrate acceptable use cases, and highlight escalation procedures. Update project management templates to include checklists referencing each principle, and require sign-off from privacy and ethics leads before moving AI features into production. For multinational organisations, harmonise the Hiroshima code with jurisdiction-specific rules—such as EU AI Act obligations for high-risk systems or sectoral regulations like the FDA’s guidance on clinical decision support.
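
One lightweight way to enforce the checklist-and-sign-off requirement is a production gate evaluated in the release pipeline. The checklist themes and role names below are hypothetical and should be adapted to the organisation's own mapping of the principles.

```python
# Hypothetical pre-production gate; items and roles are illustrative, not taken
# from the Hiroshima documents.
RELEASE_CHECKLIST = {
    "risk management": "risk assessment approved by the AI risk council",
    "transparency": "model card and user-facing documentation published",
    "security": "SBOM generated and vulnerability scan passed",
    "accountability": "named owner and escalation path recorded",
    "incident reporting": "monitoring thresholds and on-call rota configured",
}

REQUIRED_SIGN_OFFS = {"privacy lead", "ethics lead"}

def ready_for_production(completed_items: set[str], sign_offs: set[str]) -> bool:
    """Gate passes only when every checklist theme is evidenced and both leads sign off."""
    return completed_items >= set(RELEASE_CHECKLIST) and sign_offs >= REQUIRED_SIGN_OFFS

print(ready_for_production(set(RELEASE_CHECKLIST), {"privacy lead", "ethics lead"}))  # True
```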

DSAR and privacy operations

The Hiroshima AI Process emphasises transparency and accountability, which naturally intersects with DSAR obligations under GDPR, CCPA, and emerging AI laws. Privacy teams must ensure that AI systems can surface personal data used in training, inference, and monitoring. Conduct data-mapping exercises that link AI pipelines to underlying datasets, documenting legal bases for processing, retention periods, and data minimisation measures. When AI models rely on synthetic or anonymised data, maintain evidence supporting anonymisation claims to respond confidently to DSARs.
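
A minimal data-map entry might look like the following sketch, which links one AI pipeline stage to its underlying datasets along with legal basis and retention. The schema is an assumption for illustration; most organisations will extend an existing records-of-processing format instead.

```python
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    name: str
    contains_personal_data: bool
    legal_basis: str            # e.g. "legitimate interests", "consent"
    retention_period: str       # e.g. "24 months after ticket closure"
    anonymisation_evidence: str # link to the assessment, or "n/a" if no anonymisation claim

@dataclass
class AIPipelineMapEntry:
    pipeline: str               # e.g. "support-chat fine-tuning"
    stage: str                  # "training", "inference", or "monitoring"
    datasets: list[DatasetRecord]

entry = AIPipelineMapEntry(
    pipeline="support-chat fine-tuning",
    stage="training",
    datasets=[DatasetRecord("resolved support tickets", True,
                            "legitimate interests", "24 months after ticket closure", "n/a")],
)
```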

DSAR workflows should include AI-specific templates that explain how models operate, the sources of training data, and safeguards implemented per the Hiroshima code. Build tooling that can extract model inputs, outputs, and relevant metadata for a given individual without exposing other users’ data. If models are hosted by third parties, negotiate contractual clauses guaranteeing timely assistance with access, rectification, or deletion requests.
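
A simple extraction helper along these lines can back that tooling, assuming interaction logs carry a resolvable subject identifier. The log schema and field names are assumptions; production tooling would also need identity verification and redaction review before anything is released to the requester.

```python
from typing import Iterable

def extract_ai_records(logs: Iterable[dict], subject_id: str) -> list[dict]:
    """Return only the requester's interactions, keeping the fields a DSAR
    response typically needs and excluding records about other individuals."""
    fields = ("timestamp", "prompt", "response", "model_version")
    return [
        {k: rec.get(k) for k in fields}
        for rec in logs
        if rec.get("subject_id") == subject_id
    ]

# Assumed log layout for illustration only
logs = [
    {"subject_id": "u-123", "timestamp": "2023-11-02T10:15:00Z",
     "prompt": "update my address", "response": "done", "model_version": "assist-v2"},
    {"subject_id": "u-456", "timestamp": "2023-11-02T10:16:00Z",
     "prompt": "cancel order", "response": "done", "model_version": "assist-v2"},
]
print(extract_ai_records(logs, "u-123"))   # only u-123's interaction is returned
```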

Privacy officers must also prepare for cross-border transparency expectations. The G7 principles encourage sharing meaningful information with international partners when serious incidents occur; ensure DSAR and breach-response teams coordinate so that disclosures to regulators align with individual rights communications. Update privacy notices and AI explainability documentation to reference the organisation’s adoption of the Hiroshima principles, clarifying how individuals can seek redress or additional information. Finally, integrate DSAR metrics into governance dashboards so boards can monitor whether AI deployments are increasing request volumes or lengthening response times, and allocate resources accordingly.
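
For that dashboard integration, the metrics themselves are straightforward to compute once DSAR records carry an AI-related flag, as in this hypothetical example (the record format is an assumption).

```python
from datetime import date

# Illustrative DSAR records; real data would come from the case-management system
dsars = [
    {"ai_related": True,  "received": date(2023, 11, 1), "closed": date(2023, 11, 20)},
    {"ai_related": True,  "received": date(2023, 11, 5), "closed": date(2023, 12, 1)},
    {"ai_related": False, "received": date(2023, 11, 7), "closed": date(2023, 11, 15)},
]

ai_requests = [d for d in dsars if d["ai_related"]]
volume = len(ai_requests)
avg_days = sum((d["closed"] - d["received"]).days for d in ai_requests) / volume

print(f"AI-related DSARs: {volume}, average turnaround: {avg_days:.1f} days")
```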

By aligning governance frameworks, implementation practices, and DSAR operations with the Hiroshima AI Process, organisations can demonstrate responsible stewardship of advanced AI, maintain goodwill with regulators, and build trust with users.
