G7 Launches Hiroshima Process on Generative AI — May 19, 2023

G7 leaders launched the Hiroshima AI Process to coordinate generative AI governance across transparency, security, and human-centric innovation.

Executive briefing: G7 leaders meeting in Hiroshima on 19 May 2023 launched the Hiroshima AI Process to develop risk-based governance for generative AI, signalling coordination on technical standards, transparency, and accountability across the world’s major advanced economies. The leaders’ communiqué commits members to promote “trustworthy” AI that respects democratic values, human rights, and the rule of law while enabling innovation and cross-border collaboration. Organisations operating in G7 markets should anticipate converging regulatory expectations and voluntary codes of conduct that influence product design, procurement, and ecosystem partnerships.

The Hiroshima AI Process complements existing G7 initiatives on digital infrastructure, Data Free Flow with Trust (DFFT), and cyber resilience. Leaders tasked relevant ministers with developing a comprehensive framework by the end of 2023, covering generative AI safety, intellectual property protections, disinformation safeguards, and strategies to support responsible deployment in public services. The process will engage international organisations, notably the OECD and the Global Partnership on AI (GPAI), to build on existing principles and develop implementable tools.

Capability implications

The G7 agenda sets expectations across four capability areas:

  • Transparency and accountability. Developers should disclose information about training data, model limitations, and safeguards for generative systems. Leaders emphasised mechanisms to identify AI-generated content, including watermarking and provenance metadata; a sketch of such a record follows this list.
  • Security and resilience. The communiqué highlights the need to protect critical infrastructure and supply chains from malicious use of AI, calling for security-by-design, rigorous testing, and cross-border threat-intelligence sharing.
  • Human-centric design. AI systems must support inclusive growth, respect labour rights, and augment workers. Leaders acknowledged the need for skills development, social dialogue, and protection of intellectual property.
  • Global interoperability. The process will align with OECD AI principles and seek interoperability with emerging frameworks in the EU, US, and other jurisdictions, lowering compliance friction for multinational deployments.
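
The leaders' commitments stop at objectives and do not prescribe a technical format for identifying AI-generated content. As a minimal sketch of the provenance idea, the snippet below builds a hypothetical provenance record (model identifier, timestamp, content hash) for a piece of generated output. The schema and field names are illustrative only; production systems would follow an emerging standard such as C2PA rather than an ad-hoc format.

```python
import hashlib
import json
from datetime import datetime, timezone


def provenance_record(content: bytes, model_id: str) -> dict:
    """Build an illustrative provenance record for AI-generated content.

    The schema is hypothetical, invented for this sketch; it is not the
    C2PA manifest format or any G7-endorsed specification.
    """
    return {
        "generator": model_id,  # which model produced the content
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content).hexdigest(),  # binds the record to the exact bytes
        "ai_generated": True,  # explicit machine-readable disclosure
    }


record = provenance_record(b"...generated text...", model_id="example-llm-v2")
print(json.dumps(record, indent=2))
```

Shipping the record alongside the content, or embedding it in file metadata, gives downstream platforms a machine-readable disclosure to check before distribution.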

Implementation roadmap

Enterprises should prepare for converging G7 expectations by orchestrating multi-disciplinary programmes:

  • Risk assessments. Conduct model risk assessments for generative AI covering bias, hallucinations, security misuse, and intellectual property. Document mitigation strategies, human oversight, and monitoring plans.
  • Transparency tooling. Implement watermarking, metadata tagging, and transparency statements that align with emerging best practices. Engage product, legal, and communications teams to design user-facing disclosures.
  • Content governance. Update content moderation policies to address AI-generated disinformation, deepfakes, and synthetic media. Develop escalation protocols for law enforcement requests and platform takedowns.
  • Cross-border compliance alignment. Map AI deployments to current and forthcoming regimes (the EU AI Act, the US NIST AI Risk Management Framework, Canada’s Artificial Intelligence and Data Act (AIDA)) to identify control overlaps and standardise documentation; a mapping sketch follows this list.
  • Data governance alignment. Map data sourcing, consent, and localisation requirements to DFFT principles and sector regulations to ensure training data practices meet cross-border expectations.
  • Stakeholder engagement. Participate in industry consortia, OECD working groups, and GPAI projects to influence guidance and benchmark against peers.
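
To make the control-overlap exercise concrete, the sketch below maps one internal control to citations in several frameworks. The control identifier and framework references are placeholders invented for illustration, not authoritative clause numbers.

```python
from dataclasses import dataclass, field


@dataclass
class Control:
    """An internal AI control mapped to external framework references.

    All identifiers and citations here are illustrative placeholders.
    """
    name: str
    description: str
    mappings: dict[str, str] = field(default_factory=dict)


incident_logging = Control(
    name="GENAI-LOG-01",  # hypothetical internal control ID
    description="Log and triage generative AI misuse incidents",
    mappings={
        "EU AI Act": "post-market monitoring obligations",
        "NIST AI RMF": "MANAGE function",
        "Canada AIDA": "harm mitigation duties",
    },
)

# A single control record can serve as evidence under several regimes,
# which is the overlap the roadmap item asks teams to identify.
for framework, reference in incident_logging.mappings.items():
    print(f"{incident_logging.name} -> {framework}: {reference}")
```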

Global cooperation priorities

Leaders highlighted the need to support emerging and developing economies with capacity-building, infrastructure investment, and access to trustworthy AI tools. The Hiroshima AI Process will coordinate with initiatives on digital connectivity and DFFT to ensure the open, secure data flows that underpin responsible AI ecosystems.

G7 members also committed to aligning export controls, intellectual property protections, and research security measures so advanced AI capabilities are not misused while maintaining scientific collaboration.

Responsible governance

The Hiroshima AI Process emphasises democratic governance, requiring organisations to embed robust oversight:

  • Board stewardship. Boards should integrate AI ethics and geopolitical risk into risk committee agendas, ensuring oversight of cross-border AI strategies and adherence to evolving codes of conduct.
  • Public accountability. Develop transparency reports detailing AI use cases, safeguards, and impact assessments. Provide grievance mechanisms for users and civil society, especially where AI affects rights or access to services.
  • Workforce transition planning. Invest in skills programmes, reskilling, and social dialogue to manage workforce impacts, aligning with G7 commitments to support quality jobs in the digital economy.
  • International coordination. Assign leadership to monitor G7 outcomes, coordinate with government affairs teams, and harmonise responses across subsidiaries.

Sector playbooks

  • Technology platforms. Implement provenance systems for AI-generated content, expand red-teaming programmes, and engage with policymakers on voluntary codes.
  • Media and entertainment. Deploy synthetic media detection tools, update licensing agreements to protect creators, and align with intellectual property safeguards emphasised by G7 leaders.
  • Financial services. Integrate generative AI within risk management frameworks to support customer service and analytics while maintaining compliance with anti-fraud, privacy, and conduct regulations.
  • Public sector and critical infrastructure. Evaluate AI deployment for service delivery, ensuring security vetting, resilience testing, and citizen transparency to meet G7 commitments on public trust.

Measurement and accountability

Develop metrics to demonstrate responsible participation in the Hiroshima AI Process agenda:

  • Transparency coverage. Track the percentage of AI products with published transparency statements, watermarking, or provenance metadata; the sketch after this list shows the calculation.
  • Risk mitigation efficacy. Monitor incidents of misuse, hallucinations, or disinformation detected and resolved, along with remediation time.
  • Stakeholder engagement. Measure participation in multi-stakeholder forums, feedback received, and adjustments made to governance policies.
  • Skills investment. Quantify training hours, certification completion, and workforce transition support related to AI adoption.
  • Interoperability readiness. Assess alignment with key frameworks (OECD, NIST, EU AI Act), identifying gaps and remediation progress.
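
The transparency coverage metric reduces to a simple ratio over a product inventory. A minimal sketch, assuming a hypothetical inventory that records which disclosure mechanisms each product ships:

```python
# Illustrative inventory; entries are invented for this sketch.
inventory = [
    {"product": "chat-assist", "transparency_statement": True, "provenance_metadata": True},
    {"product": "doc-summary", "transparency_statement": True, "provenance_metadata": False},
    {"product": "image-gen", "transparency_statement": False, "provenance_metadata": False},
]

# A product counts as covered if it ships at least one disclosure mechanism.
covered = sum(
    1 for p in inventory
    if p["transparency_statement"] or p["provenance_metadata"]
)
coverage = covered / len(inventory)
print(f"Transparency coverage: {coverage:.0%}")  # 67% for this sample inventory
```

The same inventory can drive the other metrics, for example by adding incident counts or framework-gap fields per product.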

Track progress against forthcoming G7 milestones, including the ministerial meetings in late 2023 expected to recommend concrete measures and voluntary codes. Prepare executive briefings summarising each outcome so governance controls can be adjusted promptly.

These metrics should feed board dashboards and public sustainability reports, underscoring commitment to trustworthy AI principles across G7 markets.

By aligning internal policies with G7 deliverables, organisations can streamline compliance with regional initiatives such as the EU AI Act, Canada’s AIDA, and US voluntary commitments, while demonstrating leadership in global AI stewardship.

Zeph Tech partners with global enterprises to align AI governance programmes with the G7 Hiroshima Process, combining policy horizon scanning, transparency tooling, and accountable deployment playbooks.

Executives should designate policy liaisons to participate in Hiroshima AI Process consultations, ensuring enterprise experiences inform the development of voluntary codes and risk management toolkits.

Aligning product roadmaps now will simplify certification once G7 partners translate the Hiroshima commitments into concrete regulatory or procurement requirements.
