
G7 Launches Hiroshima Process on Generative AI — May 19, 2023

The G7 launched the Hiroshima Process on generative AI in May 2023, establishing international coordination on AI governance among the major advanced economies. This framework shaped subsequent AI safety discussions globally.

Reviewed for accuracy by Kodi C.


G7 leaders meeting in Hiroshima on 19 May 2023 launched the Hiroshima AI Process to develop risk-based governance for generative AI, signaling coordination on technical standards, transparency, and accountability across the world’s major advanced economies. The leaders’ communiqué commits members to promote “trustworthy” AI that respects democratic values, human rights, and the rule of law while enabling innovation and cross-border collaboration. Teams operating in G7 markets should anticipate converging regulatory expectations and voluntary codes of conduct that influence product design, procurement, and ecosystem partnerships.

The Hiroshima AI Process complements existing G7 initiatives on digital infrastructure, data free flow with trust, and cyber resilience. Leaders tasked relevant ministers with developing a full framework by year-end, covering generative AI safety, intellectual property protections, disinformation safeguards, and strategies to support responsible deployment in public services. The process will engage international organisations such as the OECD and the Global Partnership on AI (GPAI) to build on existing principles and develop implementable tools.

Capability implications

The G7 agenda sets expectations across four capability areas:

  • Transparency and accountability. Developers should disclose information about training data, model limitations, and safeguards for generative systems. Leaders emphasized mechanisms to identify AI-generated content, including watermarking and provenance metadata.
  • Security and resilience. The communiqué highlights protecting critical infrastructure and supply chains from malicious AI use, requiring security-by-design, rigorous testing, and cross-border threat intelligence sharing.
  • Human-centric design. AI systems must support inclusive growth, respect labor rights, and empower workers. Leaders acknowledged the need for skills development, social dialog, and protection of intellectual property.
  • Global interoperability. The process will align with OECD AI principles and seek interoperability with emerging frameworks in the EU, US, and other jurisdictions, lowering compliance friction for multinational deployments.
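The watermarking and provenance-metadata mechanisms mentioned above can be sketched minimally as a disclosure record attached to each piece of generated content. The `build_provenance_record` helper and its field names below are illustrative assumptions, not part of any formal standard such as C2PA:

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_record(content: bytes, model_id: str, generator: str) -> dict:
    """Build a minimal, tamper-evident provenance record for generated content.

    Field names are illustrative; a production system would follow a
    published provenance standard rather than this ad-hoc schema.
    """
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),  # digest binds record to content
        "model_id": model_id,                                   # which model produced the output
        "generator": generator,                                 # organisation operating the model
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,                                   # explicit disclosure flag
    }

record = build_provenance_record(b"example output text", "gen-model-v1", "example-org")
print(json.dumps(record, indent=2))
```

Publishing such a record alongside content (or embedding it as metadata) gives downstream platforms a verifiable signal that the content is AI-generated, which is the practical aim of the G7 transparency language.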

How to implement this

Teams should prepare for converging G7 expectations by orchestrating multi-disciplinary programs:

  • Risk assessments. Conduct model risk assessments for generative AI covering bias, hallucinations, security misuse, and intellectual property. Document mitigation strategies, human oversight, and monitoring plans.
  • Transparency tooling. Implement watermarking, metadata tagging, and transparency statements that align with emerging good practices. Engage product, legal, and communications teams to design user-facing disclosures.
  • Content governance. Update content moderation policies to address AI-generated disinformation, deepfakes, and synthetic media. Develop escalation protocols for law enforcement requests and platform takedowns.
  • Cross-border compliance alignment. Map AI deployments to current and forthcoming regulations (EU AI Act, US NIST AI RMF, Canada’s AIDA) to identify control overlaps and standardize documentation.
  • Data governance alignment. Map data sourcing, consent, and localization requirements to DFFT principles and sector regulations to ensure training data practices meet cross-border expectations.
  • Stakeholder engagement. Participate in industry consortia, OECD working groups, and GPAI projects to influence guidance and benchmark against peers.
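The cross-border compliance alignment step above amounts to building a control-to-framework map and finding overlaps where one control (and one set of documentation) can satisfy several regimes. A minimal sketch, with placeholder control names and framework labels rather than official clause citations:

```python
# Illustrative control map; entries are placeholders, not official framework citations.
CONTROL_MAP: dict[str, set[str]] = {
    "model risk assessment":  {"EU AI Act", "NIST AI RMF"},
    "transparency statement": {"EU AI Act", "NIST AI RMF", "Canada AIDA"},
    "incident reporting":     {"EU AI Act", "Canada AIDA"},
    "human oversight":        {"EU AI Act", "NIST AI RMF"},
}

def control_overlaps(control_map: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return controls that satisfy two or more frameworks.

    These are candidates for shared documentation, reducing duplicated
    compliance work across jurisdictions.
    """
    return {
        control: frameworks
        for control, frameworks in control_map.items()
        if len(frameworks) >= 2
    }

for control, frameworks in sorted(control_overlaps(CONTROL_MAP).items()):
    print(f"{control}: {sorted(frameworks)}")
```

Even a simple map like this makes gaps visible: any framework that appears in few rows signals where new controls or documentation are still needed.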

Global cooperation priorities

Leaders highlighted the need to support emerging and developing economies with capacity-building, infrastructure investment, and access to trustworthy AI tools. The Hiroshima AI Process will coordinate with initiatives on digital connectivity and DFFT (Data Free Flow with Trust) to ensure open, secure data flows underpinning responsible AI ecosystems.

G7 members also committed to aligning export controls, intellectual property protections, and research security measures so advanced AI capabilities are not misused while maintaining scientific collaboration.

Responsible governance

The Hiroshima AI Process emphasizes democratic governance, requiring teams to embed strong oversight:

  • Board stewardship. Boards should integrate AI ethics and geopolitical risk into risk committee agendas, ensuring oversight of cross-border AI strategies and adherence to evolving codes of conduct.
  • Public accountability. Develop transparency reports detailing AI use cases, safeguards, and impact assessments. Provide grievance mechanisms for users and civil society, especially where AI affects rights or access to services.
  • Workforce transition planning. Invest in skills programs, reskilling, and social dialog to manage workforce impacts, aligning with G7 commitments to support quality jobs in the digital economy.
  • International coordination. Assign leadership to monitor G7 outcomes, coordinate with government affairs teams, and harmonize responses across subsidiaries.

Playbooks by industry

  • Technology platforms. Implement provenance systems for AI-generated content, expand red-teaming programs, and engage with policymakers on voluntary codes.
  • Media and entertainment. Deploy synthetic media detection tools, update licensing agreements to protect creators, and align with intellectual property safeguards emphasized by G7 leaders.
  • Financial services. Integrate generative AI within risk management frameworks to support customer service and analytics while maintaining compliance with anti-fraud, privacy, and conduct regulations.
  • Public sector and critical infrastructure. Evaluate AI deployment for service delivery, ensuring security vetting, resilience testing, and citizen transparency to meet G7 commitments on public trust.

Measurement and accountability

Develop metrics to show responsible participation in the Hiroshima AI Process agenda:

  • Transparency coverage. Track percentage of AI products with published transparency statements, watermarking, or provenance metadata.
  • Risk mitigation effectiveness. Monitor incidents of misuse, hallucinations, or disinformation detected and resolved, along with remediation time.
  • Stakeholder engagement. Measure participation in multi-stakeholder forums, feedback received, and adjustments made to governance policies.
  • Skills investment. Quantify training hours, certification completion, and workforce transition support related to AI adoption.
  • Interoperability readiness. Assess alignment with key frameworks (OECD, NIST, EU AI Act), identifying gaps and remediation progress.
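The transparency-coverage metric above can be computed from a simple product inventory. The product records and field names in this sketch are hypothetical; the point is counting products that ship at least one disclosure mechanism:

```python
def transparency_coverage(products: list[dict]) -> float:
    """Percentage of AI products with at least one disclosure mechanism.

    A product counts as covered if it ships a transparency statement,
    watermarking, or provenance metadata (field names are illustrative).
    """
    if not products:
        return 0.0
    covered = sum(
        1 for p in products
        if p.get("transparency_statement")
        or p.get("watermarking")
        or p.get("provenance_metadata")
    )
    return 100.0 * covered / len(products)

# Hypothetical inventory for illustration.
products = [
    {"name": "chat-assist",    "transparency_statement": True,  "watermarking": False, "provenance_metadata": True},
    {"name": "image-gen",      "transparency_statement": False, "watermarking": True,  "provenance_metadata": False},
    {"name": "doc-summarizer", "transparency_statement": False, "watermarking": False, "provenance_metadata": False},
]
print(f"Transparency coverage: {transparency_coverage(products):.1f}%")
```

Tracking this figure per release cycle gives the board dashboard a concrete, trendable number rather than a qualitative claim of "transparency".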

Track progress against forthcoming G7 milestones, including ministerial meetings in late 2023 that will recommend concrete measures and voluntary codes. Prepare executive briefings summarizing outcomes to adjust governance controls in real time.

These metrics should feed board dashboards and public sustainability reports, underscoring commitment to trustworthy AI principles across G7 markets.

By aligning internal policies with G7 deliverables, teams can simplify compliance with regional initiatives such as the EU AI Act, Canada’s AI and Data Act, and US voluntary commitments while demonstrating leadership in global AI stewardship.

We partner with global enterprises to align AI governance programs with the G7 Hiroshima Process, combining policy horizon scanning, transparency tooling, and accountable deployment playbooks.

Executives should designate policy liaisons to participate in Hiroshima AI Process consultations, ensuring enterprise experiences inform the development of voluntary codes and risk management toolkits.

Aligning product roadmaps now will simplify certification once G7 partners translate the Hiroshima commitments into concrete regulatory or procurement requirements.


Published
Coverage pillar
AI
Source credibility
92/100 — high confidence
Topics
AI governance · International policy · Generative AI
Sources cited
5 sources (mofa.go.jp, whitehouse.gov, reuters.com, consilium.europa.eu)
Reading time
5 min

References

  1. G7 Leaders’ Communiqué Hiroshima 2023 — Government of Japan
  2. G7 Leaders’ Statement on Technology — The White House
  3. G7 leaders agree to coordinate rules on generative AI — Reuters
  4. G7 Leaders’ Statement on Economic Resilience and Economic Security — Council of the European Union
  5. G7 Labour and Employment Ministers’ Declaration — G7 Labour and Employment Ministers
