
AI · Credibility 92/100 · 1 min read

Global Partnership on AI Launch — June 15, 2020

Canada and France formally launched the Global Partnership on Artificial Intelligence (GPAI), convening 15 founding members to advance responsible AI development and adoption.

Executive briefing: On June 15, 2020, the Global Partnership on Artificial Intelligence (GPAI) was formally launched by Australia, Canada, the European Union, France, Germany, India, Italy, Japan, Mexico, New Zealand, the Republic of Korea, Singapore, Slovenia, the United Kingdom, and the United States. The initiative operates as an OECD-hosted multistakeholder forum dedicated to responsible AI development and deployment. GPAI builds on the OECD AI Principles, bringing together governments, industry, academia, and civil society to support practical projects, policy guidance, and capacity building.

Execution priorities for international AI policy leads

Compliance checkpoints with GPAI commitments

Catalogue your organisation's contributions to GPAI working groups—Responsible AI, Data Governance, Future of Work, Innovation & Commercialization—to align domestic strategies with the partnership's human-rights commitments. (GPAI launch announcement)

Ensure inter-ministerial governance structures can brief GPAI's Council and Steering Committee on progress, reflecting the co-chairs' expectation for transparent reporting. (GPAI launch announcement)

Operational moves for project participation

Nominate experts and allocate travel or virtual collaboration budgets so they can participate in GPAI's multi-stakeholder expert groups and contribute datasets, research, and policy drafts. (Global Partnership on AI)

Integrate GPAI project outputs—such as pandemic response toolkits and responsible AI maturity assessments—into national programmes to avoid duplication and accelerate adoption. (Global Partnership on AI)

Enablement tasks for domestic stakeholders

Engage industry, academia, and civil society networks to gather feedback on GPAI recommendations so member states present unified positions during plenary meetings. (GPAI launch announcement)

Publicise knowledge-sharing opportunities offered through GPAI's Montréal and Paris centres of expertise to build capacity among regulators and innovators. (Global Partnership on AI)

Governance structure

GPAI features a Council of participating governments, a multistakeholder Steering Committee, and working groups focused on responsible AI, data governance, future of work, innovation and commercialization, and pandemic response (which later evolved into AI for the SDGs). The Secretariat is hosted by the OECD in Paris, and two Centres of Expertise—one in Montréal (CEIMIA) and one in Paris (hosted by Inria)—provide administrative and research support.

Working groups coordinate expert projects, produce reports, and recommend policy actions. Members include representatives from governments, research institutions, companies, and civil society organisations. GPAI emphasises transparency, diversity, and inclusive participation.

Initial priorities

Early GPAI projects included developing responsible AI frameworks, identifying best practices for AI in pandemic response, exploring AI’s impact on labour markets, and promoting data governance models that respect privacy and innovation. The responsible AI working group focuses on topics such as AI auditability, human rights, and trustworthy AI metrics. The data governance group examines cross-border data flows, interoperability, and data trusts.

The future of work group analyses AI-driven skills shifts, lifelong learning strategies, and social protection policies. The innovation and commercialization group studies pathways to scale AI solutions, supporting SMEs and start-ups.

Implications for organisations

Participation in GPAI initiatives allows organisations to contribute to global AI policy discussions, share best practices, and access research outputs. Companies developing AI should align with GPAI principles, demonstrating transparency, accountability, fairness, and human-centric design. Engagement can enhance credibility with regulators, investors, and customers.

Research institutions and civil society organisations can collaborate on GPAI projects, influence policy recommendations, and access funding opportunities for responsible AI research. Government agencies can leverage GPAI resources to inform national AI strategies.

Responsible AI frameworks

GPAI builds on existing ethical guidelines, advocating for risk-based approaches, impact assessments, and governance mechanisms. Organisations should conduct AI impact assessments covering safety, bias, privacy, and societal effects. Transparency measures include documentation, model cards, and explainability. Accountability frameworks must define roles, escalation processes, and remediation mechanisms.

AI assurance requires monitoring models for drift, bias, and unintended outcomes. GPAI encourages the development of tools and standards for auditing AI systems, aligning with work by ISO/IEC JTC 1/SC 42, IEEE, and NIST.
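The drift monitoring mentioned above can be sketched with a simple statistical check. This is an illustrative approach, not a GPAI-prescribed tool: the Population Stability Index (PSI) compares a feature's production distribution against its training-time baseline, and the 0.2 alert threshold used here is a common rule of thumb, not a standardised value.

```python
# Sketch: detect input-feature drift with the Population Stability Index (PSI).
# Assumptions: a numeric feature, a baseline sample from training time, and a
# rule-of-thumb alert threshold of 0.2 (none of this is GPAI-specified).
import numpy as np

def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference (training) and current (production) sample."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Floor the proportions so empty bins do not produce log(0).
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)     # distribution seen during training
production = rng.normal(0.5, 1.0, 5000)   # production data with a mean shift

score = psi(baseline, production)
if score > 0.2:
    print(f"Drift alert: PSI={score:.2f}, schedule a model review")
```

A check like this, run on a schedule against each model input, is one concrete way to operationalise the auditing and assurance tooling GPAI encourages.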

Data governance and sharing

The data governance working group examines mechanisms such as data trusts, federated learning, and privacy-preserving technologies. Organisations should consider how to share data responsibly, ensuring compliance with GDPR, CCPA, and other regulations. GPAI supports interoperable frameworks that enable cross-border AI collaboration while protecting fundamental rights.

Data governance strategies should include consent management, anonymisation, encryption, and access controls. Collaboration with GPAI can help organisations align practices with global norms and participate in pilot projects.
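As one concrete illustration of the anonymisation and access-control measures listed above, direct identifiers can be replaced with keyed-hash pseudonyms before data leaves the controller. This is a minimal sketch under assumed field names; HMAC-SHA256 is a standard construction, but key management and the broader legal analysis under GDPR or CCPA are out of scope here.

```python
# Sketch: pseudonymise a direct identifier with HMAC-SHA256 before sharing.
# The key stays with the data controller; recipients see stable pseudonyms
# that still support record linkage. Field names here are hypothetical.
import hashlib
import hmac

SECRET_KEY = b"store-in-a-vault-and-rotate"  # placeholder, not a real key

def pseudonymise(identifier: str) -> str:
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated for readability

record = {"patient_id": "ID-1234567", "age_band": "40-49", "outcome": "recovered"}
shared = {**record, "patient_id": pseudonymise(record["patient_id"])}

print(shared["patient_id"])  # deterministic pseudonym, raw identifier withheld
```

Because the pseudonym is deterministic under one key, separately shared datasets can still be joined, while anyone without the key cannot recover the original identifier.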

Future of work and skills

GPAI’s future of work projects focus on workforce transitions, upskilling, and inclusive growth. Organisations should assess AI’s impact on job roles, invest in training, and engage with labour representatives. Policies should address fairness, worker voice, and social safety nets.

Companies can use GPAI resources to benchmark workforce strategies, design reskilling programmes, and measure outcomes. Collaboration with educational institutions and governments supports talent pipelines.

Innovation and commercialization

The innovation working group identifies barriers to scaling AI solutions, particularly for SMEs. It explores public procurement, access to capital, and international partnerships. Organisations can leverage GPAI insights to refine go-to-market strategies, evaluate ethical considerations in product design, and engage with investors on responsible AI.

Start-ups may benefit from networking opportunities, mentorship, and visibility within GPAI initiatives. Large enterprises can share lessons learned from deploying AI at scale, contributing to shared knowledge.

Action plan

  1. Immediate: Review GPAI’s mission and working group outputs. Identify opportunities to participate in projects, consultations, or events.
  2. 30–60 days: Align internal AI governance frameworks with GPAI principles. Document responsible AI policies, impact assessment procedures, and risk management practices.
  3. 60–90 days: Engage with GPAI Centres of Expertise, contribute case studies, or propose collaborative projects. Establish partnerships with academia and civil society to support GPAI objectives.
  4. Continuous: Monitor GPAI publications, integrate recommendations into AI roadmaps, and report progress to stakeholders.

Engaging with GPAI helps organisations demonstrate responsible AI leadership, influence global standards, and foster trustworthy innovation.

Pandemic response and societal resilience

The pandemic response working group, created at launch, examines how AI can support public health, supply chains, and crisis communication. Projects include evaluating contact tracing technologies, modelling disease spread while safeguarding privacy, and sharing best practices for using AI in vaccine research. Organisations working in healthcare or logistics can contribute insights on ethical deployment, data governance, and transparency.

Lessons from the pandemic response projects feed into broader resilience planning, informing policies on emergency data sharing, algorithmic accountability, and public trust.

Measuring impact and accountability

GPAI tracks project outcomes through annual reports, metrics, and peer review. Organisations engaged in GPAI should establish internal metrics—such as adoption of responsible AI tools, reduction in bias incidents, or workforce training completion—to demonstrate progress. Reporting to GPAI stakeholders builds credibility and encourages continuous improvement.

The partnership encourages open publication of research and tools, enabling global uptake. Companies can reference GPAI outputs in ESG and sustainability reporting to highlight responsible AI initiatives.

How to get involved

Participation pathways include applying to join working groups, contributing to consultations, submitting project proposals, and attending GPAI summits. Organisations should monitor announcements from the GPAI Secretariat and Centres of Expertise. Establishing internal points of contact ensures timely responses to collaboration opportunities.

Engagement requires commitment to GPAI’s principles, transparency about funding or conflicts of interest, and willingness to share expertise. Multistakeholder collaboration can involve co-developing toolkits, organising workshops, or piloting responsible AI solutions.

Follow-up: GPAI expanded to 29 members by 2024, adopted a generative AI work plan at the New Delhi summit in December 2023, and now partners with the OECD AI Incidents Monitor on risk tracking.

Sources

  • GPAI
  • International Cooperation
  • Responsible AI