Global Partnership on AI Launch — June 15, 2020
An in-depth look at the June 2020 launch of the Global Partnership on AI, its governance, working streams, and policy impact, including how fourteen founding members and the European Union linked the OECD AI Principles to applied projects and open resources.
The Global Partnership on Artificial Intelligence (GPAI) was officially launched on 15 June 2020 by fourteen founding members—Australia, Canada, France, Germany, India, Italy, Japan, Mexico, New Zealand, the Republic of Korea, Singapore, Slovenia, the United Kingdom, and the United States—with the European Union participating as a member organization. Announced during the COVID-19 pandemic and building on the OECD AI Principles, GPAI set out to close the gap between AI theory and practice by providing a forum where governments, civil society, academia, and industry could develop shared projects grounded in human rights, inclusion, and economic resilience. The partnership operates on the premise that trustworthy AI requires public accountability, open scientific collaboration, and policy coherence across jurisdictions so that innovations scale without eroding privacy, fairness, or safety.
Founding members emphasized that GPAI would not be another high-level declaration but a practical mechanism for coordinated research and deployment. To that end, the partners established two Centers of Expertise—one in Montreal at the International Centre of Expertise in Montréal for the Advancement of Artificial Intelligence (CEIMIA), and one in Paris hosted by INRIA—to provide operational support, convene expert working groups, and steward open datasets and tools. The OECD, which drafted the first intergovernmental AI principles in 2019, was asked to host the Secretariat and provide analytical support, ensuring that GPAI’s applied work remained linked to evidence-based policy guidance. From the outset, GPAI also committed to multistakeholder participation: researchers and civil society experts co-lead projects alongside government representatives, and meeting minutes are published to maintain transparency.
The launch attracted global attention because it created the first multilateral, government-backed initiative explicitly focused on the responsible development and deployment of AI. Rather than replacing national strategies, GPAI was designed to align them. Members pledged to share best practices on topics such as data governance, intellectual property, compute access, and the measurement of AI’s social impact. The partnership’s mandate also included support for developing economies, recognizing that the benefits of machine learning, language models, and robotics must be distributed globally to avoid widening digital divides. By mid-2020, GPAI working groups were tasked with producing practical guidance on pandemic response, synthetic data, and privacy-preserving machine learning—all issues that demanded rapid, coordinated action.
Governance structure
GPAI’s governance model combines ministerial oversight with expert-led project management. A Council of Representatives, comprising delegates from each member jurisdiction, sets the strategic direction and approves the annual work program. The Council elects a Steering Committee to handle budgeting, membership applications, and coordination with the OECD. Rotating co-chairs ensure that no single country dominates the agenda, and the Steering Committee’s decisions are published to maintain accountability. The two Centers of Expertise provide day-to-day operational support, including project facilitation, communications, and the maintenance of shared repositories for code, datasets, and reports.
Multistakeholder Expert Groups are the core engines of GPAI. Each group brings together technologists, ethicists, economists, and civil society leaders who design and deliver projects within defined timelines. Expert Group members are selected through open calls and vetted for conflicts of interest. To avoid capture by any one sector, the Terms of Reference specify balanced representation, and members must disclose funding sources. The governance framework also outlines how outputs transition from exploratory research to deployable tools; for example, projects are expected to include reproducibility plans, documentation templates, and licensing guidance so that agencies and companies can implement recommendations without ambiguity.
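To make that expectation concrete, the sketch below shows what a machine-readable project handoff record could look like in Python. The schema, field names, and values are illustrative assumptions for this article, not a GPAI-specified format.

    # Illustrative project handoff record; the schema is an assumption,
    # not a GPAI-mandated format.
    from dataclasses import dataclass, field

    @dataclass
    class ProjectRelease:
        name: str
        version: str
        license: str                        # e.g. an OSI-approved identifier
        repository_url: str
        reproducibility_plan: str           # path to build/run instructions
        documentation: list = field(default_factory=list)
        funding_disclosures: list = field(default_factory=list)

    release = ProjectRelease(
        name="bias-audit-toolkit",                       # hypothetical project
        version="0.1.0",
        license="Apache-2.0",
        repository_url="https://example.org/gpai-demo",  # placeholder URL
        reproducibility_plan="docs/REPRODUCE.md",
        documentation=["docs/model_card.md", "docs/datasheet.md"],
        funding_disclosures=["member-state research grant (illustrative)"],
    )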
GPAI’s accountability mechanisms extend to transparency about funding and data handling. Project budgets are reported annually, and any datasets generated under GPAI auspices must comply with privacy standards that mirror the OECD AI Principles and, where applicable, GDPR. By aligning governance with accepted international norms, the partnership seeks to build trust among participants who might otherwise compete. This governance model has been cited by digital policy analysts as a blueprint for other multilateral technology collaborations.
Workstreams and expert group priorities
At launch, GPAI organized its work into thematic streams: Responsible AI, Data Governance, the Future of Work, and Innovation and Commercialization, with an ad hoc COVID-19 Response subgroup that was soon formalized. Each stream focuses on applied research and policy tooling rather than abstract principle-writing. For instance, the Responsible AI group examines methods for algorithmic auditing, robustness testing, and human oversight. Projects have explored bias mitigation in computer vision datasets, red-teaming methodologies for large language models, and impact assessment frameworks that map system-level harms to measurable indicators.
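As a minimal illustration of one such audit check, the sketch below compares positive-prediction rates across two toy demographic groups. The data, group labels, and the four-fifths cutoff noted in the comments are assumptions chosen for illustration, not published GPAI methodology.

    # Selection-rate disparity check of the kind an algorithmic audit
    # might include. Data and groups are toy assumptions.
    from collections import defaultdict

    def selection_rates(predictions, groups):
        """Positive-prediction rate per demographic group."""
        totals, positives = defaultdict(int), defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += int(pred)
        return {g: positives[g] / totals[g] for g in totals}

    def disparate_impact_ratio(rates):
        """Min/max ratio of group selection rates; 1.0 means parity."""
        return min(rates.values()) / max(rates.values())

    preds  = [1, 0, 1, 1, 0, 0, 0, 1]                  # toy model outputs
    groups = ["a", "a", "a", "b", "b", "b", "b", "a"]  # toy group labels
    rates = selection_rates(preds, groups)
    # The 0.8 cutoff is the common "four-fifths" heuristic, not a GPAI rule.
    print(rates, "flag" if disparate_impact_ratio(rates) < 0.8 else "ok")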
The Data Governance workstream concentrates on data stewardship models that reconcile access with protection. Early efforts documented best practices for federated learning, privacy-enhancing technologies, and secure data trusts. By 2021, the stream had produced case studies on public-sector data collaborations in health and mobility, highlighting how differential privacy and homomorphic encryption could be incorporated into procurement requirements. The Future of Work stream, meanwhile, studied how AI-driven automation affects job quality, skills training, and worker voice, and recommended metrics for labor inspectors to evaluate algorithmic management systems used in logistics and gig platforms.
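One privacy-enhancing primitive those case studies discuss, differential privacy, can be illustrated with a Laplace-noised count query. The epsilon value and toy records below are assumptions chosen for the example.

    # Minimal sketch of the Laplace mechanism for an epsilon-differentially
    # private count. A counting query has sensitivity 1 (adding or removing
    # one record changes the count by at most 1), so Laplace noise with
    # scale 1/epsilon suffices.
    import random

    def dp_count(records, predicate, epsilon=1.0):
        true_count = sum(1 for r in records if predicate(r))
        # Difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        return true_count + noise

    ages = [34, 29, 41, 52, 38, 27, 45]   # toy mobility-survey records
    print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))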
Innovation and Commercialization projects address the practical barriers startups face when translating AI research into market-ready products. GPAI partners have prototyped evaluation sandboxes where early-stage companies can test compliance with safety benchmarks, and they have catalogued open-source toolchains that reduce dependency on proprietary infrastructure. The COVID-19 Response workstream delivered rapid assessments of how AI could support contact tracing, vaccine discovery, and resource allocation without eroding civil liberties. These outputs were disseminated through webinars and open repositories so public health agencies could adapt the recommendations in real time.
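A sandbox gate of that kind can be as simple as comparing measured scores against published thresholds before a system graduates. The metric names and bounds below are illustrative assumptions, not GPAI benchmarks.

    # Illustrative evaluation-sandbox gate: compare a system's measured
    # scores against thresholds. Metrics and bounds are assumptions.
    SAFETY_THRESHOLDS = {
        "toxicity_rate_max": 0.01,    # at most 1% flagged outputs
        "robustness_min": 0.90,       # accuracy retained under perturbation
        "privacy_leakage_max": 0.05,  # membership-inference advantage
    }

    def passes_sandbox(scores: dict) -> list:
        """Return a list of failed checks (an empty list means pass)."""
        failures = []
        for name, bound in SAFETY_THRESHOLDS.items():
            metric = name.rsplit("_", 1)[0]
            value = scores[metric]
            if name.endswith("_max") and value > bound:
                failures.append(f"{metric}={value} exceeds {bound}")
            if name.endswith("_min") and value < bound:
                failures.append(f"{metric}={value} below {bound}")
        return failures

    print(passes_sandbox(
        {"toxicity_rate": 0.004, "robustness": 0.93, "privacy_leakage": 0.02}
    ))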
Policy implications and global coordination
GPAI’s applied projects feed directly into the policy cycle. Reports on algorithmic risk management informed discussions at the OECD’s Committee on Digital Economy Policy, while toolkits for AI procurement have been referenced by national digital ministries drafting “trustworthy AI” legislation. The partnership has also served as a convening space for aligning export controls on advanced semiconductors, though members stress that GPAI is not a trade body. Instead, its value lies in creating shared evidence that can be cited in parliamentary hearings, regulatory rulemaking, and standards-development processes at ISO and IEEE.
Policy convergence is particularly important for topics like generative AI, where model capabilities can outpace existing safeguards. GPAI working groups have recommended minimum documentation standards for frontier model releases, including model cards that disclose training data scope, known limitations, and incident reporting channels. They have also explored evaluation benchmarks for detecting synthetic media, urging platforms to integrate provenance metadata such as C2PA signatures. These recommendations complement national AI bills of rights and the EU’s emerging AI Act by providing implementable, open-source components.
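A minimal machine-readable version of such a model card might look like the sketch below. The exact field names are assumptions, since the recommendations describe disclosure content rather than a fixed schema.

    # Minimal model-card sketch covering the recommended disclosures:
    # training-data scope, known limitations, and an incident channel.
    # Field names and values are illustrative assumptions.
    import json

    model_card = {
        "model_name": "example-frontier-model",   # hypothetical model
        "version": "1.0",
        "training_data_scope": {
            "sources": ["licensed corpora", "public web text (filtered)"],
            "cutoff_date": "2020-06-01",
            "languages": ["en", "fr"],
        },
        "known_limitations": [
            "May produce plausible but false statements",
            "Not evaluated for medical or legal advice",
        ],
        "incident_reporting": "mailto:ai-incidents@example.org",  # placeholder
        "provenance": {"content_credentials": "C2PA manifest on generated media"},
    }

    print(json.dumps(model_card, indent=2))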
Another policy priority involves capacity building for low- and middle-income countries. GPAI members have funded fellowships and technical assistance programs that pair local researchers with international labs. Outputs include open curricula on AI ethics, starter kits for responsible data sharing in agriculture and public health, and guidance on establishing national AI research clouds. By focusing on practical tools and training materials, the partnership helps governments avoid reinventing governance structures and accelerates the diffusion of responsible AI practices beyond wealthy economies.
Looking ahead, GPAI’s success will depend on continued transparency, rigorous evaluation, and inclusive participation. The partnership has committed to publishing project roadmaps, implementation guides, and post-deployment audits so that stakeholders can judge whether initiatives deliver measurable benefits. Early pilots—such as robustness benchmarks for language models and open-source toolkits for privacy-preserving analytics—are being refined through public comment periods. GPAI’s hybrid model of governmental backing and expert-led execution positions it to translate evolving technical research into trustworthy, rights-respecting applications.
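As one example of what a robustness benchmark could measure, the sketch below checks whether a classifier's label stays stable under small, typo-style input edits. The stub classifier and single-edit perturbation stand in for a real model API and a real perturbation suite; both are assumptions for illustration.

    # Perturbation-consistency sketch: a robust model's label should be
    # stable under small input edits. The classifier is a stub.
    import random

    def perturb(text: str) -> str:
        """Typo-style edit: swap two adjacent characters."""
        chars = list(text)
        i = random.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
        return "".join(chars)

    def classify(text: str) -> str:
        """Stub classifier -- replace with a real model call."""
        return "positive" if "good" in text else "negative"

    def consistency(texts, trials=20):
        """Fraction of perturbed inputs whose label matches the original's."""
        stable = sum(
            classify(perturb(t)) == classify(t)
            for t in texts for _ in range(trials)
        )
        return stable / (len(texts) * trials)

    print(consistency(["this product is good", "terrible experience"]))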
With geopolitical tensions rising around advanced AI capabilities, GPAI provides a rare venue where governments can collaborate on safety and accountability without stifling innovation. By pairing Centers of Expertise with a clear governance charter and maintaining strong ties to the OECD’s evidence base, the partnership offers a pragmatic template for international technology cooperation. If it continues to deliver open resources, measurable policy impact, and equitable global participation, GPAI could help set durable norms for AI systems that are safe, fair, and aligned with democratic values.
References: Global Partnership on AI, “Who We Are”; OECD, Recommendation of the Council on Artificial Intelligence (2019).