Global Partnership on AI Adopts First Workplan — December 3, 2020
GPAI members convened in Montréal for the inaugural summit, endorsing their first multi-year workplan spanning responsible AI, data governance, the future of work, innovation and commercialization, and pandemic response projects.
Canada and France convened ministers, researchers, industry leaders, and civil society experts on 3 December 2020 for the inaugural Global Partnership on AI (GPAI) Summit in Montréal. Delegates endorsed the first multi-year workplan covering four permanent working groups—Responsible AI, Data Governance, Future of Work, and Innovation & Commercialization—plus a dedicated multistakeholder response to COVID-19. The agenda aligned with the OECD AI Principles that underpin GPAI, and the OECD hosts the GPAI Secretariat to provide analytical and administrative support. Participants stressed that the partnership would remain practice-oriented, producing tools that national agencies and companies can deploy to manage AI risks and expand equitable access to AI benefits.
The summit formalized how the two Centres of Expertise will anchor project execution. Canada announced that the International Centre of Expertise in Montréal for the Advancement of Artificial Intelligence (CEIMIA) would coordinate projects across responsible AI and pandemic response, while France confirmed that a partner Centre of Expertise in Paris would mobilize researchers and policy specialists on the future of work and innovation themes. By pairing policy direction from governments with implementation know-how from academia, industry labs, and nonprofits, GPAI members aimed to shorten the path from research to pilot deployments.
Founding members—including Canada, France, the European Union, the United States, the United Kingdom, Japan, Germany, India, the Republic of Korea, Singapore, and other partners—reiterated that GPAI will complement, not replace, the broader OECD AI policy work and other multilateral initiatives. Membership was structured to allow additional economies to join while retaining strong technical independence for the expert working groups. The inaugural workplan also called for close collaboration with UNESCO, the Global Partnership for Sustainable Development Data, and standard-setting bodies so that GPAI recommendations align with international norms.
Workstreams and research focus
The Responsible AI working group prioritized projects that operationalize human-centric AI principles. Deliverables include comparative guidance on algorithmic impact assessments, practical bias measurement methodologies for computer vision and natural language processing systems, and a catalogue of technical robustness tests that regulators and procurement teams can integrate into safety assessments. The group also committed to sharing model documentation templates and post-deployment monitoring practices tailored for high-risk applications such as biometric identification, health triage, and credit scoring.
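To make the bias-measurement strand concrete, the sketch below computes two common group-fairness statistics, the demographic parity gap and the disparate impact ratio, for a binary classifier's decisions. GPAI did not prescribe a particular metric, so the function names, thresholds, and toy data here are illustrative assumptions rather than working-group outputs.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-prediction rate per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

def disparate_impact_ratio(predictions, groups):
    """Lowest group selection rate divided by the highest (closer to 1 is more balanced)."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

# Toy decisions from a hypothetical screening model and each subject's group label.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))   # ~0.2 (selection rates of 0.6 vs 0.4)
print(disparate_impact_ratio(preds, groups))   # ~0.67
```

Comparative guidance of the kind the working group planned would sit on top of such primitives, explaining when each statistic is appropriate and how to interpret it for a given application.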
The Data Governance working group focused on governance frameworks that enable cross-border data sharing while respecting privacy and security obligations. Summit participants highlighted the need for interoperable data access agreements, consent management tooling, and reference architectures for privacy-preserving computation. Early projects mapped how data trusts, data altruism mechanisms, and federated learning pilots can help researchers and start-ups collaborate without compromising sensitive data. Members agreed to publish a playbook on building high-quality training datasets with transparent lineage, quality controls, and mechanisms to identify and remediate harmful content.
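The playbook's call for transparent lineage can be pictured as structured provenance metadata that travels with every dataset release. The record below is a minimal sketch with assumed field names and example values; GPAI's actual templates were still to be published at the time of the summit.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
from typing import Dict, List
import json

@dataclass
class DatasetLineageRecord:
    """Provenance metadata attached to a dataset release (field names are illustrative)."""
    name: str
    version: str
    sources: List[str]                        # upstream datasets or collection campaigns
    collected_on: date
    legal_basis: str                          # e.g. consent, contract, statutory authority
    quality_checks: Dict[str, bool] = field(default_factory=dict)
    known_issues: List[str] = field(default_factory=list)

    def to_json(self) -> str:
        record = asdict(self)
        record["collected_on"] = self.collected_on.isoformat()
        return json.dumps(record, indent=2)

record = DatasetLineageRecord(
    name="clinical-triage-notes",
    version="2020.12",
    sources=["hospital-intake-forms"],
    collected_on=date(2020, 11, 30),
    legal_basis="consent",
    quality_checks={"deduplicated": True, "pii_scrubbed": True},
    known_issues=["under-represents rural clinics"],
)
print(record.to_json())
```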
The Future of Work working group examined labour market impacts and workforce resilience. It advanced a research agenda on AI-enabled job transition pathways, worker voice in algorithmic management, and reskilling initiatives that target small and medium-sized enterprises (SMEs). Delegates noted that AI adoption patterns differ across sectors, so the workplan calls for sector-specific case studies in manufacturing, healthcare, public services, and agriculture. The group will also collaborate with social partners to create guidance on responsible workplace surveillance and human oversight of automated decision systems.
The Innovation & Commercialization working group assembled pilot projects that lower barriers for SMEs and public-sector agencies to adopt trustworthy AI. Planned outputs include open-source reference implementations for AI safety baselines, procurement-ready evaluation criteria, and a cross-jurisdictional compendium of regulatory sandboxes. The group intends to map compute and dataset access programs that can be shared among GPAI members, reducing duplication and enabling researchers in lower-resourced ecosystems to participate.
The AI and Pandemic Response subgroup, established during 2020, continued to coordinate projects that use AI for epidemiological modelling, diagnostics, and supply-chain resilience. Summit discussions emphasized documenting lessons learned from early pandemic deployments, including model drift risks, privacy-preserving contact tracing, and equity considerations in vaccine distribution analytics. The subgroup will publish practical guidelines for integrating AI tools into public health workflows, with an emphasis on transparent governance, clinical validation, and open data practices that respect privacy.
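One lesson the subgroup highlighted, model drift, lends itself to a simple illustration. The population stability index below is one widely used distribution-shift statistic for monitoring deployed models; the metric choice, binning, and data are assumptions for illustration, not guidance issued by the subgroup.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live sample.
    Common rule of thumb (not a GPAI threshold): PSI > 0.2 suggests meaningful drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def bucket_shares(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # A small floor avoids dividing by or taking the log of zero for empty buckets.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = bucket_shares(expected), bucket_shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]         # model scores seen at validation time
live = [0.1 * i + 2.0 for i in range(100)]       # shifted scores observed after deployment
print(round(population_stability_index(baseline, live), 3))
```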
Governance and participation
Delegates confirmed a rotating Steering Committee that sets strategic direction, reviews project proposals, and ensures geographic balance across the working groups. Each working group is co-chaired by representatives from at least two member governments and supported by independent experts drawn from academia, industry, and civil society. The Steering Committee works in tandem with the GPAI Council, which provides minister-level oversight. Canada and France serve as co-chairs for the Council during the inaugural cycle, reflecting their role as GPAI founders.
The OECD, serving as GPAI’s Secretariat, provides legal, administrative, and analytical support. This arrangement allows GPAI to leverage the OECD’s research capacity and digital policy fora while retaining a flexible, project-driven structure. Summit communiqués noted that project findings would be shared through the OECD AI Policy Observatory (OECD.AI) to maximize transparency and reuse. GPAI members also committed to periodic public reporting on project milestones, funding contributions, and participation metrics to maintain accountability.
Membership expansion was a recurring theme. The workplan invites additional economies that endorse the OECD AI Principles and demonstrate commitments to democratic values, human rights, and rule of law. Delegates also encouraged deeper engagement from multilateral development banks and standard-setting organizations to ensure that GPAI technical recommendations can feed into financing programmes and global interoperability efforts.
Deliverables and milestones for 2021
The inaugural workplan set clear milestones for 2021 to demonstrate momentum and provide actionable outputs:
- Responsible AI toolkits. Publish a suite of risk management practices, including model cards, audit checklists, and post-deployment monitoring guidance that public-sector procurement teams can integrate into tenders and oversight processes (a minimal model card sketch follows this list).
- Dataset governance guidance. Release templates for data access agreements, quality assessment checklists, and benchmark scenarios for privacy-preserving analytics to help members implement responsible data sharing.
- Future of work case studies. Deliver comparative assessments of AI-enabled training and job-matching programmes, focusing on SMEs and vulnerable workers, and develop recommendations for balancing innovation with worker protections.
- Innovation playbook. Compile an inventory of regulatory sandboxes, safety evaluation resources, and compute-credit programmes that member economies can replicate to accelerate trustworthy AI adoption.
- Pandemic response evaluations. Document best practices for responsible AI use in epidemiology, diagnostics, and supply chain analytics, highlighting governance safeguards that were effective during the COVID-19 emergency.
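As a minimal sketch of what the model cards in the first bullet might look like in practice, the snippet below renders a procurement-ready card as Markdown. The field names and example values are assumptions for illustration, not a GPAI template.

```python
def render_model_card(card: dict) -> str:
    """Render a minimal model card as Markdown for inclusion in tender documents."""
    lines = [f"# Model card: {card['model_name']}"]
    for heading in ("intended_use", "out_of_scope_uses", "training_data",
                    "evaluation_results", "known_limitations", "monitoring_plan"):
        lines.append(f"\n## {heading.replace('_', ' ').title()}")
        value = card.get(heading, "not documented")
        if isinstance(value, (list, tuple)):
            lines.extend(f"- {item}" for item in value)
        elif isinstance(value, dict):
            lines.extend(f"- {name}: {result}" for name, result in value.items())
        else:
            lines.append(str(value))
    return "\n".join(lines)

example_card = {
    "model_name": "triage-priority-v1",
    "intended_use": "rank incoming service requests for human review",
    "out_of_scope_uses": ["fully automated denial of a request"],
    "training_data": "2018-2020 anonymised case records with documented lineage",
    "evaluation_results": {"AUC": 0.87, "demographic parity gap": 0.04},
    "known_limitations": ["not validated for non-English submissions"],
    "monitoring_plan": "quarterly fairness and drift review",
}
print(render_model_card(example_card))
```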
These deliverables were paired with evaluation criteria that track openness, reproducibility, and real-world adoption. Each working group must publish progress updates before the next annual summit, and the Steering Committee will review whether pilots translate into scalable programmes.
Why this matters for policy and industry teams
For governments, the inaugural workplan provides ready-to-use guidance that complements national AI strategies. Responsible AI toolkits and dataset governance playbooks can shorten policy development cycles and align oversight approaches across jurisdictions. Agencies that procure AI systems gain access to harmonized requirements on testing, transparency, and human oversight, reducing uncertainty for vendors and boosting public trust.
For companies, especially SMEs, the workplan highlights pathways to participate in cross-border AI research without shouldering prohibitive compliance burdens. Shared sandboxes, compute support, and common evaluation protocols reduce duplication and allow smaller teams to demonstrate safety and fairness benchmarks recognized by multiple regulators. The pandemic response outputs show how to integrate AI into critical infrastructure while maintaining rigorous privacy and accountability safeguards.
For researchers and civil society, the Summit underscored GPAI’s emphasis on multi-stakeholder governance. Openly published case studies, transparent selection of pilot sites, and commitments to independent evaluation create opportunities to scrutinize and improve AI deployments. Collaboration with UNESCO, the OECD Network of Experts on AI, and standards bodies such as ISO/IEC will help translate findings into durable international norms.
Sources
- First annual meeting of the Global Partnership on Artificial Intelligence — Innovation, Science and Economic Development Canada; official communiqué describing the Montréal Summit decisions, governance structure, and Centres of Expertise.
- GPAI Montreal Summit 2020 — Global Partnership on Artificial Intelligence; provides agendas, working group reports, and supporting materials for the inaugural summit.
- OECD support for the Global Partnership on AI — Organisation for Economic Co-operation and Development; outlines the OECD’s role as GPAI Secretariat and links to workplan documentation.