UN launches High-Level Advisory Body on Artificial Intelligence to shape global governance
Overview of the United Nations Secretary-General’s launch of a High-Level Multistakeholder Advisory Body on Artificial Intelligence, including its mandate, composition, objectives and significance.
Fact-checked and reviewed — Kodi C.
Background
The past year has witnessed an extraordinary advance in artificial intelligence (AI), with chatbots, image generators and other generative systems entering the mainstream. United Nations Secretary‑General António Guterres cautioned that this progress brings both transformational opportunities and serious risks. In a press conference launching a new High‑Level Multistakeholder Advisory Body on AI, he said the world needs an inclusive conversation on how to govern AI so its benefits are maximized and its dangers contained.
Launch of the High‑Level Advisory Body
On 26 October 2023 the UN Secretary‑General announced the creation of a High‑Level Advisory Body on artificial intelligence. At the launch he said AI could “power extraordinary progress for humanity” by predicting crises, improving public‑health and education services, and supercharging climate action and the Sustainable Development Goals.
Yet he warned that AI expertise and resources are concentrated in a few companies and countries; without global cooperation, the technology could deepen inequality and undermine trust. He cited risks such as misinformation, entrenched bias, surveillance, privacy invasion and fraud. Given these stakes, the Advisory Body aims to link the various governance initiatives already under way and offer guidance on managing AI responsibly.
Mandate and objectives
The Advisory Body will work independently and report to the UN Secretary‑General. By the end of 2023 it will produce preliminary recommendations in three areas: (1) international governance of AI; (2) a shared understanding of risks and challenges; and (3) key opportunities and enablers. These recommendations will inform preparations for the Summit of the Future in 2024 and negotiations of the proposed Global Digital Compact. The body will also explore ways to connect existing national and sectoral AI initiatives, ensuring efforts are coherent rather than fragmented.
Composition
The Advisory Body includes thirty‑nine experts appointed by the Secretary‑General. They come from government, the private sector, academia and civil society and will serve in a personal capacity. The UN emphasizes that the group is gender‑balanced, geographically diverse and spans generations. According to the UN’s appointments announcement, members include:
- Government officials and policy leaders: Omar Sultan al Olama (Minister of State for AI, United Arab Emirates), Carme Artigas (Spain’s Secretary of State for Digitalization and AI), Anna Christmann (Aerospace Coordinator in Germany) and Haksoo Ko (Chair of Korea’s Personal Information Protection Commission), among others.
- Academics and researchers: Abeba Birhane (Mozilla Foundation advisor on AI accountability, Ethiopia), Virginia Dignum (Professor of Responsible AI at Umeå University, Sweden), Andreas Krause (Professor at ETH Zurich, Switzerland), Emma Ruttkamp‑Bloem (University of Pretoria, South Africa) and Jaan Tallinn (co‑founder of the Centre for the Study of Existential Risk, Estonia).
- Industry leaders: Natasha Crampton (Chief Responsible AI Officer, Microsoft), Mira Murati (Chief Technology Officer, OpenAI), Hiroaki Kitano (Chief Technology Officer, Sony), James Manyika (Senior Vice‑President at Google‑Alphabet), Nazneen Rajani (Lead Researcher at Hugging Face), and Yi Zeng (Director of the Brain‑inspired Cognitive Intelligence Lab, Chinese Academy of Sciences).
- Civil society and philanthropy: Nighat Dad (Executive Director of Pakistan’s Digital Rights Foundation), Vilas Dhar (President of the Patrick J. McGovern Foundation), Rahaf Harfoush (digital anthropologist, France) and Marietje Schaake (International Policy Director at Stanford University’s Cyber Policy Center).
Key Themes and Focus Areas
The Advisory Body's work centers on several interconnected themes reflecting the complexity of AI governance. The first theme addresses international governance structures, examining how existing institutions can adapt to AI challenges and whether new mechanisms are needed. The body will consider how to balance innovation with risk mitigation and how to ensure governance frameworks remain adaptable as technology evolves.
The second theme focuses on building a shared understanding of AI risks and challenges. This includes examining potential harms from AI systems, such as bias, discrimination, privacy violations, and impacts on employment. The body will also address emerging concerns about AI-generated misinformation, autonomous weapons, and existential risks from advanced AI systems. Developing common definitions and assessment frameworks supports coherent international dialogue.
The third theme explores opportunities and enablers for beneficial AI deployment. This includes examining how AI can accelerate progress toward Sustainable Development Goals, improve public services, and address global challenges like climate change and pandemic preparedness. The body will consider infrastructure, capacity-building, and resource-sharing mechanisms to ensure AI benefits are widely distributed.
Relationship to Global Digital Compact
The Advisory Body's recommendations will directly inform negotiations on the proposed Global Digital Compact, a framework the UN aims to adopt at the Summit of the Future in September 2024. The Compact seeks to establish shared principles for an open, free, secure digital future for all. AI governance represents a central component of this broader digital governance agenda.
Preparatory processes for the Compact involve multi-stakeholder consultations across regions and sectors. The Advisory Body provides expert input to ensure AI-specific considerations are adequately addressed in the Compact's provisions on digital cooperation, data governance, and emerging technology management.
Implementation and Next Steps
The Advisory Body operates on an accelerated timeline, producing preliminary recommendations by end of 2023 and final recommendations in advance of the Summit of the Future. This tight schedule reflects the urgency policymakers attach to AI governance amid rapid technological advancement. The body's work will inform both near-term policy responses and longer-term institutional arrangements.
Organizations engaged in AI development, deployment, or governance should monitor the Advisory Body's outputs and consider how its recommendations may influence regulatory environments, international standards, and stakeholder expectations. Early engagement with emerging governance frameworks positions organizations to contribute constructively to global AI governance discourse.
Final assessment
The UN High-Level Advisory Body on Artificial Intelligence represents a significant step toward coordinated international AI governance. By bringing together diverse expertise from government, industry, academia, and civil society, the body aims to develop recommendations that balance innovation with risk management and ensure AI benefits are broadly shared. Organizations and policymakers should engage with the body's work as part of broader efforts to shape responsible AI development and deployment.
Ongoing monitoring of Advisory Body proceedings and outputs enables organizations to anticipate governance developments and adapt strategies as needed. Industry associations and civil society organizations provide additional forums for coordinating perspectives and influencing outcomes.
Multilateral Governance Frameworks
The UN Advisory Body addresses gaps in international AI governance coordination. National regulatory approaches vary significantly, creating challenges for organizations operating across jurisdictions. The body's recommendations aim to establish common principles enabling regulatory interoperability while respecting diverse national contexts and development priorities.
Organizations should monitor the Advisory Body's outputs for emerging consensus on AI governance principles. Early alignment with international norms reduces future compliance burdens as national regulations converge around common frameworks.
Stakeholder Engagement Opportunities
Civil society organizations, industry groups, and technical experts can contribute to UN AI governance processes through consultation mechanisms. Constructive engagement helps ensure governance frameworks balance innovation objectives with risk mitigation and human rights protection. Documentation of stakeholder input demonstrates commitment to responsible AI development.