NIST releases AI Risk Management Framework concept paper
NIST’s concept paper proposes a voluntary AI Risk Management Framework built around core functions of mapping, measuring, managing and governing risks. It calls for public feedback and outlines a roadmap to publish the first draft in early 2022 and version 1.0 in early 2023.
Context and motivation. On 13 December 2021 the U.S. National Institute of Standards and Technology (NIST) published a 25‑page concept paper outlining its proposed Artificial Intelligence Risk Management Framework (AI RMF). The paper, which incorporates feedback from a July 2021 request for information and an October 2021 workshop, was released for public comment and will serve as the basis for the first draft of the framework. NIST called on stakeholders to submit feedback by 25 January 2022 and stated that a draft AI RMF would be circulated in early 2022, with version 1.0 targeted for early 2023. The concept paper is part of broader U.S. policy efforts, mandated by the National AI Initiative Act of 2020 and recommended by the National Security Commission on AI, to develop voluntary, consensus‑based standards for trustworthy AI.
Scope and objectives. The concept paper describes the AI RMF as a voluntary framework intended for use by organisations of all sizes and across sectors to manage risks associated with the design, development, use and evaluation of AI systems. NIST frames the document as an initial attempt to address risks that are unique to AI, such as long‑term, low‑probability, high‑impact harms; systemic bias; threats to safety and privacy; and consumer‑protection challenges. It emphasises that managing AI risks requires a multi‑stakeholder, interdisciplinary approach because risks may arise from the data, the model, the deployment context and the broader socio‑technical environment.
Core functions and structure. NIST proposes that the AI RMF be organised around four core functions (Map, Measure, Manage and Govern), each elaborated through categories and sub‑categories. The Map function establishes context and enumerates risks, requiring organisations to identify the domain and intended use of a system, relevant user expectations, and potential benefits or harms. NIST stresses that this exercise should be carried out by a diverse, multidisciplinary team that includes external stakeholders. The Measure function analyses the enumerated risks, assessing their nature, likelihood and potential impact. The Manage function prioritises risks and guides decisions on avoiding, mitigating, transferring or accepting them. Finally, the Govern function ensures that appropriate organisational policies, processes and roles are in place to cultivate a culture of continuous AI risk management.
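The concept paper prescribes no particular tooling or data model for these functions. As an illustration only, the Python sketch below treats them as stages of a continuous loop over a risk register; every class, field and value is a hypothetical assumption, not NIST vocabulary beyond the four function names.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Treatment(Enum):
    AVOID = "avoid"
    MITIGATE = "mitigate"
    TRANSFER = "transfer"
    ACCEPT = "accept"

@dataclass
class Risk:
    # Map: context captured when the risk is enumerated.
    description: str
    context: str                           # domain, intended use, affected stakeholders
    # Measure: estimates filled in during analysis.
    likelihood: float = 0.0                # 0..1
    impact: float = 0.0                    # 0..1
    # Manage: treatment decision recorded after prioritisation.
    treatment: Optional[Treatment] = None

    @property
    def severity(self) -> float:
        """Simple likelihood-times-impact score used for prioritisation."""
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    """Govern: the register is an organisational artefact with a named owner."""
    owner: str
    risks: list = field(default_factory=list)

    def prioritised(self) -> list:
        # Manage: highest-severity risks surface first for treatment decisions.
        return sorted(self.risks, key=lambda r: r.severity, reverse=True)

# Usage: map a risk, measure it, then manage it.
register = RiskRegister(owner="AI governance board")
bias_risk = Risk(
    description="Historical bias in training data skews loan decisions",
    context="Consumer lending; affects applicants from under-served groups",
)
register.risks.append(bias_risk)                           # Map
bias_risk.likelihood, bias_risk.impact = 0.6, 0.9          # Measure
register.prioritised()[0].treatment = Treatment.MITIGATE   # Manage
```

The point of the sketch is the separation of concerns: Map populates context, Measure fills in estimates, Manage records a treatment decision, and Govern owns the register that ties them together.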
Profiles, tiers and adaptability. Beyond the core functions, the concept paper introduces profiles and implementation tiers. Profiles enable users to prioritise activities and outcomes that align with their organisation’s values, mission and risk tolerance, and they can be tailored to sector‑specific or use‑case‑specific contexts. Implementation tiers support decision‑making and communication about the sufficiency of an organisation’s processes, resources and expertise for managing AI risks. These design elements mirror NIST’s Cybersecurity and Privacy Frameworks and are intended to provide flexibility so that the AI RMF can be adopted by organisations with varying levels of maturity.
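Neither profiles nor tiers are specified beyond this level of principle. A minimal sketch of how an organisation might record a sector profile and run a gap analysis against it appears below; the keys, outcomes and the numeric tier scale are illustrative assumptions, not details drawn from the concept paper.

```python
# A hypothetical profile for a consumer-lending deployment. The four
# function names come from the concept paper; every other key, outcome
# and the tier scale are illustrative only.
lending_profile = {
    "sector": "consumer lending",
    "risk_tolerance": "low",              # organisational appetite, per Govern
    "priority_outcomes": {
        "Map": ["document intended use", "identify affected groups"],
        "Measure": ["test for disparate impact", "quantify error rates"],
        "Manage": ["mitigate bias before deployment"],
        "Govern": ["assign a model owner", "schedule periodic review"],
    },
    "implementation_tier": 2,             # self-assessed maturity, 1 (ad hoc) to 4
}

def gaps(profile: dict, completed: set) -> list:
    """Return prioritised outcomes not yet evidenced, for a gap analysis."""
    return [
        outcome
        for outcomes in profile["priority_outcomes"].values()
        for outcome in outcomes
        if outcome not in completed
    ]

print(gaps(lending_profile, completed={"document intended use"}))
```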
Importance of risk identification and diversity. NIST underscores that risk identification should consider not only obvious harms but also cumulative or systemic risks that may manifest later in an AI system’s lifecycle. The concept paper notes that mapping and measuring risks should engage diverse perspectives to capture impacts on individuals, groups, organisations and society. This emphasis on diversity reflects concerns raised by civil‑society groups and researchers that AI systems can perpetuate historical biases and injustices if risk assessment is not inclusive.
Call for participation and next steps. According to NIST’s news release, the concept paper solicited public input to refine the AI RMF and inform future workshops. NIST invited comments on whether the approach was on the right track, whether the scope and audience were appropriate, and what additional topics should be included. It planned to integrate feedback into a draft framework for public comment in early 2022 and to hold further workshops ahead of releasing version 1.0 in early 2023. The concept paper emphasises that the AI RMF will be voluntary and consensus‑driven, aimed at fostering trust in AI and encouraging innovation while managing potential harms.
Broader significance. The development of an AI RMF reflects growing recognition that AI technologies require structured risk management similar to cybersecurity and privacy frameworks. By articulating functions like Map, Measure, Manage and Govern, NIST provides a common language for organisations to identify and mitigate risks throughout the AI lifecycle. The framework also acknowledges the need for continual governance as AI systems evolve. When finalised, the AI RMF is expected to inform U.S. federal agency adoption of AI technologies, influence international standards efforts, and provide guidance for industry sectors seeking to deploy AI responsibly.
Outlook. Although the concept paper is only an early step, it signals NIST’s commitment to a collaborative and iterative process. Stakeholders should monitor the forthcoming draft and participate in consultations to ensure that the framework addresses emerging issues such as generative AI, explainability and algorithmic accountability. As AI systems become increasingly pervasive, frameworks like NIST’s AI RMF will play a critical role in balancing innovation with ethical and legal obligations, promoting transparency and accountability, and maintaining public trust in AI‑driven decision making.
National security backdrop. The push for an AI risk management framework stems in part from the U.S. National Security Commission on Artificial Intelligence (NSCAI). In its March 2021 final report, the 15‑member commission warned that the United States was “a long way from being ‘AI-ready’” and urged the government to invest billions in AI research and talent to compete with nations such as China. The report called for a Technology Competitiveness Council at the White House and recommended sweeping changes to ensure the United States could defend itself against AI‑enabled threats and remain a leader in AI innovation. These recommendations underscored the need for national strategies and standards for trustworthy and safe AI development, reinforcing NIST’s mandate under the National AI Initiative Act to create the AI RMF.
Detailed functions explained. The concept paper devotes several pages to describing each core function. The Map function emphasises identifying the domain, intended use, users and potential impacts of an AI system. It advises organisations to consider the system’s geographical, cultural and temporal context and to involve a multidisciplinary team that reflects diverse stakeholders. Mapping provides a baseline for deciding whether an AI solution is warranted or whether existing processes should be retained. The Measure function requires analysing enumerated risks through both qualitative and quantitative assessments, considering uncertainties, trade‑offs and the effectiveness of existing controls. The Manage function involves prioritising risks and deciding whether to avoid, mitigate, share or accept them, while acknowledging that new risks may emerge over time. Finally, the Govern function stresses continuous organisational oversight, establishing policies and roles to ensure that risk management processes remain effective throughout the AI system’s life cycle.
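The paper does not say how qualitative judgements and quantitative metrics should be combined in the Measure function. One common pattern, assumed here rather than drawn from NIST guidance, is to map ordinal ratings onto a numeric scale and apply threshold‑based treatment rules in the Manage step, re‑running the assessment as the system and its context change.

```python
# Map ordinal ratings onto numbers so qualitative judgements and
# quantitative metrics can be compared; scale and thresholds are assumptions.
RATING = {"low": 1, "medium": 2, "high": 3}

def assess(likelihood: str, impact: str) -> int:
    """Measure: a coarse risk score from two ordinal ratings."""
    return RATING[likelihood] * RATING[impact]

def treat(score: int, mitigable: bool) -> str:
    """Manage: threshold-based treatment rule; revisit as context changes."""
    if score >= 6:
        return "mitigate" if mitigable else "avoid"
    if score >= 3:
        return "share"        # e.g. contractual or insurance arrangements
    return "accept"

# A risk judged high likelihood and medium impact, with a feasible mitigation.
print(treat(assess("high", "medium"), mitigable=True))   # -> mitigate
```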
Relation to existing frameworks. NIST’s proposal mirrors the structure of its widely adopted Cybersecurity Framework (CSF) and Privacy Framework. Like those frameworks, the AI RMF uses categories, sub‑categories and implementation tiers to help organisations translate high‑level principles into actionable practices. The concept paper notes that the functions, categories and profiles are not intended to be prescriptive but to provide a flexible foundation that can evolve with technological advances and sector‑specific needs. This alignment allows organisations already familiar with the CSF to integrate AI risk management into existing governance processes.