NIST issues second draft AI Risk Management Framework
NIST released the second draft of its AI Risk Management Framework on 29 July 2022, seeking comments on updated governance, measurement, and risk mitigation guidance ahead of the 1.0 release.
Verified for technical accuracy — Kodi C.
Second Draft Publication and Consultation
NIST published the second draft of its AI Risk Management Framework on 29 July 2022, incorporating feedback from the initial draft released in March 2022 and extensive stakeholder engagement. The draft represented significant evolution toward the final framework structure, introducing the core functions, categories, and subcategories that would characterize the released version.
NIST received over 240 comments on the first draft from industry, academia, civil society, and government agencies, which informed significant revisions addressing scope clarification, practical implementation guidance, and terminology alignment with existing risk management frameworks. The second draft comment period enabled further refinement before final publication.
Core Functions Framework
The second draft established the four core functions that structure the framework: Govern, Map, Measure, and Manage. The Govern function addresses organizational policies, processes, and structures for AI risk management. Map focuses on context establishment, including use case definition and stakeholder identification.
Measure includes assessment and monitoring of AI system characteristics and behaviors. Manage addresses response to identified risks through mitigation, monitoring, and continuous improvement. This functional structure mirrors established risk management frameworks like the NIST Cybersecurity Framework, enabling organizations to draw on existing risk management capabilities and integrate AI-specific considerations.
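The four-function structure can be sketched as a simple enumeration. The function names come from the draft; the one-line descriptions paraphrase the text above and are not official framework language:

```python
from enum import Enum

class CoreFunction(Enum):
    """The four AI RMF core functions named in the second draft.

    Descriptions paraphrase the draft's characterization of each function;
    they are illustrative, not quoted framework text.
    """
    GOVERN = "Organizational policies, processes, and structures for AI risk management"
    MAP = "Context establishment: use cases, stakeholders, deployment setting"
    MEASURE = "Assessment and monitoring of AI system characteristics and behaviors"
    MANAGE = "Response to identified risks: mitigation, monitoring, improvement"

# Print a one-line summary of each function
for fn in CoreFunction:
    print(f"{fn.name}: {fn.value}")
```

Modeling the functions as an enumeration (rather than free-form strings) is one way an organization might key internal risk registers or control catalogs to the framework's top-level structure.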
Categories and Subcategories
Within each core function, the draft specified categories and subcategories providing detailed guidance on specific risk management activities. Categories address major risk domains while subcategories detail specific practices and outcomes. For example, the Map function includes categories for understanding AI system context and identifying risks, with subcategories specifying stakeholder identification, deployment context documentation, and potential impact assessment. The hierarchical structure enables organizations to tailor framework adoption based on their AI maturity, risk profiles, and resource constraints while maintaining alignment with framework principles.
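The function → category → subcategory hierarchy lends itself to a nested mapping. The slice of the Map function below is illustrative: the labels paraphrase the kinds of activities the draft describes, and the specific entries are examples rather than the published subcategory text:

```python
# Illustrative (not official) slice of the Map function's hierarchy:
# function -> categories -> subcategories.
ai_rmf_map = {
    "Map": {
        "Establish context": [
            "Identify stakeholders and affected communities",
            "Document intended use and deployment context",
        ],
        "Identify risks": [
            "Assess potential impacts on individuals and groups",
        ],
    },
}

def subcategory_count(framework: dict) -> int:
    """Count subcategories across all functions and categories."""
    return sum(
        len(subs)
        for categories in framework.values()
        for subs in categories.values()
    )

print(subcategory_count(ai_rmf_map))
```

A structure like this makes tailoring concrete: an organization with a low-risk AI portfolio might adopt only a subset of categories, while the hierarchy keeps its selections traceable back to the framework.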
Trustworthy AI Characteristics
The draft elaborated on trustworthy AI characteristics that risk management activities should address: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed. These characteristics represent aspirational qualities that AI systems should exhibit, with framework activities designed to assess, improve, and monitor their presence. The characteristics framework provides vocabulary for discussing AI system quality attributes and establishing organizational priorities among potentially competing considerations.
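One practical use of the characteristics vocabulary is as a coverage checklist for assessments. The seven characteristics below are the ones listed in the draft; the self-assessment record and gap-check function are hypothetical illustrations, not part of the framework:

```python
# The seven trustworthy-AI characteristics from the second draft.
CHARACTERISTICS = [
    "valid and reliable",
    "safe",
    "secure and resilient",
    "accountable and transparent",
    "explainable and interpretable",
    "privacy-enhanced",
    "fair, with harmful bias managed",
]

def unassessed(assessments: dict) -> list:
    """Return characteristics not yet covered by an assessment record.

    `assessments` is a hypothetical mapping of characteristic -> evidence;
    the framework itself does not prescribe this record format.
    """
    return [c for c in CHARACTERISTICS if c not in assessments]

# Example: an organization that has only reviewed safety so far
print(unassessed({"safe": "red-team report, 2022-06"}))
```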
Voluntary Adoption Approach
NIST designed the framework for voluntary adoption, recognizing the diversity of AI applications and organizational contexts that resist one-size-fits-all mandates. The framework accommodates different implementation approaches based on sector requirements, organization size, AI system risk levels, and existing governance maturity. This flexibility enables wide applicability while requiring organizations to determine appropriate implementation depth for their circumstances. The voluntary approach contrasts with regulatory requirements like the EU AI Act while potentially informing mandatory compliance frameworks adopted by sectors or jurisdictions.
Stakeholder Engagement Process
The second draft reflected extensive stakeholder engagement including public workshops, written comment submissions, and targeted discussions with industry consortia, standards organizations, and advocacy groups. NIST maintained transparent development processes, publishing received comments and explaining how feedback influenced framework evolution. This engagement approach built consensus around framework direction while ensuring diverse perspectives shaped development. Organizations that participated in consultation gained advance understanding of the framework's direction and the opportunity to influence its final form.
Integration with Existing Frameworks
The draft explicitly addressed integration with existing risk management frameworks and standards, including NIST Cybersecurity Framework, ISO 31000 risk management, and sector-specific requirements. Cross-references enabled organizations to map AI RMF activities to existing compliance and governance structures. The framework complemented rather than replaced existing standards, providing AI-specific extensions to established risk management practices. Organizations with mature cybersecurity or enterprise risk management programs could use existing capabilities while adding AI-specific elements.
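Such integration is often operationalized as a crosswalk table. The pairings below are illustrative alignments between AI RMF functions and constructs in the frameworks named above; they are an assumption for the sketch, not an official NIST mapping:

```python
# Hypothetical crosswalk: AI RMF core functions mapped to analogous
# anchors in frameworks an organization may already operate.
# Pairings are illustrative, not an official NIST-published mapping.
CROSSWALK = {
    "Govern": ["NIST CSF: Identify (Governance)", "ISO 31000: Leadership and commitment"],
    "Map": ["NIST CSF: Identify (Asset/Risk Assessment)", "ISO 31000: Scope, context, criteria"],
    "Measure": ["NIST CSF: Detect", "ISO 31000: Risk analysis and evaluation"],
    "Manage": ["NIST CSF: Respond / Recover", "ISO 31000: Risk treatment"],
}

def existing_anchors(function: str) -> list:
    """Return existing-framework anchors for an AI RMF function, if mapped."""
    return CROSSWALK.get(function, [])

print(existing_anchors("Manage"))
```

A crosswalk like this lets a mature cybersecurity or enterprise risk program attach AI-specific activities to controls and reporting lines it already maintains, rather than standing up a parallel structure.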
Cited sources
- NIST announcement describes the second draft and consultation process.
- Final AI RMF 1.0 reflects evolution from second draft to released framework.
- AI RMF Knowledge Base provides implementation resources and community practices.