AI Briefing — NIST releases AI Risk Management Framework 1.0
NIST published version 1.0 of its AI Risk Management Framework on 26 January 2023 with a companion Playbook, providing guidance for trustworthy AI across govern, map, measure, and manage functions.
On 26 January 2023 the U.S. National Institute of Standards and Technology released the finalized AI Risk Management Framework (RMF) 1.0 and an online Playbook. The RMF offers voluntary guidance to help organizations design and deploy trustworthy AI by organizing practices into govern, map, measure, and manage functions that address safety, security, bias, and accountability risks.
The framework is voluntary and intended for adoption across sectors, and it is likely to inform procurement requirements and regulatory expectations. AI teams should benchmark lifecycle controls, documentation, and monitoring against the RMF core outcomes and fold the Playbook’s suggested actions and references into model governance workflows; a minimal tracking sketch follows the source links below.
- NIST release announcement summarizes the RMF scope and intended audience.
- AI RMF 1.0 (NIST AI 100-1) provides the full framework and implementation guidance.
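To make the governance recommendation above concrete, the sketch below shows one way a team might track the four RMF core functions against internal controls and recorded evidence. It is a minimal illustration only: the control names, evidence paths, and data layout are assumptions for this example, not NIST requirements, and a real mapping would be populated from the Playbook’s suggested actions and the organization’s own control catalogue.

```python
"""Illustrative sketch only: maps the four AI RMF core functions to
hypothetical internal controls and flags evidence gaps. Control names,
evidence paths, and the layout are assumptions, not NIST requirements."""

from dataclasses import dataclass, field


@dataclass
class Control:
    name: str  # internal control identifier (hypothetical)
    evidence: list[str] = field(default_factory=list)  # links to docs, eval reports, etc.

    @property
    def satisfied(self) -> bool:
        # Treat a control as covered only if at least one evidence artifact is recorded.
        return bool(self.evidence)


# Hypothetical mapping of RMF core functions to internal controls.
rmf_controls: dict[str, list[Control]] = {
    "Govern": [Control("ai-policy-approved", ["policy/v2.pdf"]),
               Control("risk-owner-assigned")],
    "Map": [Control("use-case-context-documented", ["model-card.md"])],
    "Measure": [Control("bias-evaluation-run"), Control("robustness-tests-run")],
    "Manage": [Control("incident-playbook-linked", ["runbooks/ai-incident.md"])],
}


def coverage_report(controls: dict[str, list[Control]]) -> None:
    """Print, per RMF function, which controls still lack evidence."""
    for function, items in controls.items():
        gaps = [c.name for c in items if not c.satisfied]
        status = "OK" if not gaps else f"gaps: {', '.join(gaps)}"
        print(f"{function}: {status}")


if __name__ == "__main__":
    coverage_report(rmf_controls)
```

Run as-is, the report would flag the controls without evidence (here, risk ownership and the Measure checks), which is the kind of gap list a governance review against the RMF core outcomes is meant to surface.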