AI Briefing — NIST opens call for input on Artificial Intelligence Risk Management Framework
NIST requested public comments to shape a voluntary AI Risk Management Framework covering accuracy, explainability, safety, and security, launching its multi-stakeholder drafting process.
On 29 July 2021 the U.S. National Institute of Standards and Technology issued a Request for Information seeking input on an Artificial Intelligence Risk Management Framework. NIST asked industry, researchers, and civil society to identify practices for managing AI risks across accuracy, explainability, safety, bias mitigation, privacy, and security, and outlined plans for public workshops and draft releases.
AI program owners should review their model governance controls against NIST’s questions, prepare comment submissions, and align documentation so future RMF drafts can be mapped to existing assurance processes.
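To make that mapping exercise concrete, the sketch below shows one way to line up the risk characteristics named in the RFI against an organization's existing governance controls and flag gaps before drafting a comment submission. The inventory format, control IDs (for example MG-01), and evidence names are illustrative assumptions, not a NIST-defined schema.

```python
"""Minimal sketch: compare the AI risk characteristics named in NIST's
July 2021 RFI against an organization's existing assurance controls and
report any characteristics with no mapped control. The inventory layout,
control IDs, and evidence labels below are hypothetical examples."""

# Characteristics NIST asked commenters to address in the RFI.
RFI_CHARACTERISTICS = [
    "accuracy",
    "explainability",
    "safety",
    "bias mitigation",
    "privacy",
    "security",
]

# Hypothetical inventory: each entry maps a characteristic to the internal
# control and documentation artifact that currently addresses it.
existing_controls = {
    "accuracy": {"control_id": "MG-01", "evidence": "model validation report"},
    "privacy": {"control_id": "PR-04", "evidence": "privacy impact assessment"},
    "security": {"control_id": "SEC-11", "evidence": "threat model"},
}


def find_gaps(characteristics, controls):
    """Return the RFI characteristics that have no mapped internal control."""
    return [c for c in characteristics if c not in controls]


if __name__ == "__main__":
    for characteristic in find_gaps(RFI_CHARACTERISTICS, existing_controls):
        print(f"No existing control mapped to: {characteristic}")
```

Keeping this mapping in a machine-readable form makes it easier to re-run the gap check as NIST releases workshop materials and framework drafts.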