NIST AI RMF 1.0 and Generative AI profile update
NIST's AI Risk Management Framework defines four core functions: Govern, Map, Measure and Manage. In July 2024, NIST released a cross-sectoral Generative AI Profile addressing hallucination, bias and copyright risks. Teams building AI systems should inventory what they have and set up continuous monitoring.
In January 2023 the U.S. National Institute of Standards and Technology (NIST) released Version 1.0 of its Artificial Intelligence Risk Management Framework (AI RMF). Intended for voluntary adoption across industries, the framework helps teams incorporate trustworthiness into the design, development, use and evaluation of AI systems. It lays out four core functions—Govern, Map, Measure and Manage—that apply across the AI lifecycle. Together, they guide teams to identify and document AI systems, assess risks, measure impacts and implement controls, all while cultivating a culture of accountability and transparency.
Core functions and generative AI guidance
The AI RMF emphasizes risk‑based processes rather than prescriptive requirements. Under Govern, teams establish oversight, roles and policies for AI risk management. The Map function calls for documenting the AI system’s purpose, context and training data so teams understand potential harms. Measure requires evaluating model performance, fairness, robustness and privacy through testing and monitoring. Finally, Manage focuses on implementing mitigations, incident response and continuous improvement.
As generative AI exploded in popularity, NIST was tasked by Executive Order 14110 to develop additional guidance. In July 2024 NIST published AI 600‑1, a cross‑sectoral Generative AI Profile. This profile builds on the AI RMF 1.0 and provides suggested actions to manage novel risks posed by generative models. It highlights the importance of curating high‑quality, lawful training data, preventing prompt injection and jailbreaks, evaluating outputs for hallucination, bias and copyright infringement, and ensuring clear disclosure when synthetic content is generated. The profile stresses documentation, chain‑of‑custody procedures for training data, model evaluation protocols and human‑in‑the‑loop oversight.
Implementation challenges and updates
Adopting the AI RMF and its generative AI profile requires cross‑functional collaboration. Many teams lack full inventories of AI systems, and generative models often rely on opaque third‑party providers. The profile recommends mapping supply chains and establishing procurement policies for third‑party models.
Measuring risks is also complex; metrics for robustness, privacy and bias must evolve with emerging attack methods. NIST continues to develop complementary resources, including guidelines for watermarking AI‑generated content and a Code of Practice on marking synthetic outputs, expected in 2026. The EU’s AI Act and other forthcoming regulations will probably reference the AI RMF, making early adoption a prudent compliance strategy.
Implications and recommended actions
To use the AI RMF effectively, teams should:
- Establish AI governance structures. Designate accountable officers and multidisciplinary committees to oversee AI risk management, set risk appetites and allocate resources.
- Develop an AI system inventory. Document existing and planned AI and machine‑learning systems, including purpose, inputs, outputs, deployment context and responsible teams.
- Align development with the four functions. Use the Govern–Map–Measure–Manage cycle to guide design and procurement. For generative AI, apply AI 600‑1’s controls: vet training data sources, perform red‑team testing for jailbreaks and hallucinations, and implement human review of outputs.
- Implement monitoring and incident response. Deploy continuous monitoring for model drift, bias and security vulnerabilities, and establish incident response plans that address AI‑specific harms.
- Engage stakeholders. Communicate with affected groups, including customers and regulators, about AI capabilities, limitations and risk‑mitigation measures. Transparently disclose when generative content is used and provide opt‑out mechanisms where appropriate.
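The inventory step above can start as a simple structured record. The sketch below is a minimal illustration; the field names are our own, not an AI RMF schema:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an AI system inventory (illustrative fields only)."""
    name: str
    purpose: str                    # business purpose and deployment context
    inputs: list[str]               # data sources feeding the system
    outputs: list[str]              # artifacts or decisions produced
    responsible_team: str           # accountable owner, per the Govern function
    is_generative: bool = False
    risks: list[str] = field(default_factory=list)        # documented harms (Map)
    mitigations: list[str] = field(default_factory=list)  # applied controls (Manage)

def unmitigated(inventory: list[AISystemRecord]) -> list[str]:
    """Flag systems that have documented risks but no recorded mitigations."""
    return [s.name for s in inventory if s.risks and not s.mitigations]
```

Even a lightweight structure like this lets governance committees query for gaps, for example which generative systems still lack a recorded control.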
Our analysis
NIST’s AI RMF and the Generative AI Profile have become de facto baselines for trustworthy AI. Although voluntary, they shape regulators’ expectations and complement emerging laws like the EU AI Act. By adopting the framework now, teams can embed risk management in their AI practices, build consumer and regulator trust and future‑proof products against stricter legislation.
Generative AI’s potential to transform creative work also introduces unique risks; managing hallucination, copyright and security requires deeper technical controls and continuous oversight. We recommend that technology leaders integrate the AI RMF into existing governance programs, train teams on generative AI risks and collaborate with legal, security and ethics experts to design resilient, transparent systems.
Third-party model governance
The Generative AI Profile emphasizes supply chain transparency, yet most organizations consume foundation models through APIs without visibility into training data or model architecture. Develop vendor assessment questionnaires specific to AI RMF requirements, including questions on data provenance, red-team testing results, and incident notification commitments. Consider contractual provisions requiring model cards and evaluation summaries.
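Keeping the questionnaire as structured data makes answers comparable across providers. The questions and topics below are examples we chose for illustration, not AI RMF-mandated language:

```python
from dataclasses import dataclass

@dataclass
class VendorQuestion:
    topic: str        # AI RMF-related area the question probes
    question: str
    required: bool    # whether a missing answer should block procurement

# Example questionnaire; extend with organization-specific items.
QUESTIONNAIRE = [
    VendorQuestion("data provenance",
                   "Can you document the sources and licensing of training data?", True),
    VendorQuestion("red-teaming",
                   "Do you share summaries of red-team testing results?", True),
    VendorQuestion("incident notification",
                   "What is your committed notification window for model incidents?", True),
    VendorQuestion("documentation",
                   "Do you provide model cards and evaluation summaries?", False),
]

def gaps(answers: dict[str, str]) -> list[str]:
    """Return topics where a required question has no answer on file."""
    return [q.topic for q in QUESTIONNAIRE if q.required and not answers.get(q.topic)]
```

A procurement review can then block contracts until `gaps()` comes back empty for required topics.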
Continuous monitoring architecture
Unlike traditional software, generative AI outputs require ongoing evaluation for drift, emergent behaviors, and novel attack vectors. Build monitoring pipelines that sample outputs, evaluate against safety benchmarks, and trigger human review when anomalies exceed thresholds. Document monitoring methodology and escalation procedures in alignment with Measure function requirements.
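A monitoring loop of that shape might look like the following sketch. The scoring heuristic, sampling rate and threshold are placeholders we invented; in practice the scorer would be a real safety benchmark:

```python
import random

def risk_score(output: str) -> float:
    """Stand-in evaluator: fraction of overclaiming words. Replace with a real benchmark."""
    flagged = {"always", "never", "guaranteed"}
    words = output.lower().split()
    return sum(w in flagged for w in words) / max(len(words), 1)

def monitor(outputs: list[str], sample_rate: float = 0.2,
            threshold: float = 0.1, seed: int = 0) -> list[str]:
    """Sample generated outputs, score them, and queue anomalies for human review."""
    rng = random.Random(seed)                              # seeded for reproducibility
    sampled = [o for o in outputs if rng.random() < sample_rate]
    return [o for o in sampled if risk_score(o) > threshold]
```

The returned list is the human-review queue; logging the methodology (sample rate, scorer version, threshold) alongside each run supports the Measure function's documentation expectations.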
Regulatory convergence planning
The AI RMF's four functions align conceptually with the EU AI Act's conformity assessment and the emerging ISO/IEC 42001 standard. For organizations in scope, map AI RMF controls to these frameworks early to enable efficient evidence reuse across compliance programs. Track harmonization developments and participate in standards body consultations to influence alignment.
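Such a crosswalk can begin as a simple lookup table. The clause pairings below are our illustrative reading, not an official mapping:

```python
# Illustrative crosswalk from AI RMF functions to related framework provisions.
# Pairings are assumptions for demonstration, not an authoritative mapping.
CROSSWALK: dict[str, dict[str, str]] = {
    "Govern":  {"ISO/IEC 42001": "Clause 5 (Leadership)",
                "EU AI Act": "Art. 17 (Quality management system)"},
    "Map":     {"ISO/IEC 42001": "Clause 6 (Planning)",
                "EU AI Act": "Art. 9 (Risk management system)"},
    "Measure": {"ISO/IEC 42001": "Clause 9 (Performance evaluation)",
                "EU AI Act": "Art. 15 (Accuracy and robustness)"},
    "Manage":  {"ISO/IEC 42001": "Clause 10 (Improvement)",
                "EU AI Act": "Art. 9 (Risk management system)"},
}

def reusable_evidence(function: str) -> list[str]:
    """List the frameworks where evidence for this AI RMF function may be reused."""
    return sorted(CROSSWALK.get(function, {}))
```

Compliance teams can then attach one evidence artifact, such as a red-team report, to every framework clause it satisfies rather than duplicating it per program.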