NIST AI RMF 1.0 and Generative AI profile update
NIST’s voluntary AI Risk Management Framework provides four functions—Govern, Map, Measure and Manage—to build trustworthy AI. In July 2024 NIST released a generative AI profile outlining actions to mitigate risks such as hallucination, bias and copyright infringement【542888435075233†L126-L129】. Organisations should adopt the framework, inventory AI systems and integrate continuous monitoring and governance.
In January 2023 the U.S. National Institute of Standards and Technology (NIST) released Version 1.0 of its Artificial Intelligence Risk Management Framework (AI RMF). Intended for voluntary adoption across industries, the framework helps organisations incorporate trustworthiness into the design, development, use and evaluation of AI systems【542888435075233†L100-L105】. It lays out four core functions—Govern, Map, Measure and Manage—that apply across the AI lifecycle. Together, they guide organisations to identify and document AI systems, assess risks, measure impacts and implement controls, all while cultivating a culture of accountability and transparency.
Core functions and generative AI guidance
The AI RMF emphasises risk‑based processes rather than prescriptive requirements. Under Govern, organisations establish oversight, roles and policies for AI risk management. The Map function calls for documenting the AI system’s purpose, context, training data and stakeholders to understand potential harms. Measure requires evaluating model performance, fairness, robustness and privacy through testing and monitoring. Finally, Manage focuses on implementing mitigations, incident response and continuous improvement.
As generative AI exploded in popularity, NIST was tasked by Executive Order 14110 to develop additional guidance. In July 2024 NIST published AI 600‑1, a cross‑sectoral Generative AI Profile. This profile builds on the AI RMF 1.0 and provides suggested actions to manage novel risks posed by generative models【542888435075233†L126-L129】. It highlights the importance of curating high‑quality, lawful training data, preventing prompt injection and jailbreaks, evaluating outputs for hallucination, bias and copyright infringement, and ensuring clear disclosure when synthetic content is generated. The profile stresses documentation, chain‑of‑custody procedures for training data, model evaluation protocols and human‑in‑the‑loop oversight.
Implementation challenges and updates
Adopting the AI RMF and its generative AI profile requires cross‑functional collaboration. Many organisations lack comprehensive inventories of AI systems, and generative models often rely on opaque third‑party providers. The profile recommends mapping supply chains and establishing procurement policies for third‑party models. Measuring risks is also complex; metrics for robustness, privacy and bias must evolve with emerging attack methods. NIST continues to develop complementary resources, including guidelines for watermarking AI‑generated content and a Code of Practice on marking synthetic outputs, expected in 2026【109655055543575†L248-L261】. The EU’s AI Act and other forthcoming regulations will likely reference the AI RMF, making early adoption a prudent compliance strategy.
Implications and recommended actions
To leverage the AI RMF effectively, organisations should:
- Establish AI governance structures. Designate accountable officers and multidisciplinary committees to oversee AI risk management, set risk appetites and allocate resources.
- Develop an AI system inventory. Document existing and planned AI and machine‑learning systems, including purpose, inputs, outputs, deployment context and responsible teams.
- Align development with the four functions. Use the Govern–Map–Measure–Manage cycle to guide design and procurement. For generative AI, apply AI 600‑1’s controls: vet training data sources, perform red‑team testing for jailbreaks and hallucinations, and implement human review of outputs.
- Implement monitoring and incident response. Deploy continuous monitoring for model drift, bias and security vulnerabilities, and establish incident response plans that address AI‑specific harms.
- Engage stakeholders. Communicate with affected groups, including customers and regulators, about AI capabilities, limitations and risk‑mitigation measures. Transparently disclose when generative content is used and provide opt‑out mechanisms where appropriate.
Zeph Tech analysis
NIST’s AI RMF and the Generative AI Profile have become de facto baselines for trustworthy AI. Although voluntary, they shape regulators’ expectations and complement emerging laws like the EU AI Act. By adopting the framework now, organisations can embed risk management in their AI practices, build consumer and regulator trust and future‑proof products against stricter legislation. Generative AI’s potential to transform creative work also introduces unique risks; managing hallucination, copyright and security requires deeper technical controls and continuous oversight. Zeph Tech recommends that technology leaders integrate the AI RMF into existing governance programmes, train teams on generative AI risks and collaborate with legal, security and ethics experts to design resilient, transparent systems.




