Original publication: Sept. 19, 2024 — National Institute of Standards and Technology (NIST) news release.
NIST released the draft AI 600-1: Generative AI Profile to expand the AI Risk Management Framework 1.0 (AI RMF) for generative systems. The release emphasizes that the profile translates the RMF’s functions—Govern, Map, Measure, Manage—into outcomes tailored to text, image, audio, code, and multimodal generators. It calls on developers, deployers, and regulators to submit public comments to the U.S. AI Safety Institute by November 4, 2024.
The announcement highlights risk themes the profile addresses: documenting provenance and training data rights, detecting hallucinations and unsafe content, protecting against prompt injection and model theft, and ensuring humans remain in oversight loops. NIST stresses the need for cross-sector adoption because generative AI is increasingly embedded in consumer services, enterprise tooling, critical infrastructure, and federal missions.
The U.S. AI Safety Institute (USAISI), housed within NIST, paired the draft with implementation updates. USAISI committed to:
The press release ties the profile to ongoing Executive Order 14110 deliverables, noting that the draft helps agencies satisfy near-term safety expectations for foundation and generative models.