NIST AI 600-1: Generative AI Profile (Draft) — Extended brief
Original publication: Sept. 19, 2024 — National Institute of Standards and Technology (NIST) news release.
Purpose and scope
NIST issued the draft AI 600-1: Generative AI Profile to extend the AI Risk Management Framework 1.0 (AI RMF) for text, image, audio, video, code, and multimodal generators. The draft translates the RMF functions—Govern, Map, Measure, Manage—into concrete outcomes for generative systems and invites public comment through November 4, 2024 via the U.S. AI Safety Institute (USAISI).
Control expectations and operating posture (5+ minute brief)
- Govern: Establish accountable owners for generative model lifecycle decisions; document provenance, data rights, and license compliance; maintain auditable change and release records; and align deployment approvals with organizational AI governance and policy guardrails.
- Map: Classify model use cases by impact tier (consumer, enterprise, critical infrastructure, and federal missions); log training and fine-tuning sources; record safety-relevant model characteristics (context window, modalities, alignment methods); and tag third-party dependencies for supply chain review and governance playbooks.
- Measure: Run structured red-teaming for prompt injection, data exfiltration, unsafe content generation, and model theft; verify outputs against factivity and copyright expectations; instrument usage with content authenticity signals; and integrate benchmarks into AI RMF-aligned dashboards and AI tools.
- Manage: Enforce human-in-the-loop oversight for high-risk contexts; monitor model drift and alignment regressions; gate external releases with safety and provenance attestations; and adopt fail-closed responses for suspected policy violations, aligned to incident response and procurement playbooks.
Implementation risks to monitor
- Prompt injection and data leakage: Cross-domain prompts can override safety instructions or extract proprietary data, requiring layered content filters and audit logging.
- Model theft and misuse: Weight exfiltration, API abuse, and replication risks drive the need for access control, rate limiting, and artifact signing.
- Hallucinations and unsafe content: Unsupported claims, harmful outputs, and copyright violations require provenance tagging, retrieval grounding, and reviewer escalation paths.
- Training data rights and consent: Missing permissions for data, code, or media introduce legal exposure; traceability and license validation are mandatory before deployment.
- Supply chain dependencies: Third-party models, libraries, and safety filters can introduce unvetted behaviors; apply SBOM-style tracking and continuous verification.
Timelines and deliverables
- Public comment window: Through November 4, 2024, USAISI is collecting feedback on the draft profile.
- Generative AI Evaluation Program: USAISI is launching reference tests for red-teaming, safety alignment, and content authenticity to support AI RMF-aligned dashboards.
- Measurement protocols and benchmarks: Forthcoming artifacts are intended for integration into organizational risk reporting and tooling.
- Consortium working groups: Expanded agency and industry pilots will provide operational data to refine the profile before finalization.
- Procurement language: NIST will issue acquisition clauses agencies can deploy immediately to align contracts while the profile is finalized.
Linkages to AI pillar guidance and governance playbooks
Organizations can align profile controls to internal AI guardrails by mapping Govern and Manage outcomes to governance playbooks, connecting Map and Measure activities to AI toolkits for evaluations and monitoring, and harmonizing procurement clauses with enterprise policy standards.
Related feed and operational updates
Track follow-on developments in the site feed for USAISI evaluation releases, red-team protocol updates, and procurement templates linked to the AI RMF. Cross-reference entries that mention NIST AI 600-1, Executive Order 14110 implementation, or USAISI working group outputs to keep deployment playbooks current.