Policy Briefing: NIST launches the AI Safety Institute Consortium to anchor AI governance
The U.S. National Institute of Standards and Technology (NIST) formed the Artificial Intelligence Safety Institute Consortium (AISIC) on 8 February 2024 to coordinate test methods, risk-management research, and standards contributions from more than 200 organisations supporting the NIST AI Risk Management Framework (AI RMF).
Executive briefing: NIST formally established AISIC on 8 February 2024, consolidating public- and private-sector expertise to advance evaluation methods for generative AI, red-teaming protocols, and future regulatory sandboxes. The consortium will publish shared tooling and reference architectures that operationalise the AI RMF and the directives of Executive Order 14110.
Consortium mandate
- Evaluation science. AISIC workstreams will produce measurement science for synthetic-media provenance, bias testing, and safety benchmarks that federal agencies can adopt in forthcoming AI procurement rules (a worked bias-metric sketch appears after this list).
- Standards coordination. Members are expected to align contributions with ISO/IEC JTC 1/SC 42, IEEE, and international partners to avoid fragmented technical standards.
- Incident sharing. NIST plans to pilot confidential incident-reporting mechanisms to inform future guidance on monitoring and redress (a hypothetical report structure is sketched after this list).
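To make the bias-testing workstream concrete, the sketch below computes a demographic parity gap, one common fairness metric: the largest spread in positive-prediction rates across demographic groups. The function name, data shapes, and toy inputs are illustrative assumptions, not AISIC deliverables.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates across groups.

    predictions: iterable of 0/1 model outputs.
    groups: iterable of group labels aligned with predictions.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    # Per-group rate of positive predictions, then max-min spread.
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

gap, rates = demographic_parity_gap(
    [1, 0, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"]
)
print(f"per-group positive rates: {rates}, gap: {gap:.2f}")  # gap: 0.33
```

A production benchmark would report several such metrics (for example equalised odds or calibration error) with confidence intervals rather than a single point estimate.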
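A confidential incident-reporting pilot implies some shared report structure. The dataclass below is a purely hypothetical sketch of what such a record might capture; NIST has not published a reporting schema, and every field name here is an assumption.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIIncidentReport:
    # Hypothetical fields; NIST has not published an incident schema.
    reporter_org: str
    system_name: str
    harm_category: str   # e.g. "bias", "privacy", "safety"
    severity: str        # e.g. "low", "medium", "high"
    description: str
    detected_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

report = AIIncidentReport(
    reporter_org="Example Corp",
    system_name="support-chatbot-v2",
    harm_category="bias",
    severity="medium",
    description="Refusal rates differed sharply across dialects in triage replies.",
)
# Serialise to JSON, the likely interchange format for a reporting pilot.
print(json.dumps(asdict(report), indent=2))
```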
Program actions
- Join relevant cohorts. Enterprises building or deploying foundation models should join the AISIC cohorts that match their use cases to influence testing protocols.
- Align evidence. Map existing AI assurance artefacts (model cards, safety evaluations, bias assessments) to the AI RMF core functions (Govern, Map, Measure, Manage) so documentation can flow directly into upcoming federal requests for information (see the mapping sketch after this list).
- Track federal adoption. Monitor OMB’s implementation of memorandum M-24-10 and subsequent agency policies that reference AISIC outputs when defining contractor obligations.
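As one way to start the evidence-mapping exercise, the sketch below indexes common assurance artefacts against the four AI RMF core functions and flags functions with no supporting evidence on file. The mapping itself is an illustrative assumption and should be tailored to NIST AI RMF 1.0 and each organisation's own documentation.

```python
# Hypothetical mapping of assurance artefacts to AI RMF core functions
# (Govern, Map, Measure, Manage); adjust to organisational practice.
RMF_EVIDENCE_MAP = {
    "Govern": ["AI use policy", "model approval records"],
    "Map": ["model cards", "intended-use statements", "data provenance logs"],
    "Measure": ["bias assessments", "safety evaluations", "red-team reports"],
    "Manage": ["incident response runbooks", "monitoring dashboards"],
}

def coverage_gaps(available_artifacts):
    """Return the RMF core functions with no supporting artefact on file."""
    have = set(available_artifacts)
    return [
        function
        for function, artifacts in RMF_EVIDENCE_MAP.items()
        if not have.intersection(artifacts)
    ]

print(coverage_gaps(["model cards", "bias assessments"]))
# -> ['Govern', 'Manage']: functions still lacking evidence
```

A gap report like this gives compliance teams a concrete backlog: each flagged function points to documentation that would need to exist before responding to a federal request for information.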