NIST Launches U.S. AI Safety Institute — November 17, 2023
The National Institute of Standards and Technology established the U.S. AI Safety Institute to develop test methods, evaluations, and guidelines that implement the President’s AI Executive Order.
NIST will convene industry, academia, and international partners to create benchmarks for AI safety, security, and trustworthiness. The institute expands on the AI Risk Management Framework by coordinating red-teaming guidance, generative AI evaluations, and sector-specific safety cases.
- Testing infrastructure. NIST will build reference datasets, evaluation protocols, and measurement science for advanced AI systems.
- Collaborative governance. The institute plans to stand up a consortium to align public and private sector safety practices.
- Executive Order implementation. NIST is responsible for delivering safety benchmarks mandated by the October 30, 2023 AI Executive Order.
Model developers and enterprise risk teams should monitor the institute’s outputs to prepare for forthcoming U.S. safety testing and reporting obligations.
Latest guides
- AI Workforce Enablement and Safeguards Guide — Zeph Tech
  Equip employees for AI adoption with skills pathways, worker protections, and transparency controls aligned to U.S. Department of Labor principles, ISO/IEC 42001, and EU AI Act…
- AI Incident Response and Resilience Guide — Zeph Tech
  Coordinate AI-specific detection, escalation, and regulatory reporting that satisfy EU AI Act serious incident rules, OMB M-24-10 Section 7, and CIRCIA preparation.
- AI Model Evaluation Operations Guide — Zeph Tech
  Build traceable AI evaluation programmes that satisfy EU AI Act Annex VIII controls, OMB M-24-10 Appendix C evidence, and AISIC benchmarking requirements.