
AI · Credibility 97/100 · 7 min read

AI Briefing — January 26, 2023

NIST released the AI Risk Management Framework 1.0 alongside a Playbook, Crosswalk, and Roadmap, establishing the Govern, Map, Measure, and Manage cycle U.S. enterprises now use to run trustworthy AI programs.

Executive briefing: At its January 26, 2023 Trustworthy & Responsible AI workshop, the U.S. National Institute of Standards and Technology (NIST) published the AI Risk Management Framework 1.0 together with an interactive Playbook, a Crosswalk of related standards, and a Roadmap for continued research. The voluntary framework codifies four core functions (Govern, Map, Measure, and Manage) that organizations cycle through to identify, analyze, and remediate AI risks spanning safety, security, privacy, explainability, fairness, and resilience.

Key industry signals

  • Govern function formalizes accountability. NIST details the roles, policies, and culture enablers required to sustain an AI risk program, calling for cross-functional oversight and inventory hygiene across the AI lifecycle.
  • Playbook operationalizes controls. The companion Playbook lists concrete actions for each subcategory, such as model cards, dataset lineage, and human-factors testing, so enterprises can evidence implementation; a minimal model card sketch follows this list.
  • Crosswalk connects global standards. NIST mapped the AI RMF against ISO/IEC 23894, the OECD AI Recommendation, and U.S. Executive Order 13960, helping regulated entities align disclosures with existing governance regimes.
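
To show what evidencing a Playbook action can look like, here is a minimal model card record sketched in Python. It is a sketch under assumed conventions: the dataclass fields and example values are illustrative, not a schema NIST prescribes.

    from dataclasses import dataclass, field

    @dataclass
    class ModelCard:
        """Minimal model card record; fields are illustrative, not a NIST-prescribed schema."""
        model_name: str
        version: str
        intended_use: str
        out_of_scope_uses: list[str] = field(default_factory=list)
        training_data_lineage: list[str] = field(default_factory=list)  # dataset IDs or URIs
        evaluation_metrics: dict[str, float] = field(default_factory=dict)
        known_limitations: list[str] = field(default_factory=list)

    # Hypothetical example: a credit-risk model documented as audit evidence.
    card = ModelCard(
        model_name="credit-risk-scorer",
        version="2.3.1",
        intended_use="rank retail credit applications for manual underwriting review",
        out_of_scope_uses=["automated final denial without human review"],
        training_data_lineage=["s3://datalake/credit/apps-2019-2022@v7"],
        evaluation_metrics={"auc": 0.83, "demographic_parity_gap": 0.04},
        known_limitations=["underrepresents thin-file applicants"],
    )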

Control alignment

  • NIST AI RMF. Establish an inventory of AI systems, assign risk owners, and define thresholds for shifting from experimentation to production, as prescribed by the Govern and Map functions; see the registry sketch after this list.
  • ISO/IEC 42001 readiness. Leverage the Measure function’s metrics guidance to design management-system controls that will be expected once the draft ISO/IEC 42001 AI management standard is finalized.
  • Executive Order 13960. Use the Manage function to demonstrate that agencies and contractors are addressing privacy, civil rights, and performance-monitoring obligations for AI in federal missions.
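
One way to implement the inventory and promotion-threshold guidance above is sketched below in Python: a registry entry carries an accountable risk owner, and a measured score gates the experiment-to-production transition. All identifiers, tiers, and thresholds are hypothetical assumptions, not values the framework specifies.

    from dataclasses import dataclass
    from enum import Enum

    class Stage(Enum):
        EXPERIMENT = "experiment"
        PRODUCTION = "production"

    @dataclass
    class AISystemRecord:
        """Inventory entry per the Govern and Map functions; field names are illustrative."""
        system_id: str
        description: str
        risk_owner: str              # accountable individual, per Govern
        risk_tier: int               # 1 = highest impact
        stage: Stage
        min_robustness_score: float  # threshold gating promotion to production

    def may_promote(record: AISystemRecord, measured_robustness: float) -> bool:
        """Gate the shift from experimentation to production on a measured threshold."""
        return measured_robustness >= record.min_robustness_score

    registry = [
        AISystemRecord("ai-007", "claims triage classifier", "jane.doe@example.com",
                       risk_tier=1, stage=Stage.EXPERIMENT, min_robustness_score=0.90),
    ]
    print(may_promote(registry[0], measured_robustness=0.87))  # False: stays in experimentation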

Risk measurement priorities

  • Instrument quantitative metrics (error rates, drift, robustness scores) and qualitative assessments (human factors, contextual harm analyses) so residual risk is defensible during audits; a drift-metric sketch follows this list.
  • Integrate independent testing—red teaming, adversarial probing, and domain expert review—before high-impact systems clear go-live gates.
  • Track training and inference data lineage to connect governance artifacts with privacy impact assessments and model cards.
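
On the quantitative side, a widely used drift measure is the Population Stability Index (PSI); the Python sketch below compares a baseline score distribution against live scores. PSI is our illustrative choice, not a metric the AI RMF mandates, and the alert thresholds in the comments are conventional rules of thumb.

    import numpy as np

    def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
        """PSI between baseline (training) and live (inference) score distributions.

        Bin edges come from the baseline's quantiles; a small epsilon avoids log(0).
        Rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate.
        """
        edges = np.quantile(expected, np.linspace(0.0, 1.0, bins + 1))
        edges[0], edges[-1] = -np.inf, np.inf  # catch outliers in the end bins
        expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
        actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
        eps = 1e-6
        expected_pct = np.clip(expected_pct, eps, None)
        actual_pct = np.clip(actual_pct, eps, None)
        return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

    # Hypothetical data: production scores have shifted relative to training.
    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 10_000)
    live = rng.normal(0.3, 1.1, 10_000)
    print(f"PSI = {population_stability_index(baseline, live):.3f}")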

Enablement moves

  • Brief Chief Data, Privacy, and Information Officers on the AI RMF taxonomy so they can harmonize model registries, policy waivers, and procurement questionnaires.
  • Update vendor due-diligence packets to demand AI RMF-aligned disclosures—including transparency on third-party components and fallback procedures.
  • Launch change-management campaigns teaching product teams how to document intended use, assumptions, and monitoring plans inside the Playbook templates.

Zeph Tech analysis

  • Governance now has a U.S. benchmark. Regulators and procurement teams can point to AI RMF subcategories when challenging undocumented AI deployments.
  • Measurement discipline differentiates leaders. Organizations that can quantify bias, robustness, and data quality through the Measure function will be ready for European and state-level AI assurance demands.
  • Roadmap signals future obligations. NIST’s research agenda highlights socio-technical evaluations, explainability, and workforce competence—areas buyers should incorporate into multi-year AI oversight budgets.

Zeph Tech is helping operators translate the AI RMF Playbook into system-of-record workflows so governance, compliance, and engineering teams share a single view of AI risk.

  • NIST AI RMF
  • AI governance
  • Risk management
  • Trustworthy AI