NIST AI 100-1: Artificial Intelligence Risk Management Framework (AI RMF 1.0)

Voluntary, risk-based guidance from NIST that helps organizations govern, map, measure, and manage AI risks so that AI systems are not only high-performing, but also safe, fair, accountable, and aligned with societal values.

⚡ Quick Summary

NIST’s AI RMF 1.0 is a voluntary, technology- and sector-agnostic framework for managing risks arising from AI systems across their lifecycle. It frames AI systems as socio-technical: impacts emerge not only from models and data, but from how people build, deploy, and use them. Part 1 explains how AI risks differ from traditional software risks, outlines seven trustworthiness characteristics (valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed), and highlights challenges in measuring and prioritizing risk. Part 2 turns this into practice through four core functions—GOVERN, MAP, MEASURE, MANAGE—each broken into categories and subcategories that organizations can tailor. Appendices add role definitions for “AI actors,” human-AI interaction issues, and an account of how AI risks diverge from classic IT risk.

🧩 What’s Covered

The document is organized into two main parts plus appendices.

Part 1 (“Foundational Information”) frames AI risk as the combination of an event’s likelihood and the magnitude of its impact on people, organizations, and ecosystems, covering short- and long-term, localized and systemic harms. It stresses that AI risk management must consider civil rights, safety, and economic and environmental impacts, and notes that AI systems can both amplify and mitigate existing inequities. NIST also details why AI risk is hard to measure and manage: a lack of agreed-upon metrics, opaque models, reliance on third-party components, shifting deployment contexts, and the gap between lab testing and real-world behavior.
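The framework deliberately stops short of prescribing how to quantify this combination of likelihood and impact. As a purely illustrative sketch (not something NIST AI 100-1 specifies), the framing can be expressed as a simple scoring exercise; the 1–5 scales, the multiplication rule, and the example risks below are assumptions for demonstration only.

```python
from dataclasses import dataclass

# Illustrative only: NIST AI 100-1 does not prescribe numeric scales or a scoring formula.
# Likelihood and impact are each rated 1 (low) to 5 (high) here by assumption.
@dataclass
class AIRisk:
    description: str
    likelihood: int  # 1-5: estimated probability of the harm occurring
    impact: int      # 1-5: estimated magnitude of harm to people, organizations, or ecosystems

    def score(self) -> int:
        # Composite score as likelihood x impact (a common convention, not a NIST rule).
        return self.likelihood * self.impact

risks = [
    AIRisk("Harmful bias in loan-approval outputs", likelihood=4, impact=5),
    AIRisk("Performance drift after deployment", likelihood=3, impact=3),
]

# Rank risks so the highest-scoring items are considered first.
for r in sorted(risks, key=lambda r: r.score(), reverse=True):
    print(f"{r.score():>2}  {r.description}")
```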

A dedicated section on “Audience” uses the lifecycle diagram on page 10 to introduce five socio-technical dimensions (Application Context, Data & Input, AI Model, Task & Output, and People & Planet) and maps them to lifecycle stages like plan, collect/process, build, verify, deploy, and operate. The table-like graphic on page 11 then lists representative “AI actors” for each stage—designers, data scientists, TEVV experts, integrators, operators, end users, impacted communities—and positions them as shared risk owners.

The trustworthiness section defines seven characteristics and explicitly discusses trade-offs (e.g., privacy vs accuracy, interpretability vs performance) and the need for context-sensitive balancing. Each characteristic gets a concise treatment: validity and reliability (including accuracy and robustness), safety (prevention of harm through design, testing, monitoring, and graceful failure), security and resilience (including adversarial threats and model exfiltration), accountability and transparency (documentation, provenance, redress), explainability and interpretability, privacy-enhancement (including PETs and data minimization), and fairness with harmful bias managed (systemic, computational, and human-cognitive bias).

Part 2 is the operational “Core”. The central diagram on page 20 shows the four functions:

  • GOVERN (cross-cutting culture, structures, accountability, inventory, supply-chain, stakeholder engagement).
  • MAP (establishing context, categorizing the system, clarifying capabilities and goals, mapping risks—including from third parties—and characterizing impacts).
  • MEASURE (choosing metrics and methods; evaluating each trustworthiness characteristic; tracking risk over time; and assessing whether measurement itself is effective).
  • MANAGE (prioritizing risks, defining treatments, planning responses and recovery, handling residual risk, monitoring third-party elements, and driving continuous improvement).

Each function is unpacked into detailed tables (Tables 1–4) with categories and subcategories that read like an implementation checklist: e.g., GOVERN 1.1 requires understanding legal/regulatory requirements; MAP 5.1 calls for documenting likelihood and magnitude of impacts; MEASURE 2.11 requires evaluation of fairness and bias; MANAGE 4.1 demands post-deployment monitoring, appeal and override mechanisms, decommissioning, and change management.
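Because the Core reads like a checklist, some teams track it in a structured form. The snippet below is a minimal sketch of one way to do that, assuming a simple status field per subcategory; the subcategory texts are paraphrased from Tables 1–4 and the statuses are hypothetical, not part of the framework.

```python
from enum import Enum

class Status(Enum):
    NOT_STARTED = "not started"
    IN_PROGRESS = "in progress"
    IMPLEMENTED = "implemented"
    NOT_APPLICABLE = "n/a"  # tailoring: organizations may scope out subcategories

# A small excerpt of the AI RMF Core, keyed by subcategory ID.
# Descriptions are paraphrased; statuses are hypothetical examples.
core_checklist = {
    "GOVERN 1.1":   ("Legal and regulatory requirements involving AI are understood and managed", Status.IMPLEMENTED),
    "MAP 5.1":      ("Likelihood and magnitude of each identified impact are documented", Status.IN_PROGRESS),
    "MEASURE 2.11": ("Fairness and bias are evaluated and results documented", Status.IN_PROGRESS),
    "MANAGE 4.1":   ("Post-deployment monitoring, appeal/override, decommissioning, and change management are in place", Status.NOT_STARTED),
}

# Simple roll-up: which subcategories still need work?
open_items = [sub_id for sub_id, (_, status) in core_checklist.items()
              if status in (Status.NOT_STARTED, Status.IN_PROGRESS)]
print("Open subcategories:", open_items)
```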

Finally, the Profiles section defines “Current” and “Target” AI RMF profiles for particular domains (like hiring or housing) and suggests using gap analysis to prioritize improvements. Appendices describe AI actor tasks in more depth, spell out how AI risks differ from traditional software (e.g., data drift, emergent behavior, privacy amplification, environmental cost), and highlight human-AI interaction challenges and the intended attributes of the AI RMF itself.
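To make the Current-versus-Target idea concrete, the sketch below shows one way a profile gap analysis could be structured in code. The 0–2 maturity scale and the subcategory selection are assumptions; the framework itself defines no scoring scheme.

```python
# Hypothetical maturity levels per subcategory: 0 = absent, 1 = partial, 2 = fully implemented.
# NIST AI RMF defines no such scale; this is only one way to structure a profile gap analysis.
current_profile = {"GOVERN 1.1": 2, "MAP 5.1": 1, "MEASURE 2.11": 0, "MANAGE 4.1": 0}
target_profile  = {"GOVERN 1.1": 2, "MAP 5.1": 2, "MEASURE 2.11": 2, "MANAGE 4.1": 1}

# Gap = target minus current; positive values mark where to focus the improvement roadmap.
gaps = {sub: target_profile[sub] - current_profile.get(sub, 0) for sub in target_profile}

for sub, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    if gap > 0:
        print(f"{sub}: gap of {gap}")
```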

💡 Why it matters?

AI RMF 1.0 has become a foundational reference for “responsible AI” programs, especially in the US context. It translates abstract ideas like fairness or accountability into a structured set of outcomes that risk, security, and product teams can actually operationalize. By treating AI as socio-technical and emphasizing “People & Planet” alongside data and models, it pushes organizations beyond narrow model-centric evaluations. The GOVERN–MAP–MEASURE–MANAGE structure also aligns neatly with existing enterprise risk, cybersecurity, and privacy frameworks, making it easier to plug AI risk into governance processes that already exist rather than inventing a parallel one. For any organization that expects to align with emerging regulation or external audits, AI RMF is a natural benchmark and a shared language for internal and external stakeholders.

❓ What’s Missing

  • The framework is intentionally non-prescriptive: it lists outcomes but rarely says “how” in concrete, sector-specific terms. Organizations still need substantial internal work (or companion playbooks) to turn subcategories into procedures, controls, and KPIs.
  • Generative and foundation models are implicitly covered but not treated separately, which can leave gaps for model-as-a-service, open-weights, or agentic systems that rose to prominence after the framework’s January 2023 release.
  • There is limited direct guidance on board-level oversight, incentives, and reporting lines (e.g., how AI risk intersects with audit committees, compliance, or ESG reporting).
  • International regulatory alignment (EU AI Act, sectoral rules) is only touched on at a high level, so mapping AI RMF to specific legal obligations still requires additional work.
  • No maturity model or scoring approach is provided; assessing “how far along” an organization is in implementing AI RMF is left to users.

👥 Best For

  • AI governance, risk, and compliance teams designing or upgrading an enterprise-level AI risk framework.
  • Product, data, and ML leaders who need a common language to align safety, security, privacy, and ethics work.
  • Public-sector bodies, regulators, and auditors looking for a neutral reference point for expectations around “trustworthy AI.”
  • Researchers and standards bodies building methods, benchmarks, or tools for TEVV, safety, or bias assessment.

📄 Source Details

  • Full title: NIST AI 100-1 – Artificial Intelligence Risk Management Framework (AI RMF 1.0)
  • Publisher: National Institute of Standards and Technology, U.S. Department of Commerce
  • Date: January 2023
  • Length: 48 pages + online AI RMF Playbook
  • DOI: 10.6028/NIST.AI.100-1

📝 Thanks to

Thanks to the NIST AI RMF team and the broader multi-stakeholder community (industry, academia, civil society, and government) that contributed workshops, comments, and use-case input to shape this first version of the framework.

About the author
Jakub Szarmach
