December 6, 2023

Last Updated on January 14, 2024

NIST AI Risk Management Framework:
What You Should Know and Why You Should Care

AI introduces unique business and cybersecurity risks, while also serving as a new vector for familiar risks like theft and fraud. Organizations need to ensure that their AI adoption is safe, legal, and ethical while delivering positive business results.
To help public and private sector organizations manage AI risk, the US National Institute of Standards and Technology (NIST) recently released the AI Risk Management Framework (AI RMF). Intended for voluntary use, this new guidance can help infuse trustworthiness into AI technologies and encourage innovation while reducing AI risk.
This article introduces the NIST AI RMF and explains how it can help you address AI risk.

What is the NIST AI RMF?

As mandated by the National Artificial Intelligence Initiative Act of 2020 (P.L. 116-283), part of the National Defense Authorization Act of 2021, NIST developed the AI RMF as a resource to help the many organizations now developing and/or using AI systems to better manage AI risks and support trustworthy and responsible AI. The framework is the result of an open and collaborative process across the full spectrum of AI interests.

Looking to provide flexibility and applicability for organizations of all sizes and sectors, the AI RMF is rights-preserving, collaboratively developed, non-sector specific, and use case agnostic. According to NIST, it is “…intended to adapt to the AI landscape as technologies continue to develop, and to be used by organizations in varying degrees and capacities so that society can benefit from AI technologies while also being protected from its potential harms.”

To support the AI RMF, NIST has also released companion documents like the NIST AI RMF Playbook and AI RMF Roadmap within its new Trustworthy and Responsible AI Resource Center.

How does the AI RMF view AI risk?

The AI RMF frames AI risk in the context of trustworthiness: strengthening an AI system’s trustworthiness characteristics can help reduce the risks it presents to stakeholders.

To be considered trustworthy, an AI system should exhibit and build on these seven characteristics (illustrated in the code sketch below):

  • Valid and reliable—the AI system is stable and works as intended.
  • Safe—when used as intended, the AI system does not endanger human life, health, property, or the environment.
  • Secure and resilient—the AI system can withstand unexpected changes or adverse events, including cyberattacks.
  • Accountable and transparent—information about an AI system and its outputs is available to those interacting with it.
  • Explainable and interpretable—the mechanisms underlying an AI system’s operation and the meaning of its output are understandable and trustworthy.
  • Privacy-enhanced—the AI system employs technology and techniques to “safeguard human autonomy, identity, and dignity.”
  • Fair, with harmful bias managed—the AI system addresses concerns about equality and equity, such as harmful bias and discrimination.

In NIST’s view, “Neglecting these characteristics can increase the probability and magnitude of negative consequences.”
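To make these characteristics easier to work with, some teams capture them in a simple internal checklist. Below is a minimal Python sketch of what such a checklist might look like; the 0–5 maturity scale, names, and threshold are our own illustrative assumptions, not part of the AI RMF itself:

from dataclasses import dataclass, field

# The seven trustworthiness characteristics from the NIST AI RMF.
CHARACTERISTICS = [
    "valid_and_reliable",
    "safe",
    "secure_and_resilient",
    "accountable_and_transparent",
    "explainable_and_interpretable",
    "privacy_enhanced",
    "fair_with_harmful_bias_managed",
]

@dataclass
class TrustworthinessAssessment:
    """Illustrative per-system checklist; the 0-5 maturity scale is an assumption."""
    system_name: str
    scores: dict[str, int] = field(default_factory=dict)

    def rate(self, characteristic: str, score: int) -> None:
        if characteristic not in CHARACTERISTICS:
            raise ValueError(f"Unknown characteristic: {characteristic}")
        if not 0 <= score <= 5:
            raise ValueError("Score must be between 0 and 5")
        self.scores[characteristic] = score

    def gaps(self, threshold: int = 3) -> list[str]:
        """Characteristics that are unrated or rated below the threshold."""
        return [c for c in CHARACTERISTICS if self.scores.get(c, 0) < threshold]

# Usage: flag characteristics needing attention for a hypothetical system.
assessment = TrustworthinessAssessment("loan-approval-model")
assessment.rate("valid_and_reliable", 4)
assessment.rate("privacy_enhanced", 2)
print(assessment.gaps())  # every characteristic scored below 3 or not yet rated

A real assessment would attach evidence and owners to each characteristic; the point of the sketch is simply that the seven characteristics lend themselves to systematic tracking.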

How can using the AI RMF benefit organizations?

The AI RMF gives organizations a structured yet flexible process to view, measure, monitor, and communicate about AI risk more comprehensively. Applying this process can increase the benefits your business derives from AI, while reducing the likelihood and severity of negative impacts on individuals, groups, communities, organizations, and societies.

Some of the potential benefits that AI RMF users can derive include:

  • Improved processes for governing, measuring, and documenting AI risks and outcomes
  • Improved awareness of relationships and tradeoffs among AI trustworthiness characteristics, socio-technical approaches, and AI risks
  • Stronger policies and processes to establish organizational accountability for AI risks
  • An organizational culture that prioritizes the identification and management of AI risks, including downstream risks
  • Greater capacity to perform testing, evaluation, verification, and validation (TEVV) of AI systems (see the sketch after this list)
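
As one small illustration of the TEVV point above, here is a toy Python release gate that checks a model’s accuracy and the gap in selection rates between two hypothetical groups before sign-off. The metrics, thresholds, and group labels are assumptions chosen for illustration; the AI RMF does not prescribe any particular test:

# A minimal sketch of one automated TEVV-style check, assuming you already
# have model predictions, ground-truth labels, and a protected attribute.
# Thresholds and metric choices are illustrative assumptions, not AI RMF mandates.

def accuracy(preds: list[int], labels: list[int]) -> float:
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def selection_rate(preds: list[int], group: list[str], value: str) -> float:
    """Fraction of positive predictions within one demographic group."""
    members = [p for p, g in zip(preds, group) if g == value]
    return sum(members) / len(members)

def release_gate(preds, labels, group,
                 min_accuracy=0.90, max_rate_gap=0.10) -> bool:
    """Pass only if the model is accurate enough and selection rates
    across the two groups stay within a tolerated gap."""
    acc_ok = accuracy(preds, labels) >= min_accuracy
    gap = abs(selection_rate(preds, group, "A") - selection_rate(preds, group, "B"))
    return acc_ok and gap <= max_rate_gap

# Toy data: predictions, labels, and group membership for six records.
preds  = [1, 0, 1, 1, 0, 1]
labels = [1, 0, 1, 1, 0, 0]
group  = ["A", "A", "A", "B", "B", "B"]
print(release_gate(preds, labels, group))  # False: accuracy is ~0.83

In practice, TEVV spans the full AI lifecycle and far more than two metrics, but even a gate this simple makes trustworthiness criteria executable rather than aspirational.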

What’s next?

For more guidance on this topic, listen to Episode 126 of The Virtual CISO Podcast with guest
Peter Voss, founder/CEO of Aigo.ai.

Don't Get Hooked!

Phishing emails are tricky. Based on our Cyber Security Awareness Training material, the 10 Tips for Detecting Phishing Emails infographic provides a cheat sheet of what to look for in unfamiliar emails.
Download our Detecting Phishing Infographic now!
