December 6, 2023

Last Updated on April 23, 2024

AI introduces unique business and cybersecurity risks, while also serving as a new vector for familiar risks like theft and fraud. Organizations need to ensure that their AI adoption is safe, legal, and ethical while delivering positive business results.

To help public and private sector organizations manage AI risk, the US National Institute of Standards and Technology (NIST) recently released the AI Risk Management Framework (AI RMF). Intended for voluntary use, this new guidance can help infuse trustworthiness into AI technologies and encourage innovation while reducing AI risk.

This article introduces the NIST AI RMF and explains how it can help you address AI risk.


What is the NIST AI RMF?

NIST developed the AI RMF as mandated by the National Artificial Intelligence Initiative Act of 2020 (P.L. 116-283), part of the National Defense Authorization Act for Fiscal Year 2021. The framework is a resource to help the many organizations now developing and/or using AI systems better manage AI risks and support trustworthy and responsible AI. It is the result of an open, collaborative process spanning the full spectrum of AI interests.

To provide flexibility and applicability for organizations of all sizes and sectors, the AI RMF is rights-preserving, collaboratively developed, non-sector-specific, and use-case agnostic. According to NIST, it is “…intended to adapt to the AI landscape as technologies continue to develop, and to be used by organizations in varying degrees and capacities so that society can benefit from AI technologies while also being protected from its potential harms.”

To support the AI RMF, NIST has also released companion documents like the NIST AI RMF Playbook and AI RMF Roadmap within its new Trustworthy and Responsible AI Resource Center.


How does the AI RMF view AI risk?

The AI RMF puts AI risk in the context of trustworthiness: strengthening an AI system’s trustworthiness characteristics can help reduce the risks it presents to stakeholders.

To be considered trustworthy, an AI system should exhibit these seven characteristics (see the sketch after this list):

  • Valid and reliable—the AI system is stable and works as intended.
  • Safe—when used as intended, the AI system does not endanger human life, health, property, or the environment.
  • Secure and resilient—the AI system can withstand unexpected changes or adverse events, including cyberattacks.
  • Accountable and transparent—information about an AI system and its outputs is available to those interacting with it.
  • Explainable and interpretable—both the mechanisms underlying the AI system’s operation and the meaning of its outputs are understandable to those who interact with it.
  • Privacy-enhanced—the AI system employs technology and techniques to “safeguard human autonomy, identity, and dignity.”
  • Fair, with harmful bias managed—the AI system addresses concerns about equality and equity, such as harmful bias and discrimination.
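To make these characteristics actionable, here is a minimal sketch of how a team might record trustworthiness evidence for a system as a simple data structure. The names used here (TrustworthinessAssessment, the characteristic keys, the sample findings) are illustrative assumptions, not part of the AI RMF itself.

from dataclasses import dataclass, field

# The seven trustworthiness characteristics from the NIST AI RMF.
CHARACTERISTICS = (
    "valid_and_reliable",
    "safe",
    "secure_and_resilient",
    "accountable_and_transparent",
    "explainable_and_interpretable",
    "privacy_enhanced",
    "fair_with_harmful_bias_managed",
)

@dataclass
class TrustworthinessAssessment:
    """Hypothetical per-system record of trustworthiness findings."""
    system_name: str
    findings: dict = field(default_factory=dict)  # characteristic -> evidence

    def unaddressed(self):
        """Characteristics with no documented evidence yet."""
        return [c for c in CHARACTERISTICS if not self.findings.get(c)]

# Record evidence as it is gathered, then review the remaining gaps.
assessment = TrustworthinessAssessment("loan-approval-model")
assessment.findings["valid_and_reliable"] = "Holdout accuracy 0.94; drift monitor live"
assessment.findings["fair_with_harmful_bias_managed"] = "Disparate impact audit passed"
print(assessment.unaddressed())  # the five characteristics still lacking evidence

A record like this makes open gaps visible, which feeds directly into the documentation and accountability benefits discussed below.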

In NIST’s view, “Neglecting these characteristics can increase the probability and magnitude of negative consequences.”


How can using the AI RMF benefit organizations?

The AI RMF gives organizations a structured yet flexible process to view, measure, monitor, and communicate about AI risk more comprehensively. Applying this process can increase the benefits your business derives from AI while reducing the likelihood and severity of negative impacts on individuals, groups, communities, organizations, and society.

Potential benefits for AI RMF users include:

  • Improved processes for governing, measuring, and documenting AI risks and outcomes
  • Improved awareness of relationships and tradeoffs among AI trustworthiness characteristics, socio-technical approaches, and AI risks
  • Stronger policies and processes to establish organizational accountability for AI risks
  • An organizational culture that prioritizes the identification and management of AI risks, including downstream risks
  • Greater capacity to perform testing, evaluation, verification, and validation (TEVV) of AI systems, as illustrated in the sketch below
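To give a loose sense of what TEVV capacity can look like in practice, here is a minimal sketch that runs two checks against a model: an accuracy floor (valid and reliable) and prediction stability under small input perturbations (secure and resilient). The model_predict stand-in, the toy data, and the 0.9 thresholds are assumptions made for illustration, not AI RMF requirements.

import random

# Hypothetical stand-in for a trained classifier; in practice this would
# be your deployed model's prediction function.
def model_predict(x):
    return 1 if sum(x) > 1.0 else 0

def accuracy(predict, data):
    """Fraction of (features, label) pairs predicted correctly."""
    return sum(predict(x) == y for x, y in data) / len(data)

def perturbation_agreement(predict, data, noise=0.01, trials=20):
    """How often predictions survive small random input noise."""
    random.seed(0)
    stable = 0
    for x, _ in data:
        base = predict(x)
        stable += all(
            predict([v + random.uniform(-noise, noise) for v in x]) == base
            for _ in range(trials)
        )
    return stable / len(data)

# Illustrative evaluation data and thresholds -- both assumptions.
test_data = [([0.9, 0.4], 1), ([0.1, 0.2], 0), ([1.5, 0.0], 1), ([0.3, 0.1], 0)]
assert accuracy(model_predict, test_data) >= 0.9               # valid and reliable
assert perturbation_agreement(model_predict, test_data) >= 0.9 # secure and resilient
print("Minimal TEVV checks passed")

In a real TEVV program, checks like these would run continuously against production-representative data, with thresholds set by the organization's risk tolerance.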


What’s next?

For more guidance on this topic, listen to Episode 126 of The Virtual CISO Podcast with guest Peter Voss, founder/CEO of Aigo.ai.

Don't Get Hooked!

Phishing emails are tricky. Based on our Cyber Security Awareness Training material, the 10 Tips for Detecting Phishing Emails infographic provides a cheat sheet of what to look for in unfamiliar emails.
Download our Detecting Phishing Infographic now!
