May 31, 2024

Last Updated on June 7, 2024

ISO 42001:2023, the new international management system standard for AI, is a significant step forward for AI governance, security, and privacy.

Is your organization considering early adoption of this certifiable, voluntary framework? This article offers 5 quick tips on complementary AI governance guidance that can support your decision-making process, help you create an AI management system (AIMS) implementation roadmap, and get you started with an AI acceptable use policy or code of ethics.


Why “more is more” with AI governance guidance

Just as your business can draw on existing standards, frameworks, and research like the sources below to help you understand and implement ISO 42001 compliant controls, regulators will use this same information to help them develop legislation and compliance requirements.

Getting familiar with a range of AI governance and risk management ideas will help your business to create an AI management system (AIMS) that not only complies with ISO 42001, but also meets many of the requirements that impending regulations might introduce.

While the information sources recommended below are not necessarily aligned with ISO 42001, they share a common purpose—each represents a response to AI’s recent rapid development and the impact of large language models (LLMs) like ChatGPT on so many facets of business and society.

The short sections that follow introduce 5 of the most useful AI governance resources for your team to check out.


Source One: NIST AI Risk Management Framework

The NIST AI Risk Management Framework, first released in January 2023, is “… intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.”

This collaboratively developed document can help organizations identify and rate their AI risks and create risk mitigation strategies. It offers comprehensive, targeted guidance to help companies implement an AI risk management framework using a “govern, map, measure, manage” approach.

If you’re looking to identify and begin addressing your AI risks, the NIST AI Risk Management Framework offers specific, prescriptive guidance to support that process. It includes examples and features an accompanying playbook to help you apply the guidance to diverse business scenarios and AI risk types. Though it does not provide much control-level detail, the NIST guidance complements ISO 42001 in developing high-level AIMS policies and procedures.


Source Two: The EU AI Act

Any organization that develops AI models or uses third-party AI tools to deliver a SaaS application should pay close attention to the EU AI Act and whether it could apply to them. Adopted by the EU Parliament in March 2024, it establishes a regulatory framework for AI development and use within the EU.

This is a lengthy and complex law, with a focus on classifying risk levels associated with AI systems and defining requirements for AI governance on that basis. It also includes a “cheat sheet” questionnaire to help organizations determine what requirements apply to them.


Source Three: President Biden’s executive order on AI

President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence is not a regulation but rather a directive for US government agencies to prioritize the creation of standards and requirements on developing, using, and governing AI. For example, it directs NIST to continue building on the AI Risk Management Framework and its other AI-related initiatives.

The order makes it very clear how the US government views AI risks and opportunities. Based on this strategic directive, organizations in the US government supply chain may be among the first to see AI regulations derived from NIST documents and potentially ISO 42001 as well.

At a minimum, organizations considering ISO 42001 certification or another path to AI governance should read a summary of the order.


Source Four: ISO 23894

ISO 23894:2023, Information technology – Artificial intelligence – Guidance on risk management, offers guidance on how organizations that develop or use AI systems and services can manage the associated AI-specific risks. This standard is also designed to help companies integrate risk management into their AI program, and explains how to effectively implement and integrate AI risk management activities into business processes.

Because it focuses specifically on AI risk management versus the full spectrum of AI governance topics, ISO 23894 has a narrower scope than ISO 42001. While the latter standard does include risk management topics, it does so within the broader AI governance context.

For example, ISO 42001 requires businesses to conduct regular AI risk assessments and document AI risks, whereas ISO 23894 offers in-depth guidance on how to plan and execute a best-practice risk assessment.


Source Five: The HITRUST CSF AI Requirements

To help organizations in healthcare and other sectors keep pace with the constantly changing cybersecurity threat landscape, HITRUST released version 11.2.0 of the HITRUST CSF AI Requirements in Q4 2023.

Version 11.2.0 adds guidance on AI risk management, making HITRUST one of the first cybersecurity frameworks to offer third-party certification of an organization’s AI risk management efforts. HITRUST will soon offer AI risk management certifications as part of its assurance reports.

Organizations looking to implement ISO 42001 will benefit from researching and comparing the HITRUST approach and controls for AI risk management.


What’s next?

For more guidance on this topic, listen to Episode 136 of The Virtual CISO Podcast with guest Ariel Allensworth, Senior GRC Consultant at CBIZ Pivot Point Security.