April 18, 2025

5 AI Stakeholder Roles for ISO 42001: Where Does Your Business Fit?

AI continues to transform business operations and everyday activities with advanced automation, personalization, and decision-making capabilities. But as problematic AI outcomes continue to show, responsible practices across all AI lifecycle stages are essential to mitigate risk.

Introduced in December 2023, ISO 42001, Information technology — Artificial intelligence — Management system is an international standard that promotes trustworthy, ethical AI development, deployment, and use with an emphasis on risk management. While it bears many similarities to ISO 27001 and other ISO management system standards (e.g., a structure that includes clauses 4 through 10 plus annexes), ISO 42001 uniquely requires stakeholders to determine their role(s) with respect to the AI systems being certified.  

For organizations that seek to align with ISO 42001 or otherwise leverage AI governance best practices, this article explains ISO 42001's five AI stakeholder roles and how they define an organization's relationship to AI.

What is ISO 42001?

Created for organizations of all sizes and industries that provide or utilize AI-based solutions or services, ISO 42001 seeks to support responsible AI development and use—even as the technology rapidly advances. It offers guidance and specifies requirements for entities that provide and/or use AI-based products or services to establish, implement, maintain, and continually improve an Artificial Intelligence Management System (AIMS).

As the first available AI management system standard, ISO 42001 emphasizes the practical aspects of pinpointing and managing AI-related risks and opportunities across the full spectrum of AI use cases, versus focusing on the details of AI applications. This makes ISO 42001 highly valuable for any AI stakeholder.

ISO 42001 adds further value by empowering AI stakeholders to certify through independent assessment that they have established policies, objectives, and processes to build and manage a comprehensive and effective AIMS. The standard defines 38 controls and 10 control objectives that organizations will potentially need to implement to achieve certification.

Key principles that ISO 42001 emphasizes include:

  • Transparency and clarity regarding how AI systems operate, including the data they ingest.
  • Ethical practices and perspectives to ensure that AI systems respect human rights and avoid biases. 
  • Risk management processes to identify and address potential AI risks.

How can ISO 42001 benefit AI stakeholders?

Among the top benefits that AI stakeholders can realize through leveraging ISO 42001 are:

  • Vastly improved AI governance through a structured, comprehensive approach to managing AI systems.
  • Significantly better AI risk management, including the ability to find and proactively mitigate previously unknown risks.
  • Increased trust and confidence in AI systems through addressing critical cybersecurity, privacy, transparency, bias, safety, and other concerns.
  • Streamlined compliance with emerging AI regulations and standards.
  • Greater peace of mind around responsible AI innovation and opportunities.
  • Enhanced decision-making and strategic alignment around AI use cases and outcomes.
  • Optimal resource utilization on AI projects, including improved process efficiency and compressed time to value.
  • Competitive advantage through superior AI agility as well as the ability to demonstrate robust AI governance to customers, investors, and other stakeholders.


ISO 42001 can benefit any business, regardless of size or industry, that implements an AI system anywhere in its operations—even those using AI only for limited tasks. Market pressure to embrace ISO 42001 or another AI governance framework is greater in some verticals, such as FinTech, HealthTech, BioTech, or EdTech. In general, the more an AI offering directly impacts an end user or client's wellbeing, the greater the relevance of ISO 42001 certification.

Does ISO offer any other AI-related standards?

ISO 42001 is part of a group of international standards from ISO to help minimize AI risks and maximize its benefits. These currently include:

  • ISO 22989, Information technology — Artificial intelligence — Artificial intelligence concepts and terminology, defines AI terminology and concepts to support other AI standards while enabling unambiguous communication among AI stakeholders.
  • ISO 23053, Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML), sets out a framework for describing generic AI systems that leverage machine learning (ML), with the goal of supporting interoperability among AI systems and associated components.
  • ISO 23894, Information technology — Artificial intelligence — Guidance on risk management, offers best-practice guidance to help organizations that develop AI products and services to identify, assess, and manage AI-related risks.

What are the 5 ISO 42001 AI stakeholder roles and why are they important?

As AI systems become ubiquitous, organizations that develop and/or use AI systems need to understand how they fit within the wider AI ecosystem. This is why ISO 42001 puts five AI stakeholder roles at the core of any company’s AI governance, risk management, regulatory/policy compliance, and ethical AI adoption activities. A business can perform more than one of these roles. 

ISO 42001 defines the five key AI stakeholder roles and responsibilities as:

  1. AI Customer—uses AI to optimize business processes and must ensure compliant use.
  2. AI Provider—delivers AI solutions and must balance innovation with regulatory demands.
  3. AI Producer—creates AI models and must address bias and ethical risks.
  4. AI Partner—enables AI integration and governance while managing associated responsibilities.
  5. AI Subject—must be protected from unfair AI-driven decisions.


ISO 42001 specifies the requirement for determining your AI role in clause 4.1, a foundational clause that covers identifying key influencing factors, such as internal and external issues that affect your ISO 42001 scope.

Some of the most pivotal decisions that follow directly from your company’s AI stakeholder role(s) include:

  • Establishing the correct context and scope for your AI management system.
  • Determining the level of applicability of the various ISO 42001 controls or other best-practice AI governance recommendations.
  • Getting off on the right foot with AI risk assessment.
  • Defining clear responsibilities for supporting roles within your AI governance program (e.g., AI Compliance Officer, AI Risk Manager, AI Engineer, Data Scientist).
  • Determining your company’s strategic objectives for managing AI. 

Each of the five roles has a distinct part to play in the AI lifecycle, from development to deployment to use. The sections that follow describe the five roles.

Role 1: AI customer

An AI customer is an entity that purchases, subscribes to, or integrates AI solutions for business or personal use. They may use AI for data analytics, automation, predictive modeling, and/or enhancing customer experiences, among other purposes. 

Typical AI customers include:

  • Businesses purchasing AI-powered tools, such as cybersecurity solutions or HR screening tools.
  • Retailers and distributors using AI-enhanced demand forecasting models.
  • Government agencies leveraging AI for fraud detection.

Critical challenges and responsibilities for AI customers include:

  • Defining business requirements for AI solutions.
  • Validating that their AI providers comply with regulations and ethical standards.
  • Validating that their own AI use complies with regulations and ethical standards.
  • Monitoring AI performance and addressing risks associated with AI deployments.
  • Maintaining oversight of AI integration and its impacts on operations.
  • Addressing bias, transparency, and explainability considerations across AI-driven decisions.
  • Maintaining data privacy and information security when interacting with AI models.

Role 2: AI provider

An AI provider is an entity that delivers AI solutions, platforms, or models to AI customers. AI providers develop AI technologies, provide infrastructure for AI operations, and/or offer AI-as-a-Service (AIaaS).

Some of the major AI providers include OpenAI, Google Cloud AI, and Amazon AWS AI.

Critical challenges and responsibilities for AI providers include:

  • Ensuring their AI offerings meet security, reliability, and regulatory compliance requirements (e.g., EU AI Act).
  • Balancing AI model accuracy with fairness and bias mitigation.
  • Providing updates, support, and training for AI customers.
  • Monitoring AI risks and introducing mitigation strategies when needed.
  • Handling liability for AI errors, harmful outputs, and other negative impacts from AI customers’ use of their technology.

Role 3: AI producer

An AI producer is responsible for designing, building, and training AI models or algorithms. This role focuses on the technical creation of AI systems rather than their operational deployment. 

Examples of AI producers include:

  • Anthropic—produces AI models like Claude.
  • NVIDIA—develops AI hardware and software for ML training.
  • DeepMind—creates AI models for a range of applications.

Critical challenges and responsibilities for AI producers include:

  • Developing viable AI models, datasets, and training processes.
  • Optimizing AI for accuracy, efficiency, and ethical performance.
  • Conducting bias assessments and AI validation tests.
  • Addressing AI fairness concerns and mitigating unintended biases.
  • Managing intellectual property (IP) concerns and ethical use of AI datasets.
  • Balancing AI performance with transparency and interpretability.
  • Collaborating with AI providers and AI customers to meet market demands.

Role 4: AI partner

An AI partner collaborates with AI providers, producers, or customers to enhance AI capabilities, integration, or governance. They may offer expertise, complementary technology, or consulting services. 

Examples of AI partners include:

  • McKinsey AI Consulting—assists clients with AI adoption.
  • IBM Watson—partners with healthcare providers to enhance AI in healthcare.
  • Microsoft and SAP—collaborating to integrate Copilot and other Microsoft AI capabilities into SAP business applications.

Critical challenges and responsibilities for AI partners include:

  • Providing domain expertise or technology to enhance AI performance.
  • Facilitating AI integration across business functions and industries.
  • Delivering consulting services to ensure responsible AI adoption.
  • Co-developing AI systems with AI providers and AI customers.
  • Ensuring AI partnerships align with ethical guidelines and compliance requirements.
  • Addressing AI risks and responsibilities in joint ventures.
  • Managing co-developed AI models and associated IP.

Role 5: AI subject

An AI subject is an individual who is impacted by AI-guided decisions or whose data is used for AI training and operation. AI subjects are often customers, employees, or members of the public who interact with AI systems.

Examples of AI subjects include:

  • Job applicants assessed by AI-supported HR/hiring tools.
  • Bank customers who receive AI-based credit risk assessments.
  • Social media users who interact with AI-generated online content. 

Critical challenges and responsibilities for AI subjects include:

  • Understanding how AI impacts personal data and decision-making.
  • Protecting personal data and privacy in AI systems.
  • Exercising their rights with respect to AI-driven profiling and data usage (e.g., privacy rights under GDPR).
  • Challenging AI decisions or outcomes that may result in bias or unfair treatment.
  • Advocating for transparency and accountability in AI interactions.

AI provider, AI producer, and AI customer roles compared

Many organizations concerned with AI governance and ISO 42001 compliance are service providers, including SaaS providers, managed service providers (MSPs), and cloud hosting providers. Often these businesses fulfill the AI provider role, but they can also be AI producers.

AI producers are effectively AI developers that create AI products and services, design and implement AI models, or serve as model verifiers. ISO 22989 defines an AI producer as “an organization or entity that designs, develops, tests, and deploys products or services that use one or more AI systems.”


AI providers can be AI platform providers or AI product or service providers. ISO 22989 defines an AI provider as an entity that “provides products or services that use one or more AI systems.” Thus, companies that both develop AI models and provide the technology within a service offering to customers would hold both AI provider and AI producer roles.


If a company uses AI from a third-party source (OpenAI, for example), it is an AI customer of that vendor. If the same company integrates that third-party AI into the services it provides to its own customers, it also fills the AI provider role.

The following table summarizes the ISO 42001 AI stakeholder roles and responsibilities.

Role | Key Responsibilities | Key Challenge | ISO 42001 Compliance Requirements/Goals
AI Customer | Uses AI in business processes | Validating AI providers’ regulatory compliance | Maintain AI oversight, including data privacy and information security around AI models
AI Provider | Delivers AI solutions to AI customers | Balancing innovation with regulatory demands | Ensure AI offerings meet security, reliability, and compliance requirements
AI Producer | Creates AI models | Addressing bias and ethics risks to AI subjects | Optimize AI accuracy, efficiency, and ethical performance
AI Partner | Supports other AI stakeholders’ AI capabilities, integration, and/or governance | Delivering consulting services that support responsible AI adoption | Ensure AI partnerships embody ethical and compliance guidance
AI Subject | Understands how AI impacts personal data and decisions | Advocating for AI transparency and accountability | Exercise individual and collective rights around AI data usage (e.g., profiling)

Table 1: AI stakeholder roles compared

What’s next?

Throughout the AI lifecycle, organizations must validate AI technologies’ sound, fair, and unbiased use. CBIZ Pivot Point Security provides AI governance and advisory services to help organizations establish clear and defined parameters concerning the use of AI. We work closely with your team to evaluate the effectiveness of controls, verify alignment with evolving regulations, and implement management systems and strategies that enable you to realize AI benefits while proactively addressing risks.

ISO 42001 with its comprehensive controls and third-party certification regime instills a high degree of confidence among customers, partners, and other stakeholders regarding your commitment to AI governance, risk management, and continuous improvement. For organizations interested in implementing an ISO 42001 aligned AI management system (AIMS), CBIZ Pivot Point Security will provide your team with expert implementation guidance to establish and execute a comprehensive roadmap to achieve ISO 42001 certification. Contact us to start a conversation.