Last Updated on October 21, 2025
2 Little-Known AI Roles and Why They’re Important
Regulatory scrutiny and market demand are driving a need for organizations across sectors to demonstrate safe, responsible, and sustainable AI development and use. This is especially important for “high-risk” AI systems whose failure or compromise could endanger health and safety or threaten citizens’ rights.
AI consumers may be less impacted today by emerging AI regulations in the US and globally. But they are motivated alongside AI creators and customizers to establish robust AI governance internally. Both governance and compliance require consideration of the role(s) an organization plays in the AI supply chain, as well as its AI stakeholders.
Many organizations engage in one or more of the common AI roles, including AI Producer, AI Provider, AI Partner, and AI Customer. But two less typical AI roles—AI Subject and Relevant Authority—are also central to developing, extending, implementing, or using AI systems in line with stakeholder needs.
This article covers what you need to know about the AI Subject and Relevant Authority roles and how they can affect your business.
Key takeaways
- AI roles and stakeholders (aka interested parties) significantly impact an organization’s risks, responsibilities, and accountability across the AI lifecycle.
- Identifying relevant AI roles and stakeholders along with their AI goals and requirements is essential for effective AI governance and optimal AI outcomes.
- The ISO 42001 AI management system standard mandates identifying AI roles and stakeholders as a critical step towards compliance and certification.
- The Relevant Authority and AI Subject roles will be important stakeholders for many organizations in the AI supply chain.
Why are AI roles and stakeholders important?
AI roles strongly affect an organization’s risks, responsibilities, and accountability within the AI lifecycle. AI roles also influence your company’s AI stakeholders—those entities with a financial, legal, ethical, and/or personal interest in your AI program.
By identifying relevant AI roles along with AI stakeholders and their requirements, companies can better govern AI activities, manage AI risk, and exploit AI opportunities, including legal and regulatory compliance. Addressing stakeholder needs also helps build trust and demonstrates responsible participation in the AI ecosystem.
Further, identifying AI roles and associated “interested parties” is mandated as an essential preliminary step in achieving ISO 42001 compliance/certification—a key competitive move for growing numbers of AI businesses.
What are “interested parties” in ISO 42001?
ISO 42001:2023, Information technology – Artificial intelligence – Management system is the first global standard and certification program that defines requirements for building, operationalizing, and maintaining an Artificial Intelligence Management System (AIMS). Unlike popular cybersecurity standards like ISO 27001 and SOC 2, ISO 42001 covers the unique risks and considerations associated with AI, such as safety, responsible use, transparency, ethics, and bias.
ISO 42001 compliance and certification enable businesses that deliver and/or consume AI-based products or services to reduce AI-specific risks, improve AI’s utility and value, and build stakeholder trust in AI solutions. A first step in achieving these benefits is to understand the needs and expectations of “interested parties” (another term for stakeholders) around a company’s use of AI, as covered in the standard’s Clause 4.2.
Within ISO 42001, an interested party is any entity that can affect, be affected by, or view itself as affected by a decision or activity associated with an AIMS. Organizations must identify AI interested parties and define their needs and expectations to determine AIMS context and scope.
AI interested parties can be both internal and external to an organization, as illustrated in Table 1.
| AI interested party | Description |
| --- | --- |
| **Internal AI interested parties** | |
| Employees | Staff who develop, use, or are impacted by an AI system or related policy, such as AI developers and data scientists. |
| Senior management | Executive leaders who oversee AI governance and ensure that AI programs align with strategic business goals and risk management. |
| Security, compliance, and legal teams | Professionals who ensure that AI systems comply with policy and meet applicable legal, contractual, and regulatory mandates. |
| **External AI interested parties** | |
| Customers and users | Those who use AI systems or services and are directly affected by any issues with reliability, security, safety, etc. |
| Suppliers and partners | Businesses that provide AI system components like data, APIs, or software, as well as AI auditors. |
| Investors | Those with a current or potential financial interest in an AI system who are directly concerned with managing the associated AI risks. |
| Regulators | Government entities and policymakers that enact and enforce laws around AI safety, privacy, etc. |
| Society at large | Groups and individuals concerned with or potentially impacted by AI issues like bias, ethical impacts, or privacy violations. |

Table 1: AI interested parties
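The interested-party analysis in Table 1 is often captured in a simple register that feeds the AIMS scoping exercise. Below is a minimal sketch in Python; the field names and example entries are illustrative assumptions, not a format prescribed by ISO 42001:

```python
from dataclasses import dataclass

@dataclass
class InterestedParty:
    """One row of a Clause 4.2 interested-party register (illustrative fields)."""
    name: str
    internal: bool    # internal vs. external to the organization
    needs: list[str]  # needs and expectations to address in the AIMS

# Example register entries, mirroring a few rows of Table 1
REGISTER = [
    InterestedParty("Employees", internal=True,
                    needs=["clear AI acceptable-use policy", "training"]),
    InterestedParty("Regulators", internal=False,
                    needs=["compliance with AI safety and privacy laws"]),
    InterestedParty("Customers and users", internal=False,
                    needs=["reliable, safe, and secure AI services"]),
]

# Example query: which external parties must the AIMS scope consider?
external = [p.name for p in REGISTER if not p.internal]
```

Keeping the register as structured data (rather than prose) makes it easier to trace each stakeholder need to a control or policy during an audit.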
What are the six major AI supply chain roles?
AI interested parties parallel the roles that organizations play in the AI supply chain. Companies that develop, extend, and/or use AI systems should define their supply chain role(s) as an initial step in establishing AI governance and addressing regulatory compliance.
Table 2 shows the six key AI supply chain roles per ISO 42001 guidance.
| ISO 42001 role | Description |
| --- | --- |
| AI Customer/User | An entity that uses an AI system. |
| AI Provider | An entity that creates and/or delivers AI products or services to AI Customers/Users. |
| AI Producer | An entity that leverages AI Provider technology to extend or customize an AI system. |
| AI Partner | An entity that delivers supporting AI services or components but does not control development of the associated AI system. |
| AI Relevant Authority | An entity that enacts AI-related laws and/or enforces regulatory compliance or adherence to internal policy or industry best practices. |
| AI Subject | Individuals or groups that are affected by AI system activities or outcomes. |
Many companies fill multiple AI roles. For example, AI is now part of nearly every modern software development lifecycle, making most AI Producers AI Customers/Users as well.
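Because one organization can fill several supply chain roles at once, it can help to model the role set explicitly when scoping governance responsibilities. A minimal sketch, assuming a simple enum (the role names follow Table 2; everything else is illustrative):

```python
from enum import Enum, auto

class AIRole(Enum):
    """The six ISO 42001 AI supply chain roles (per Table 2)."""
    CUSTOMER_USER = auto()
    PROVIDER = auto()
    PRODUCER = auto()
    PARTNER = auto()
    RELEVANT_AUTHORITY = auto()
    SUBJECT = auto()

# Hypothetical example: an organization that customizes a foundation model
# (AI Producer) and also uses AI tooling internally (AI Customer/User)
our_roles = {AIRole.PRODUCER, AIRole.CUSTOMER_USER}

def fills_multiple_roles(roles: set[AIRole]) -> bool:
    """True when an organization holds more than one supply chain role."""
    return len(roles) > 1
```

Enumerating roles this way makes it straightforward to attach role-specific obligations (e.g., impact assessments for Producers) in a governance workflow.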
Beyond the roles most commonly applied to AI programs and ISO 42001 AIMS contexts, the Relevant Authority and AI Subject roles will also be important stakeholders for many organizations. For example:
- Your AI acceptable use policy may impact AI Subjects as well as AI Customers.
- Your AI compliance program impacts any Relevant Authorities that enforce or audit your compliance posture.
What is an AI Relevant Authority?
ISO 42001 defines a Relevant Authority as any in-house or external stakeholder that has a direct oversight or guidance function impacting an organization’s AI program and/or AIMS. Relevant Authorities uphold requirements that AI ecosystem members must consider when developing or interacting with AI systems.
Examples of Relevant Authorities for your business may include:
- Government bodies that enforce AI-related laws within applicable jurisdictions, such as the European Commission or various US states that have enacted AI laws.
- Standards bodies, which define general and/or industry-specific frameworks or best practices.
- Independent third-party certification bodies, which are accredited by national or international authorities to validate compliance with ISO 42001 or other standards/controls.
- Your own C-Suite and board, which per ISO 42001 are responsible for the overall direction and strategy of your AI activities.
- AI Governance committees, Chief AI Officers, and other in-house entities that may be accountable for overseeing a company’s AI program and strategy. This would include an AIMS if applicable.
- AI compliance executives charged with verifying that an AI program or AIMS complies with internal policies and requirements.
Because they often have expectations and demands around privacy, data security, and fairness that can directly impact a company’s AI development activities, some organizations also consider their customers to be Relevant Authorities.
Why are Relevant Authorities important for AI ecosystem participants?
Relevant Authorities are important stakeholders for all AI supply chain participants, whatever their role(s), including those seeking to achieve or maintain ISO 42001 certification. Incorporating Relevant Authorities into an AI strategy, program, and/or AIMS is essential because:
- The mandates and interests of Relevant Authorities are central to legal, privacy, and ethical compliance and risk management for AI systems and activities generally (e.g., compliance with the EU AI Act).
- Meeting Relevant Authorities’ needs around regulations and policies helps build and sustain trust in AI activities.
- Per ISO 42001’s Clause 4.2, Relevant Authorities are among the interested parties for nearly every AI endeavor—and therefore “in scope” for most AIMS certifications.
“Relevant Authorities is a very pertinent role because it influences how all those other roles would go about developing, operating, and providing services that utilize AI because of the oversight and authority that they have,” summarizes Danny Manimbo, ISO & AI Practice Leader at Schellman. “Areas like AI policy and responsibilities are also fundamental elements of ISO 42001 that apply regardless of a company’s role.”
What is an AI Subject?
An AI Subject is any individual, group, or other entity that is subject to processing by an AI system or is otherwise impacted by it. Common examples include:
- Customers, employees, or end-users interacting with financial services chatbots, or healthcare patients receiving AI-driven monitoring care.
- Data subjects whose personal data has been used or processed by the AI system, such as when training the AI.
- Citizens whose lives and prospects may be affected by AI-supported decisions, such as job hires, credit/loan decisions, or medical diagnostics.
“AI Subjects fall into that interested party category,” explains Danny Manimbo. “You need to know who the subjects and intended users of an AI system are, how they’re meant to interact with your system, and what they expect to see as far as proper results.”
Why are AI Subjects important for AI ecosystem participants?
Businesses in the AI supply chain should understand how their AI activities impact AI Subjects and meet their requirements around privacy compliance, fairness and transparency in results, avoiding discrimination/bias, etc.
For example, ISO 42001 requires organizations seeking certification/recertification to conduct AI impact assessments to ensure they have adequately mitigated potential AI Subject risks. Considering AI Subject rights helps safeguard AI ethics and compliance while reducing the frequency of legal actions and sanctions.
What’s next?
For more guidance on this topic, listen to Episode 153 of The Virtual CISO Podcast with guest Danny Manimbo, ISO & AI Practice Leader at Schellman.