Guest: Danny Manimbo
Bio:
Danny Manimbo is a Principal with Schellman based in Denver, Colorado. As a member of Schellman’s West Coast / Mountain region management team, Danny is primarily responsible for leading Schellman’s AI and ISO practices as well as the development and oversight of Schellman’s attestation services. Danny has been with Schellman for 10 years and has over 13 years of experience in providing data security audit and compliance services.
Summary:
In this episode of the Virtual CISO Podcast, host John Verry welcomes back Danny Manimbo to discuss the significance of ISO 42001 in AI governance. They explore the roles defined within the standard, including AI customers, providers, and producers, and the importance of understanding these roles for compliance and ethical AI use. The conversation also touches on the evolving regulatory landscape surrounding AI and the implications for organizations. Danny shares insights on the future of AI, the challenges of certification, and the need for responsible AI practices.
Keywords:
ISO 42001, AI governance, roles in AI, certification, responsible AI, AI standards, compliance, AI ethics, AI regulations, AI subjects
Takeaways:
- ISO 42001 provides a framework for responsible AI use.
- Understanding roles in AI is crucial for compliance.
- AI governance is becoming increasingly important.
- Relevant authorities play a key role in AI regulation.
- AI subjects are impacted by AI decision-making processes.
- The future of AI is uncertain and rapidly evolving.
- Organizations must adapt to changing AI regulations.
- Ethics in AI is a growing concern for businesses.
- AI consumers need to establish best practices internally.
- The certification process for ISO 42001 is complex and nuanced.
John Verry (00:00.818)
So, all right, my sound levels look good. Just talk for a second. Yep, yep, yep, I can live with that. All right, here we go. Hey there, and welcome to yet another episode of the Virtual CISO Podcast. With you as always, your host, John Verry, and with me today, a returning guest, Danny Manimbo. Hey, Danny.
Danny Manimbo (00:05.068)
Yeah, test, test, test. One, two, three. Good to go. All right.
Danny Manimbo (00:22.646)
Hey John, thanks for having me back.
John Verry (00:24.854)
Yeah, you brought it last time. Hopefully you’ll bring it again. In fact, I know you will because I’ve had this conversation with you before. Maybe I’ll understand it the second time. So before we get going, I always like to start simple. Tell us a little bit about who you are and what is it that you’re doing every day.
Danny Manimbo (00:39.874)
Yeah. Yeah. Hey everybody, Danny Manimbo. I lead our ISO and AI services at Schellman. I’ve been with the firm for 12 years, based in Denver, Colorado, and I’m also one of our CPAs. So I’m very involved in what we’re doing on the attestation and SOC reporting side of the house.
John Verry (00:57.778)
So I know the last time I had you on you were on with Ryan Mackey and I don’t remember if we did the traditional what’s your drink of choice? So I’ll ask it again, what’s your drink of choice?
Danny Manimbo (01:00.61)
Yeah.
Danny Manimbo (01:09.122)
Drink of choice. Well, it’s been a few years since I’ve been on, actually. So no more alcohol for me. Actually, it’s been about three years since I’ve had a drink. I did do Athletic Brewing. As we were alluding to before the show, I’ve dove headfirst into endurance sports, particularly ultra marathons out here in Colorado. So alcohol isn’t really helping with that. I’ve been having a good time with it though. That’s right.
John Verry (01:31.154)
Ultra-marathoning, okay. Yeah, yeah. I’m glad it’s you, not me. And what defines an ultra-marathon? Is it like 100 miles? Is that what an ultra-marathon is?
Danny Manimbo (01:42.126)
Anything longer than a marathon. Usually the gateway into the sport is the 50K, which is ten 5Ks, about 31 miles, and they go anywhere from there, you know, a hundred miles, two, three hundred miles. So I’ve got a 135-mile race through Death Valley next month in July. It’s called Badwater. So that’s where I’ve ended up.
John Verry (02:12.796)
God bless you. You’re a better man than I, but you already knew that. All right, so let’s get down to business. I’m excited to have you on. AI is, of course, everywhere, exploding. And it gets to a point where we all need to either demonstrate or have demonstrated to us that the AI we’re consuming is responsible and trustworthy, transparent, et cetera, all the good things that we’re looking for.
Danny Manimbo (02:17.55)
Yeah.
Danny Manimbo (02:25.069)
Mm-hmm.
John Verry (02:41.49)
ISO 42001 has stepped up and has become sort of a go-to for many of the clients that we work with. And part of that is this concept of roles, which I’m excited to talk with you about today. But let’s start simple. Let’s start with: what exactly is ISO 42001? What does it do? What is its value? Why are people using it?
Danny Manimbo (02:47.022)
Mm.
Danny Manimbo (02:55.576)
Yeah, of course.
Danny Manimbo (03:05.006)
Yeah, it’s value is that it’s really solved the need and filled the void for everything that you just mentioned, having a framework or a standard or certification that allows its users or adopters, organizations who are looking to be certified to demonstrate that they’re using AI in a responsible and trustworthy manner. Now, what does that mean? Basically, it’s a lot of people will ask the question, you know, if I have SOC 2 or if I have 27,001, am I clear?
or does that cover 42,001? And really, as you know, and I know, those standards and frameworks were really primarily security focused. Obviously, you can add on availability, privacy, confidentiality, et cetera, to a 27,000 or to a SOC 2, but it doesn’t cover those risks that are unique to AI. So think bias, ethics, transparency, responsible use, safety. So that’s really the void that 42,001 fills.
It came into the marketplace at the end of 2023, which was timed well because I think right at its release, not a lot of people knew how to use it. And I guess we’ll get into that as we start talking about the rules in terms of who it applies to. But what did we see last year? We saw regulation, EU AI Act. saw some state by state level regs here in the US pass. South Korea put out regulations. it’s, Microsoft updated their SSPA to include AI.
requirements even require a 42001 certification if you’re what they refer to as you know high risk and everybody’s got a bit of a different definition for that so We’re starting to see this more of a demand and a need for AI governance and that’s really the what 42001 fills
John Verry (04:50.77)
Yeah, and you didn’t, you didn’t, know, like the other thing which about it, which is really interesting to me, and I kind of positive that I think we’re going to see this happen and work backwards into let’s say from 42,001 to ISO 27,001, you know, both of them are risk management frameworks. Right. The only difference is I thought it was really clever with 42,001. And then I was like, I don’t wonder why it’s not in 27,001 is the idea of the system impact assessment, which is where in 27,001.
Danny Manimbo (05:06.008)
That’s right.
Danny Manimbo (05:16.952)
Yep.
John Verry (05:20.018)
we’re concerned about cybersecurity risks and how they’re relevant to us. in ICE, excuse me, in ICE 40 2001, we do risk assessment, how it’s a risk to us, but we also have to do a system impact assessment, which is how it’s a risk to society and to individuals outside of us, which is kind of a fascinating concept really.
Danny Manimbo (05:23.662)
That’s right.
Danny Manimbo (05:32.312)
Yeah.
Danny Manimbo (05:39.862)
It is, and that’s where you get into the fact that this is not a security assessment, right? When you think about system impact assessment, it works with the risk assessment, but it’s not the same, right? So you think about, you know, maybe a risk assessment, at least how ISO defines it more at the organizational level, system impact assessment, it’s almost a level down at the system level. And to our customer, our clients who have maybe say five systems in scope, we were more than one, typically we’d look to see that you’d have a system impact assessment for each of those, because they’re going to have different contexts.
different users, different risks, different data and what would be considered quality data and establishing those thresholds and what could go wrong, what’s misuse or abuse of this system look like, different geographies served. So that’s really what you’re really picking apart each aspect of that system. And all that kind of bubbles up into the risk assessment and decisioning around the implementation of controls. Do we accept this? Do we implement controls to mitigate the risk?
Introduced the NXA controls and the SOA and all that good stuff.
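To make that per-system structure concrete, here is a minimal sketch in Python. It is purely illustrative: ISO/IEC 42001 does not prescribe a schema, and every field and function name here is hypothetical. It records one impact assessment per in-scope system and bubbles the findings up into risk-treatment decisions, the way Danny describes:

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical records; the fields mirror the aspects Danny lists:
# context, users, data quality thresholds, misuse scenarios, geographies.

@dataclass
class SystemImpactAssessment:
    system_name: str
    intended_users: List[str]                 # who interacts with the system
    data_quality_thresholds: Dict[str, str]   # what counts as "quality data"
    misuse_scenarios: List[str]               # what abuse of the system looks like
    geographies_served: List[str]             # jurisdictions, hence regulations
    impacts: List[str]                        # potential harms to individuals/society

@dataclass
class RiskDecision:
    system_name: str
    risk: str
    treatment: str                            # "accept" or "mitigate" (via controls)

def roll_up(assessments: List[SystemImpactAssessment]) -> List[RiskDecision]:
    """Bubble per-system impacts up into organization-level risk decisions."""
    decisions: List[RiskDecision] = []
    for a in assessments:
        for impact in a.impacts:
            # Placeholder logic: every identified impact gets an explicit
            # accept/mitigate decision that would feed the Statement of
            # Applicability (SOA) mentioned above.
            decisions.append(RiskDecision(a.system_name, impact, "mitigate"))
    return decisions
```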
John Verry (06:42.886)
Yeah, OK, cool. So good level setting from an ISO 42001 perspective. So let’s get into this concept of roles. What is a role, and why is it so important?
Danny Manimbo (06:47.053)
Mm-hmm.
Danny Manimbo (06:51.021)
Yeah.
Yeah. Yeah. And it’s very important because, and I’ll take a step back, roles are nothing new with ISO standards, right? We saw this probably introduced for the first time with 27017, which is an extension of 27001, right? You could be a cloud service customer, you could be a cloud service provider, and it had different implementation guidance based on either of those roles that you chose, or both.
Then 27701, which was released right after GDPR came online, and that’s the privacy information management system. You could be a processor, a controller, or both. And again, that helps dictate the direction and how you implement the standard. So 42001 is very much the same. There are a variety of roles in there, and getting those correct is very important because it helps, you know, basically determine the applicability,
the extent of the applicability, of the 42001 standard and how you would approach those activities that we just talked about, like the system impact assessment. If you don’t have a clear concept of what your role is, it would be very difficult to properly do a system impact assessment, assess risk properly, implement controls, define all that. And the last thing I’ll say on the roles is that 42001 really got this right, because if you look at any of the AI regulation out there,
the main ones, the EU AI Act, what we have out here in Colorado with the Colorado AI Act, they are all geared towards roles, right? You’re either a developer or a deployer; 42001 uses producer and provider, so kind of like tomato, tomahto, I guess. But those also, you know, basically help direct the teeth of those regulations, right? You’re going to have some aspects of the regulation that are geared towards the deployers of the systems, some that are geared towards developers, and others that are shared, right? How do those two…
Danny Manimbo (08:43.032)
parties communicate to ensure that the deployer is aware of their responsibilities from the developer.
John Verry (08:51.634)
Yeah, and at a real fundamental level, you have the implementation of the controls, the Annex A controls, right? And how do you know how to implement the Annex A controls if you don’t know exactly the different roles, right? And I think the easiest one would be that everyone is an AI consumer these days, right? So an AI user, right? AI customer, I guess, is the right term that they use. So we all need an acceptable use policy that says what’s an acceptable use of AI. Now, if we’re also doing AI development,
Danny Manimbo (09:03.662)
That’s right. Yeah.
Danny Manimbo (09:17.314)
Yeah.
John Verry (09:20.978)
that acceptable use policy will also define acceptable use from a development perspective as well. So that would be an example where you wouldn’t know how to implement some of the controls without knowing exactly which role or roles you occupy. Yeah, that makes sense. All right, so there are six core roles, I believe. I think a few of them are very simple to understand. We can probably get through them first. And then I think the last two or three,
Danny Manimbo (09:26.786)
Mmm.
Danny Manimbo (09:33.27)
Exactly right. Yeah.
Danny Manimbo (09:40.078)
Mm-mm.
John Verry (09:46.822)
to me, you kind of view them together and consider them together as the easiest way to suss out the slight differences between them. So let’s talk about the easiest one: probably the AI customer slash AI user.
Danny Manimbo (09:47.138)
Yeah.
Danny Manimbo (09:51.438)
Sure.
Yeah.
Danny Manimbo (09:58.801)
Yeah. Yeah, and to your point, most organizations probably fit into this role, but we haven’t seen as many pursue it; most of our clients are SaaS providers, MSPs, et cetera. They’re providing some type of product or service, so they’re primarily focused on that aspect of their role, as opposed to the AI user or customer role. I’ll tell you where it’s come up the most: there are some organizations that will approach us and focus only on the user role,
because those use cases are often when they are not providing any type of product or service that incorporates some degree of AI externally to their clients, but their employees are using it internally. Maybe they have a chatbot or something along those lines that can query policies, and they don’t want, you know, their employees unknowingly putting client data or sensitive data into these systems. So they want to basically establish best practices internally, and that’s often where
that user role can come into play. It’s kind of like the 27017 example I referenced before, right? You have the idea of a cloud service customer and the idea of a cloud service provider. Well, most SaaS providers are going to be both, right? There’s an upstream and downstream kind of relationship in that supply chain. They’re a customer to AWS, but they’re a provider to their customers, right? And it’s often that downstream provider role that they focus on, because
who’s ultimately asking for the certification, right? It’s their customers, as opposed to, you know, AWS, et cetera. So they’re more focused on that element of their scope.
John Verry (11:35.558)
Yeah, and I think scope is the perfect word, right? Because what you’re effectively saying is, despite the fact that every company at this point is an AI consumer, an AI customer, it doesn’t mean that they have to include it within the scope of the certification, because the scope of certification is typically focused on customer attestation, right? The systems we’re providing. Where it gets kind of crossed over and gets interesting, and I know we were doing this with an implementation recently, is if you’re using AI…
Danny Manimbo (11:38.338)
Mm.
Danny Manimbo (11:47.15)
Great.
That’s right.
Danny Manimbo (11:54.894)
That’s correct.
John Verry (12:04.346)
So all development now is AI enabled, right? I mean, Copilot is baked into Visual Studio or whatever the heck they call it these days. So almost every tool that we’re using to develop, as a producer or provider, the tools that we’re using in the construction, design, and implementation of the solution, are AI enabled. So, you know, to me, you have to actually incorporate that
Danny Manimbo (12:15.63)
Mm.
John Verry (12:33.358)
into your scope at that point. So most of the time, I think they really should be including it. Have you seen that? What are your thoughts there?
Danny Manimbo (12:35.384)
Yeah.
Danny Manimbo (12:41.678)
Yeah, we’re seeing a degree of it. And that’s, I think when you start to think about all the influences of your scope, you know, kind of where you sit in the supply chain, who can influence the operation of your AI system, you are at some point or another. And the fact that, you know, the annex a controls talk about third party relationships, you know, you’re at some point or another going to need to consider those risks as part of the AI system. Right. And there’s nobody kind of in that, you know, we’re in full control of what we’re doing.
or very few, I should say. But so I think the risks are considered. I think it’s whether or not you want to focus the main components of the certification on that. But I would expect that to be integrated. Let’s just say you’re a provider and you’re using an API plugin with OpenAI or Anthropoc, whatever it may be. It would be absolutely something our team would look for to see that you’ve considered.
what the risks are by using that system. You’ve done your due diligence on these. You understand what that shared responsibility is, all as a component of your risk assessment, the impact assessment, your third party relationship. So to your degree, it’s part of it, probably whether or not you formally put the fact that you’re a user on your scope statement, but still something that we look to see is visited.
John Verry (14:02.13)
So probably the most unusual one is relevant authorities. So what are relevant authorities and do relevant authorities have a tendency to actually pursue certification?
Danny Manimbo (14:08.45)
Yeah.
Danny Manimbo (14:17.268)
Yeah, so that one. And for those who maybe aren’t as close to this as John and I: 42001 introduces this concept of roles in clause 4.1, the first clause of the standard, right, for a lot of the reasons we’re talking about now. And there’s a foundational standard, ISO 22989, that gets into a few more definitions on each of these roles, for somebody looking for more information on this. But relevant authority is basically on the far right-hand side of the pendulum,
where most of our clients sit are producers and providers, and even that user role, as John just mentioned. Relevant authorities aren’t our typical client profile, but what a relevant authority is, and they could be internal or external, think about folks who have kind of the oversight for responsible and ethical use of AI. So when I think of relevant authorities,
let’s go back to the regulation: that could be the European Commission, the folks who put into effect the EU AI Act, right? So that’s an external relevant authority. They could be considered what is an interested party, something you need to define in clause 4. As a result, they’ve got external issues that you need to consider, right? Compliance with the EU AI Act. You could also have internal relevant authorities within your company. So think of a compliance officer, think of an ethics board, think of an AI
governance committee, or the chief AI officers that we’re starting to see implemented within organizations. So I think of it more along those lines, as an interested party, because we haven’t had, you know, that party come to us specifically looking for a 42001 certification. Not to say that it couldn’t happen, but I do think it’s a very relevant role in that it influences, you know, basically how all these other roles that we’re going to talk about would go about
operating, developing, and providing services that utilize AI, because of the oversight and the authority that they have.
John Verry (16:17.926)
Yeah, it tweaked my interest because my daughter is in AI governance, second line of defense in a bank, right? Conceivably, could they be considered a relevant authority, as the one making policy or regulating it, because they’re effectively regulating the bank’s implementation of AI, they’re the policymaker for it? And could they theoretically seek certification of their governance processes?
Danny Manimbo (16:27.64)
Okay.
Danny Manimbo (16:46.346)
It could be, and that would be interesting; I would love to do a certification based on that, because it’d be very unique. You know, like I mentioned, the roles kind of dictate the extent of the applicability of the standard. We see with the user role, you know, not quite as much applicability as a producer or provider might have. And going through the standard with an entity or a party like that would be very interesting, to see what they’ve determined is applicable. Of course, you know, the AI policy and roles and responsibilities, some of those fundamental
areas of the standard would certainly apply regardless of role. But when you start to get into more of the technical areas, and 42001 is not an overly technical standard, but for what is quote-unquote technical, it would be interesting to see what their interpretation of a lot of that is, because while it may not be them implementing it, they inform how other organizations do it.
John Verry (17:33.02)
Yeah, the other one that’s a little bit of an odd one is AI subject.
Danny Manimbo (17:37.549)
Yeah.
John Verry (17:38.578)
Can you explain AI subject?
Danny Manimbo (17:40.768)
Yeah, so I suppose, again, this is another one of the roles where we haven’t seen somebody coming to us looking for certification, but again, it would likely fall into that interested party bucket. Because when we talk about things that you need to understand with respect to your AI context, and how you would perform, say, an impact assessment correctly, you need to know who the subjects are, right? You need to know who the intended users of the system are,
how they’re interacting with your system, what they expect to see as far as results to properly use it. So if you think of an AI subject, that could be an employee, that could be an end user interacting with a ChatGPT or a GPT or a chatbot. That could be a consumer, that could be a citizen, that could be a patient who is subject to the outcomes of AI system decisioning. So think credit scoring, think hiring
decisions, which they’re starting to put regulations in; you know, New York and Illinois have. And where AI gets really interesting is in healthcare. I mentioned patients. Could they be the subject of the outcome of decisioning around health diagnostics? I mean, that’s where this stuff starts to get into, you know, ethics and safety and all that. But those would really be…
John Verry (18:59.41)
How is that a role? Because, let’s say, we do a lot of work in radiology for some strange reason. In radiology, AI is widely used, probably one of the most widely used areas in all of medicine, radiological imaging, AI interpretation. So let’s say I work for GE or Siemens, one of the large companies that are
Danny Manimbo (19:04.502)
Yeah.
Danny Manimbo (19:10.286)
Mm-hmm. Okay.
Danny Manimbo (19:15.16)
Mm.
Danny Manimbo (19:22.062)
Mmm.
John Verry (19:27.57)
doing this and let’s say I develop AI that does something. All right, so I’m gonna get a 42,001 as a provider or producer. We’ll talk about those terms in a second. But how would I get as an AI subject, right? Because I’m not the AI subject. The subjects are something that I’m providing or producing or developing. So I don’t even understand how you would, a patient’s not gonna come to you and say, saw 42,001 me. So that one has got me puzzled.
Danny Manimbo (19:35.672)
Mm-hmm.
Danny Manimbo (19:41.219)
Yeah.
Danny Manimbo (19:46.178)
Yeah.
Danny Manimbo (19:51.736)
That’s right.
Yeah. I don’t think we have all the answers on all of this yet because like I said, know, yeah, because we, like I said, we’re often reacting to where the market goes with this. So talk about 42,001 as they are now, you know, what are people asking you about more? It’s, know, okay, it’s 42,001 probably because you get a certification. So it’s ultimately the, the, the value you receive by undergoing the audit. And then we’re looking at, okay, who are the people within 42,000 or the organizations within 42,000?
John Verry (19:59.46)
Okay, good. I feel better.
Danny Manimbo (20:24.536)
us at Schellman? It’s been the providers and the producers. Yeah, nobody else has come up to us; the European Commission hasn’t rung us.
John Verry (20:32.146)
Can you even envision that? I can’t envision it. That’s why it was fun. All right, do me a favor.
Danny Manimbo (20:40.086)
I can’t because yeah, think about it the other way. Like if you’re, if you’re the end user of a, a company that has a SOC two, like it would be like you getting a compliance, you know, a SOC two for yourself or something along those lines. And that’s really not the intent. because it’s the SOC two is intended to be, you know, who gets them as the service provider, right? So not the, cause they’re trying to instill trust with their end user or the subject and not the other way around. But,
Yes, I mean, something to think about.
John Verry (21:10.77)
If you end up with a use case, just make a mental note that John Verry is curious and he wants to know; that one makes my head hurt a little bit. All right. So I started with the easier ones, and the ones that are less likely that someone’s going to use anyway, right? Because I think we both agree that right now the main market for this is SaaS providers, or people that are embedding AI into a product. I mean, that’s what we’re working with on an everyday basis. So
Danny Manimbo (21:14.818)
yeah. Yeah. huh.
Okay.
Yeah.
Danny Manimbo (21:34.495)
Mm-hmm.
John Verry (21:39.374)
in terms of that, right, there’s three different roles, right? AI partner, AI provider, and AI producer. And I know we’ve struggled a bit, especially early on, when we were doing our first couple of clients with ISO 42001, to figure those things out. In fact, you were nice enough to jump on the phone with one of our joint clients to actually talk about that. So let’s talk about that. What are each of those roles, and how does somebody suss out those subtle differences between whether they’re one, the other, or both?
Danny Manimbo (21:44.91)
Mm-hmm.
Danny Manimbo (21:52.376)
Yeah.
Danny Manimbo (21:58.424)
Mm-hmm.
Danny Manimbo (22:01.87)
Sure.
Danny Manimbo (22:09.612)
Yeah, sure. So let’s take them one by one as the partner was the first one. So with a partner, so yeah, partner, provider, producer. So with partner, I look at that as somebody who can influence the operation of an AI system, but does not have full control over its operation, if that makes sense. So I’ll give you an example. We have a client who
They are a component of the supply chain of the foundational models and the model producers, right? In that they provide training data to basically ensure the proper use and operation of that AI model, right? So they’re providing trained data such that it’s garbage in garbage out, right? Do you want quality data going into that system to ensure that it functions the right way? So they don’t own the
They’re not, they don’t own the technology. They are simply, you know, in a way providing it with the right, uh, you know, data so that it operates the right way. So you have, you know, folks who are, who are providing like data sets and things like that. Those, this could be an example. So they kind of sit in that supply chain, if you will, for, for a model. But again, they don’t have, they don’t have complete control over it. You can also look at, you know, third party integrations, that type of thing that are supporting the operation of the system. But again, they don’t.
They don’t have full control over it. Yeah.
John Verry (23:35.196)
So real quick: I’m providing training sets, so I’d get my ISO 42001, and the focus would effectively be on data quality at that point, right? Okay, okay, I got you. So if I’m a provider to multiple AI developers, producers, right? As an AI partner, the value prop would be not to the end customer, but to
Danny Manimbo (23:47.17)
Huge on data quality. Yeah, absolutely.
John Verry (24:03.398)
the next step up in the food chain, right? Okay, that makes sense to me.
Danny Manimbo (24:05.134)
That’s right. Exactly. So to who you’re impacting. that demand for a 42001 certification, your customers are probably, you know, they could be, I don’t know if it’s an autonomous vehicles and they’re relying on your data to power their AI models. could be, yeah, for the foundational model, you know, creators are relying on your data to ensure the proper operation of their system, right? So, absolutely.
John Verry (24:22.322)
Yeah, right, right.
Danny Manimbo (24:33.132)
And then next, I suppose, would be providers, right? So they are using AI, and now we’re talking about where these things are interacting directly. The providers kind of have that downstream interaction, right? They’re providing a product or service that directly interacts with end users, or, I guess, AI subjects, if we go back to the terminology that we’re using. Your SaaS, that type of provider.
John Verry (24:56.07)
their customer base.
Danny Manimbo (25:01.752)
They could also be considered a producer if they are more hands-on in the development of that system, but they could also just be using a third-party-sourced AI system they don’t develop or tweak internally, to where they’re strictly in the provider role. And then I’ll jump to producer, and if you have any feedback, we can certainly chat about it, but the producer I see as almost upstream of the provider.
That could be your OpenAI, Anthropic, Google, Meta, et cetera. They’re the ones who are developing the models, creating the technology that can be utilized by an organization like a provider to provide AI services to end users. Depending on the scope and how they intend to use that model, if they package it up and provide it externally themselves
to their end users, they can be considered a provider as well. That’s where the concept of having multiple roles in 42001 could fit in, right?
John Verry (26:06.674)
Okay, so the vast majority of people that are gonna pursue ISO 42001 are going to be an AI provider, right? That’s the predominant place, right?
Danny Manimbo (26:17.23)
I’ll put it this way, we haven’t at this point at Shellman issued a 42,001 certification that hasn’t had at least provider on there as one of the roles.
John Verry (26:23.858)
Provider, I agree with that. And many of them will also be a producer. And, I think you were involved in this conversation, maybe not, maybe it was with somebody else, but there’s the question of where the line between tuning things as a provider crosses over into producer, right? So in the conversation that I was having, it’s a client that’s in the healthcare space.
Danny Manimbo (26:45.379)
Yeah.
John Verry (26:54.766)
They’re in the Google Cloud, and there’s a Google model, and I don’t remember the name of it, that processes images. But now what they’ve done is they’ve done some additional training of that model. And all of these words probably mean a lot, like training versus tuning versus right. But the model didn’t completely work for their field of use.
Danny Manimbo (27:03.203)
Okay.
Danny Manimbo (27:09.23)
Mmm.
Danny Manimbo (27:14.595)
Yeah.
Danny Manimbo (27:22.062)
Mm.
John Verry (27:23.986)
So they had to provide some additional steering, almost sort of like a grounding, or almost sort of like RAG. I don’t know the right term exactly for how they did it. But it was, OK, it’s almost there, let’s give it a little more guidance. Where does giving it a little more guidance put you? I think we all know that if you’re doing machine learning and building your own model, OK, you’re a producer. But the vast majority of our clients, smaller SaaSes, are not
Danny Manimbo (27:27.212)
Yeah.
Danny Manimbo (27:32.088)
Sure.
Danny Manimbo (27:44.654)
Right.
Yeah. Yep.
John Verry (27:53.714)
ground-up developing their own models, right? They’re building on one or more models. Where does, like, gluing together three models in a novel way land? Does that make you a producer or not? How do you draw those lines? That’s one of the places where, with our clients, we’ve struggled to figure out, like, we know you’re a provider, and we think you’re probably a producer, but it might actually depend upon,
Danny Manimbo (27:55.422)
Mm-hmm. Correct.
Yeah.
Danny Manimbo (28:07.32)
Yeah.
Danny Manimbo (28:12.974)
yeah.
Danny Manimbo (28:18.03)
John Verry (28:21.343)
from an ISO 42001 perspective, the registrar and what their opinion is.
Danny Manimbo (28:24.974)
Yeah, this is where it gets a bit gray. We don’t make that decision for our clients, because if you think about what I was talking about at the beginning of the podcast, how the regulations are geared towards developers and deployers, this quickly becomes a legal conversation. We’ve gotten asked the question, well, which one am I? Because we’re trying to determine compliance with the EU AI Act.
I can communicate our interpretation of the standard and how things would operate, but ultimately they need to make that determination for themselves. When I think of the producer, yeah, certainly if you’re hands-on in the development; and I also look at it from, if you’re hands-on with the training of the model, we kind of err on the side of likely producer. But again, ultimately we’re giving our clients that input and having them decide for themselves.
It’s not as cut and dry as 27701, where if you decide you’re a processor or a controller, these are the controls for you, and these are the controls if you go the other way. 42001 is one control set: you determine your role, and then you kind of determine how many of those 38 controls apply, or at least partially apply, to you, because, as you know, there may be shared responsibilities and things like that.
Yeah, I probably didn’t answer your question very well, but this is probably one of the conversations that we have every day with our clients, as far as, do we also fit into that producer category? And I ultimately ask the question: are you doing any additional hands-on training or influencing of how that model operates? I know you didn’t develop it yourself, but if you’re doing that, it’ll likely kind of seat you into that producer category. And if you look at, maybe not so much 22989, but 42001,
it has a lot of sub-roles under producer, to where, depending on our client’s interpretation of where they fit, they could sit in one of those sub-producer categories. But not all producers are created equal, right? What an OpenAI or an Anthropic is doing is going to be very different from somebody who’s just doing some additional training on a model, right? Because of the complexity of how we would look at those controls. So, very different, and…
Danny Manimbo (30:42.028)
something that our clients need to decide; our job is to equip them with knowledge and information, et cetera, as a certification body, but they’ll ultimately make that decision on their side.
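Danny’s point, that 42001 is one control set whose applicability you scope by the role or roles you claim, rather than separate control sets per role as in 27701, can be pictured in a few lines. This is a hypothetical sketch: the role names match the standard, but the control entries and the mapping logic are illustrative only, not quotations from Annex A.

```python
# Illustrative only: ISO/IEC 42001 Annex A is a single set of 38 controls;
# which ones apply (fully or partially) follows from the roles you claim.
# The control labels and role hints below are placeholders, not the standard.

CONTROL_ROLE_HINTS = {
    "AI policy":                          {"customer", "partner", "provider", "producer"},
    "Resources (data, models, tooling)":  {"partner", "producer"},
    "AI system life cycle":               {"producer"},
    "Information for interested parties": {"provider"},
    "Responsible use of AI systems":      {"customer", "provider"},
}

def draft_soa(claimed_roles):
    """Draft a Statement of Applicability: a control is applicable when any
    claimed role intersects its hinted audience; otherwise justify exclusion."""
    return {
        control: ("applicable" if hints & set(claimed_roles) else "justify exclusion")
        for control, hints in CONTROL_ROLE_HINTS.items()
    }

# The most common combination Danny describes: provider, often plus producer.
print(draft_soa({"provider", "producer"}))
```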
John Verry (30:50.642)
Yeah. So it’s, you know, we, we end up, we all end up in the same boat, right? You, you living on the, uh, at test side, us living on the consultative part of the equation, you know, I’m constantly telling people like, look, from doing this for 25 years, I would tell you this particular regulation likely applies, but that’s not my job to do that. Your it’s your council’s job to tell you that, right? Like, you know, we’re, we’re the client right now and they’re like, we’re being told HIPAA doesn’t apply. And I’m, I’m just like, I’m like,
Danny Manimbo (30:54.509)
Yeah.
Yeah.
Danny Manimbo (31:06.542)
Mmm.
Right. 100%.
John Verry (31:20.178)
okay, who told you that? I mean, okay, I’ll believe you, but I can’t believe that it doesn’t apply to this use case, right? So it’s the same kind of thing. Now, let’s just say, for sake of argument, and I agree with you completely that at the end of the day it’s the client’s obligation to make determinations of legal applicability, not the consulting organization’s or the attest organization’s. The flip side of that is, I walk in, I’m developing ground-up models, and I have myself as a provider, but not a producer, right? At that point,
Danny Manimbo (31:25.856)
Yeah, right.
Danny Manimbo (31:37.998)
Mmm.
Danny Manimbo (31:46.082)
Yeah. Yeah.
John Verry (31:49.094)
despite the fact you can’t legally tell me how to position it, as an attest party, are you going to say, hey, listen, you know what, I’m not comfortable giving you a certification at this point? Or would you be able to provide it, but you’re only providing it for the provider side? And then it would be the recipient of the ISO 42001 certification’s responsibility to look at which roles were specified in the statement of applicability.
Danny Manimbo (31:52.984)
Sure.
Danny Manimbo (31:56.344)
Yeah.
Danny Manimbo (32:00.11)
That’s right.
Danny Manimbo (32:09.346)
Yeah.
Danny Manimbo (32:15.17)
I think there’s got to be a degree of reasonableness there to where, like I said, any of the main foundational model producers out there, if they were to say, hey, we’re just going to gear this towards being a provider.
It doesn’t tell the full story and it doesn’t tell the full story of what their responsibilities are externally to society, to the ultimate providers that are using their models, et cetera. So yeah, you would certainly scrutinize that and just make sure that you are doing your professional due diligence and whatnot as far as.
Yeah, making sure that that’s because ultimately we want to give a we want to give a certification that’s fair, that’s valuable, et cetera. And that tells the right story. So, yeah, yeah, of course.
John Verry (33:00.05)
You want it to be win-win, right? It’s a win for the person receiving it, and it’s a win for the person actually getting a copy of that and basing their confidence in that provider on that assurance.
Danny Manimbo (33:06.656)
And yeah.
Danny Manimbo (33:10.978)
That’s right. So, and they’re not going to get value out of the certification if they’re scoping it down so small that it’s, you know, we only want to look at this small area of our business, but not in all these other areas. And we know, you know, as professionals that those other areas are really the bolts of their services.
John Verry (33:27.423)
All right, I think we beat up roles pretty good. Anything else on roles you want to chat about?
Danny Manimbo (33:30.158)
Yeah. No, I think that’s about as in depth as you can go. Yeah.
John Verry (33:34.29)
Okay, cool. And I feel pretty good; I kind of almost stumped you one time, and I consider you way smarter than me, so that makes me feel good. All right. So if I can ask you a question, right, this is not on roles, but some fascinating stuff has been happening in AI that I’ve been kind of tracking. There seem to me to be increasingly disparate views of where this is all going, right? So
Danny Manimbo (33:41.922)
Ha ha.
Danny Manimbo (33:46.679)
Yeah.
Danny Manimbo (33:52.898)
Mm-hmm. Yeah.
Danny Manimbo (34:02.658)
Yeah.
John Verry (34:02.822)
you’ve got some people that are saying we’re going to have AGI, artificial general intelligence, in the next three years, five years, two years. I mean, I see some crazy stuff. But I think there’s a consensus on the next decade, if you look at the AI believers, let’s call them. And then there’s groups that are tracking AI performance, and the data is clear: there is a degradation of the accuracy of the major models over time.
Danny Manimbo (34:10.19)
Mm-hmm.
Danny Manimbo (34:21.41)
Yeah.
Danny Manimbo (34:30.253)
Mm-hmm.
John Verry (34:31.602)
And you know, some people are now predicting model collapse, right, because of synthetic data contamination. It makes sense, right? If you’re training AI on AI-generated data, eventually it’s chasing its tail and it collapses, which is where that term comes from. Would you dare to offer an opinion?
Danny Manimbo (34:36.558)
Hmm.
Danny Manimbo (34:49.978)
That’s a billion dollar question. I would say looking a decade out with AI is probably too far. Like we have no idea what this stuff is going to look like in a decade. If you think about a decade ago, 2015, know, mean, no chat, GPT, no, no, no. So it’s like, you know, this stuff has come online so much quicker and advanced, you know, so much quicker as well than anybody thought. So I think, you know, whether AGI is a decade out or
John Verry (35:00.956)
didn’t exist effectively.
Danny Manimbo (35:15.662)
three years out, I mean, it’s coming, right? And then, as far as what the models are gonna look like, I’m not sure. I’m gonna give you kind of an off-topic analogy. I’ve got six-year-old twins. I had a buddy at work talking about how his daughter, she’s 16, was doing her driver’s exam. So that’s 10 years, right, until I’m sitting in his shoes, and I don’t even think my kids are gonna be driving the cars that we’re driving today,
right? So, I mean, I don’t know if they’re going to be taking a driver’s exam, what they’re going to be doing. It’s so hard to predict. And, you know, if we have this conversation a year from now, I think we’ll probably both have much different inputs on it, maybe even six months from now. I mean, look at all the regulations that are happening. Nobody knows what to do. We put out regulation in Colorado, we peeled it back, now we’re putting it back out. And the same thing in Texas. So, yeah.
John Verry (36:06.374)
Texas has one now too? I didn’t see that. I knew New York City’s Bias Act, that’s a pretty good one. The Colorado AI Act, and I know Illinois put something out. Texas has something out too?
Danny Manimbo (36:14.51)
Yeah, it’s called Traga. It’s basically, it was Texas responsible AI usage or something along those lines. So, you know, they put out the initial version and there’s always, you know, basically pushback from, big tech, of course. And then there’s that, you know, kind of dance that happens between lawmakers and big tech and some, politics, I’m sure. But then it got revised and they had exemptions and they might add a runway to when it get implemented. it’s like, people are just trying to figure out this delicate dance between regulation and allowing for.
innovation we saw with, you know, with Trump’s administration is doing the big, beautiful act and potentially a moratorium for all state level AI regulation for 10 years and not putting in anything even lightweight at the federal level. it’s, I mean, there’s a lot going on. But you see how I just tap dance around answering your actual question there. But yeah.
John Verry (37:05.585)
Yeah, you know what? I was thinking you have politics in your future; you’re running for office at some point. You gave a very thoughtful non-answer. Very well done. With a little less meandering than Trump, I mean. Pretty well done. So the other thing which was fascinating: did you see where they tested an AI recently, and they were telling the AI they were going to turn it off, they were going to replace it with a newer version?
Danny Manimbo (37:11.649)
I really, yeah.
Yeah.
Yeah. Yeah.
Uh-huh.
Danny Manimbo (37:29.23)
Mm-mm.
Danny Manimbo (37:33.394)
Oh, I hear it.
John Verry (37:34.39)
And it resorted to blackmail. If I recall the story right, it told the programmer that it was going to tell his wife that he was having an affair. They intentionally gave the AI that information to see if it would use it, and it did.
Danny Manimbo (37:49.774)
It’s like all the stuff that these movies were made about is they kind of started happening.
John Verry (37:54.748)
So that was really interesting. I was having a conversation with a guy on our team who’s more AI literate than I am, and I was like, I’m amazed it did that. He goes, I expected it. I’m like, what do you mean? He goes, it’s a trope, and it’s on the internet. I mean, like Skynet. It’s a common theme, and it has been in movies and books for a dozen years or more, since, what was it,
Danny Manimbo (38:06.478)
Mm-hmm. Yeah.
Yeah.
Danny Manimbo (38:20.718)
Sure.
John Verry (38:24.774)
the original Terminator, probably 2000ish. He said, think about it. It’s a gen AI engine; it was trained on the internet. This trope existed, these stories have existed, and it just used stuff that’s out there. He goes, why wouldn’t you have expected that? And I’m like, because I’m an idiot and you’re not, and that’s why I asked you the question. So it actually, in a weird way, made a lot of sense. It still scares the hell out of me.
Danny Manimbo (38:26.712)
Yeah.
Danny Manimbo (38:32.206)
Mm-hmm.
Danny Manimbo (38:40.174)
Mm-hmm.
Danny Manimbo (38:45.934)
Yeah, it’s like in Terminator 2 when they, so they’ve realized, that’s what I watched when I was growing up. I was born in 1987, but they see the damage that the AI has created and machines have created. So they go back in time to try to destroy the people who made it, right? And just, so it’s kind of crazy, but yeah, it’s interesting times. But like I said, I would love to do this again, June 2nd, 2026, see where we’re at. Yeah, done.
John Verry (39:11.794)
All right, you got it. It’s a date. Okay. All right. So if anyone wants to catch up on this, or just wants a reference on this role issue, Danny wrote a fantastic blog, I think the best thing that I’ve seen, and it’s on the Schellman website. In fact, I cheated and used some of it for today’s podcast, so thank you, Danny. But if anyone really needs to see this in writing, as a reference,
Danny did a great job on that. If somebody wanted to get in contact with you, Danny, what would be the easiest way for them to do that?
Danny Manimbo (39:52.13)
Yeah, absolutely. Hopefully I’m not breaking up there. Yeah, LinkedIn would be great; you can always message me on there, and danny.manimbo at Schellman as well.
John Verry (40:01.444)
Awesome. Thanks for coming on man. Good to catch up again.
Danny Manimbo (40:03.234)
I appreciate the opportunity, John.