April 12, 2024

John Verry (00:00.414)

Oh, but because, yes, I have. I think the first podcast I ever did was with Katie Arrington, who I’ve since had back on, on CMMC, when she was in charge of all of CMMC at the Pentagon. Five o’clock on a Friday night, she’s recording with me. Forty-five minutes later, I go, “I’m sorry, I never hit record.”

She says, “Well, I guess I’m going to be better the second time.” And she stayed with me until 6:30 on a Friday night recording it.

Ariel Allensworth (00:23.928)

That’s a tough one.

Ariel Allensworth (00:28.716)

Well, it’s a good idea to open with that because then you’ll always remember to hit record.

John Verry (00:34.436)

I have not forgotten since. Alright, you ready to roll? Alright, cool. Hey there, and welcome to yet another episode of the Virtual CISO Podcast. With you as always, your host, John Verry, and with me today, Ariel Allensworth. Hey Ariel.

Ariel Allensworth (00:40.276)

Yeah, let’s do it.

Ariel Allensworth (00:54.06)

Hey John, how’s it going?

John Verry (00:55.986)

It’s Friday afternoon, sir. And I’ve only got two more meetings after this podcast, so I’m looking forward to the weekend.

Ariel Allensworth (01:04.248)

We’re getting there.

John Verry (01:05.35)

Yeah, you tell me about it. So I always start easy. Tell us a little bit about who you are and what is it that you do every day?

Ariel Allensworth (01:12.924)

Yeah, so again, Ariel Allensworth. I’m an information security, privacy, and AI consultant for CBIZ Pivot Point Security. And I help organizations implement frameworks that help them be provably secure and comply with their internal and external requirements; those can come from regulations, contractual requirements, or just the organization’s objectives. I also conduct a

variety of risk assessments and internal audits associated with different types of frameworks. So essentially that’s what I do the majority of the time.

John Verry (01:52.21)

Excellent, thanks. I like to ask before we get down to business, what’s your drink of choice?

Ariel Allensworth (01:58.572)

I really enjoy a good hazy IPA. But from time to time, I’ll also enjoy a vanilla porter, but I’m a beer drinker.

John Verry (02:07.732)

Yeah, I, well, have you had the Breckenridge vanilla porter?

Ariel Allensworth (02:12.36)

No, I don’t think so.

John Verry (02:13.79)

That’s a nice little one. You can get it on tap or you can get it in a bottle, but the Breckenridge vanilla porter is actually pretty darn good. I drink a lot of stouts. Porters and stouts always confuse me, despite the fact that I drink a lot of beer. There’s such a subtlety between them that I don’t really quite understand. I always kind of tend to think of porters as being a little bit smoky, but then recently I’ve had a lot of porters that, if I didn’t look at the bottle, I would have thought were stouts. So I haven’t quite figured that out completely yet.

Ariel Allensworth (02:39.72)

Yeah, there’s some gray area between the two for sure.

John Verry (02:42.894)

Yeah, so, all right. So anyone who has consumed any type of media over the last year has probably gotten tired of seeing articles about AI and the way that it’s going to influence the next century, I guess.

For many organizations, I think that creates massive opportunities, right? New approaches, new services, new products. It has the promise of driving employee productivity. But it also creates new types of risk that we really don’t fully understand yet, and it also obliges companies to comply with relevant regulations and guidelines. So today, what I wanted to chat with you about is using AI in a provably secure and compliant manner, because that’s suddenly become a contractual

and/or regulatory obligation for many of the organizations that we work with. So let’s start simple. From your perspective, what are some of the risks that AI introduces to most organizations?

Ariel Allensworth (03:49.332)

Absolutely. So one of the ones right off the bat, and this has a lot to do with the large language models that we’ve seen come out and become popular over the last couple of years, is misinformation caused by the models hallucinating. And for those who don’t know what that is, the model can just output information; that doesn’t mean it’s correct. And so when organizations either formally implement a tool that leverages AI, or employees or people

Ariel Allensworth (04:19.048)

use a tool informally, sometimes they can rely on that information as if it’s ground truth, and that’s not always the case. And so depending on the processes that the AI tool is supporting, you can have risky outcomes, you can have unnecessary bias or even potentially discrimination. So hallucinations and misinformation are…

Ariel Allensworth (04:45.244)

a huge problem if there’s no verification done on the output of these models. And then also, depending on how models are trained, it’s possible, if you haven’t vetted and tested the model appropriately, to sometimes extract some of the information that was used to train it. And that’s not always a good thing, because it could be proprietary information, intellectual property, things of that nature.

Those are really the two big ones. And then another one, which has been around for a while but is even more prevalent with the recent popularity of artificial intelligence, is automated decision-making risk. So for example, when you go to apply for a loan, in many cases there’s an artificial intelligence model that uses the variables from your application to determine the amount of risk to the bank

of you defaulting on that loan and it applies a specific interest rate. Now if the model is faulty or trained on data that is biased, sometimes that can cause discriminatory outcomes for certain groups of individuals. So some people may get a different interest rate based on faulty training data when really they should be getting a more fair or perhaps a much higher interest rate than the model predicts.

And this can apply across many different areas. Insurance, finance, and loans are really good examples of applications implemented long before these large language models. But anytime an organization is trusting a model to make some sort of decision, or even a recommendation that the organization is gonna take into account, they have to make sure that they are not introducing bias, because they’re gonna be accountable for the discriminatory outcome.
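The kind of bias screening Ariel describes is often operationalized as a disparate-impact check, such as the “four-fifths rule” used in US employment contexts. A minimal sketch (the group labels, data, and the 0.8 threshold are illustrative assumptions, not from the episode):

```python
# Sketch: four-fifths-rule screen for a model's decision outcomes.
# Group labels, counts, and the 0.8 threshold are illustrative assumptions.

def disparate_impact(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (favorable_count, total_count).
    Returns each group's selection rate divided by the highest group's rate."""
    rates = {g: fav / total for g, (fav, total) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

def flags(outcomes: dict[str, tuple[int, int]], threshold: float = 0.8) -> list[str]:
    """Groups whose relative selection rate falls below the threshold."""
    return [g for g, ratio in disparate_impact(outcomes).items() if ratio < threshold]

# Example: group B is approved at half the rate of group A, so it is flagged.
approvals = {"group_a": (80, 100), "group_b": (40, 100)}
print(flags(approvals))  # ['group_b']
```

A real fairness review goes well beyond a single ratio, but a cheap check like this can gate a model’s decisions before they reach customers.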

John Verry (06:42.73)

Yes, on that hallucination risk, I think the most famous incident was the lawyer who goes into court and argues case law that the AI engine completely made up. He cited a legal precedent that ChatGPT, or whatever it was, actually hallucinated. So, yeah, that’s scary. And then, you know, on the usage side, right?

One of the things, and I don’t know if you see this when you chat with people, but when I start to talk about AI risk with some people, they’re like, no, you don’t understand, we’re not doing anything with AI. And they mean they’re not developing AI-enabled applications. But I think people forget that AI is sort of becoming omnipresent in our lives, and that they and their employees are using tools provided by third parties that are using AI. And I think that, theoretically,

that exposes them to many of the same risks they’d face if it were their own tool.

Ariel Allensworth (07:44.308)

Yeah, absolutely. There’s definitely a risk of data loss, for example, from employees putting proprietary information into ChatGPT. So let me talk about a similar scenario. It’s hopefully common knowledge, and there should be training within most organizations that tells employees, you don’t send proprietary information to an email address outside the organization.

Or you don’t type proprietary information into Google. Neither of those things is explicitly artificial intelligence, but there are policies on that, and there’s training, so that employees know that the best practice is not to share that information. Well, artificial intelligence, and large language models especially, introduce this new type of risk, and people may not necessarily understand that the activities they’re doing

cause risk for the organization. There may be excitement around improved efficiency and workflows within the organization, and sometimes that can cause people to miss risks like data loss from inputting proprietary information into these models.
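One lightweight control against the data-loss scenario Ariel describes is screening prompts for sensitive patterns before they reach an external model. A rough sketch (the patterns and the block-on-match policy are illustrative; a real deployment would use the organization’s own data-classification rules, and this is nowhere near a complete DLP solution):

```python
import re

# Sketch: screen a prompt for obviously sensitive strings before it is sent
# to an external LLM. The patterns below are illustrative assumptions.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, names of the sensitive patterns that matched)."""
    hits = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]
    return (len(hits) == 0, hits)

ok, hits = screen_prompt("Contact jane.doe@example.com, key sk-abcdef1234567890AB")
print(ok, hits)  # False ['email', 'api_key']
```

A filter like this would sit in a gateway or proxy in front of the model API, so the policy is enforced centrally rather than relying on each employee remembering the training.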

John Verry (08:56.862)

Right, so as an example, and correct me if I’m wrong, you know more than I do, but in ChatGPT, right, there is a switch that says, do not allow my data to be used for training purposes, or something very similar to that. But if you don’t click that, your proprietary data could become part of the large language model, correct?

Ariel Allensworth (09:17.808)

Exactly, and not just ChatGPT, but all the different models and tools that are available. You have to review the terms and conditions and the settings and features of each of these tools to understand, are you going to be compliant with your organization’s policies as they are? And it creates a very ambiguous landscape of how to use these tools. And so it can be really difficult for organizations to address this, because this came on so quickly.

And for example, I use ChatGPT all the time. I don’t put proprietary information into it at all; I use it for more creative-type work. But at the same time, I’ve read and understand CBIZ’s policy on the use of artificial intelligence. Not every organization has a policy like that.

John Verry (10:10.922)

Agreed. All right. So let’s talk about something which is driving a lot of the conversations that I’m having with our clients, and I’m sure you are as well: regulatory and/or what I’ll call semi-regulatory guidance. You know, we’ve got the EU Cyber Act, we’ve got the NIST AI RMF, the risk management framework, and the presidential executive order, I think it’s called, on the responsible use of AI. What is it that they specify? Are they all pretty similar? What are people up against?

Ariel Allensworth (10:39.156)

Yeah, so I’ll start with the EU AI Act. I believe that’s the one that you’re referring to, because there’s also, OK. So the AI Act was actually adopted by the European Parliament on March 13 of this year. It establishes a regulatory framework on the development and use of AI within the European Union, and that affects

Ariel Allensworth (11:07.616)

citizens of the European Union. And so it’s very similar to the GDPR, which has been out for many years; while this is a regulation within the EU, it applies in much the same way as the GDPR. At a very high level, because it’s quite a large piece of legislation, it requires that organizations classify

their AI systems into specific risk categories. And then depending on the outcome of that categorization, they have specific requirements they have to meet within the law. And then how do you know this is applicable to you? Well, there’s a variety of criteria, but fortunately they’ve developed a tool, kind of a quick questionnaire, that allows you to evaluate whether or not this applies to you. The NIST AI Risk Management Framework,

Ariel Allensworth (12:03.752)

That is not regulation at all. That just provides really good targeted guidance on how to implement a risk management framework for artificial intelligence. And so that specific framework focuses on four areas. It uses a govern, map, measure, and manage model. And that’s just really…

Again, it has very targeted guidance in each of those areas. It’s one of the pieces here where if an organization is saying, what can I do to address the risk around AI, it is incredibly helpful because it’s specific, it’s prescriptive, and it has really good examples, and it has an accompanying playbook to go with it that helps you understand how it applies to specific types of companies and specific types of artificial intelligence risk. And then finally,

With the executive order, that is a little less applicable, I would say, as far as being a regulation, because it’s more of an order to government agencies to develop standards and regulations on the use, development, and governance of artificial intelligence. So it’s the executive branch

recognizing that there are significant risks and opportunities associated with this. And so we need to put some significant resources towards understanding what those are and then take actions to, of course, reduce the risk and take advantage of the opportunities. So.

They’re not really similar, but they’re all a response to this rapid development of artificial intelligence and large language models, and to AI becoming incredibly relevant to a much, much larger group of people who now have a general knowledge of it. And so organizations should definitely just watch this space, and leverage the NIST AI RMF

Ariel Allensworth (14:14.496)

as a really good resource if you want to understand how to manage AI risk. Pay attention to the EU AI Act, see if it applies to you. And then of course, just pay attention to the executive order and news and updates around that, because you may find there are new regulations or new standards that come out as a result of the orders that have been given to the different government agencies.
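The classify-then-comply structure of the EU AI Act that Ariel outlines can be pictured as a triage. This toy sketch is not the official applicability questionnaire mentioned above; the questions and tier names only loosely mirror the Act’s prohibited / high-risk / limited-risk / minimal-risk categories:

```python
# Toy triage loosely mirroring the EU AI Act's risk tiers. This is NOT the
# official applicability tool; the questions are illustrative assumptions.

def classify_ai_system(*, social_scoring: bool, safety_or_rights_critical: bool,
                       interacts_with_people: bool) -> str:
    if social_scoring:                  # practices the Act prohibits outright
        return "prohibited"
    if safety_or_rights_critical:       # e.g. hiring, credit, critical infrastructure
        return "high-risk"
    if interacts_with_people:           # transparency duties (chatbots, deepfakes)
        return "limited-risk"
    return "minimal-risk"

print(classify_ai_system(social_scoring=False, safety_or_rights_critical=True,
                         interacts_with_people=True))  # high-risk
```

The point of the sketch is the shape of the obligation, not the exact criteria: the tier you land in determines which requirements in the law apply to you.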

John Verry (14:40.742)

Yeah, I would argue that while the presidential executive order is not yet important in itself, what it portends is, because we know what happens with previous presidential executive orders. As an example, NIST SP 800-218, the Secure Software Development Framework, stemmed directly from a presidential executive order on said issues. So I think when we look at the presidential executive order, it gives you an idea of

where our government is going, and we know that our government is increasingly flowing down regulations to organizations within the United States.

Ariel Allensworth (15:20.348)

It’s really good. It contains some excellent information as well, to understand how the government views specific risks around AI, opportunities around AI, and maybe where the government falls short. What’s interesting is that as this space moves very quickly, there may be portions of the order that become more relevant in areas that tend to be gaps. But it provides really good direction, and a pillar for organizations: if there are requirements within the United States government that come to apply to organizations for some reason, we can understand at least where this is going. I think it’s gonna take some time for these government agencies to develop the guidance or standards that the Biden administration is calling for.

But if anyone is curious about the executive order, I highly recommend reading a summary of it because there’s a lot of very, very relevant information.

John Verry (16:33.066)

Yeah, I agree completely. So I know that we as an organization, and you personally, are pretty excited about the new ISO 42001 standard. It’s a certifiable standard relating to AI. Tell us a little bit about what it is and how it can be used to address the risks and the compliance requirements that we’ve just discussed.

Ariel Allensworth (16:58.528)

Yeah, absolutely. So ISO 42001 is an internationally recognized standard for implementing and maintaining an artificial intelligence management system. Essentially, there are seven main clauses that establish kind of the framework requirements, and then there is a set of controls unique to

artificial intelligence that help support those management requirements. And one thing that ISO does, both with 42001 and ISO 27001, is it requires the organization to comply with its internal and external requirements. So that could be requirements from stakeholders like customers or users or suppliers, as well as legislative and regulatory

requirements, and organizational objectives. And so it seems like a lightweight standard, because there are only 39 controls. However, it’s quite unique. So organizations that haven’t taken a formal approach to governance with artificial intelligence yet may find it challenging to implement some of the requirements, because it requires just new knowledge, new experience,

and doesn’t necessarily act as just an extension off of another standard like ISO 27001.

John Verry (18:32.518)

It’s interesting

Ariel Allensworth (18:59.04)

Yeah, absolutely. There are elements within each of those clauses that say you have to do these specific things. And then, of course, there’s some language that says, like you said, consider this, or you should do this. And one thing that’s really nice about this standard is that it comes with implementation guidance and some informative annexes, so that you’re not just left with these requirements and no

guidance on how to actually implement them and comply.

John Verry (19:30.378)

Yeah, so one of the things that I was excited to see when I first read the standard is that, in much the same way ISO, the International Organization for Standardization, has been doing for a while (I don’t know why it’s abbreviated ISO instead of IOS, but let’s not delve into that right now), they’ve maintained a consistent management system approach, right? So clauses four through 10

have now been standardized across many of the ISO standards, like ISO 9001, ISO 27001, ISO 27701, and now ISO 42001. So, you know, you mentioned clauses four through 10, those seven clauses. Can we just touch on them briefly? Maybe we’ll walk through each of them: I’ll throw one out, and you give us a couple of sentences on what that component is, what it envelops, because I think it’ll give people a decent idea of

what ISO 42001 really is. So let’s start with the clause context of the organization.

Ariel Allensworth (20:31.508)

Yeah, so I will talk about most of these as they apply to both 27001 and 42001, and then of course I’ll make call-outs that are specific to 42001. So this one is pretty much equal for both. You have to understand the context: why does information security, or AI, matter to your organization? Well, it could be that there are specific internal and external requirements.

John Verry (20:52.773)

I got it.

Ariel Allensworth (20:58.652)

It may be that you’re developing a product that you need to secure, or that needs to leverage AI. And it helps establish really the scope and the boundaries of the management system, both for information security on the 27001 side and AI with 42001. So yeah, this is really all about the scope of the management system, the internal and external stakeholders, and what their requirements are.

John Verry (21:27.83)

Yeah, and increasingly so. We’ve already seen it become a contractual obligation with one of our customers, and now we’re going to see it become a regulatory compliance requirement, right, like with the EU AI Act.

Ariel Allensworth (21:41.736)

Exactly. And you may get a questionnaire from a supplier that you have had for a long time, and they may be asking you, do you use artificial intelligence? You may be somewhat blindsided by these things. But if you were to get that, that would be something you’d want to document as an external requirement: that your suppliers, or your customers, or your regulating bodies have these requirements for information security or artificial intelligence.

John Verry (22:12.074)

Yeah, clause five is around leadership. What does that entail?

Ariel Allensworth (22:15.572)

Yeah, so if you establish a management system, how is it gonna be managed? You need to have leadership involvement and commitment and a strong tone from the top in order for the management system to be effective, to have effective governance of information security and artificial intelligence, effective risk management. And so this is establishing how leadership is committed, what they are committed to, and developing an artificial intelligence policy.

What’s specific to 42001 is identifying the artificial intelligence policy: what are the organization’s objectives for artificial intelligence? Because this can drive a lot of the subsequent, topic-specific policies unique to artificial intelligence. And then of course, it also establishes the relevant roles and responsibilities within the organization, to ensure that the management system can achieve the objectives set by leadership.

John Verry (23:14.786)

Gotcha. Clause six is planning.

Ariel Allensworth (23:18.988)

So this is about how the organization can assess artificial intelligence risk and treat that risk, and establishing the processes for doing so. What are the criteria with which your organization needs to assess and treat that risk? There’s an additional component here that is not seen in ISO 27001, and that has to do with an artificial intelligence impact assessment, as well as establishing

artificial intelligence objectives. So the AI impact assessment may be something that could be informed by ISO 27701. Organizations a lot of times implement a privacy impact assessment to understand specific risks unique to privacy, PII, personal data. It’s a similar approach here with artificial intelligence: how are these systems going to impact individuals,

groups of individuals, or society? It depends on the application of the artificial intelligence system, but what is new here is making sure you have a risk assessment and risk treatment plan for artificial intelligence within the organization, and also that you conduct specific AI impact assessments, which can be used to inform your risk assessments.
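The relationship Ariel draws between the two assessments can be sketched simply: impact-assessment findings feed the impact side of an ordinary likelihood-times-impact risk score. The 1 to 5 scales and the treatment thresholds below are illustrative assumptions, not something ISO 42001 prescribes:

```python
# Sketch: a likelihood x impact risk score where the impact figure comes from
# an AI impact assessment. Scales and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe), from the impact assessment

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    def treatment(self) -> str:
        if self.score >= 15:
            return "treat immediately"
        if self.score >= 8:
            return "plan treatment"
        return "accept and monitor"

r = AIRisk("biased loan-pricing model", likelihood=3, impact=5)
print(r.score, r.treatment())  # 15 treat immediately
```

The useful idea is the wiring, not the numbers: a severe finding in the impact assessment raises the impact term, which in turn pushes the risk over a treatment threshold.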

John Verry (24:45.922)

Gotcha. Clause seven is support.
Ariel Allensworth (24:48.704)

So this is important because it helps the organization establish what are the resources that are required in order for you to achieve your artificial intelligence objectives for the management system. So establishing the human and capital resources, system resources, things of that nature. And then also ensuring that you have the right people, the right knowledge, the right assistance.

to develop the management system, maintain the management system, and then making sure that you effectively communicate the implementation and maintenance of the management system so that all of those roles and responsibilities that were identified in the earlier clause, that all of those people understand what their responsibilities are and what their roles are. So effective communication of that, effective communication of policies relevant to the management system, and then of course,

recording and maintaining any documentation, both what’s required by the standard and what’s required by the organization in order to ensure they’re compliant with the standard. So something that may not necessarily be required by the standard is measuring how many users interact with your AI application every day. The standard doesn’t specifically require that, but it does require that you maintain metrics and that you monitor the AI system.

And so in order to do that, you would then have required documentation because the organization chose that specific metric.
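The daily-users example boils down to: the organization chooses the metric, and the documentation requirement follows from that choice. A tiny sketch of such a metric log (names and values are illustrative assumptions):

```python
# Sketch: record an organization-chosen AI metric as documented evidence for
# ISO-style monitoring. The metric name and values are illustrative.
from datetime import date

class MetricLog:
    def __init__(self, name: str):
        self.name = name
        self.records: list[tuple[date, float]] = []

    def record(self, day: date, value: float) -> None:
        self.records.append((day, value))

    def latest(self) -> float:
        return self.records[-1][1]

daily_users = MetricLog("daily_active_users_of_ai_app")
daily_users.record(date(2024, 4, 11), 1280)
daily_users.record(date(2024, 4, 12), 1342)
print(daily_users.latest())  # 1342
```

In practice this would be a dashboard or a time-series database rather than an in-memory list, but the documented record is the same thing an auditor would ask to see.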

John Verry (26:23.65)

Yeah, and under seven they cover what I think is critical around AI, which is awareness, right? Because if everyone isn’t aware, like you said, somebody can procure a new SaaS application that’s making some type of business decision on our behalf. And if they’re not aware of the implications, what our acceptable use of said applications is, and what processes they need to go through, we can get ourselves in trouble.

Ariel Allensworth (26:53.192)

Exactly. And those roles and responsibilities can span from top management all the way down to end users. It can extend to customers, external users of the artificial intelligence systems, developers and engineers, just a variety of people internal and external to the organization. And if those are not communicated effectively, then the management system is going to falter.

John Verry (27:23.508)

Clause eight is around operation.

Ariel Allensworth (27:26.732)

So this one is how is the organization applying the items that we described under the planning phase, right? So is the organization conducting their AI risk assessments and treating AI risk based on the methodology that was established in the planning phase? Is the organization conducting the AI impact assessments? Is risk being effectively treated? And then of course,

Ariel Allensworth (27:55.5)

There’s always a measure of continuous improvement, but I’ll kind of hold off on that until we get to the last clause.

John Verry (28:02.294)

Yeah, and if you think about it, if you remember the old early days of ISO, we used to call it plan, do, check, act. Yeah, so you talked about planning, operations is the do, and now we’re about to go into check and act, right? So the next one, clause nine, is around performance evaluation.

Ariel Allensworth (28:20.096)

Yeah, so how do you know that your management system is performing as intended, right? What are the objectives that you established in the earlier clauses, and are you meeting those objectives? In order to do that, you need to monitor specific metrics that map to those objectives, then you need to evaluate those measurements, and then make sure that you’re communicating

those measurements of effectiveness to top management so that they can adjust the management system as needed to make sure that they’re achieving those objectives or maybe changing the objectives or changing the metrics themselves. And then it also establishes requirements for making sure the organization has an internal audit program. And that’s part of that monitoring and measurement requirement. You can have metrics, but you also need to have an independent

internal audit of the management system and the methods used to maintain that management system. And then the final piece of that is that there needs to be a management review of the management system on a regular basis. And so they’ll review things like audit findings, results of a risk assessment, any internal or external changes that might affect the management system,

John Verry (29:23.39)

And then finally, oh, sorry.

Ariel Allensworth (29:44.364)

and then of course just input. And that management review should be conducted by a committee of top management that includes relevant stakeholders. So for artificial intelligence, this could be your CISO, it could be your CIO, your CTO, business stakeholders, artificial intelligence experts, for example.
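The loop Ariel describes, metrics mapped to objectives with misses escalated to management review, can be sketched as follows (objective names and targets are illustrative assumptions):

```python
# Sketch: compare measured metrics to the objectives they map to, and build
# the attention list a management review would look at. Targets are illustrative.

def performance_review(objectives: dict[str, float],
                       measurements: dict[str, float]) -> dict[str, bool]:
    """objective name -> whether its measured value met the target."""
    return {name: measurements.get(name, 0.0) >= target
            for name, target in objectives.items()}

objectives = {"hallucination_checks_completed_pct": 95.0,
              "ai_incidents_resolved_within_sla_pct": 90.0}
measured = {"hallucination_checks_completed_pct": 97.2,
            "ai_incidents_resolved_within_sla_pct": 84.0}

results = performance_review(objectives, measured)
needs_attention = [name for name, met in results.items() if not met]
print(needs_attention)  # ['ai_incidents_resolved_within_sla_pct']
```

The `needs_attention` list is exactly what goes in front of the management committee: for each miss, they either fix the implementation, change the objective, or change the metric, as described above.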

John Verry (30:08.634)

Yeah, your developers. I mean, if you are developing your own large language models, it’s going to be the people that are the machine learning experts that have to be integral to this. And then last, but not least, improvement.

Ariel Allensworth (30:22.86)

So for this one, ISO has a strong approach to continuous improvement, right? This standard isn’t really pass/fail. It’s that you can implement something, identify gaps, identify the effectiveness of the management system, and then just continue to improve it. Strengthen your implementation of the management clauses, strengthen your implementation of controls. And so,

This clause establishes requirements for the organization to do that continuous improvement. So if there are audit findings, management should review those based on the requirements of the previous clause, and then document what’s being done to address those findings. What is being done to address new risks that are identified, or to potentially treat existing risks that have changed. Maybe the impact.

Ariel Allensworth (31:18.996)

or probability has changed, or there’s changes within the organization that reduce the effectiveness of existing controls. So there needs to be processes put in place to ensure the organization is continuously improving the AI management system and the information security management system, as is the case in ISO 27001. And then the second component here is that

the organization understands and establishes a methodology for identifying nonconformities to the standard and then what actions to take in order to address those nonconformities. So this is similar to the audit findings because that’s where these nonconformities are gonna pop up, but the organization needs to have a formal process for what to do when nonconformities.

are identified.
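The formal nonconformity process can be sketched as a record type with a corrective-action workflow, where a finding cannot be closed without a documented action (field names are illustrative assumptions, not taken from the standard):

```python
# Sketch: a minimal nonconformity record with a corrective-action workflow,
# the kind of formal process clause 10 asks for. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Nonconformity:
    source: str                      # e.g. "internal audit 2024-Q2"
    description: str
    corrective_actions: list[str] = field(default_factory=list)
    closed: bool = False

    def add_action(self, action: str) -> None:
        self.corrective_actions.append(action)

    def close(self) -> None:
        # Enforce the documented-corrective-action rule before closure.
        if not self.corrective_actions:
            raise ValueError("cannot close without a documented corrective action")
        self.closed = True

nc = Nonconformity("internal audit", "AI impact assessment not performed for new chatbot")
nc.add_action("perform the impact assessment; update the intake checklist")
nc.close()
print(nc.closed)  # True
```

In most organizations this lives in a GRC tool or ticketing system; the point is that the audit-finding-to-closure trail is itself documented evidence for the next management review.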

John Verry (32:18.19)

So for any of our listeners that are ISO 27001 certified, this is probably pretty comforting, because it all sounds darn familiar. Effectively, instead of an information security management system, it’s now an artificial intelligence management system, with the vast majority of its components being very similar to near identical. So you mentioned earlier

that we’ve got these controls. It was interesting to me that we have Annex A controls, like we have Annex A controls in ISO 27001, which are, like you said, 39 fairly prescriptive forms of guidance to help us understand what it means and how we control the risk associated with AI. And then what I thought was interesting is that they added Annex B, the implementation guidance you referenced before, which is sort of like a shorthand version of ISO 27002, right,

from an analog perspective?

Ariel Allensworth (33:18.4)

Yeah, I was pleasantly surprised when I first read the standard, because a lot of organizations don’t know that ISO 27002 is out there. Hopefully they’re informed over time, but I’ve definitely spoken with organizations that have implemented 27001 and just have not really leveraged the guidance from 27002.

Ariel Allensworth (33:47.7)

That document is incredibly helpful. So, pleasantly surprised.

John Verry (33:54.626)

So GPT-based artificial intelligence, at least, by definition involves massive training data sets, and increasingly, these data sets include personal information. Can you talk a little bit about how ISO 27001, ISO 27701, and ISO 42001, in my mind, for many organizations, are ultimately going to form sort of the Holy Trinity

of provable, demonstrable security around privacy, personal information and artificial intelligence.

Ariel Allensworth (34:35.124)

Yeah, exactly. So the benefit of combining those, or even just ISO 27001 and 42001, is that it’s an integrated management system. When we went through the clauses, the majority of that information was applicable to both 27001 and 42001. So for organizations that are looking to address information security, privacy, and artificial intelligence risk,

and internal and external obligations, it’s much simpler to do an integrated management system, because you can enhance, let’s say you’re ISO 27001 certified, you can enhance existing documentation and existing processes to set the framework for the AI management system, because you already have a lot of those things in place. You have a management committee, management review,

and some relevant policies. And so then you just have to do some of the work that’s very specific to ISO 42001. And then a lot of the concerns and risks associated with artificial intelligence can have something to do with the security and protection of personal data, because you’re right, a lot of times that’s included in the training data sets, or a lot of times that’s included in information that’s…

Ariel Allensworth (35:58.104)

collected and processed through an artificial intelligence application. So that's one of the amazing things about 42001: not only is there a fairly small control set, but the clause requirements align really well with ISO 27001.

John Verry (36:18.35)

Quick question for you, I hadn't thought about this prior. So, ISO 27001 and 27701: 27701 actually alters the construct of the 27001 clauses, and it's intended to run in a more unified way, right? Instead of having an information security and a privacy management system, you have, a lot of people call them PIMS, right, a privacy and information security management system. Is it your thought process that, with clients that are doing all three, the AI management system would sort of be integrated into that same system, and we'd run this through that same single logical construct?

Ariel Allensworth (36:58.392)

I'll say it depends. So generally, yes, because it's just more efficient. There's a lot less duplicative effort if you integrate those things together. So, the areas where it might be separate: you may have an information security management committee, and if you were to do something integrated, you'd have an information security, privacy, and artificial intelligence management committee.

But depending on the size of the organization and the implementation of artificial intelligence and how that applies to the organization, you may want to have a subcommittee. But ultimately, keeping the leadership commitment, the accountability, the roles and responsibilities integrated is really important. I think the separation of these things would really happen down at the tactical level, with topic-specific policies

Ariel Allensworth (37:54.716)

and unique processes to information security, to privacy, and to artificial intelligence.

John Verry (38:01.881)

That makes complete sense.

John Verry (38:07.45)

Um, so what I'm finding is that this area is so new that many of the clients that I'm talking to sort of don't know where to begin. Or the clients that we're acting as a virtual CISO for, they just, they know they need to do something, right? They're becoming aware of that, but they don't really know where to start. So if somebody's listening and they're not ready to jump right in and get to, like, an ISO 42001 level of dealing with AI, what would your guidance be for where they should start?

Ariel Allensworth (38:33.973)

Yeah, so I would provide guidance from a couple of different perspectives. The first one being organizations that aren't developing or implementing AI formally, but whose employees may be leveraging kind of informal or ad hoc AI applications. So consider developing an artificial intelligence use policy, similar to an acceptable use policy. You want to understand: how is the organization using

artificial intelligence informally, and what risks does that introduce? And then establish requirements in your policy that can help reduce some of that risk. I would say this will apply to almost any organization out there. So if you're not approaching AI, if it's not something that you're thinking about at this time, your employees probably are, because the tools that are out there today can make

almost any process somewhat more efficient in some way. So somebody out there is probably using it in your organization. So at a minimum, consider developing an AI use policy. There's really good guidance out there, especially within the NIST AI Risk Management Framework. There's enough information in there to help you understand what some of the risks associated with this are, and that can help you develop requirements to put in a policy. And then the…
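Ariel's suggestion of an AI use policy is easier to enforce if it sits on top of a simple inventory of the tools in play and the data they may touch. Here is a minimal sketch of that idea in Python; the tool names, data classifications, and the restriction rule are illustrative assumptions, not anything stated in the episode:

```python
from dataclasses import dataclass, field

@dataclass
class AITool:
    name: str
    approved: bool                 # formally approved for organizational use?
    data_classes: set = field(default_factory=set)  # data the tool may receive

# Data classifications a policy might forbid sending to unapproved AI tools
# (illustrative labels, not a standard taxonomy)
RESTRICTED = {"pii", "source_code", "client_confidential"}

def policy_violations(tools):
    """Flag tools that are not approved yet are exposed to restricted data."""
    return [t.name for t in tools
            if not t.approved and t.data_classes & RESTRICTED]

inventory = [
    AITool("ChatGPT (personal account)", approved=False,
           data_classes={"pii", "public"}),
    AITool("Copilot (enterprise tenant)", approved=True,
           data_classes={"source_code"}),
]
print(policy_violations(inventory))  # -> ['ChatGPT (personal account)']
```

The inventory itself would come out of the kind of assessment Ariel describes; the point of the sketch is only that, once the inventory exists, checking it against policy becomes mechanical.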

John Verry (40:01.302)

Yeah, I like what you said, though. Oh, I was just gonna say, I was gonna double down on what you said, because I don't know that I've emphasized it quite enough when I've chatted with clients: you know, a lot of people are gonna go to the internet and download a policy, or they're gonna ask someone to give them a policy, whatever it might be. But you really need to understand risk before you deploy a policy, because a policy is a control, and a control is a mechanism that reduces risk. We don't even know what an appropriate policy is unless we understand the risk associated with our use, correct?

Ariel Allensworth (40:30.684)

Exactly. And absolutely use templates. Go find templates and see what other organizations are doing. But you have to contextualize it. And how you do that is understanding the way it’s being used in your organization and the risks that it presents.

John Verry (40:47.658)

And I don't think people understand. Like, I was having a chat with a client the other day, and they were like, well, we block ChatGPT. I'm like, do you block Bing? And they're like, no. Bing's got ChatGPT built into it, right? So, you know, anyone who's using Edge is using ChatGPT. And I said, on top of that, a significant percentage of the SaaS applications you're consuming, right? And we see numbers like a typical enterprise is consuming a thousand SaaS applications.

What percentage of those SaaS applications are applying some level of artificial intelligence? Take your human resources tools, and that's where danger comes into play. If you've got an HR application that is scanning and reviewing and screening resumes, it's using AI. And if it's using AI and it's using a biased model, you could potentially have an issue there. Most of the security tools out there now

that are doing threat intel and assessing security logs and things like that, they're using AI. So there are so many tools that are using AI, right? Not only do we have to have that policy, but then we also have to, we've got to figure, like, isn't part of this all figuring out which vendors are actually using AI, and then making sure that they have the right control mechanisms in place to protect us?

Ariel Allensworth (42:10.76)

Exactly. So you can have ad hoc use of AI in the organization, employees using things like ChatGPT and Bing and Copilot and things like this. But then there may be applications that have been formally approved for use in your organization that have new features that leverage AI. And so if you're going to implement a policy, or even if you're going to do more, the next step is:

do an assessment. How is artificial intelligence being used in the organization? And you need to define what artificial intelligence means to this organization. There's a variety of definitions out there for what it means. So you need to establish what your organization defines artificial intelligence to be, and then go out and find where in your organization it's being used.

John Verry (43:06.37)

Gotcha. So that implies to me that updating your third-party risk management program, vendor due diligence program, whatever you're calling it, vendor risk management, to begin to ask those questions about the use of artificial intelligence, and how they are or are not complying with the good guidance that's out there, that would be another next-step recommendation, beyond understanding risk and getting that policy in place.

Ariel Allensworth (43:35.996)

Yeah, so that's going to be a really good opportunity to determine how artificial intelligence is being used in the organization. A couple of angles here. So if you ask your SaaS providers, "Do you leverage artificial intelligence to provide these services?"

Ariel Allensworth (43:58.264)

you may find in a lot of cases they say yes, and you didn't know this previously. And it may not be because they implemented artificial intelligence recently because of large language models, although certainly that's the case a lot of times. But maybe you're just discovering that they've always used artificial intelligence in some capacity, and because of the awareness that has come with the rapid development of large language models recently, you now have to pay attention. Okay, what does that mean? So there is…

Ariel Allensworth (44:27.564)

the aspect of understanding how we reach out to our vendors and understand how they're using artificial intelligence in the services they're providing us, but then also how they're using artificial intelligence within their organization, and what risk does that present just by them using artificial intelligence? So they may use ChatGPT internally. It may not be used in a way that is within the service provided to you,

but it might introduce additional risk for that organization being able to provide that service to you effectively.
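The vendor outreach Ariel describes can be reduced to a short question set plus a triage rule for the answers. A hypothetical sketch follows; the questions and the escalation logic are illustrative, not a standard questionnaire:

```python
# Illustrative AI due-diligence questions for a vendor review.
# Answers arrive as True/False per question key.
QUESTIONS = {
    "uses_ai_in_service": "Do you use AI to deliver the service we purchase?",
    "trains_on_customer_data": "Is our data used to train your models?",
    "internal_genai_policy": "Do you have an internal generative-AI use policy?",
}

def triage(answers: dict) -> str:
    """Rough triage: escalate vendors that train on customer data,
    review any vendor using AI in the service, otherwise just monitor."""
    if answers.get("trains_on_customer_data"):
        return "escalate"
    if answers.get("uses_ai_in_service"):
        return "review"
    return "monitor"

print(triage({"uses_ai_in_service": True,
              "trains_on_customer_data": True}))  # -> escalate
```

Real questionnaires would go much deeper (model provenance, human oversight, data residency), but even this crude split surfaces the "we didn't know they used AI" cases Ariel mentions.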

John Verry (45:03.606)

Yeah, and I like something you said, and I never thought of it: the sort of defining what AI is within the context of the organization, because I don't think people realize how pervasive it's become, right? You know, anyone who sits down in front of their TV and hits Netflix, that recommendation engine is artificial intelligence. I mean, you know, if you are talking to, and I won't use her name because she's sitting next to my desk, but if you're talking to a virtual assistant of that nature,

you know, that's artificial intelligence. If you're talking to your car, all of these systems are using AI at this point. So it's a lot more pervasive than we think, and I think that organizations need to be aware of that.

Ariel Allensworth (45:47.784)

Yeah, it can be really easy to get caught up in the hype of artificial intelligence right now. But what has happened recently is the rapid development of large language models, where it doesn't take data scientists to interact with these models. Anybody can interact with these models and find benefit in that. And so there are unique risks with large language models.

Ariel Allensworth (46:17.512)

And that's causing a lot of the reactions, like the executive order, the AI Act, and different standards and frameworks coming out to help people manage risks and opportunities with these things. But then that's also causing people to say, okay, wait a minute. There's artificial intelligence outside of large language models. How do we apply this to artificial intelligence that we've been using for years now?

John Verry (46:42.271)

One pointed risk I'll point out, which I just heard about through a client: they were piloting Microsoft Copilot, which I think is going to be a fantastic product. I'm excited to use it. I hope I get a chance to use it soon. However,

if you do not have the right restrictions on data sources within your organization, Copilot finds them and uses them. So they had a human resources folder that didn't have the right permissioning on it, that had confidential salary information in it, and somebody asked Copilot a question and was able to retrieve all of that data. So for anyone who's thinking about Copilot,

do understand that there are probably some things you need to do ahead of that to avoid anything unexpected.
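The Copilot incident John describes is ultimately a pre-existing permissions problem: the assistant surfaces whatever the asking user can already read. One simple pre-flight check before enabling such a tool is to flag sensitive folders readable by broad groups. Here is a hedged sketch over a plain folder listing; the paths, group names, and sensitivity flags are hypothetical:

```python
# Principals that effectively grant organization-wide read access
# (illustrative names; real tenants vary)
BROAD_GROUPS = {"Everyone", "All Company", "Authenticated Users"}

def overexposed(folders):
    """Return sensitive folders readable by a broad group, i.e.
    exactly the content an AI assistant would happily index and answer from."""
    return [path for path, sensitive, readers in folders
            if sensitive and (set(readers) & BROAD_GROUPS)]

listing = [
    ("/hr/salaries", True, ["Everyone"]),          # the scenario in the episode
    ("/hr/benefits-faq", False, ["All Company"]),  # broad access, not sensitive
    ("/hr/reviews", True, ["HR-Team"]),            # sensitive, properly scoped
]
print(overexposed(listing))  # -> ['/hr/salaries']
```

In a real tenant the listing would be pulled from the permissions API of the storage platform; the sketch only shows the triage rule, which is the part organizations tend to skip.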

Ariel Allensworth (47:35.584)

This is a really good opportunity to talk about how ISO 27001 is really relevant here and supports ISO 42001. So in this instance, access control, right? Governing access control for users, governing it for systems: that's been a lot of the context for decades now. And now we have to say, how do we

appropriately manage access for artificial intelligence systems? Because they are kind of in the middle between systems and users. It depends on how you implement them; it could behave like a user. But in the case of the HR file, it just being a permissions issue, that's just a simple information security control, but the context wasn't there.

John Verry (48:27.502)

But yeah, pretty cool. We beat this up pretty good. Is there anything we missed from your perspective?

Ariel Allensworth (48:35.824)

I did want to point out something else about ISO 42001. So we have Annex A, which is your controls, and then we have Annex B, which is the guidance on how to implement those controls. But there's also an Annex C and an Annex D. Both of those annexes are informative, so they're not requirements, but Annex C provides

really good guidance on, and examples of, AI-related organizational objectives, as well as what some of the potential sources of artificial intelligence risk are. So it's just another layer of guidance, and it helps organizations really understand the context of the AI management system. And then finally, Annex D contains some brief guidance on integrating the AI management system with other management systems, like

Ariel Allensworth (49:29.948)

It mentions 27001, 27701, and even ISO 9001. So it's just some extra information that really rounds out the standard documentation. So I think that organizations, especially those who have already implemented ISO 27001, should have no issue understanding how to really implement the management system. I think where the gaps can occur is especially:

how do you develop secure artificial intelligence, right? You have to have the right resources in your organization to understand that. And that goes back to the clause requirement of having appropriate resources, and appropriate competence with those resources as well. So I just wanted to give a shout-out to Annex C and D, because that goes kind of above and beyond what 27001 does.

John Verry (50:25.01)

I like your pointing out the 9001 as well, because increasingly we're starting to see people talk about using 9001 to ensure the quality of their SDLC, their development processes. And I think in light of the potential risk associated with AI, right, because you were talking about risks around decision-making that was around data, but we do have systems like autonomous vehicles, right? They're making decisions that can have life-and-limb implications. Or weapon systems, targeting systems, things of that nature, right? They all have life-and-limb implications.

So ensuring the quality of those development processes around the AI process, I think, is a fantastic thing. And I agree with you that instead of maybe having a Holy Trinity, a Holy Quartet might have been a better term earlier, where you're actually using all four of those standards that you talked about. So thanks for bringing that up.

John Verry (51:16.151)

So give me a real-world or fictional character you think would make an amazing or horrible... we can do CISO, we can do AI lead, we can do whatever you want.

Ariel Allensworth (51:26.628)

Um, so I would say I'll go with a fictional character as a CISO, and this may be controversial, but I guess he could go both ways, and I just picked out the qualities that made it go this way. So I think Darth Vader would make a horrible CISO, and here's why. He's ruthless, and he's effective, but ultimately, leading out of fear will build a rebellion. And so if that were to apply in an organization,

you are going to have a much more effective information security governance program if the people within your organization understand it and support it, versus being just forced to comply with it and it making their lives difficult. So I thought that's kind of a fun little way to do it. But I don't know, maybe Darth Vader would be an effective CISO. In the end, who knows?

John Verry (52:21.888)

He is a CISO who could, quote-unquote, use the Force. Come on. That's not a good CISO? I don't know. We can debate this.

Ariel Allensworth (52:29.751)

Maybe he can see the cyber attacks before they come.

John Verry (52:35.217)

Good. All right. If someone wanted to get in touch with you, what’s the easiest way to do that?

Ariel Allensworth (52:40.832)

Yeah, so you can reach me at ariel.allensworth at cbiz.com. Or you can look me up on LinkedIn. Just search A-R-I-E-L, Allensworth.

John Verry (52:55.106)

Sounds good, man. Appreciate you taking time on a Friday afternoon and shedding some wisdom here on AI. I appreciate it.

Ariel Allensworth (53:02.048)

You bet, John. I love talking about this.

John Verry (53:04.948)

I couldn’t tell. All right, man. Have a great Friday.

Ariel Allensworth (53:09.932)

Thanks. See ya, John.

John Verry (53:11.15)

Hey, hold on one second. All right, so thanks, that was good. Dude, you're very good. Thank you. No, no, and it tells, it tells. Like, you've got a passion for it, and passion is, like, you know, that's like leadership in a way, right? Like, when people know someone's jazzed about something. So this was really good. All right, so we're gonna do the three quick lightning questions, but what happened to the second one?

Ariel Allensworth (53:18.36)

Thanks. I do love talking about this stuff. It’s so interesting.

John Verry (53:40.438)

Shit. I swear something happened to my file. "What is ISO 42001?" "What kind of companies should pursue ISO 42001?" Oh, my, yeah, my, what was the second one? "What is ISO 42001?" My file got corrupted. "What is ISO 42001?" What's the second one?

Ariel Allensworth (53:48.18)

I have all three of them here if you need them again.

Ariel Allensworth (53:55.776)


Do regulations like the executive order and the AI Act, how do those apply to a company even if they're not developing AI?

John Verry (54:13.994)

Okay, thank you. I don't know why my file went bad.

John Verry (54:23.254)

All right, ready? So the idea behind this is that what they used to do is cut, they used to try to cut longer clips, they used to try to cut some 30, 45 seconds, but what they were just doing was truncating people in the middle of a sentence. So I was like, what? And they were like, well, no one speaks, no one gives us little... I said, I'll ask. And I ask guests to give me 30-to-45-second, no longer than one-minute, answers. And they were like, oh, you could do that? So, all right, so ready? What exactly is ISO 42001?

Ariel Allensworth (54:53.548)

So ISO 42001 is an internationally recognized standard for the implementation and maintenance of an artificial intelligence management system. The standard contains specific requirements, and guidance on how to manage artificial intelligence risk and how to implement controls that support those requirements, so you have an effective system for managing the risks and opportunities associated with artificial intelligence.

John Verry (55:26.966)

All right, second one. Do regulations like the executive order and the EU AI Act apply to a company even if they are not specifically developing their own artificial intelligence apps?

Ariel Allensworth (55:42.328)

Well, the answer is: it depends. So both can apply to a variety of organizations. For example, the EU AI Act applies to more than just companies developing AI. It can apply to organizations using artificial intelligence as well. So you don't have to be developing it, but you need to understand exactly how that regulation applies to you, and there are tools out there to help you do that. The executive order will

become more relevant to organizations as the specific deadlines that order specifies pass. So there may be new standards or regulations that come out as a result of that. So it's not clear at this time whether it applies to companies developing AI, or just using it, or both. So we'll have to wait and see.

John Verry (56:33.25)

Perfect. And last question. What kind of companies should pursue ISO 42001 certification?

Ariel Allensworth (56:41.336)

So if you are developing AI, or you are implementing AI in some fashion, whether it's just internal to your organization, or you're implementing it into a product, or you are developing and launching AI as a product, it's definitely a good idea to pursue ISO 42001. And especially if you're already ISO 27001 certified,

Ariel Allensworth (57:11.08)

it's going to be very familiar; it's going to be a little bit smoother of an uplift than if you don't have ISO 27001. And if you don't have ISO 27001 implemented in your organization, consider implementing it in addition to ISO 42001. Consider doing them together, because you're going to do a lot of the same type of work if you were to implement those separately, and this is going to help you establish that integrated management system that we talked about earlier.

And then finally, if you're not developing AI and you're not really implementing it formally in your organization, you may not need to implement ISO 42001 at this time. But definitely consider doing a risk assessment to understand how it's being used informally in your organization, and consider developing an AI use policy to mitigate those risks.

John Verry (58:02.914)

Thanks, man. I’m late for a 3.30. Thanks. Appreciate it.

Ariel Allensworth (58:06.11)

All right, take care, John.