April 27, 2026

Summary:

In this episode, John Verry and Mike Armistead discuss the evolving role of AI in cybersecurity, exploring both its potential benefits and risks. They delve into how bad actors are leveraging AI for cyber attacks, the importance of education in recognizing AI-related threats, and the need for robust governance and risk management strategies. The conversation emphasizes the necessity of preventative measures and the integration of AI into cybersecurity frameworks to enhance defense mechanisms. As AI continues to shape the landscape, the discussion highlights the importance of adapting to new risks while leveraging AI’s capabilities for better security outcomes.

Keywords:

AI, Cybersecurity, Cyber Attacks, Cyber Defense, Risk Management, Education, Prevention, Governance, Future of AI

Takeaways:

  • AI is transforming the cybersecurity landscape in 2025.
  • Bad actors are using AI to enhance their attacks.
  • The speed and scale of AI-driven attacks are unprecedented.
  • Defenders need to catch up with AI advancements.
  • Education on AI risks is crucial for future generations.
  • Preventative measures are essential in cybersecurity.
  • AI can assist in risk management and governance.
  • Understanding your business context is vital for cybersecurity.
  • AI can help mitigate risks to the business effectively.
  • The future of AI in cybersecurity is both promising and challenging.

Bio: 

Mike Armistead is a serial entrepreneur and corporate leader who has built and grown technology companies for more than thirty years. His career began at Pure Software, Reed Hastings’ first startup before Netflix, and later at Lycos during the early internet boom. He went on to co-found Fortify Software, which grew into a category leader before being acquired by HP. Mike then co-founded Respond Software, which was acquired by FireEye for $186 million in 2020. When FireEye split into Mandiant and later became part of Google, he continued to focus on the next major shift in technology: artificial intelligence.

Today, Mike is the co-founder and CEO of Pulse Security AI, where he helps organizations close the gap between technical security tools and the broader business strategies they are meant to support. A Stanford-educated engineer, he has also coached more than 500 youth sports games, an experience that shaped his belief that leadership is about matching strategies to people and situations. His career reflects persistence, timing, and a commitment to building companies that last beyond short-term trends.

John Verry (00:01.201)
Hey there, and welcome to yet another episode of the Virtual CISO Podcast. As always, your host, John Verry, and with me today, Mike Armistead. Hey Mike.

Mike Armistead (00:09.89)
Hey John, glad to be here.

John Verry (00:12.044)
Yeah, glad to have you on. Thanks. So I always start simple. Tell me a little bit about who you are and what is it that you do every day.

Mike Armistead (00:19.308)
Yeah, just a bit about me. I'm currently the co-founder and the CEO of a little company called Pulse Security AI. And I guess I need to describe myself as a serial entrepreneur at this stage. I've co-founded three cybersecurity companies over the last 20 years. But maybe uniquely, I've stayed at the companies that have acquired us for many years post-acquisition. So it's given me some good perspectives along those lines.

John Verry (00:46.796)
I share that attribute. I’ve stayed at CBiz. We were Pivot Point Security. We were acquired by CBiz. And I decided to hang around a little bit as well. So I understand that thought process. Before we get down to business, I always ask, what’s your drink of choice?

Mike Armistead (01:01.154)
Yeah, good.

Mike Armistead (01:08.256)
You know, I'm a Manhattan drinker. I do, you know, I guess maybe I graduated over time, but, you know, I like the uniqueness of it. Different tastes, you know, a little sweet, a little bitter, that kind of thing.

John Verry (01:20.864)
Yep, I'm a Manhattan drinker as well, but I always have a bottle of Carpano Antica vermouth. If you haven't had Carpano Antica, it will absolutely change your Manhattan experience completely. It's the best sweet vermouth in the world; it comes out of Italy, and it's phenomenal. The other thing I did recently, I was in Italy, in Modena, and I bought a vermouth that was aged in

balsamic vinegar casks. And I've been doing either Manhattans or Boulevardiers. I like a Boulevardier. Have you ever had a Boulevardier? Okay. Yeah. So I do my Boulevardier now with that.

Mike Armistead (01:53.677)
Yeah.

Mike Armistead (02:01.231)
yeah, yeah.

Mike Armistead (02:05.902)
Yeah. You know, another one, just a little trick: they're called Grand Manhattans, but you use half vermouth, half Grand Marnier, and then you use orange bitters with it. It's really good. Yeah.

John Verry (02:23.436)
Ooh. Now you gave me something to try tonight, because I do like Grand Marnier. I mean, that orange flavor is going to kick it up a notch, which is going to be interesting. It's funny, because you see it a lot with bourbon drinks that will add apple, right? But rarely do you see them add the orange. All right, I've got to try that. All right, so let's get down to really what we're here to chat about, which is not Manhattans and bourbon.

Mike Armistead (02:25.71)
There you go.

Mike Armistead (02:31.618)
Yeah. Exactly.


John Verry (02:50.156)
So, I would say 2025 is definitely sort of the year of AI, if you will. It’s certainly taken off, I think, in some cases for better, perhaps in some cases for worse. And I thought it would be kind of an important thought exercise to talk with somebody who’s been around the industry for a bit to think about whether AI will have a net positive or negative impact on cybersecurity postures, if you will. So, oh, I was gonna say one thing.

Mike Armistead (03:16.727)
No, it’s a.

John Verry (03:19.158)
Let’s start with the most obvious question about that is how are bad actors using AI today, right, to enhance their cyber attacks?

Mike Armistead (03:29.342)
Yeah, I guess the way they're using it mostly, and I'll kind of get to the end result of it, is that they can do things at a scale or a speed that was never really thought of before. You know, I think it had always been, as a defender: yeah, the attackers have a lot of time, they've got a lot of patience, things like that. Now you add to it that they can

basically do denial-of-service-type things at scales that we haven't really thought about or imagined. Or, and that's something that happens quite often, they can use the infrastructure, a new kind of infrastructure, to do ordinary old attacks and get at vulnerabilities that you would think were solved a while ago. AI kind of brings another level to it.

I'll tell you, the AI, I mean, it's everywhere. It is truly in things that are unique, like deepfakes. You know, we have prompt-injection types of attacks that are really pretty scary. But like I said, there's plenty of other things where they're really just making use of a vulnerability that would be, you know, improper input validation, or misconfigured types of servers that allow for access that shouldn't be there. Those kinds of things are still very real

in the AI world.

John Verry (04:56.256)
Yeah, yeah, you can see it where, you know, you've got products like Horizon3.ai, which is sort of a pen testing tool that uses AI to automate some of the attacks. You can imagine a clever person who is attacking you using similar tools, or creating their own tools, to do exactly that. Right? So, you know, these vulnerabilities that before may have been more challenging, or taken too much time, or you didn't

Mike Armistead (05:04.823)
Hmm.

Mike Armistead (05:16.736)
Absolutely.

John Verry (05:25.72)
fuzz it the right way, you didn't come up with the right combination. Perhaps they're going to come up with something that they wouldn't have come up with before, correct?

Mike Armistead (05:32.526)
Correct. And I think it's the age-old problem still. We're in an arms race, as always, and the attackers right now are being innovative and using these things. And I think we're a little bit behind as the defenders in just how we can detect, and hopefully, even before the detection happens, have some preventive measures that just reduce the blast radius of anything they're trying to do.

John Verry (05:59.52)
Yeah, the other thing too, if you look at, I saw something where one of the major, I won't use their name in case I'm wrong, one of the major AI companies, through vibe coding, like 80% of all the new code that they're producing is actually AI-generated. So if we think about that, again, turning it negative, if a malicious individual was using those same technologies to write malware, right?

You know, it's kind of scary, right? You know, what's next?

Mike Armistead (06:32.331)
Yeah, I mean, even for my team, in this day and age, we make use of AI to assist our developers. Now, it's not all a panacea there. However, if you think of the sophistication of attackers, it does reduce what that sophistication has to be, because they can just imagine how they might do an attack and have the coding tools help them

instantiate what that attack could really be. And they can do it much more rapidly. In fact, if you think of something like vibe coding, and this is great to point out, it is something that you can do very rapidly, and you can kind of get to good enough. On the enterprise side, when you think about vibe coding, that isn't typically enterprise-ready software.

I mean, there still need to be all the things that go into having resilient software when you're thinking about it as an enterprise. So in some ways the attackers also have an advantage with vibe coding, because they don't necessarily have to have that level of rigor. No one's going to go after their product. So it's a big advantage to them.

John Verry (07:46.646)
Right. Right. That’s a really good point. Yeah, that’s a really good point.

Yeah, the other thing which is interesting, we did talk about speed and we did talk about scale. The other thing which I'm seeing, and there have been some interesting stories on this, is AI being used to slow down attacks, right? You can use AI to build trust over an extended period of time at little to no effort, which then allows you to gain trust and then exploit trust, right? So, like, I saw a story recently where an individual was approached online,

whether it was LinkedIn or some other site, about a job. And they went back and forth with text messages and emails. And then finally, the person sent the link to the application, but it was a malicious link. They were phished into giving information that they shouldn't have given. I think I might've been the subject of a long con recently, and embarrassingly almost fell for it.

Because I got contacted on LinkedIn by someone saying, hey, I know you're somebody in security, you've been doing this a long time, would love some input on a new product we're thinking about bringing to market. I usually ignore that kind of stuff, but the person gave two people's names that I know, and I'm like, okay, maybe this is somebody, you know, "I was referred to you by..." So I moved this to email. I went back and forth for two or three weeks on emails with this individual, you know,

And then finally he said, okay, we're here with this, what do you think of this? And he sent me over a design. It was a PDF. And I went to click on it, and I was like, what am I doing? Let me send this to somebody. And I sent it to someone on our team, and they were like, yeah, it was a malicious document. So I do think it's really interesting, all of the ways that people can use AI. And I think most of us are not yet thinking through all of those different risk scenarios.

Mike Armistead (09:46.486)
For sure. I think, you know, even separate from cybersecurity, I often think about what we have to do to educate ourselves in this new world where there's a lot of this usage, and critical thinking, just like you did at the very end, is so vital. And if I could think of a skill that I'd start to teach junior high and high school

kids, you know, that might help us way past that, it would be this critical thinking. Don't believe this. And it's unfortunate that it gets to a point where you have to be questioning this. You know, you're going to be verifying, using different angles, on almost everything these days if you really want to stay safe. You can't believe the first thing that you see. And AI really does help accelerate that happening.

John Verry (10:43.852)
Even just now, you jump on a social media site, and you know what percentage of the videos you're seeing now, whether you're on TikTok or Instagram Reels or any of these? You see these videos of, you know, a child, and a lion jumps out of nowhere and attacks it, and the dog saves the day. And, you know, like, that wasn't real. There was the, I don't know if you saw it, but my brother-in-law was like, oh my god,

Mike Armistead (11:06.411)
Yeah.

John Verry (11:13.292)
this tidal wave coming over the top of a cruise ship. I'm like, dude, that wasn't real.

Mike Armistead (11:23.083)
Yeah. You know, it's funny, I think we're all maybe subject to the wave thing. I think that's the first one they've all started on. But even then, in some ways it's like, okay, that's clickbait, you know. But like you were mentioning, it gets serious when they're using social engineering techniques that are long tried and true, but now they have this feeling of reality to them where you're just like, wow, is this…

John Verry (11:30.369)
Yeah.

John Verry (11:36.443)
Yes.

John Verry (11:42.134)
Yep. Yep.

John Verry (11:51.424)
Could this be real?

Mike Armistead (11:52.918)
Like you said, you had two people you knew that they referred to, it sounded very legit, you know, and…

John Verry (11:58.636)
And now, like, I mean, if you think about it, one of the reasons why spear phishing, right, or whaling, was not as popular was that it took more time and energy and research for someone to do that type of work. Now you can have the AI do that, right? So it is interesting.

Mike Armistead (12:12.395)
Yeah. Yeah. I mean, right. And think about banks and other financial services. In fact, I feel a little bit better about, you know, wiring money now. A lot of times they take you through multiple steps, and you're maybe even holding up your license next to your face, you know, all those things. And it's an ask. Good ones ask a question that is not relevant, completely,

John Verry (12:36.886)
Yep, exactly.

Mike Armistead (12:38.739)
and see what the response is, because that's a good way of spotting a fake, because it just doesn't know how to deal with that kind of thing. So, I mean, as we talked about, I've been in this for a while. It still feels like, and I hate to bring it back to this, but it's kind of The Art of War: knowing yourself and knowing your attacker, you don't have to worry about a thousand battles.

But if you don't know one side or the other, you've got to worry about every battle. And I think that's the situation we're in. I think, interestingly enough, and I've kind of developed this view over the last few years, we've really gone to the detection side of things as the defenders. And I'm glad I think we're starting to see the prevention side

come back a little bit more, which is, you know, think about how you have to set things up without having to just detect it. And I know it's a truism that you almost have to think of yourself as, they're already in. I'm not disputing that, but it's more of, boy, think about your architecture. Think of something like ransomware. Have you actually done an exercise where you've tried to recover

from a complete backup? Are you segmented in a way that decreases the blast radius, things like that? I mean, these are practical kinds of things, and they're preventative. And I think the one thing we can also realize is that whether it's AI, or an attacker using AI, they're going to go for the low-hanging fruit. And so I do think there should be a swing back to preventative types of things, because I think it can help solve a lot of these things.

John Verry (14:33.834)
Well, that's a natural pivot to the next topic. So we've been talking about how the bad guys are using things like deepfakes, and if you don't know about deepfakes, please learn about them: videos and audio that are indistinguishable from reality. And if you're not changing your processes, let's say for accounts payable, when someone changes an account, things of that nature, you're going to get yourself in trouble. So let's pivot to the other side of the fence, right? The side that you live on:

is the answer to the challenges that we just talked about using AI to combat these issues, right, and to improve our cybersecurity posture? And what's interesting is, you know, when I was thinking about this, I was thinking about the standard things like anomaly detection; these tend to be more, like you said, detective. So as you're thinking this through, give me some examples of where we're using this from a detective perspective. And can we use AI from a preventative perspective? That would be an interesting question.

Mike Armistead (15:29.005)
Yeah, great. The previous company, the one I was in called Respond Software, we were only short on the timing by, I don't know, seven years. We started in 2016, and we were an AI assistant for a tier-one SOC analyst. So it was all about, can we help detection? And the interesting part of that, and I think this is where AI can really make a play,

is that with security, we know so much of it is about the detection, but also the context that it's within. I know situational awareness is a common term, but your situational security means everything. And so, yes, AI can make a big play on top of the tools you maybe have today, to do something that's always been hard, which is:

how do you bring context into these generic signals that you're getting from your detection tools? You know, the cornerstone of a SOC being a SIEM has long tried to do that kind of thing. But I think you can do it at almost a higher level, because you can bring more context in using AI, to be able to say, hey, I need to correlate some signal I'm seeing way down here that seems a little odd, but,

boy, if I correlate that to my IAM or to some other tool set, then I'll feel better about the detection itself. So I think on detection, there is a lot. And you see a lot of tools. I mean, everybody, you go to the shows, everyone's going to claim they do AI.
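
The correlation idea Mike describes can be sketched in a few lines: take a low-confidence detection from a generic rule and re-score it using context from a second source, here a hypothetical IAM feed. All field names, thresholds, and weights below are invented for illustration; they don't correspond to any particular SIEM or IAM product's API.

```python
# Illustrative sketch of context-aware triage: a weak detection signal is
# re-scored using corroborating context from a second source (a made-up
# IAM anomaly feed). Field names and weights are invented for this sketch.

def rescore(signal: dict, iam_events: list[dict]) -> float:
    """Boost a detection's confidence when IAM context corroborates it."""
    score = signal["base_confidence"]          # e.g. 0.3 from a generic rule
    user = signal["user"]
    corroborating = [
        e for e in iam_events
        if e["user"] == user and e["anomaly"]  # same user flagged by IAM
    ]
    if corroborating:
        # each corroborating IAM event raises confidence, capped at 1.0
        score = min(1.0, score + 0.4 * len(corroborating))
    return score

signal = {"user": "jdoe", "base_confidence": 0.3, "rule": "odd-hours-login"}
iam = [{"user": "jdoe", "anomaly": True, "detail": "new-device-mfa-reset"}]
print(rescore(signal, iam))  # the weak signal becomes a credible alert
```

The point isn't the arithmetic; it's that the join between "odd signal way down here" and "my IAM also flagged this user" is exactly the kind of cross-tool context an AI system can fetch and apply automatically instead of an analyst doing it by hand.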

John Verry (17:12.687)
Everybody. It's the same way as when we all had a bunch of security tools and compliance came out, and everyone put blue compliance paint on everything. Now they're putting purple AI paint on everything, whether it's real or not. So we agree with that. So clearly I can see the detection; that's intuitive. Can you think of ways to use AI in that preventative sense? Because I agree with you, we have gotten to a point where

Mike Armistead (17:22.989)
Exactly, exactly.

John Verry (17:40.908)
I think it's a healthy exercise to say at some point we're going to be breached at some level, or we're going to have an incident at some level. Breach has a privacy component to it, so let's just say we're going to have an incident. So being prepared for the incident that will eventually happen is good practice, right? But we don't want to get to a point where we're just assuming incidents and we stop on the preventative side, right? And I like what you said: the lowest fence is the one they're going to climb over, right? You don't need to run faster than the bear, you just need to run faster than the guy that's with you.

Right? I think that's the same thing from a cyber perspective. So to your point, preventative still has a tremendous amount of value. So where can we use AI from a preventative perspective?

Mike Armistead (18:22.517)
Yeah, no, I think there is a great use of AI. And if you think about what it's actually good at, it's things that we have to put humans on to decipher. There's so much unstructured data in any security organization. It might be your policies. It's going to be your assessments, whether they be pen tests or whether they be even just audits

John Verry (18:24.652)
Hmm

Mike Armistead (18:51.853)
of what you have, and things like that. But those tend to be snapshots, and you don't really see trends from that. And AI is really good at pulling this kind of information out of these unstructured sources and then watching them in a continuous way, so you're able to start to see the trends with all that. It's funny, as we age, you're…

John Verry (19:19.437)
I'm not aging, are you? Okay.

Mike Armistead (19:20.917)
I know, besides you, John. But for those of us who have to go through it, you know, my yearly physical is now a bit of a numbers game, not an absolute, like, are you in the band? It's watching where the numbers go. And that's a little bit, you know, think about what a security program should be. The end goal is you're trying to mitigate the risks to the business, not just, you know, detect things and

do a bunch of metrics that are common in security. It really is to help mitigate that risk. So AI can really help out in those things that most organizations already do, which is: let's put some forward thought into what our policies should be, along all these different lines. Now let's see if we're actually adhering to them. That's where I think AI can have a place, because you don't have to consolidate all this data. Something else AI is really good at as a technology is,

it can go out and get the data, bring it back, and do the analysis there. Or it can do the analysis at the edge. It's good at those kinds of things. You know, so much over the last 10 years or so, we've been so focused on, we've got to consolidate everything and normalize it so we can query it. And I'll say something contrarian, maybe it's contrarian: that is never going to work. Data lakes, they're good, but you can't get everything into them.

You don't have enough money. You don't have enough ability to even make all those translations. AI can do that. It's good at that kind of thing. You can just give it an example, and it can pull what it needs to pull out of that. So I think AI is good for that kind of thing. I'm also going to say, and this is a bit about what we're doing at this company, I think: where's the assistant for the leaders? I mean, we have a lot of assistants for the

practitioners, but that AI assist in what the leader's trying to accomplish can range from everything from helping them engage with the board, so being on top of their game, to looking externally: what's in the news, what's going on out there. Those kinds of analyses take somebody to take the generic thing and relate it to their situation, and AI can do some of that.

Mike Armistead (21:46.294)
It can also just tie together the different parts, all the way from that kind of more generic, I'll even call it more GRC-like, risk, all the way down to the operational issues that you're finding today. Can you tie those into these categories? And I think there's an assistant there for the deputies and the managers and the leaders to help pull some of that together and give it a story, not just, you know, signal that's going on.

John Verry (22:15.148)
Yeah, it's an interesting perspective. So on the one side, when you were talking about the AI simplifying the process of ensuring things are happening, it's sort of like compliance, but not compliance from a regulatory perspective; compliance with the cybersecurity program, right? Which is different, right? You can have a cybersecurity program that's intended to address security, intended to address compliance with laws and regulations, or both. But, you know, the reality is, if we architect

the optimum cybersecurity program, right, a perfectly balanced cybersecurity program where each control is directly proportional to the risk that we're trying to mitigate, then once we hit that state, any deviation from it, either a change in the context of the inputs, right, the way the business operates, the type of data that we're processing, laws and regulations that change outside the organization, client contractual obligations, things of that nature, or a failure to execute a control per the definition of the cybersecurity program,

either one of those means we're incurring risk. And it sounds like what you're saying is, can we use AI to keep us aware of those things? I mean, imagine an AI assistant where I could say to it, hey, we're a 450-person healthcare company, doing business with these types of organizations in this location, doing this, doing that. And the AI could turn around and say, hey, have you evolved your program to do X, Y, and Z? Or you could ask the AI, hey,

where are we at from a cyber perspective? Are we doing all the things that we're supposed to be doing? And it could go and find all of that evidence. Especially if we have any form of, like, it can read a policy, and if the policy referred to an artifact that lived in a location, the AI could tick and tie, I think is what you're saying, and tell us whether or not we're doing all those things.
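
The "tick and tie" idea John describes, read a policy, find the evidence it points at, and flag what's missing, can be sketched mechanically. The `Artifact:` convention, file paths, and policy text below are all invented for illustration; a real system would use an LLM to extract references from free-form policy language rather than a regex.

```python
import re

# Toy "tick and tie" check: scan policy text for referenced evidence
# artifacts and report which ones are missing from what was collected.
# The "Artifact:" marker and these paths are made up for this sketch.

POLICY = """
Backups are tested quarterly. Artifact: reports/backup-restore-test.pdf
Access reviews run monthly. Artifact: reports/access-review-2026-03.csv
"""

def missing_evidence(policy_text: str, collected: set[str]) -> list[str]:
    """Return artifact paths the policy references but nobody produced."""
    referenced = re.findall(r"Artifact:\s*(\S+)", policy_text)
    return [path for path in referenced if path not in collected]

collected = {"reports/backup-restore-test.pdf"}
print(missing_evidence(POLICY, collected))  # flags the missing access review
```

Even this trivial version shows the shape of the check: the policy defines what should exist, the evidence store defines what does exist, and the gap between them is exactly the "deviation incurs risk" signal John describes.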

Mike Armistead (24:02.963)
Exactly. I mean, think about, let's take a risk category that everyone is facing right now: your supply chain. No matter how you define your supply chain, there's supply chain risk. And you may have given questionnaires to every one of your top vendors or your most critical vendors. But do you check that? How often do you check it?

Their environment changes too, for exactly what you just said there, John: their business changes a little bit, and there's an efficiency they can get by using AI, or by using anything, really, that increases their attack surface. And suddenly you've increased your attack surface and didn't even know about it. This is the kind of thing where we can start to apply AI to help do what really is a manual kind of task.

John Verry (24:57.462)
Yeah, it gets really interesting, right?

You know, even once we get to a point where we're using AI in those very positive examples that you just came up with, right, I think we also need to recognize, and maybe you can use AI to solve the AI risk, that AI in and of itself presents risk, right? My favorite line to people, and it's amazing how many people have to stop and think about it, because I don't think they think of it this way: when you prompt AI,

Mike Armistead (25:19.063)
Mm-hmm.

John Verry (25:31.34)
you're getting a prediction, not an answer. And AI wants to be pleasing, and it will come up with an answer even if it doesn't know it, which is why we have this concept of hallucination, right? It can make mistakes, or it can hallucinate; it wants to provide you with an answer to the question, to the problem. So if we're going to use AI, right, is the solution just a comprehensive AI governance program

that we would need to put in place to ensure against risks like model poisoning and bias and hallucination, and, more importantly these days, I think, in a security context, explainability, right? So if I cut off access, or I shut a system down, or if I said a particular threat was benign, right, that explainability component is critical. So

what's the solution to using AI to solve these other problems without it creating its own problems?

Mike Armistead (26:35.437)
Yeah, I wish there were a silver bullet here, but there's no silver bullet. Part of it is, I'll explain it maybe this way: so much of security is still, to use an old football analogy, three yards and a cloud of dust, right? You're needing to do

John Verry (26:38.444)
Yeah, I've been silver bullet hunting for 25 years in cyber and I haven't found one yet.

Mike Armistead (27:01.037)
the right kinds of checks at the right times. And they tend to be kind of manual, again, according to your situation. So I don't think we're going to get one thing, or even a set of things, that's going to solve this. It's going to be applying the AI to do, at a much higher speed and at a much higher accuracy rate, and you've got to watch that,

the things that you would want to do if you had unlimited people and unlimited budget. You'd want to be doing some of these checks all the time, but you never could get around to it. Think about it: I want to check all my IT tickets just to see if there's a pattern there, so I can know if something is compromised. You're never going to put a person on that; they'd quit on the third day, sort of thing. But

But an AI is not going to quit on you. Now, you bring up some really good points about AI itself. We're using AI for our own applications, and boy, do we see the challenges you get into. I mean, everyone kind of thinks of AI, because of the consumer side of it, as almost a single-prompt kind of thing. What's more effective is an AI system, whether you call that an agentic system or something like that.

But you really need to watch, you know, your guardrails are super important in this. And some of the things we do: you limit inputs into it. Like you said, if you start treating your model or your AI as the database of answers, you're soon going to find yourself in real trouble. But what it can do is tasks for you, and it does a great job of summarizing things. But even

on the AI usage, we put truth directives on it. If people don't know what that means, basically you tell it not to lie to you. Or what you end up doing is telling it: if you've made this up, tell me you've made it up. So you have: I've verified this against this source, I have a good reason that this is good, and then: yeah, I made this up. And it wants to please you, right? So it'll actually tell you these things, too.

Mike Armistead (29:24.839)
But there are many other kinds; usually it ends up being a series of prompts that you do to refine it. And again, if you go talk to developers about how they're using AI, that's how they use it. It might generate something to start, but the rest of the time it's refining, and you have to be very careful with it, because it can lose its way, and different models follow directions better than other models. So

there's a lot of diligence in this area. And what we hope to do, and we hope others do it too, is that the suppliers of the defender AI should take on a lot of that nitty-gritty stuff that compromises AI, so that you don't have to think about it if you use one of these tools.

John Verry (30:19.958)
Right, but if you look at the laws and regulations that govern AI at this point, and I mean the New York City Bias Act, the Colorado AI Act, those, they all effectively say that if you use a third-party AI and it does something wrong, right? It discriminates against a group of people, in the New York City Bias Act as an example, you will be held liable for it unless…

Mike Armistead (30:30.914)
Mm-hmm.

John Verry (30:46.188)
you have, in their case, what they refer to as a bias impact assessment; in the Colorado AI Act, I think they use the term comprehensive AI risk management governance program. So, like, we work with a lot of SaaS companies, and most of them are adopting ISO 42001, right? Because it’s a certifiable framework and it’s the way they can show their customers, like, we take AI governance and AI risk management seriously. So I agree with you completely.

It’s our obligation as consumers of AI to make sure that we’re using it responsibly, much like any SaaS model, right? Roughly nine-twelfths of responsibility, if you look at the Microsoft shared responsibility matrix, sits with the third-party service provider, and roughly 25% is ours. It’s your job to make sure, if you’re using Grammarly, and if you are using Grammarly, pay attention to this, it’s your responsibility to say: do not train on my data.

And I can point you to law firms that have used it for M&A that have not clicked that box, right? And they may pay the price for that. So I agree with you completely. I think we just have to do it in a responsible and cautious way, and we need to vet and validate that the third parties we’re relying on are doing the same.

Mike Armistead (32:00.814)
Well, what you’re touching on is, again, how do you bring your policies to life? Because, you know, everyone wants to do the right thing, or most people want to do the right thing. And it’s true. So in these early days, there is both guidance needed on what kind of policies you should be enacting, and, I mean, large organizations need to do the education. You know, I know the LLM works better for you if you put this context in it.

John Verry (32:09.534)
If they know what the right thing is. I mean, that’s the other side of it.

Mike Armistead (32:30.625)
But that context is proprietary data or sensitive data, and you just can’t do that, because they’re going to take it, and now it’s out there. And we were talking about, we started the show on the cleverness of the attackers. There are these prompt injections that make use of the entire flow and somehow pull data out of it. I guess it was Echo-something, I forget the whole thing with Microsoft, but it’s like,

John Verry (32:54.912)
Yeah, some of those attacks have been scary.

Mike Armistead (32:59.839)
It’s in the chain of it. So what you’re having to do, and this is back to your question that we started with on the preventative side: you have to bring these things to life. These policies can’t just be documents where, yeah, okay, checkbox done. No, no. It’s like: are they being followed? Can you tell if they’re being followed by other indicators? Maybe you have DLP types of tools that will help you do that, or your endpoint might be helping you do that. Certainly

you have proxies that will know who is using what. The hard part has been getting all that data and being able to process it. And so it really is about connecting these dots in a way, and doing it such that you get ahead of things.

John Verry (33:46.412)
One thing, and I apologize, I did not have time to spend a lot of time on your website before this, it’s been a heck of a crazy couple of weeks: give me the one-minute elevator pitch. Tell me what Pulse Security does, because I’m professionally curious.

Mike Armistead (34:02.061)
The 30-second one is: we’re still in stealth, so I’m not going to tell you anything. No, but I’ll tell you generally where we’re aiming. It’s basically an AI assistant for that security strategy team, or the deputy CISOs, or the people that have to do the work when a board member comes to the CISO and says, hey, I read about this hack in the Wall Street Journal or the Financial Times. Are we susceptible to this?

And even that, it’s nothing earth-shattering, but even that little bit takes a deputy off for a week to investigate it and check across all these systems and plays and stuff. And so our system is basically a bunch of prompt books that are secure in themselves, that are trustworthy, that the leadership can use to help define their program and know that the program is progressing

in a good way. So we are on the preventative side a little bit more, but we want to make use of those detection tools, because they get a lot of the data; we’re trying to connect the dots.

John Verry (35:03.788)
Gotcha, so.

John Verry (35:13.45)
Right. Yeah, so, you know, I definitely think that is… You’re starting to see people do some really interesting things. Even if you just go to ChatGPT and load in some additional models, there are some pretty interesting cybersecurity models that people have loaded out there. So I can imagine, if you could take a foundational model,

you know, enhance it, provide some enhancement to it, put a RAG in place to support it and give it some of the context and knowledge that a foundational model doesn’t have access to, but still leverage the ongoing development of the foundational model and its internet grounding, so that we have a common thing: asking that question, what are my peers and other healthcare organizations of this size doing for this particular problem?

And instead of having to pay someone like me to do it, they can ask that question and get an answer that would be, ideally, 95 percent plus as good as the answer somebody like me might give.
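The retrieval-augmented pattern John sketches here, a foundational model plus a retriever over context the model doesn’t have, can be shown in miniature. This is a toy sketch under stated assumptions: a word-overlap scorer stands in for a real embedding model, and the corpus, function names, and prompt wording are all invented for illustration.

```python
# Minimal RAG sketch: retrieve the most relevant documents for a query,
# then ground the model's prompt in only that retrieved context.

def score(query: str, doc: str) -> float:
    """Fraction of query words that appear in the document (toy relevance)."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents by overlap score."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Instruct the model to answer only from the supplied context."""
    joined = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say so.\n"
        f"Context:\n{joined}\n\nQuestion: {query}"
    )

corpus = [
    "Peer healthcare orgs of similar size typically use MFA and EDR.",
    "ISO 42001 is a certifiable AI governance framework.",
    "Our third-party risk register lists 40 active vendors.",
]
query = "What are peer healthcare orgs doing?"
docs = retrieve(query, corpus)
prompt = build_prompt(query, docs)
```

A production version would swap the overlap scorer for an embedding model and a vector store, but the shape, retrieve then ground, is the same, and it is what lets the foundational model answer with context it was never trained on.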

Mike Armistead (36:22.221)
Yeah, I mean, although no one’s going to replace your judgment because you’ve been in the industry for a long time. You’ve seen a lot of things, but can it get 80 %?

John Verry (36:28.148)
No, look, I mean, it can. Who was it, one of the smarter people in the industry, that said the field of management consulting could see a thousand-x reduction in revenue, right? If you think about it logically, the world of consulting is based on information asymmetry, right? I have information that you don’t, right? The internet

automatically collapses that. Like, when we first developed our ISO 27001 service offering, there were not that many other people that really knew it, really knew how to get people certified, knew the nuances and things of that nature, right? And then what happens is, 10 years later, 15 years later, now there are a lot of people that can do it, right? And it becomes more commoditized. So your ability to generate revenue, the amount that you can generate on a per-project basis, goes down because

there are lots of other people that can do it. Realistically, the internet is a mechanism for reducing information asymmetry, and now AI is a way of leveraging that information to contextualize it even better, so that information asymmetry comes down a bit more. So I do look at what consultants are doing, and we’re looking at it internally and saying, okay, which parts of this business are defensible and which parts are going to be harder to defend?

And I think anybody that’s not smart enough to see that is somebody who’s going to be in trouble.

Mike Armistead (37:58.136)
Well, I agree with you, and I do think you bring up some good points. I think maybe some of the uniqueness that’s going to happen here with this tech, and why I decided to jump back into the game one more time, is… so there’s giving the LLM context it doesn’t have, so it can make use of that. And that’s great. And usually those are things that are, you know,

what a CISO has got to think about, what’s around the corner at times. So it’s giving it access to think about what’s around the corner, whether it be new threats or these different kinds of regulations that are coming, and requirements and things. There’s all sorts of stuff it can help with there, and help with the strategy. The really unique part, having been in enterprise software my whole life, is I think this is the time when it can also become very bespoke. This software, these systems, can become bespoke to

the organizations themselves, because you don’t have to give this to the LLM, but you’re going to have your own knowledge base, your own knowledge graph that’s about you. And now you’re using the LLM to fit it all together. So it’s about you. It’s not about all the other healthcare organizations. It’s you: your architecture, your views, your people, your sizes, where you operate, all that kind of stuff. And I think…

John Verry (38:59.552)
Yeah, absolutely. You want to have your own knowledge base. Imagine it, right? Yeah. Yep.

Yep. Yep.

John Verry (39:17.568)
Yep. Yeah. Imagine you’ve got, you know, a GRC solution: Drata, Vanta, Secureframe, LogicGate, whatever it is. It’s sitting in your GRC platform. It can read your policies. It knows who your third parties are, right? It knows the status of your third-party risk management program. It can see your compliance. Yeah, absolutely. So now what you’ve got is, you know, Pulse Security’s RAG of special sauce, right? You’ve got

internal data sources to ensure that we’re completely contextualizing it. We’ve got the internet, right, to further contextualize it to things that are happening outside our doors. If you think about it from a SWOT perspective, it knows your strengths, it knows your weaknesses, and then opportunities and threats are external to you, and it knows about those. It knows the special sauce, because, you know, Mike’s done this for 100 years and Mike and his team put the special sauce in over here in this RAG. Yeah, I could see that. The story makes complete sense to me.

Mike Armistead (40:13.197)
Yeah, Good. Yeah.

John Verry (40:15.244)
That’s what I’m saying. I can see the future being radically different than where it is. And that leads to the last question I was going to ask you; the way we got to this point is perfect. What’s the one thing security leaders should be doing today to prepare for an AI-driven future? And is the answer the same for a business leader and an information security leader, or would you have different answers for the two?

Mike Armistead (40:45.547)
I’d say it’s still the age-old challenge of knowing what your business is doing, because, in fact, you’re there to mitigate the risks to them, in the business sense. And they’re making use of this newer technology for sure, on the AI side. So it’s: what are you doing to help mitigate those risks to your business? That’s not anything new. It’s just that it’s happening.

You know, it’s happening, first of all, very top down. I mean: use AI, everybody, go. Everyone’s got this mandate to go use it. When you think about the cloud and the transition to the cloud, right, it wasn’t like that, top down, go. I mean, there might have been a little bit of that, but it was more of the business and security and everybody kind of realizing, you know, we’d save a lot of headaches if we actually let someone else host this thing.

And it may save costs and things like that. So it’s happening very rapidly, at big speed. And I just think that you need to understand the business, understand your situation. It’s going to get back to: what are your particular crown jewels, the things that you have to protect? Know those really well. And you have to know your access; access is becoming the thing, because

we’ve got these distributed systems, whether it’s the models or the clouds or things like that, so you need to know about access. And whether you go all the way to zero trust and things like that, I think you have to these days. You have to know what systems you’ve got in place and what they’re doing, and your laptop, the phone, things like that. So an EDR is critical. And I guess my roots in AppSec will never die. It’s always…

The things that run the business have to be strong themselves, without a lot of vulnerabilities, because they’ll be discovered, and they’ll be discovered now very quickly. So, you know, I wish I had this one little jewel for you, but it’s a lot of the things that you probably are striving to do today. Boy, if you don’t do some of these things, though, I think you’re at such a disadvantage. You just have to do them.

Mike Armistead (43:09.729)
And I’ll tell you one thing, if I can make one little pitch too. One of the other things we’re doing as a company, because I just saw a need for it, is hosting a CISO community that can talk about more business engagement, more of what you’re doing in your career, and things like that. We’re called Security Impact Circle, and today we’re really focused on what corporate directors want to hear from the security people, not really the other way around, what

security people want to tell the corporate directors. Because directors mostly think of risk, and not even technically most of the time, and they’re usually on more boards, so they come with these really different kinds of perspectives. We just want to have best practices communicated and thought about in the industry. And yeah, it’ll probably help our business down the road, but that’s not the reason we’re doing it. So yeah, all those things have got to get done.

John Verry (44:08.534)
So it’s interesting to me. Because in one respect, your answer was a cop-out, and in the other respect, your answer was perfect. And the reason it was perfect is because what you just did was say: AI is just one more thing, and fundamentals are fundamental. When you come down to architecting an optimized cybersecurity program, right, whether you follow ISO, you follow NIST, you follow

FedRAMP, you follow any of them, they all logically say the same thing, right? Understand the context of the organization: the information you’re processing, the laws that are relevant, client contractual obligations, third parties, you know, the business that we’re in, what we’re trying to accomplish, our risk appetite as a company, all of those types of things. Understand the risks to achieving that. And then put controls in place proportional to those two things, in alignment with those two things. That’s foundational.

So what you really said was: hey, AI is just one more thing we’ve got to think about, and it’s one more tool we can use to accomplish that. So it’s actually a perfect answer.
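The context-then-risks-then-proportional-controls loop John describes can be sketched as a toy risk register: score each risk, compare against the organization’s risk appetite, and treat only what exceeds it. The risk names, scales, and the appetite threshold below are all illustrative, not from any framework text.

```python
# Toy sketch of the "understand risks, then apply proportional controls"
# loop: score risks as likelihood x impact, flag those above appetite.

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (minor) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def needs_treatment(risks: list[Risk], appetite: int) -> list[Risk]:
    """Risks scoring above appetite get controls; the rest are accepted."""
    return sorted(
        (r for r in risks if r.score > appetite),
        key=lambda r: r.score,
        reverse=True,
    )

register = [
    Risk("Prompt injection via third-party AI tool", 4, 4),   # score 16
    Risk("Staff pasting client data into public LLMs", 5, 3), # score 15
    Risk("Vendor model outage", 2, 2),                        # score 4
]
treat = needs_treatment(register, appetite=8)
```

The new AI risks slot into exactly the same register as everything else, which is the point John is making: the program adapts, the loop doesn’t change.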

Mike Armistead (45:15.949)
Yeah, it’s like, your security hygiene has to be there. Maybe the hygiene changed a little bit because we’ve got this new thing. You’ve got to protect yourself against it.

John Verry (45:25.676)
You have another risk. So we have another risk we have to account for, or another set of risks, right? AI presents a whole different set of risks, so we have to adapt our programs to address AI risk. And we have another set of tools that we can use to mitigate the risks and requirements we had prior. And ideally, and this is where it’ll get interesting, we’ll use AI to manage the risk associated with AI. I think logically we’ll end up doing a lot of that as well. So it’s kind of interesting.

Mike Armistead (45:50.797)
Yep.

Yep, I agree. It’s, you know, a super sick space. Also, from my background, I was in Web 1.0. I was an executive at a company that, you know, was going to change how bricks-and-mortar stores were all going to go away. It’s like, okay. And I know it is true, there were a couple of winners coming out of that.

John Verry (45:56.94)
Kind of interesting. All right, did we?

John Verry (46:16.876)
Amazon has helped. They’ve gone away a little bit. There are fewer than there used to be.

Mike Armistead (46:25.025)
We don’t even know what’s going to happen with AI 10 years from now, and how foundational its effect really was. But I do think it’s back to the hygiene that you were just describing, because that’s what’s critical now. There’ll be fundamental shifts that are going to evolve here over the next five, ten years. I don’t think it’s the next two or three; I think those are our hygiene years. After that, you’ve got to pay attention to different things.

John Verry (46:52.512)
Yeah, it’s going to be fun. Anything we missed?

Mike Armistead (46:56.535)
Well, I don’t think so. No, it was a great conversation. Yeah.

John Verry (46:58.796)
Cool, appreciate it. If somebody wanted to get in touch with you, had any questions with regards to… I mean, you’re not going to tell them yet, because you’re still in stealth, or whatever it’s called.

Mike Armistead (47:07.277)
I mean, please watch this space: pulsesecurity.ai. That’s our domain. But also look at securityimpactcircle.org. Those are two places where they’ll be able to figure out what we’re up to.

John Verry (47:24.022)
Cool. And just out of curiosity, when will you come out? When do you figure you’re going to be ready to hit the market?

Mike Armistead (47:31.071)
Yeah, boy. You know, it’s interesting, you have different strategies at different times. We’re with design partners right now; we have our software being used. And so it’s really a matter of when I need the general market to help me grow that. But I think it’s the beginning of next year. Let’s say the first half of next year; we’ll definitely be in market.

John Verry (47:55.83)
Well, good luck with it all. When you do get it out, let me know, because now you have my curiosity piqued. Thanks, man.

Mike Armistead (48:01.92)
Okay.

Mike Armistead (48:05.366)
Awesome. Very good. All right, John, thank you.
