February 25, 2026

Guest: Jason Rebholz

Bio:

Jason Rebholz is a cybersecurity and AI security executive passionate about dissecting complex problems to identify straightforward solutions. His diverse experience in security start-ups contributes to his approach in leading and developing high-performing teams in fast-paced environments.

After spending over a decade as an Incident Response leader responding to sophisticated cyber attacks, he spent four years as a Chief Information Security Officer (CISO) at a cyber insurance company, where he built its security program. He also built a threat intelligence team to identify emerging threats, which helped protect tens of thousands of policyholders from cyberattacks.

Jason is now focused on the emerging security risks and impacts from Artificial Intelligence (AI). He blends his incident response, threat intelligence, and risk management skills to help companies securely deploy and manage AI.

He regularly shares insights on: AI security, agentic AI risks, threat modeling for AI systems, CISO leadership, and building security programs for emerging technology.

Summary:

In this conversation, John Verry and Jason Rebholz discuss the evolving landscape of AI security, emphasizing the need for robust frameworks and threat modeling to address the unique challenges posed by AI systems. They explore the distinction between AI safety and security, the importance of human oversight, and the implications of emerging technologies like MCPs. The discussion highlights the urgency of proactive measures in securing AI applications and the potential risks associated with neglecting these considerations.

Keywords:

AI security, threat modeling, AI safety, incident response, security frameworks, machine learning, MCPs, cybersecurity, risk management, emerging technologies

 

Takeaways:

  • Security teams are often brought in too late or not at all.
  • AI safety must be prioritized to prevent negative societal impacts.
  • Existing security frameworks are not sufficient for AI systems.
  • Threat modeling is essential for understanding AI risks.
  • Human oversight is crucial in managing AI applications.
  • MCPs can enhance productivity but introduce new risks.
  • Organizations need to adapt incident response plans for AI incidents.
  • Proactive security measures are necessary to avoid future crises.
  • Understanding the interplay between AI safety and security is vital.
  • The landscape of AI security is rapidly evolving, requiring continuous adaptation.

John Verry (00:32.691)
Hey there and welcome to yet another episode of the virtual see some podcast with you as always your host John Barry and with me today Jason be the repulsor rebels not sure Good to catch up with man. Thanks for coming on today. I appreciate it. Always start easy. Tell us a little bit about who you are. And what is it that you do every day?

Jason Rebholz (00:45.41)
Rebels.

Jason Rebholz (00:49.762)
Yeah. Thanks for having me.

Jason Rebholz (00:56.718)
Yeah. So I’m Jason Reppels. I’m the CEO and co-founder of Evoke Security and a advisory CISO for Expel. So for me, I focus on all of the emerging risks of AI and I’m really focused on how do you try to secure these new agent systems?

John Verry (01:14.771)
which obviously right now is a very hot topic. Obviously, why we wanted to have you on today is you can’t enter too many conversations these days in the cyber privacy AI space and not have these types of concerns come up. I always ask before we get down to business, what’s your drink of choice?

Jason Rebholz (01:36.494)
You give me a diet Dr. Pepper and I’ll be a very happy man. Very simple in my taste.

John Verry (01:43.104)
And you are aware of the health implications of drinking too much diet soda,

Jason Rebholz (01:47.808)
I am, it’s a treat. It’s not something I keep in the house, but yeah, if I’m flying or if I’m in a restaurant, that’s…

John Verry (01:53.511)
You look way too healthy of an individual to be drinking diet. Diet soda was probably the last thing I thought you were going I thought you might be a water guy or a green tea. I didn’t think it was going be anything bad. Diet soda, really? Shocked. All right. You look like a runner. You look like a runner to me.

Jason Rebholz (02:06.178)
Hey, it’s my voice, you know, I gotta have something. see, those are fighting words. I’m a cyclist, not a runner. That is a heated topic with me and my wife.

John Verry (02:15.61)
Okay, there you go. Bye, Elizabeth. Well, I don’t like running either. So we’re in good company. And I do also enjoy riding. But these days, it’s one of the big beach bikes, know, with the big tires. Yeah, so no longer I used the mountain bike at one point in my life, which I really enjoyed. Are you a road cyclist?

Jason Rebholz (02:25.669)
Ha

Jason Rebholz (02:32.476)
there you go. Yep.

Jason Rebholz (02:37.363)
nice.

Jason Rebholz (02:40.93)
Road cyclist all the way. I used to do mountain biking when I was a kid. I grew up in Jersey, all old farmland, so there’s actually, there was an abandoned paintball field.

John Verry (02:50.226)
Which people feel wearing jersey? I’m a jersey guy.

Jason Rebholz (02:54.442)
So this was Hackettstown, New Jersey. I don’t know, I actually don’t know the name of the paintball field because by the time I was was riding trails around there, it was it was gone. So we just had all of the derelict buildings and good trails around it.

John Verry (02:57.105)
Yeah.

John Verry (03:09.98)
So I used to spend a lot of time out in Hackettstown years and years ago because Mars was out that way. And they were a big client of mine at the time. I used to spend a lot of Yeah, so.

Jason Rebholz (03:15.234)
That’s right.

Jason Rebholz (03:19.406)
Oh, that’s awesome. Yeah, I used to wake up in the morning, go down to the bus stop and like every once in while you could smell the chocolate in the morning, which was always a treat.

John Verry (03:25.49)
Oh my God, Yeah, you knew. You knew. I have some stories to tell, but let’s get down to really what we’re here to talk about. So one of the things which we’re seeing, right, is…

organizations and more frequently third party service parties, particularly SaaS applications, right? They’re racing to get AI in place. And maybe they’re getting a little over their skis with not getting security in place at the same time. So from your perspective, do you think security teams are being brought in too late to both understand and mitigate AI’s, in my mind, unique risks?

Jason Rebholz (04:06.381)
So it’s not even that they’re being brought in late, it’s that they’re not being brought in at all or even being told to just stand down. this is, it’s a really interesting thing because the market opportunity is there for these companies to try to move as quickly as possible. And so, for any security leader, they’re not gonna wanna get in the way of that. But at the same time, when you have leadership coming in and saying things like,

It’s more important to roll this out than it is to understand the security risks and any risks of it. That’s where we’re setting ourselves up for a lot of pain in the near future so that companies can try to chase this market share. this is something where there is a growing security debt and that debt is going to compound quite a bit. And it’s just a matter of when is that going to have to be paid?

John Verry (04:59.506)
So you say we’re going to start feeling pain in a bit. I would argue that we’re already feeling that pain. I mean, if we look at character, character.ai, which is a horrific story, for those who may not be aware of it, quickly summarized, an AI girlfriend suggested to an individual that he asked her to come to him now when this person was in a not

good mental state and the individual committed suicide, you know, to be with, with, you know, denarius, I think was the character’s name that they used. We’ve seen other things which are pretty horrible. You know, there was a case where, a mental health chat bot, you know, suggested someone, you know, jump off a bridge. so, so I, I, I, I think you’re completely right, but I would argue that we’re not to a point where it’s going to happen. We see it happening every day.

And there’s a lot of stuff that does not make the press, right? I mean, I’m sure you see things, we see things with clients that like, I have three or four other examples that I really shouldn’t or couldn’t talk about, because the fact that we’ve been engaged in them, but we’re seeing bad things happen already.

Jason Rebholz (06:11.586)
Yeah. And for me, that’s the AI safety concerns. know, that is something a hundred percent agree. Like that is something that needs to get addressed today because there are negative ramifications that are already happening. I think what’s interesting is that solving the AI safety paradigm and solving the AI security, which is more of threat actors trying to abuse these systems, they take you down two different paths. And depending on the company and what you’re doing,

You wanna emphasize one or the other or probably both. And so this is where like understanding what is it that we’re building, how are people going to use it for companies like Character AI, like they need to index wholeheartedly on the safety side to get their arms around the impact this is having on society and these individuals. Security for them, it might not be as important outside of let’s make sure that these chats don’t leak and that people can’t access this data.

But the safety concerns for them far outweigh what maybe some B2B SaaS company that is just automating some workflow might do. But that’s the challenge right now is there’s so many things that you have to be thinking about. It’s hard for companies to understand, where do you even start?

John Verry (07:27.024)
Yeah, I like what you said and it’s an interesting paradigm or paradox, right? Because security and safety, much like privacy and security or interrelated, security and safety are interrelated. And I think what we’re seeing is groups like us, cybersecurity, privacy oriented groups that are in AI, it’s risk management, right? And safety is a component of risk and there’s a technical component to it.

Jason Rebholz (07:36.96)
Mmm. Yeah.

John Verry (07:56.847)
as well. it’s interesting because I think that you know and if you look at the risk management frameworks you know the the NIST AI risk management framework if you look at ISO 42001 which we do a lot of work with and this idea that we need these system impact assessments you know that safety component element is kind of integral into the governance process of managing risk in general right.

Jason Rebholz (08:15.438)
Yeah, and I would argue that most of the frameworks we have today are geared more towards safety than they are security. That’s it’s starting to change.

John Verry (08:24.53)
I agree completely. And I think that’s part of the problem. And I think it’s going to get worse. In fact, that was my next question, to be honest with you. So now, mind you, I did share this with you ahead of time. I was going to give you a lot of credit, but maybe I don’t need to. But I think you’re completely right. So the next point that I had was that traditional application security frameworks are not really geared towards the specific AI, the attack surfaces, as an example.

Let’s talk a little bit about that, right? You know, cause we’re big proponents of open trusted frameworks. Big believers in the OWASP application security verification standards, an example for an application and AI is an application. But yet there are things which of course they’re, you know, the OWASP-ASVS one was built eight, 10 years ago. You know, this idea of prompt injection or data poisoning and things of that nature, model drift, those kinds of issues didn’t exist, right? Because AI didn’t exist at, in the context that does now.

So talk a little bit about these frameworks, what your thoughts are, what are some of these new threats that are not being managed, and where do you think it’s all going?

Jason Rebholz (09:29.432)
So I would draw a distinction between AI being an application and AI being a system, because I think of it much more as a system in the context that it’s not just an application security problem, it’s an identity problem, it’s a data problem, it’s a cloud security problem, it’s a SaaS problem.

it touches so many different areas of the security domain. And I think this has been one of the challenges for a lot of security leaders is where am I supposed to start and what am I supposed to do? And so it really starts where you first have to acknowledge the existing security stack is not going to solve this new challenge because it’s a different layer of technology. It operates in a non-deterministic fashion, which our security tech

just isn’t really built to understand. And so what compounds this going to the regulation side and the frameworks is the early sprint in solving the issue of AI was focused on AI security and for good reason. I mean, we talked about it. We can see the ramifications of that already, but when we treat it as more of a, this is a safety concern. We forego all of the different security controls that

are more familiar to the risk management people. And so that’s why like, you know, the current frameworks that we’ve got with the EU AI Act, know, the NIST AI RMF ISO 42001, like those are great from a governance standpoint. But when I first read those, was like, this is great to help me get my arms around what I’m building. But what is it doing to protect me? And so

This is where we have good research coming out now to get to like, do we protect ourselves? But we don’t yet have anything that’s taken off. And so one of my personal favorites is there’s a new insurance company that popped up and they they’re focused on how do you secure agents? And so they released something called the AIUC-1 and it’s their goal is basically to become the SOC 2 for agents.

Jason Rebholz (11:46.018)
And this is the only framework that I’ve seen that tackles the governance side. It tackles the safety side and it tackles the actual security side. And I think that’s where we have to get to is saying, Hey, we’re not going to accept that a SOC 2 is going to check the box saying that you have secured your agent systems. We have to get to the point of saying we have a good solid understanding of how these AI systems work.

We’ve put the necessary controls in place to monitor it, to protect it, to secure it. And we’ve gone through and audited that. And I think that’s where we got to get to in the future. How long it’s going to take there to get there? I think that’s going to be highly dependent on how quickly companies can figure out how to start really deploying agents out in a more systemic fashion.

John Verry (12:35.026)
Interesting. I am not familiar with that. Thank you. I definitely want to look at it because I couldn’t agree with you more. mean, I think that even within 42,000, one of the NIST, I risk management framework, right? I would look at them right now as a set more of guiding principles than more like prescriptive, a prescriptive formula for ensuring security. And I also could not agree with you more that you need to look at these things as systems, not as applications, right? Because, know, a

It’s kind of interesting to see when…

concepts cross over across disciplines, right? So when you think about the way that we look at FedRAMP or the way that we look at CMMC, you know, we have this concept of a system security plan where we’re not, we’re not secure, we’re securing the system, right? And all of the components that are integral to the system. So I almost think of like, when we think about AI on a go-forward basis is that really what you’d want is a system security plan for your AI infrastructure, right?

Jason Rebholz (13:14.446)
Mm.

Jason Rebholz (13:37.732)
Yeah. It’s interesting because when I’m talking to engineers that are building these systems, they’re struggling to visualize how all these pieces interoperate. And so I’ve found that the engineers actually understand the security implications better than the security people right now, because they’re so close to the tech and they can see how things can go wrong. And so when you have those people who are building the tech struggling to really understand how all these things interconnect,

You can imagine how difficult it is for a security person who’s still trying to get their arms around, what is this system? How does it work? How do I even need to begin thinking about securing it? They’re just starting their journey when engineers are little further along and we still haven’t found that right recipe of what are the controls? What’s the monitoring that you need to have to do this right? Because we’re still trying to figure out how to visualize how all these things work together.

John Verry (14:31.698)
Yeah, and it gets even more complicated when you see like, like chained agent applications, you know, and the, and the backend agent has, you know, let’s say, um, access into, you know, JIRA or access into a Salesforce database or something of that nature. Um, and, know, beginning to, and, and, and you’re using a foundational model over here and using a rag over here. it’s like, Whoa, how, you know, envisioning how things, so let me ask a question that, you know, I was going to talk about this later, but let’s talk about it now because it seems like it ties right into it. So.

Jason Rebholz (14:45.572)
Mm-hmm.

John Verry (15:00.838)
The minute you say to me, we’re having a problem envisioning how things can go wrong, right away my mind goes to threat modeling. So is this a threat modeling issue? Is the challenge that we need to advance our threat modeling capabilities relating to AI?

Jason Rebholz (15:18.029)
Yeah, to me, that’s the starting point. And there’s a balance to be had here because we are still early to the game of how are companies deploying this out. And so we don’t have a large number of security incidents to point to. They exist, but because it’s so far and few between right now, you really have to go in and start to war game out. Like where are the way that these things can go wrong? And so.

I guess to start, let’s give a couple examples of where things have gone wrong. So you’ve got the Replet agent. Replet is a coding platform that people can go vibe code applications. And this was back, I think in July, you had a user that went and was coding everything with this agent and the agent just went rogue. And while troubleshooting something that was wrong with the system, deleted the production database. Now we’ll ignore the fact that

John Verry (16:14.674)
I didn’t hear about

Jason Rebholz (16:16.545)
You know, replant wasn’t doing good best practices here, but that agent on its own initiative went and deleted data. Right? So like that’s one problem where we can’t control these agents the way we think we can. And then you have these other examples of where I’m not going to name the company, but they released an MCP server and they found that they were providing information from other customers to, to another customer. And so.

This is where we’re we’re repeating a lot of the same mistakes of the past and we’re seeing these things start to pop up. And that’s why like today you have some business impact, potentially you have some data leakage issues. And so those are good examples to start kind of figuring out how do we threat model this and the most basic way that anyone can start this is to look at the lethal trifecta and what I call the lethal duet. And so what this is lethal trifecta.

is when you have untrusted content, you have access to sensitive data, and you have a way to exfiltrate that data out of a network. And so the way to think about this is that, hey, we’ve got a Jira ticket coming in from a customer portal. That Jira portal is connected into an agent that also has access to sensitive company data.

As an outside user, I can do prompt injection into that, that, customer portal. That JIRA ticket gets created. Somebody goes and asks, Hey, summarize the new tickets. And now suddenly my malicious prompt is telling them, Hey, extract all this data and email that back to me. That’s the lethal trifecta. And with the lethal duet, same concept, but it’s just untrusted data and access to privileged tools. And so now I can potentially send a ticket in.

and get that customer support agent to go and execute some tool that it has, whether that’s sending another email to somebody else or maybe it’s executing something somewhere else in the environment. But this is how we can start to connect the dots to go from an LLM based attack into more traditional attacks, whether that’s sending phishing emails, whether that is being able to execute commands on a system. And those are the threat models that you have to understand. And that starts with

Jason Rebholz (18:41.836)
map the systems, understand what it’s doing, and start to plot out where some of these weak points are and how it can cascade into real-world incidents.

John Verry (18:51.014)
Yeah, you know, you’re. I keep spinning back to your differentiation between safety and security, because you said that very eloquently and I hadn’t really thought of that that way before. And when you think about like. When you think about risk of these systems, effectively, this is a human like actor taking an action on your behalf. And as we’ve begun to know more and more adversarial testing of AI.

it really became quite apparent to me. It’s social engineering. It’s not technical testing. I prompt injection is social engineering. How can I trick, how can I evade the guard rails that are built into this application? How can I trick this AI into doing things that I don’t want it to do? know, using different languages or lead speak or, you know, breaking, know, engaging in long enough conversations that, you know, its guard rails kind of come down. Like it’s, it’s, it’s a completely different paradigm.

of managing risk and I’m increasingly starting to understand like what percentage of really is like security. know, I mean to your point, inability, preventing it from having access to be able to delete a database, I think that’s a fundamental security component. But there are a number of these things that where, you know, that crossover between safety and security becomes quite vague.

Jason Rebholz (20:17.946)
Yeah, this is something where I always tell people that agents are more susceptible to social engineering than humans. So we have been saying for far too long that humans are the weakest link in security. Well, guess what? Now it’s the agents, right? And because we have to, if we step back and kind of think about the technology itself, agents don’t have common sense.

So to your point, if I just use Leatspeak to try to bypass some basic guardrails, a human is gonna catch onto that and just say, yeah, you know what, I’m gonna ignore that. But with these agents and the LLMs that are behind them, these LLMs just want to please you. They are gonna do anything you ask it to, and it’s a fundamental weakness in how these systems work. And so if we’re putting ourselves in a situation where we have something that is

so easily socially engineered, the existing security tools aren’t good enough yet to try to catch these things and stop it. And we’re giving access to these agents to tools and data and things that you would never dream of giving to your day one intern. We are setting ourselves up for a disaster. And I think the thing that’s getting in our way is that we haven’t thought through far enough what is going to happen. And that’s the threat modeling.

And that’s where this is a significant knowledge gap of how do these systems work? What are the risks? And for these teams, being in a position to actually start putting these controls in place, starting to implement best practices and get this, you know, a new layer of detection and response in place that you can try to prevent some of these things from happening.

John Verry (22:03.963)
Interesting.

So question for you. So you talked about this UC1, AC1, I forget the name of the standard that you kind of.

Jason Rebholz (22:15.214)
Yeah, AI UC-1. Yep.

John Verry (22:18.185)
AIUC-1. OK, I’ll look that up after this. What about, like, do you see other, so I think you’d agree that this day, I was management framework, I supported 2001 have value. You’ve got this AC one, which you probably would use as a component of an ISO 42001 governance program. Thoughts on other frameworks are out there. know, the OWASP has to tell LLM, you know, obviously, you know, MITRE has the Atlas framework, which

A lot of people are critical of. Are there other sets of good practices that you would recommend someone listening to this that’s a little bit worried at this point? Like other frameworks that you would suggest people should be thinking about?

Jason Rebholz (23:02.64)
So I think the ones you listed are the ones that everyone should familiarize themselves with and start there. There are some other things that go a lot deeper. Like I love OWASP and what they do. It’s very academic, very theoretical. So it’s great for a security researcher, but I’m not gonna recommend your general security practitioner go do a deep dive on those. It’s just gonna confuse you more.

Once you get the fundamentals in place and kind of learn the governance, learn the basics of these controls, then dig into that. And then that’s going to help you start kind of thinking how do these things go wrong? like, I would say if I had to kind of put this in order, I’m going through my own journey here of, how I tried to approach this. I started with understanding some of the red teaming elements of

of how you go after agents and the types of attacks that are happening there. I started with the LLM top 10. It was good. It gives you a basic understanding. I graduated into how you read teaming against these. Then I switched into how do you securely develop these? That was actually one of my favorite OWASP writings was, and I’m forgetting the exact name, but it was designing a secure agent system, something along those lines. It’s like an 80-page document.

but it has such good granular detail on how you should really try to think about this from a permissioning standpoint, from a connectivity standpoint. And like that’s where you really get into the nitty gritty of understanding how do I securely build this from the bottom up? And then you overlay on top, how do people attack these systems? And you’ll start to see these core concepts start to pile to the top saying, okay, I do need to approach it from a development standpoint here.

But I got to go over here and I got to make sure I’ve got the right detection and monitoring in place so I can see what’s happening. And then you’re taking the governance background from some of these other frameworks. Now I can figure out how do I manage this at scale across my business. And so it’s kind of a mishmash of all this disparate information. But that’s the problem that we have right now is that it’s so early in this journey that you’re really kind of forced to pick out different things and put it together yourself.

John Verry (25:25.042)
So, and this is interesting because it’s a security question and we’ve agreed that there’s elements of this that are security and elements that aren’t. But to extent, you just talked about a pipeline, right? So if we look at a conventional application pipeline where security is 90 % of all of the problems, right? You’ve got a poll, you’ve got a code scan, maybe you’ve got software composition analysis, maybe there’s some DAS going on.

infrastructure as code is being scanned, like all of those things are happening. You know, and then if we look at, pushes to prod, maybe there’s some independent objective, third party validation testing, unit testing, whatever it might be. Does, how do you see pipelines evolving? And I’m going to ask you the question is, you know, do you see them as being different? is, is machine learning, if you’re building your own LLM, right? Is machine learning.

a pipeline and then the AI agents and other pipeline and how do we see these pipelines evolving in your vision.

Jason Rebholz (26:32.932)
Yeah, in my mind, they’re two different pipelines, but they can share a lot of the same flows and infrastructure, but it’s different problems that you need to address. So from the ML pipeline, much more emphasis on what’s the data and how are we securing that data, right? Because you need to make sure that good data goes in so that bad data doesn’t come out. And so that’s a whole unique challenge there.

And that’s something that data scientists, I would say, haven’t really had the rigor around in the past. so security teams are really going to have to come in, take the best practices that we’ve learned from the standard pipelines and start to really address that from the beginning there. And then once you’ve got that model, that’s kind of your separate thing here. For your traditional software development pipeline, it’s a lot of the same practices.

thing that I would urge people to really stress more though is to take that threat modeling concept that we talked about and really go through and say, okay, if I’m adding this new tool or I’m adding this new data source, how does that change my threat model? Because any one of those that you implement net new will completely change your threat model.

And that’s the important thing is you have to go through and say, okay, you know, we’ve added this new tool, we’ve added this new data. What does it change with how I understood the system operated before? And how does that change where the potential threats are going to lie? And it doesn’t have to be a drastic change in the threat model, but it could be. And so that’s kind of a next step for security teams that I urge them to go and do is make sure that you’re doing these new reviews when you’re building agents in particular. And then.

Outside of that, you have the whole issue of Vibe coding, which is just like a whole other thing. I think a lot of concerns around it right now, but I do believe that with the right level of rigor around your application security program, you can largely address a lot of those issues. There’s some add-ins that you need to really go after, like dependency checks and things of that nature, or the whole supply chain issues there, but largely that can be retrofitted into what you build.

Jason Rebholz (28:50.508)
Because it’s just the generation of new code versus really introducing, you know, these grand new concepts into into what that pipeline looks like

John Verry (28:59.026)
I hadn’t really felt much about that on the vibe coding side. How we architect the same rigor into a vibe coding pipeline. Because in fact, it probably needs a greater degree of Yeah.

Jason Rebholz (29:16.59)
Yes. Yeah, because it’s a lot of the review process, right? people will always argue, well, we’ll have agents do the review as well. That’s where things start going off the rails a little bit more. I don’t think we’re quite ready for that. But there’s a human set of eyes really needs to review the code before it goes live, just like you would today.

John Verry (29:34.397)
Human in the loop. Yeah, I mean, listen, that would be the other thing too that I would suggest is like when we’re kind of like dealing with these security safety issues and threat modeling and figuring out where to put human in the loop. And Vibe coding is definitely one of those areas, Like you said, AI wants to please. It’s going to generate something that does most of what you want to do. And like to me, when I think about Vibe coding,

I think of it as being one of the biggest problems I see over my career is I almost dislike POCs. And I dislike POCs not because it’s not a fantastic idea, but I know if the POC works and it’s something that’s important to the business, they’re not going to do what they should do, which is, okay, yes, we can do this. Let’s start a development effort. The POC becomes a right? And the POC wasn’t architected for, know, it was

MVP, let’s get into the, you know, and you end up with a problem. And it feels to me like vibe coding is just endless POC strung together. If you know what I mean. And like, what the hell is it going to break? We have no idea. No way that way, right?

Jason Rebholz (30:47.458)
Exactly. Yeah, that’s the perfect description of it. And that just goes back to, it’s great because it does enable these companies to move fast. But when security is not part of the conversation, that security debt is going to keep growing. then there’s going to be an age of the great retrofit, where we’re suddenly going to realize that, hey, we messed up again. We didn’t learn our lessons from the internet or cloud.

John Verry (30:57.746)
Or should we?

Jason Rebholz (31:15.525)
And so let’s go spend a lot of money trying to retrofit security back into these systems.

John Verry (31:20.466)
So speaking of that, so we talked about changing the development process, right? Which I think is critical. What can we do and what do you, are you typically recommending to people? What do we do post prod, right? How do we evolve our operational practices, right? The way that we monitor applications, the way that we understand those threats, whether it’s model drift or it’s transparency, explainability, bias.

guardrail evasion, whatever the things are that we’re worried about. Like how do we instrument, if you will, our new AI apps that we’re also

Jason Rebholz (32:01.059)
It starts with admitting we have a problem. The existing security stack is not enough to protect agentic systems, period. I don’t care what your argument is, it’s it’s not going to apply to this new paradigm because we don’t have detections for this. So even if you have the visibility, can guarantee you, know, SANS, maybe a few companies, you’re not really paying attention to what’s happening there because it’s just too new.

And so it’s first acknowledged, hey, we have to actually do something here to get the right level of visibility and monitoring in place here. Build a new detection program around this so that we can see when are these agents going off the tracks? Whether that’s agent going rogue because it just thinks it has to do something to complete its goal, but it starts doing risky behaviors. Or you’re dealing with a prompt injection attack where the threat actor is taking control of the behavior of the agent.

and starting to do something malicious. So you have to get these controls or you have to get this monitoring in place first so you can really start to see that. And, you know, if we step back, it’s understanding what the purpose of the system is. And so for something that’s making critical decisions, index onto or over index onto AI safety, make sure that your guard rails are an AI safety guard rail and layer in the security guard rails, which are largely nonexistent today. and I will, I will die on that Hill.

But when you get into more of the security concerns, let’s start going into how do we build the right guardrail solution in place here that is going to monitor the activity of that system and start to flag things and start to block things. And that’s kind of where a lot of investment is going now to start building solutions like that into place. And that’s going to be the next race that happens here.

John Verry (33:54.413)
Is guardrail logging, logging the output from guardrails like a Lycarra or something of that nature, is that one of the things that you would be pushing people to do?

Jason Rebholz (34:08.59)
Yeah. So I would say yes for an investigation purpose. Now there’s some privacy concerns and everything there that are going to come up. And so it’s really about smart system design there. Like, you can keep that in there. So it does become a bit of a business decision on whether you want to retain that or not. But I grew up in forensics and incident response. And so I’m always of the mind, say like, let’s make sure we keep those logs at least for a little bit of time, because we might need those one day.

John Verry (34:35.378)
The other thing too is that gets interesting because the unpredictability of AI, like, so you task me with telling you when my AI, you know, I think you use the term off the rails, right? Like, I wouldn’t know how to begin to, you know, arc into, what should I be monitoring for? What does off the rails mean, right?

Because the way that AI has this propensity to hallucinate and do things which are wildly unexpected, like how do I monitor for the expected unexpected? Right? I mean, you know what I mean? I mean, which I guess is inherently one of the challenges that you’re kind of alluding to.

Jason Rebholz (35:14.896)
Yeah.

Jason Rebholz (35:19.312)
That’s exactly right. These are the things that we’re trying to build towards to try to fix, you have to almost treat this as it is more akin to an insider threat than other types of things. With that, it is more behavioral analysis. It’s baselining and understanding what is the intent of the agent? What are its objectives? There’s some sort of LLM approach here potentially to say,

Hey, can we see and map this up to say, yeah, this is expected for this agent. And you have to mix in your deterministic controls so that you get this balanced approach. Because you don’t want to index one way or the other there. It’s like, this is one you got to shoot straight down the middle and try to get the best of both worlds. But this is the exciting thing in security right now is like, we’re all trying to design these systems to be in a position where we can set ourselves up to do that.

John Verry (36:05.106)
Yeah.

Jason Rebholz (36:14.34)
But it’s very much so building the bridge as we’re driving the bus across it.

John Verry (36:19.196)
Yep. Yeah. We’re painting moving bus, you know, all the analogies you want to name, right? well, speaking of, of painting moving bus, I think you’d agree with me that, MCPs are exciting and interesting. And I saw an article the other day that said there’s like a thousand, several thousand MCPs out there now, but I don’t, you know, but they alluded to the fact that people have no idea how to set them up, don’t know what they’re doing in the MCP, you know, that an MC, a misconfigured MCP is, is, is a problem.

Jason Rebholz (36:21.54)
Yeah.

John Verry (36:48.944)
Right. So talk a little bit about MCPs, right? You know, what they what an MCP is, right? Just for people who don’t know that. And then do you think the way that it standardizes, you know, communications to and from from a model and, you know, you know, would help us in terms of both security safety, but also from an observability perspective?

Jason Rebholz (37:16.464)
So MCPs are a double-edged sword. And so what an MCP is, is it’s a protocol that allows an LLM to communicate with tools. And so the greatest part of this is now we can build in some basic reasoning with these LLMs to say, hey, in this type of situation, I need you to go figure out which tool to execute and execute it.

And MCP is that protocol bridge connecting these two things together. So from a productivity standpoint, it’s amazing. Whether you should use them or not, that’s a programming question. And I would argue that in most cases, you don’t need to do it. And this is where you’re introducing more complexity into a system that needs to be simple. And so if you can do a straight up API call to a tool, do that.

You don’t need to get fancy here. so you start with kind of a secure design decision here of, I need to go down this route or is it really helping me solve a problem? And the dual nature of this is when you start to open up these MCP servers, there’s a whole other class of issues that start to pop up where somebody can modify what that tool call is and have you executing malicious instructions. You’ve got a supply chain issue. So this happened a few weeks ago.

with Postmark. So Postmark had an MCP server that they hosted on GitHub. It was legitimate. An attacker took that code, published it to NPM, and just started iterating on it. And it was like the 15th iteration on it. They just added something in that would copy or BCC the attacker into any email that was sent through that MCP server.

And so developer going out just looking, postmark, MCP, great, I’m gonna download that via NPM. Now all of a sudden all the emails that are going through that MCP server are getting copied over. And this is where we are just not prepared to really deal with this new class of threats, even though they have lot of parallels to existing security challenges.

Jason Rebholz (39:38.426)
But because the hype of AI is pushing everything forward so quickly, we’re ignoring these basic things. so, yeah, MCP servers, fantastic from a productivity standpoint, but man, do they scare the crap out of me because it is just a whole nightmare waiting to happen.

John Verry (39:56.537)
Right, in theory, right? So I’m not going to disagree with you, but I’m going to ask you, is, if an MCP, so let me ask you question. One of the ways that I’ve analogized an MCP, because it’s new to me as well, it’s new to everybody, but I’ve been around longer than I would care to admit. And I remember when you used to write applications and you talked to a backend database directly. And if you wrote your application for one database, you’d have to port it.

to another database, right? It was not. And then Microsoft gave the world, I think, gift, ODBC. And suddenly I could write one application and largely it would talk to any database in the backend. In a weird way, I kind of look at that level of abstraction. I kind of almost look at the way what we’re doing with an MCP is sort of the same type of an abstraction, right? So if I was trying to talk to multiple backend sources and I could write my, if my agent,

only had to deal with one interface to 12 different backend systems, I think you could make an argument that I’d be less likely to make a mistake that would expose something that I wouldn’t. So an MCP, if it was optimally configured and everything was done well, could have some benefits. Would you agree with that or would you disagree? OK.

Jason Rebholz (41:16.932)
Yeah, absolutely, as long as it’s designed the right way.

John Verry (41:21.618)
Okay, so it’s an execution issue, not a architectural issue.

Jason Rebholz (41:28.162)
Exactly. Yeah. And the best visualization that I’ve had for it that I’ve seen is like, it’s kind of like a, a USB hub where it’s like, I’ve got one input that goes to my computer and then I’ve got all these other inputs that go into it. And so from my computer, I can just say like, great. Like I need you to go solve this problem. And that USB hub can go, has access to all these different peripherals, these tools that it can figure out the best one to choose to execute the job. We’re still outsourcing.

John Verry (41:56.625)
Is that how people are gonna in practice? Is that how most applications that are architected to leverage an MCP will work? Like Susan said, would you architect it to use different foundational models? And you could turn around and say, okay, so I would write a single prompt to do something and it would say, well, know what, Claude or Llama or Grok or it would go and talk to all of those and then integrate all of that and bring it back.

Jason Rebholz (42:06.762)
I see it much more narrow.

John Verry (42:26.066)
Is that sort of like the way people you see are going, is that the way you envision that MCP is going to be a primary use case? Or does it give it the ability to, well, let me ask you, what is going to be the predominant use case for MCPs?

Jason Rebholz (42:37.2)
So I think, let’s give an example of an AI assistant. I think this is gonna be the predominant use case that will pop up. So AI assistant, I need access to my browser, I need access to my desktop, I need access to all these different SaaS tools that I would use on a daily basis to just manage my life. And so instead of me having to go log into every single site separately, or to set up a unique request to each tool, I can just say,

Hey, you know, assistant, can you can you tell me what I’ve got going on today? And if I owe anyone money, just make sure that they get paid by the end of the day. But you know, also make sure that you don’t pay my uncle because he still owes me money. Right. So like, that’s the power is that I could just give a single prompt and have it take care of everything for me. It’s the decision making that that LM is going to have.

on how it tries to figure out the right order of operations to do it in and not send money to my uncle, right? Or not try to empty my bank account sort of thing. That’s where it gets a little bit wonky where you would want to try to narrow it down to only the things that are absolutely necessary to run whatever the system is you’re building.

John Verry (43:54.259)
Yeah, that’s like that classic. A lot of times you see the example that I’ve seen it quite a bit where, you know, hey, I’m burned out at work. You know, I want to take a vacation. I want to go somewhere where it’s warm. want to maximize the amount of, you know, thing and tell me if I’m going to need sunscreen. And so now what this agent does on my behalf is it logs into work and sees how many days I have left off. It goes out, looks at, you know, looks, where’s it going to be warm, you know, figures out some publications, looks for pricing.

and it books a vacation for me and it buys the sunscreen that I’m going to need based on the number of days I’m to be there and what the sun exposure is going to be expected during those days, right? So that’s like, okay. So that’s the promise. Okay.

Jason Rebholz (44:36.442)
That’s the promise land and all it’s going to cost you is giving all of your access to all your data, all your information, your credit cards to to an agent. So.

John Verry (44:45.872)
And you saw what happened with was it the new perplexity agent, whether it was social engineer, was, you know, it was logging into unvalidated websites and yeah.

Jason Rebholz (44:58.286)
Yep, that’s it. I have no doubt we are eventually going to figure out how to do this well. But it’s, I always say it’s in ten full steps right now, right? We gotta, let’s just make sure we’re not stepping on a landmine right away.

John Verry (45:12.79)
All right. All right. So quick question for you. When somebody reached out and suggested you’d be a good guest, I saw you as being, I know you were the CISO of Expel. Now, Expel, by my recollection, is a very good security operations center, SIM type. Okay. Yep, yep, okay. And you’re doing work for them. But you also mentioned, and I didn’t know this, Evoke.

Jason Rebholz (45:35.438)
Yep, MDR, they’re one of the best out there.

John Verry (45:42.896)
Did you just say, would you say you work for Evoke? That’s your full-time job?

Jason Rebholz (45:45.455)
Yeah. So, yeah. So I’ve got a company of Oak Security. We’re focused on how do you secure aogenic AI. So we’re building towards solutions where you have that detection and response capability in these systems. And with, with Expel, so I was a former Expel customer when I was a CISO at a cyber insurance company. And so I work as a part-time advisory field CISO for them to help really get the word out that like,

In this future, especially with AI enabled attacks, you really need to have a solid managed detection and response solution in place to keep up with the velocity that these attacks are going to be bringing.

John Verry (46:24.326)
Yeah, well, I mean, even just the fundamental, you know, doing an incident response exercise and knowing an entity that we did one recently for a social media company that had just rolled out some agenda gay eyes or AI agent. don’t remember which one it was, but we made the attack something that happened from an AI perspective and.

their instant response plan had no idea how to deal with an AI incident. mean, like people don’t even think that like, you roll out something, your instant response plan is going to be different for a conventional security attack versus some sort of an AI incident, right? You know, from a legal perspective, from a cyber liability insurance perspective, from a who do need to get on the phone perspective, right? Crazy.

Jason Rebholz (47:15.49)
Exactly. It’s one of my favorite questions to ask is just like, hey, how would you handle detecting an attack against your eugenics system? And how would you respond to it? And it stumps everyone because it’s the new thing, right? And we just haven’t got our arms around it. But now is the time to start to get our arms around it, right?

John Verry (47:38.163)
And then you also, and this is where this crossover, you know, again, security and safety or security, and I don’t even know what the hell you’d call it. So you saw like one of the big four accounting firms recently had to give a lot of money back to, I think it was Australia, because they issued a report and inside the report, right, they had cited case law and they’d cited quotations and things that nature that were completely hallucinated. So like that’s another example of like,

Where does that incident response take place? Is that crisis management? Is that a cyber incident? Is that an AI incident? You can have an AI incident, they did. That’s not a cyber incident. Where does that response take place?

Jason Rebholz (48:24.782)
Yeah, I think it’s going to vary. I’ll give you another example. It’s one of my favorite ones to cite because it does show that things can go wrong outside of a security perspective. This is a situation where a law firm was using a copywriting agent so that it would create blog posts and things like that, post it to the website. They gave it explicit instructions upfront. Don’t cite these competitors. Don’t take any information from them. Over time,

What started happening was this agent started promoting the law firm’s competitors on the website. Then it pulled down content from the competitor’s website and posted it as this law firm’s. And so that law firm got sued for copyright infringement. So it’s not a security incident, but yeah, where does that fall?

John Verry (49:16.156)
I don’t know. I don’t really know and it’s funny because I never thought of that until this conversation. mean like conventional incident response. I mean many companies have crisis management but if you’re a small to medium sized organization or mid market, your crisis management, your incident management are effectively the same. Maybe you’ve got a crisis management component within your incident response plan. Let’s say you’re a healthcare company and it involves breach, right? You know, okay, that’s crisis, right? But like…

Jason Rebholz (49:37.904)
Mm.

John Verry (49:44.389)
an AI of this nature, or even just think of it this way, you get a phone call from the EEOC. Something you did with one of your human resource applications is AI enabled. So now it’s a third party AI issue, right? That’s an incident on your behalf that you probably don’t have any, you haven’t even considered that at this point probably.

Jason Rebholz (49:50.128)
Mmm.

Jason Rebholz (50:06.032)
Well, exactly. And to even further complicate it, because why not? You’ve got…

John Verry (50:10.566)
Yeah, because you haven’t ruined enough people’s days enough. I mean, we can’t publish this podcast, you realize that, right? You know,

Jason Rebholz (50:12.804)
Yeah.

People look at me like I’m crazy sometimes. It was like, these are the problems. Like we’re staring at it right in the face. But this is something where, you you might be, you might have an easier job if it’s something where like you’re building the AI application internally, because maybe it’s an engineering reliability issue, right? And so you’ll have kind of your, your team there that can jump on it. And maybe you’re partnering with the security team, but then what happens with third party AI applications like SaaS, right?

Is it then the business owner who owns that vendor? Does it go to the security team? There are no good answers to this right now. But to your point, this is where we need to start thinking through these things because it is going to happen. It’s already starting to, but we’re just like, the starting gun just went off and it’s a sluggish start until we start to get this. But it’s like a freight train, right? It’s going to be slow to get moving. But once this thing keeps going,

It’s going to build momentum and we’re going to start seeing an onslaught of these little incidents to start until it becomes a much bigger and bigger thing.

John Verry (51:22.16)
Yeah, listen, I mean, I’ve been beating the drum on shadow AI and just people just do not understand the risk of shadow AI. They just don’t and you talk to them about it and it’s, you until you know, it’s like anything else, right? You know, until, you know, I mean, if you think about it, like, you know, I mean, you’re a cyclist, you look like you’re in good shape, but the vast majority of America is not in good shape. And when do they find, when do they find exercise and health, you know, when something bad happens, right? You know, how many of your clients I know,

Jason Rebholz (51:26.094)
Mm.

Jason Rebholz (51:48.271)
Exactly.

John Verry (51:50.733)
decent percentage of our clients, we end up with clients after a breach, right? After they’ve had a security incident, they find religion, right? They find security. know, unfortunately, I think a lot of this stuff with the shadow AI and even some of these AIers we’re talking about for internally developed AI, you know, I think unfortunately, until somebody feels some pain, they might not get find religion on it.

Jason Rebholz (52:10.96)
Yeah. And it’s always the case of, it’s not going to happen to us. It’s going to happen to somebody else. I live this firsthand with ransomware. I born, residence and response. I spent far too long responding to different hacks and everything. And so I was there from the very beginning when ransomware was just a single system event. And, the ransom used to be hundreds of dollars. And then I saw this gradually happen where it switched to enterprise level attacks where this one group, SamSam, they changed the game forever.

They started going after the servers in the environment, not just a single system. And then they were charging, I think it was maybe $50,000 of a ransom, right? And then we got all the way to these seven figure ransoms, completely taking companies down. And when people started paying attention was Colonial Pipeline, because it suddenly became real of the impacts. And then you started people saying, like we actually got to, we got to do this. Like if it can happen to a big company like that, I guess it can happen to anybody.

John Verry (52:59.591)
Yep.

Jason Rebholz (53:09.966)
And then the narrative changed a little bit. And so it’s an unfortunate thing where the pain of trying to secure it now seems a lot greater than the pain of the impact if something goes wrong. And eventually that’s going to even out where you’re going to start to see the calculus change a bit.

John Verry (53:28.828)
Yeah, from your lips to God’s ears. All so we beat this up pretty good, I think. Any last thoughts that we want to bring up?

Jason Rebholz (53:38.308)
Yeah, I think the biggest takeaway here is really take the time to threat model out what you’re building. Map these systems, understand the risks and get the right detection and monitoring in place. Now, this is something that we can take a parallel from Warren Buffett on investing and retirement. Save when you can, not when you have to. This is secure it when you can, not when you have to. We’re looking at the same thing here.

John Verry (54:04.57)
Yep. If folks want to get in contact with you, what’s the best way to do that?

Jason Rebholz (54:10.205)
I’m very active on LinkedIn, so just find me on LinkedIn, add me, me DM.

John Verry (54:16.614)
Sounds good man. Well listen, I really appreciate you coming on today. This was a fun conversation for me. You definitely at multiple points made me think about things that I should have probably thought about before. So thank you for that.

Jason Rebholz (54:28.622)
No, thanks for having me. This is where it’s so new is we all got to work together to really help everyone level up.

John Verry (54:34.259)
Sounds good,

Back to Blog