March 28, 2023

 

DevSecOps is the practice of integrating security testing at every stage of the software development process. With DevSecOps, training and educating all teams in risk, security, and mitigation at all stages of development is a top priority. Traditionally, app developers haven’t paid much attention to security, which increases the risk of vulnerable code being deployed and the application being compromised.

In this episode, your host, John Verry, sits down with André Keartland, Solutions Architect with Netsurit Professional Services, to discuss tactical steps to implement DevSecOps in 2023.

 

Join us as we discuss the following:

– What is DevSecOps and how does it differ from DevOps?

– Getting business stakeholder buy-in for application security

– The best way to get started with DevSecOps

– Who in your org needs application security training and why

– How to assess application risk and why it’s so important

 

To hear this episode and many more like it, we encourage you to subscribe to the Virtual CISO Podcast.

Just search for The Virtual CISO Podcast in your favorite podcast player or watch the podcast on YouTube here.

To stay updated with the newest podcast releases, follow us on LinkedIn here.

 

See below for the complete transcription of this episode!

 

John Verry (00:01):

Uh, hey there, and welcome to another episode of the Virtual CISO Podcast, uh, with you as always, John Verry, and with me today, André Keartland. Hey, André.

André Keartland (00:12):

Hey, good to be here.

John Verry (00:14):

Uh, thank you for joining me. Uh, André is doing us all a favor. It is now Friday evening in South Africa, which is where he’s, uh, jumping on the phone with me. I can’t believe he has nothing better to do on a Friday night than spend time with me, but I’m appreciative he chose to do that. Um, André, I always like to start simple. Um, tell us a little bit about who you are and what is it that you do every day?

André Keartland (00:39):

Okay. So I’m a solutions architect in an organization known as Netsurit. We are a solutions provider. Uh, started in South Africa in the late 1990s, but, uh, expanded from there. I’ve done work all over the world. Um, been with the company since 2000. Um, we’ve got an operation out in, uh, New York City now as well. But, um, basically trying to make the planet safe. Um, my role is to design and build solutions for people. Uh, I’m not a CISO, but I spend a lot of my time talking to CISOs and, um, trying to, uh, build out security for our customers. Some of it is, uh, DevOps and DevSecOps, but, uh, I also get involved in all the aspects of security from networks to applications to identity, et cetera. And, um, I’m not the sort of architect that, uh, sits in an ivory tower and, uh, writes white papers and draws Visio diagrams. I do tend to get my hands dirty and, uh, try and get stuck into building solutions for our customers.

John Verry (01:51):

Excellent. Um, I, I always ask before we get down to business, um, what’s your drink of choice?

André Keartland (01:58):

Um, I’m very much a whiskey drinker, so, uh,

John Verry (02:01):

Ah, man, after my own heart. Look on the shelf behind me, you’ll see you and I share a passion. Yeah. Any particular, uh, so

André Keartland (02:13):

No, I, so,

John Verry (02:14):

So in South Africa, uh, any particular whiskeys that you get in South Africa that we might not see here in the US?

André Keartland (02:24):

No, we tend to, uh, get our whiskeys probably from the same places as you. So, uh, Scotland, very much. Um, I’m very adventurous. I love drinking from all over the world, so, uh, a lot of the American bourbons, uh, I love Japanese whiskey, but, um, mm-hmm. <affirmative> ultimately, um, my Scottish heart still, uh, reaches out, and, um, I’m especially fond of, uh, Speyside, distinctively enough.

John Verry (02:51):

Okay. So, so that’s where we differ. Yeah. That’s where we differ. I’m not a peated whiskey drinker. I’m not a scotch guy, I’m a rye and bourbon guy, or, or conventional American whiskey. Um, but, uh, the one I haven’t drank a lot of, um, you know, I’ve only drank a little bit of, but I have enjoyed, and I’m starting to get into, some of the Japanese whiskeys are quite good too. And, and of course the Irish whiskeys are wonderful.

André Keartland (03:15):

Mm-hmm. <affirmative>. Yeah. The, uh, Japanese whiskeys especially have got that very pure taste, you know, they, they

John Verry (03:22):

Very clean. Mm. Yeah. What is it? Uh, I think the one that I had recently was Suntory. Is that, is that one of their... Yeah, that was quite good, I thought. Yeah. Any other recommendations on a Japanese whiskey?

André Keartland (03:38):

[inaudible] Okay. Also, so, alright. There’s a lot of good

John Verry (03:43):

Ones. I will, uh, I will give that a try. Uh, and I’m gonna apologize to André a little bit, and apologize to everyone, because of the South Africa thing. The bits and bytes go slower between here and South Africa on Friday nights, so there’s a little bit of a delay. I don’t think you’ll hear the delay much in the recording, uh, cuz of the way Riverside.fm records. Um, but if we step on each other occasionally, uh, that’s why. Um, and André, I do apologize. I’ve stepped on you once or twice already, so I’ll try to take a deep breath in between things a little bit.

André Keartland (04:14):

Yeah, I’ll try and do the same.

John Verry (04:17):

Okay. Thank you. Um, so one of the side effects of the, the rise of the cloud, so to speak, uh, over the last decade has been the similar rise in, in DevOps and, and more recently the phrase that we hear is DevSecOps. So let’s start with some basic definitions, if you would, and what differentiates the two, right? So that we can frame this conversation.

André Keartland (04:42):

Absolutely. So DevOps, of course, has been around for a couple of decades now. Um, as the name implies, it’s the mix of dev and ops. So once upon a time you’d have a dev team and an application operations team that were totally separate, worked for different bosses, and hardly spoke. And your end result was your development people would build fantastic apps, and at some point they’d try and deploy them, and the ops people wouldn’t know how to deploy them. There’d be comms gaps, there’d be failures, there’d be delays, quality would go down. And then somebody had the brilliant idea to say, let’s merge the two together and have those two teams work in lockstep, be one team, and have integrated processes and integrated tools. That, of course, gave us the benefit that applications got out faster, the quality went up, um, and everybody was just generally happier. But the people that were still sitting on the sidelines were security.

(05:43):

So the typical behavior you’d get was an application would be entirely developed through its whole life cycle. And then right at the end someone would say, let’s check if this application is secure, let’s do some sort of, uh, security quality check. And typically what would happen is the security people would find thousands of bugs, log them all as issues, and the application team would then have to say, well, we are either going to delay ship, um, while we fix all these security issues, or we’ll ship the app and fix the security afterwards. And you can just guess how that ended. So that’s why you have so many insecure apps out there. So then somebody had a brilliant idea again and said, how about if we combine sec into the DevOps? And that’s where we got DevSecOps, that said, let’s integrate the security people and the security processes and security tools into that dev process, so that we try and ensure that we start looking at security earlier, and we look at security better and more intensively, and we ship code that isn’t just on time and on budget, but is also secure, or as secure as we can make it.

(06:59):

And that’s where DevSecOps came from.
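The shift André describes, from one big security review at the end to automated checks inside the delivery process, can be sketched as a simple build gate. Everything below (the finding structure, the severity threshold) is a hypothetical illustration, not any specific scanner’s API:

```python
# Sketch of a DevSecOps-style security gate: instead of triaging
# thousands of findings at the end, every build fails fast when new
# high-severity issues appear. The Finding shape is illustrative.
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str       # e.g. "sql-injection", "hardcoded-secret"
    severity: str   # "low" | "medium" | "high" | "critical"

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def security_gate(findings, fail_at="high"):
    """Return (passed, blocking), where blocking lists findings at or
    above the fail threshold. A CI job would exit non-zero on failure."""
    threshold = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings if SEVERITY_RANK[f.severity] >= threshold]
    return (len(blocking) == 0, blocking)

if __name__ == "__main__":
    scan = [
        Finding("hardcoded-secret", "critical"),
        Finding("verbose-logging", "low"),
    ]
    passed, blocking = security_gate(scan)
    print("build passed" if passed else f"build blocked: {len(blocking)} finding(s)")
```

The point is the placement, not the code: the same check that used to run once, at the end, now runs on every commit.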

John Verry (07:03):

So you touched a little bit, um, but I’ll ask you to speak a little bit more broadly about what the differences are between conventional development practices and DevSecOps. And of course that’ll tie into what are the, the benefits, if you will, of DevSecOps.

André Keartland (07:20):

So the differences are, to a large extent, that, um, in DevSecOps, um, you have to have a security mindset throughout the development process. Um, the term that’s often used here is, um, secure development life cycle, or secure software development life cycle, where, um, we are trying to say, in the entire life cycle of an application, from the point where we are initially envisaging it, through to the point where we start coding it, to the point where we deploy it, to the point where eventually the application’s being used and the application is being updated into newer versions, all along that way, we are having to try and evaluate what’s wrong with the app, what are the security risks that we are potentially facing, and let’s try and engineer those out, um, from the beginning, so that we eventually have an application that is more secure and at less risk of things going wrong.

(08:17):

So in terms of the practices that would be different in a DevSecOps operation, um, you would preferably have dedicated security people embedded into the development teams, if they exist. If your organization’s too small to have dedicated security people, at least make sure that all the people understand security, that they understand the risk. There’s a big chunk of education in this. Um, traditionally your app developers, your architects didn’t always have a security interest. That wasn’t part of their education. It wasn’t part of their experience. And they’d end up, uh, focusing on what they knew, which was let’s make the code as effective as possible, let’s get it out fast, let’s make the functionality as sexy as possible, let’s stay within our budget. And they weren’t necessarily paying much attention to security. In many development teams there was always an idea that security was somebody else’s problem. And, um, again, that would often lead to a situation where bad code would end up shipping, and you’d end up with, uh, bad security practices or, uh, applications that had vulnerabilities in them getting exploited. And then everybody would act all surprised, but they were dead men walking from the beginning, because they hadn’t thought about security from step one.

John Verry (09:48):

Yeah. And that was, in my opinion, a business challenge, in that the entity would, uh, bonus and incent these groups on getting the application out sooner, getting functionality embedded in the application. So they were naturally incented to dismiss security. Right. Security being, we’d often hear that term, a speed bump in the development process.

André Keartland (10:17):

Yeah. One of my favorite sayings is, um, measurement drives behavior. So you put in place an incentive program where you are, uh, measuring people on how fast can you get the application out, can you get it in on schedule, on budget, um, but you’re not measuring them on quality and you’re not measuring them on security, which is very often the case. Well, then people don’t pay that much attention to it, and the end result is you end up with, uh, as we said, applications where, uh, not enough attention was paid to those aspects of the development. So a big part of making DevSecOps work, of making secure development work, is, um, you’ve got to fix the mindset. You’ve gotta get a lot of buy-in from the stakeholders. So the people in business that are actually paying for that application, that, uh, are carrying that budget, they need to care about the security.

(11:16):

And often they don’t. Often your business stakeholder who’s asking for the app wants the app because they want the functionality, and they want their budget to be as, uh, well preserved as possible, um, because they don’t necessarily understand what is the impact of not doing the security and doing it well. So again, one of your key drivers for success, of getting DevSecOps, of getting secure development, um, into place, is you’ve got to, uh, do the education right up there in the executive suite. Um, convince them to understand what is the impact if the application ships insecurely and you potentially end up with something like your application gets exploited. And we all know businesses can go under if you get a bad exploit in the wrong place.

John Verry (12:08):

Yeah. I think we have the advantage these days in that there’s more awareness of the impact of not having a good security story to tell. So if you’re a software as a service entity and you’re not able to demonstrate to your key customers that your application is secure, then you’re not going to be able to sell your application. So I think there’s a little bit more natural incentive. And then with the rise of the breach, and some of these breaches being pretty significant financially, I think the average, you know, uh, business process owner, the average CEO of an organization, the, you know, the average application owner, product owner recognizes the potential impact. And then the last thing, I think, is that we’ve done a better job of educating people on what the true cost is of fixing a bug after an application’s been deployed versus fixing it, you know, during, let’s say, an agile sprint. And with that being so significantly different, you know, that ROI that you thought you were getting by getting the product to market faster disappears very quickly. Correct?

André Keartland (13:11):

Yeah. Yeah. So I, I agree with you, uh, but not totally. Yes, there’s more

John Verry (13:17):

Awareness. Well, you can be, you can be wrong, André <laugh>,

André Keartland (13:21):

<laugh>, but it’s not really,

John Verry (13:27):

I mean, this is the Virtual CISO Podcast. I’m the virtual CISO; my opinion, you know, is fact. I mean, you know, but I’ll allow you to, I’ll allow you to digress and, and try to prove that I’m wrong.

André Keartland (13:38):

<laugh>, let

(13:39):

Me demonstrate my, uh, world famous technique for changing the mind of

(13:43):

Skeptics <laugh>.

(13:46):

So you, you would think that all that awareness that’s out there is good. It’s the same way as people are generally aware of, um, what they need to do to lose weight, and yet they don’t. Um, the same thing happens in applications. We can see that by all these breaches, and you can see it by the number of times that you get major organizations, sometimes vendors of security software and products, getting breached. And, you know, I’m not gonna throw names around now; you can get onto search engines and you see the news announcements. So, um, there is definitely still a problem. It’s, it’s not been solved yet. Okay. Um, and I think part of the problem is sometimes people are aware that it’s important that you have security. They’re aware that you need to prevent that breach and that it’ll be serious if it happens.

(14:38):

But I think they also have bad ideas about what it’s going to take to actually prevent that event from happening. Um, you have people that have outmoded ideas. We still, in the year 2023, have people walking around thinking, all I need is really, really, really good perimeter security. Okay, a firewall is going to save me, as an example. Um, they might be very focused on the infrastructure security, but then they’re sitting with the apps behind that infrastructure, or in that infrastructure, that are inherently insecure, have big holes in them. And that’s where, um, again, bad things happen. That’s where you sometimes leave holes that you can drive a steam train through.

John Verry (15:23):

Yeah. I think, I think we’re actually in agreement with each other. I was saying that things are getting better. I wasn’t saying that things have been solved. Um, so you know, every organization has a development life cycle, and whether or not they think of it as DevSecOps or not, security is always a consideration, right? All applications have a, a user login and password, you know, which is a security control. So if somebody is moving into a more formal DevSecOps, uh, implementation, do you think that’s easier, uh, if they’re doing it in an existing product? Or do you think it’s easier if they’re starting with a new one?

André Keartland (16:11):

In general, I find it is easier when you can start on a blank sheet of paper, because it’s a blank sheet of paper. And if you are savvy enough and you’ve got the, um, the drive and the budget, you can build in good security from the word go. However, um, the counterpoint to that is often, um, if it’s a startup, if it’s somebody that’s, uh, trying something out, they’re building a little prototype, they’re not necessarily ready to take this thing into production, then sometimes they don’t put in the effort to put in security sufficiently from the word go. And the classic scenario: that little, uh, prototype ends up becoming the core production system that is still going to be running 20 years from now. You know, I wish I had 10 bucks for every one of those I’ve seen in production.

John Verry (17:03):

Could not agree with you more. I always say that’s the biggest danger. Proof of concepts are the absolute biggest danger an organization can have. Cuz you’re exactly right, if the proof of concept works, you know, that’s what gets pushed forward. And, and you have this proof of concept that was built to be sort of an MVP with no security in it, and suddenly it’s now a production application. So I, I could not agree with you more.

André Keartland (17:24):

And, and so often what I’ve seen is, um, even if they say, okay, no, we’ve done our proof of concept, now we are going to redevelop, what you find is copy-and-paste programming. So the developers were literally copying and pasting chunks of their POC code into the production version, and that’s not necessarily getting picked up. And, uh, the little shortcuts that they took are, um, still there a decade later. Again, a saying that, uh, I sometimes throw around is, um, the road to hell is, uh, built one shortcut at a time.

John Verry (18:03):

No question about it. So, so you bring up an interesting point, and, and that is, when you look at a robust DevSecOps program, there’s a lot of moving pieces. And I think that is part of the intimidation for people moving forward. You know, so I’ll ask you the question: you know, what is the easiest way to start? Is it, oh, we need to figure out what automation server we’re using, or, uh, do we need to start integrating security stories into our agile sprints, or code scanning? What do you think is, like, you know, if you were counseling somebody to move towards a more formal DevSecOps program and they didn’t know where to start, where would you suggest they start?

André Keartland (18:43):

Generally I say, in every piece of advice I give in anything, um, start with the basics. Always go from the simplest to the most complex. Okay. So to get going, sort out some of your security basics in your environment. So even if you’re not formally doing DevSecOps as a formal program or a standard within your organization, start looking at things like, um, protecting your devs, protecting your dev environments. The developers are often the Achilles heel of a lot of, uh, development teams, development programs, because, um, a lot of your devs haven’t got a security mindset. They’re often the total opposite of how you want your users to behave. They’re the sort of people that tend to have admin rights on their machines, and they’re installing all sorts of pieces of software that they found off the internet. They’re trying things out and they’re copying, pasting little chunks of code, and, um, they can sometimes, uh, indulge in incredibly risky behavior.

(19:43):

So just start by sorting out things like making sure that you’ve locked down those developer accounts. Make sure that your developer has a user account and a dev account, so that, uh, if the user account gets compromised while they’re clicking on the wrong link in a web browser, that, uh, that account doesn’t necessarily have rights to come and do things inside of your dev environment. Give them developer machines where they’re doing their prod development that are perhaps different machines from the ones that they’re using for their playing around. Um, this is where things like VDI can be extremely useful. So go give them virtual machines where they do the formal stuff and other virtual machines where they can play around, and those are not necessarily the same things. And you are trying to do things to try and protect them. You’re monitoring, you’re running XDR, all the usual security controls you would’ve done for users.

(20:37):

But in this case, for users that are playing around with incredibly, potentially dangerous things. Um, then your test environments: making sure that you’ve got somewhere where the devs can actually go and test what they’re busy developing, that that is preferably a secure environment, and that it is properly isolated again from your production environment. I dunno how many times I have seen, I’m sure you have seen, where, um, security risks came into an environment not because production got compromised, but because you had a dev or test environment where there were lower security standards being followed, because it’s a playground, and bad guys came in and managed to use that as a springboard to attack the rest of the environment. Also, where you ended up with official company data or organization data getting copied into test and dev systems, which wasn’t allowed, but the devs did it because they wanted to test with real data. And then that gets compromised, and suddenly you’ve got your confidential information out there on the dark web. Not because it came through prod, but because it came through dev. Okay. So that’s always a good place to start.
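One concrete control implied here: if production data must reach a test environment at all, mask the sensitive fields on the way in. A minimal sketch, where the field names and the masking scheme are hypothetical illustrations:

```python
# Deterministic data masking for test/dev copies: the same input always
# maps to the same token, so joins across test tables still work, but
# the raw value never leaves prod. Field names are invented examples.
import hashlib

SENSITIVE_FIELDS = {"email", "phone", "national_id"}

def mask_value(value: str) -> str:
    # Pseudonymize via a truncated hash; not reversible from the token.
    return "masked-" + hashlib.sha256(value.encode()).hexdigest()[:12]

def mask_record(record: dict) -> dict:
    return {k: mask_value(v) if k in SENSITIVE_FIELDS else v
            for k, v in record.items()}

if __name__ == "__main__":
    prod_row = {"name": "A. Customer", "email": "a@example.com", "plan": "gold"}
    print(mask_record(prod_row))
```

A real implementation would sit in the pipeline that refreshes test databases, so unmasked data never lands in the lower-security environment in the first place.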

John Verry (21:52):

Gotcha. So, so, you know, uh, again, following that basics approach, uh, I’m assuming you’d be a strong advocate of training of, of the folks on your team. And I’m assuming, these all being controls or mechanisms that reduce risk, that you probably would also be an advocate of, you know, threat modeling, risk assessment, you know, in order to drive, uh, those security stories.

André Keartland (22:16):

Absolutely. So at round about this time, um, if you’re starting to get serious about improving the security in your development process, that is obviously where you’ve gotta make sure that you’ve got the adequate buy-in. We were talking about that a bit earlier on, but you get the execs on board. You need to get the people who, um, ultimately are going to be paying for all of this to understand that this is important. They’ve gotta give budget to it. You’ve gotta get education and training in place. And the devs and the, uh, people who are driving the dev program, so things like program managers, project managers, they need to get training. And as part of their training, a big chunk of that needs to be on understanding the risk. So, um, again, a lot of people don’t understand the risks inherent in application development.

(23:11):

A lot of the education, the communication about security risks, focuses on end user scenarios. So a lot of the time, we’ve all now probably been forced numerous times to sit through some training that explains to us that we should not trust emails and we should look out for phishing, et cetera. You know, don’t install software off a, uh, USB stick that you picked up in a parking lot. But, um, there’s things that developers can do that can be very, very dangerous. So for instance, if you are a developer and you’re going to be using code off a public repo, you’re going to be using some library that, uh, got recommended to you by a buddy, or that you found in a search engine, what’s in there? Is that secure? Is there possibly, uh, malware or back doors or something similar that are enabled through that?

(24:07):

By you reusing those inside of your application, you’re actually just punching a hole in the security of that entire application. Okay. Um, so this is where it starts becoming important to, uh, you used the word threat modeling. So at this point you need to also start saying, what is it that I’m trying to protect myself against? What are the risks? What are the things that could be going wrong? There are formal threat modeling processes, there’s a lot of recommendations, there’s tools that you can follow, there’s frameworks. Um, but, um, this is also where: get advice, get people who understand security and understand risk to start talking to your dev teams, um, and start, uh, prioritizing and identifying what are the realistic things that could possibly go wrong. And this is where diversity of opinion is probably your best friend. So don’t ask the developers what they think are the risks.

(25:09):

Don’t ask the project stakeholder. Don’t necessarily even ask your InfoSec guy, you know. Start talking to black hats, talk to pen testers, talk to other organizations that run similar types of software, and start, um, surfing the web. Do the research, get onto the security sites, and start understanding what are application-specific risks, what are specific risks for the type of application that you are busy writing. So if you’re doing software as a service, if you’re doing IoT, for all of those there are particular risks and categories of risks. You need to start doing some research, and you can’t expect everybody in the organization to do that research on their own. You possibly also need to do that for your devs and for your dev organization, and then spread that as education.
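Once those diverse opinions are gathered, the prioritization step André mentions is often a simple likelihood-times-impact ranking. A minimal sketch, where the threats and scores are invented for illustration (real exercises lean on frameworks such as STRIDE plus the pen-tester input above):

```python
# Rank candidate threats by likelihood x impact so the team reviews the
# worst first. Inputs are (name, likelihood 1-5, impact 1-5) tuples.

def rank_threats(threats):
    """Return (name, risk score) pairs sorted highest-risk first."""
    return sorted(
        ((name, likelihood * impact) for name, likelihood, impact in threats),
        key=lambda t: t[1], reverse=True)

if __name__ == "__main__":
    model = [
        ("Tampered third-party library", 4, 5),
        ("Leaked API key in repo", 3, 4),
        ("Verbose error pages", 4, 2),
    ]
    for name, score in rank_threats(model):
        print(f"{score:>2}  {name}")
```

The value is less in the arithmetic than in forcing the group to argue about the numbers, which surfaces the disagreements worth exploring.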

John Verry (26:04):

Yeah. The one risk that you pointed out there, the use of third-party libraries, is increasingly a significant issue. You know, here in the US you might be familiar with, um, the NIST guidance NIST SP 800-218, which is part of what the US government refers to as the Secure Software Development Framework. Um, so yeah, we’re seeing a lot more organizations, uh, in their DevSecOps process, integrating software composition analysis tools, you know, things like Black Duck from Synopsys, uh, in order to account for that. So that way any third-party library that gets added by a dev goes through a vulnerability scan, and we’re sure that, you know, we’re not increasing the vulnerability of our application by leveraging a third-party library. The other thing that I think is important about that is that increasingly, as the SSDF becomes more of a requirement, we increasingly see customers requiring a digital software bill of materials that they can use.

(27:05):

So that way, let’s say when the next Heartbleed comes out, you know, they’re not in a mad scramble to say, we have 200 applications across our organization, how do we know which ones might be vulnerable? You know, they can, you know, what is that called? An SPDX or CycloneDX format, you know, for a software bill of materials, where they can actually literally do like a query, and they’ll be able to determine if and which applications are affected. Yeah. Which is, which is really important. So, um, I’m in agreement with you that, you know, the education, uh, component of this, this training component, is critical. Cuz even if you went to, let’s say, leveraging threat modeling, if people aren’t educated on how to go about it and don’t have a framework to rely on, they’re in a bad spot. So talk a little bit about, you know, beyond, you know, we’re big fans of OWASP, so we’re gonna point people to a lot of the great OWASP guidance. Mm-hmm. <affirmative>, uh, what type of guidance are you pointing people to, uh, with regards to getting them up to speed on DevSecOps processes?
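That SBOM query can be sketched in a few lines. The JSON shape below is a simplified stand-in, loosely inspired by SBOM formats like CycloneDX, not a real schema:

```python
# Sketch of the "next Heartbleed" query: search each application's bill
# of materials for an affected component instead of auditing every app
# by hand. Component/version data here is invented for illustration.

def affected_apps(sboms, component, bad_versions):
    """sboms: {app_name: sbom_dict}. Returns apps shipping a bad version."""
    hits = []
    for app, sbom in sboms.items():
        for comp in sbom.get("components", []):
            if comp["name"] == component and comp["version"] in bad_versions:
                hits.append(app)
    return hits

if __name__ == "__main__":
    inventory = {
        "billing-api": {"components": [{"name": "openssl", "version": "1.0.1f"}]},
        "web-portal": {"components": [{"name": "openssl", "version": "3.0.8"}]},
    }
    print(affected_apps(inventory, "openssl", {"1.0.1f", "1.0.1g"}))
```

In practice the SBOMs would be generated by tooling at build time and stored centrally, so the query runs across the whole portfolio at once.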

André Keartland (28:05):

Yeah, uh, definitely OWASP is one of my favorites. And, um, okay, so, uh, if anybody’s not familiar with it, um, so: Open Web Application Security Project. Um, as the name implies, it’s open source, it’s, uh, not specific to any vendor. Um, and, uh, they do have a ton of guidance, um, through their sites and through their programs that, um, focus on, um, general web application, uh, security risks. They’ve got the Top 10, which is of course very well known to anybody operating in, uh, InfoSec, so the top 10, uh, web application risks that you need to look for. Um, so, uh, they’re a good place to start. Um, and, uh, yeah, I’d really recommend: get onto the web and start, uh, searching for application security and doing your research. And you’re going to find, um, a lot of guidance out there, but OWASP is a good place to start.

John Verry (29:11):

Yeah. Uh, yeah, we particularly like the Application Security Verification Standard. Uh, I think the OWASP cheat sheets are quite good. Uh, increasingly, when we’re working with folks on assessing their maturity and charting a path to improvement, we’re using the Software Assurance Maturity Model, SAMM, uh, which is similar to BSIMM, if you’re familiar with that. Yeah. Some people use BSIMM. You know, I like SAMM because it has a maturity model baked in, so it kind of inherently has a score and an ability to set a target and then measure progress towards that target. Uh, but BSIMM and SAMM at the beginning were the same effort, and then they forked at some point. Uh, so they’re both really good. Um, absolutely. Yeah, I would agree with that.

André Keartland (29:53):

Yeah, that idea of a maturity model: any security that you’re going to try and improve, um, whether you’re trying to improve your application development security or your network security, um, it’s really important, that classic maturity model. You start off by saying, well, where am I? What do I have? What is the state of my security? Set a target: where do I want to be? And then build a program to get from where you are to where you want to be, and measure along the way to measure your progress. Because, you know, it’s, uh, like the saying, Rome wasn’t built in a day. You’re never going to build a security culture in an organization in a year. So you’re gonna have to start breaking it down and saying, what do I do this quarter, next quarter, the quarter after that? What do I do next year? What do I do the year after that? And you follow that program, and you need to measure your way as you go along. And something like SAMM is one of the models that’s useful for that.
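The current-state/target/gap loop described here reduces to a small calculation. The practice names and levels below are invented for illustration; SAMM defines its own practices and maturity levels:

```python
# Sketch of maturity-model tracking: record current and target levels
# per practice, then report the remaining gaps, largest first, so each
# quarter's plan tackles the biggest shortfalls.

def maturity_gaps(current, target):
    """current/target: {practice: level 0-3}. Returns practices still
    below target as (practice, gap) pairs, largest gap first."""
    gaps = {p: target[p] - current.get(p, 0) for p in target}
    return sorted(((p, g) for p, g in gaps.items() if g > 0),
                  key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    now = {"threat modeling": 0, "security testing": 1, "training": 1}
    goal = {"threat modeling": 2, "security testing": 2, "training": 2}
    for practice, gap in maturity_gaps(now, goal):
        print(f"{practice}: {gap} level(s) to go")
```

Re-running the same comparison each quarter gives the progress measurement André describes, without any extra tooling.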

John Verry (30:52):

Yeah. So you talked about a lot of different components of, of DevSecOps. You know, we talked about the threat modeling and risk assessment. We talked about code scanning. Uh, we talked about the importance of education. Um, we didn’t touch on, but we could touch on, the importance of policy and standards. Right. Um, so I’ll ask you a question. You know, do you find that there is an ideal sequence? So putting this all together, like one roadmap, if you would, that you could lay out for somebody and say, here’s the path? Or does that change based on an organization’s unique context? Like, are there things that would influence that? Like budget, the languages that are being used from a development perspective, uh, you know, what their staffing levels are, what their experience level is, uh, if they’re doing infrastructure as code. You know, is there a standardized model, or do you have to look at each organization as you’re working with them in their unique context?

André Keartland (31:52):

Well, I’m essentially a consultant, so the answer I’ll give is always going to be, it depends.

John Verry (31:58):

<laugh>, it’s my favorite answer. So

André Keartland (32:03):

The thing I'll say is, how much money have you got? But <laugh> no, the, um, it is, it depends, because every organization is different and there are so many variables. So you mentioned a few of them. A, a couple of very fundamental ones in a DevSecOps type context, um, touching on some of the things we've discussed already up to now: are you developing your own software, or are you buying software that was built by someone else, but you have to deploy it? Um, are you taking software written by someone else and customizing it? Um, if you are developing, are you developing something new, where you started on a blank sheet of paper and you're now building from scratch? Or the other scenario we haven't really explored yet. You asked me earlier on which one's easier between blank sheet of paper and existing app.

(32:59):

Existing app is extremely difficult, because now you are dealing with the legacy that you inherited from whenever that application was developed, the decisions that were made when that application was developed. That application was possibly developed in an earlier, more innocent age when security wasn't something that people really cared about. You might be sitting with an app that was originally written in COBOL to run on mainframes, and now you're trying to put it into the cloud, and you've still got some of that application that's traveled along, and you've got security risks inside of it. So that will affect, um, your decision-making about how you're going to tackle this. And then, what technology are we using? Are we running it on-prem? Are we running it in the cloud? Are we tied to particular vendors, or are we, uh, trying to go for a best-of-breed strategy where we are trying to find the best tools from everybody?

(33:54):

Okay. So yeah, it is, it is very, uh, it depends, in broad strokes. Um, I find after those initial steps that I spoke about, so things like the education, the securing your dev environments, and, and identifying your threats and identifying your risks, another good way to start your security is to ensure that, um, you include your security as early in the dev process as possible. Essentially, um, the best place is right in the beginning, when you are still designing your architecture and you are still planning what is going to be going into this app. This is where the term shift left is often thrown around. Um, when it comes to, uh, DevSecOps and application security, what it's referring to is a typical timeline: a Gantt chart typically goes left to right, earliest on the left, most recent on the right.

(35:02):

And um, as I said earlier on, the traditional approach with application security was, we do it on the far right-hand side. Last thing we do before we ship: we take a look at it and we say, is it secure? Do we have to do anything? So shift left says, go down that process, move to the left, more to the start of the application, and start doing security things earlier in the process. So look at securing your app when it's being deployed. Before that, look at the app security as it's being tested. Before that, look at it as it's being developed. Before that, as the application's getting architected, and we are making the big decisions about what technologies are we going to use, what platforms is this going to support, what, uh, frameworks are we going to use, what programming languages are we going to use.
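[One way to picture the shift-left idea André walks through here: model the delivery timeline as ordered stages, each with its own security checks, and fail at the earliest stage possible rather than at a single pre-release review. The stage and check names below are illustrative assumptions, not any particular tool's vocabulary.]

```python
# Illustrative "shift left" pipeline: security gates at every stage, earliest
# first, so problems surface when they are still cheap to fix.

PIPELINE = [
    ("design",  ["threat_model_review"]),
    ("develop", ["secret_scan", "static_analysis"]),
    ("test",    ["dependency_audit", "dynamic_scan"]),
    ("deploy",  ["config_review"]),
]

def run_pipeline(results: dict) -> str:
    """Stop at the first stage whose checks don't all pass."""
    for stage, checks in PIPELINE:
        failed = [c for c in checks if not results.get(c, False)]
        if failed:
            return f"blocked at {stage}: {', '.join(failed)}"
    return "released"
```

[The traditional approach André contrasts this with would be the same list collapsed into a single gate at the `deploy` end.]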

(35:56):

Then at that stage, you need to have somebody wearing a security hat, or preferably everybody in the room wearing a security hat, and saying, how do we make sure that this app is adequately secure? Because the time you're going to spend sorting out your security architecture at the beginning, it's gonna cost you, it's gonna be extra effort, but it's going to save you a lot of time and money and effort later on in the process. Okay. Because the classic scenario is, you initially make decisions about how you are going to be constructing this app, what frameworks you're going to use. Um, is it going to be a cloud app? Is it an on-prem app? Uh, how am I going to connect to it, um, in terms of networking? How will identity for my users be provided? A lot of those things have strong security impact. And if you make bad choices, if at that stage you're saying, you know what, let's do what is easy to do, but not necessarily what is most secure to do, then a year down the road you finish writing your app, you're now trying to get it into prod, and suddenly you realize, I've got a problem.

(37:09):

I made architectural decisions that are now extremely difficult to change without going back and rewriting big chunks of my application. So at that stage, it would've been really good to have made those decisions upfront. You have that choice if it's a blank-sheet-of-paper, new application. More difficult if it's an existing app. But even with an existing app, if you're saying, we need to do updates, we need to do revisions, we are writing a new version, you need to make sure that at the point where you start envisioning what you are changing in this app, you're already sitting down and saying, what do we need to do security-wise at that stage?

John Verry (37:51):

Yeah. And as you mentioned earlier, uh, you talked about IoT, and all of these decisions become that much more important if, if this software is going to run on IoT devices, and you've gotta deal with the, you know, asynchronous and limited ability to update these devices. You know, it's a lot different than updating a server or an application that's sitting in, uh, AWS, where you can just log in and touch it, right?

André Keartland (38:17):

Absolutely. And that's why I said it's important to understand your risk and understand, what environment am I going to be running in, and what does that mean in terms of how I can secure it? What tools will be available to me? You know, if you are running an IoT device, you can't exactly say that you're going to be running a full intrusion detection stack on top of that device, as an example. Um, that device might be running in an

John Verry (38:43):

Environment, right? It’s a lightweight sensor. Yeah,

André Keartland (38:45):

Yeah, yeah. Absolutely. Now, there's something that I've seen, um, and this is of course a problem of IoT: IoT often goes into industrial environments, and those industrial environments, they don't want you touching the tech at all, because if you screw up something in the software, their factory stops or their mine stops. So, um, they are very, very, uh, conservative about letting you touch that stuff again. So if you made bad choices, then you might have known security vulnerabilities, and you don't really have the ability to go and change it, because now you've gotta go and convince the factory owner or the mine boss to say, let me touch that device, let me make changes, because I think there might be a security risk. So again, yeah,

John Verry (39:33):

The you need

André Keartland (39:34):

To do that, you need to have made provision for that upfront.

John Verry (39:39):

Yeah. That, that's actually interesting, right? What the app is going to be doing: an app that's sitting on a phone has a unique set of considerations. IoT devices have a unique set of considerations. You know, a SaaS application or your platform as a service has unique, uh, implications. And even within the IoT world, right, the OT of IoT, right, the operational technology, like you pointed to, that has another whole unique set of challenges. You know, so often those are IP-isolated networks, where, you know, you might put something out and say, oh, well, we'll patch it later on, but that device might not see the internet for 10 years.

André Keartland (40:18):

Yep.

John Verry (40:19):

<laugh>. Which, which is, you know, and you’ve gotta account for that in your development process. Yeah. Really cool stuff.

André Keartland (40:28):

Yeah. Related as well: fairly upfront, you need to know if there are any standards or frameworks that you're going to need to comply with. So if this is going to be going into an ISO 27001 environment, if you're going to have to comply with certain NIST standards or, um, you know, regulatory frameworks, you know, things like HIPAA, GDPR, et cetera. Um, again, if you didn't build the application with that in mind, you could find yourself having to retrofit something that is very, very difficult into an application that was never written for it. And again, it's really important that you understand that as early as possible.

John Verry (41:09):

Yeah. We're dealing with a lot of vendors on the IoT side that, you know, did not account for California SB-327, you know, some of the NIST guidance, some of the ENISA guidance on IoT devices. And, and now they've got, you know, install bases of tens of thousands of these devices that are not in compliance. And because of the challenges of, you know, inconsistent, uh, connection to the internet, they really have no practical way to actually update the devices to make them, again, compliant. So they've got this giant risk of 10,000 devices that the California Attorney General could come barking at them about, and they really have no, no remedy to fix it.

André Keartland (41:43):

Yeah. And suddenly you've got a massive fine, uh, well, the customer's got a big fine. And, and that of course, you, you were speaking about this earlier on, but a lot of the time the security standards that you are trying to meet, or the security goals, are not necessarily your organization's own. Especially if you are writing software for other people, you have to be ready for that. Um, you know, you were talking about having to give them a software bill of materials. Um, a lot of the time we are now also helping, uh, ISVs and software developers to, um, security-check their applications so that, um, they can give the, uh, person that's going to be buying the app and using the app a guarantee that this app is not going to be introducing a security risk into their environment. Um, a lot of our customers are big banks, and, um, those people have become very, very, very gun-shy, because, um, they've realized a lot of the time when they lose data, when, you know, the thousands of credit card numbers get published out there on the dark web, it's often not their systems. It's software that was being run for them by some external vendor or service provider.

(43:00):

And those people didn’t have the same security standards and they got compromised.

John Verry (43:05):

Yeah. We, we counsel, like, you know, if we're doing third-party risk management vendor due diligence on behalf of a client, and they have a SaaS app that has sensitive data, you know, you start to look at the stack of things that we're now asking for. You know, we're gonna expect ISO 27001 or SOC 2 as an overarching framework, to know that good practices are being followed. Uh, on the application side, we're gonna look for a digital software bill of materials. We're gonna look for an OWASP ASVS Level 2 assessment. Uh, we're gonna look for a Software Assurance Maturity Model assessment, cuz we wanna make sure that the development processes that create the secure apps are, are in existence, right? Not just that one instance of the application passed that particular test. So if you're listening and you're producing a software application, that's really what your future looks like, right? It's the Secure Software Development Framework, SAMM, ASVS Level 2, uh, ISO or SOC 2, HITRUST if you're in the, in the healthcare space, potentially.
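[For listeners who haven't seen one, a software bill of materials is essentially a machine-readable inventory of the components inside an app. The sketch below is a deliberately minimal illustration, loosely shaped like CycloneDX JSON; real SBOMs come from dedicated tooling and carry far more detail (hashes, licenses, suppliers), and the dependency dict here is a hypothetical input.]

```python
# Illustrative sketch: turn a {name: version} dependency map into a minimal
# SBOM-shaped JSON document.

import json

def minimal_sbom(dependencies: dict) -> str:
    doc = {
        "bomFormat": "CycloneDX",
        "specVersion": "1.4",
        "components": [
            {"type": "library", "name": name, "version": version}
            for name, version in sorted(dependencies.items())
        ],
    }
    return json.dumps(doc, indent=2)
```

[The point of handing this to a buyer is that their tooling can diff it against vulnerability feeds without ever seeing your source code.]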

André Keartland (44:01):

Yeah. And that's why, with a lot of these things we're busy talking about over here, there might be CISOs in your audience that are listening to this and going, yeah, it sounds interesting, but, um, it sounds expensive and it sounds like hard work, and, uh, I'm gonna give it a pause. But you're not necessarily gonna get that choice, you know, at some point,

John Verry (44:21):

Listen, in fairness, it is hard work and it is a lot of money. <laugh>, <laugh>, I can't blame them for wanting to dodge it <laugh>, but they're not gonna be around for long if they do, right?

André Keartland (44:32):

Absolutely. Absolutely. And, and yeah, cyber insurance is driving, uh, a lot of, uh, the behavior now. People, um, mm-hmm. <affirmative>, uh, are having to get, uh, cyber risk insurance, and, um, the insurers are now saying, show me the controls that you have in place, and show me what you're doing to protect your infrastructure, but also to protect your apps. And again, all of those proof points that you were talking about are having to be shown to those external stakeholders. And yeah, they're driving a lot

John Verry (45:02):

Of this, the underwriting processes.

André Keartland (45:04):

Yeah. And that, that means that, um, a lot of these things we are talking about are suddenly becoming a priority for people for whom this was not a priority before.

John Verry (45:15):

Couldn’t agree with you more. Um, we beat this up pretty good. Uh, is there anything we missed?

André Keartland (45:22):

Um, I think the only thing to point out, um, is that all solutions are always a combination of technology, people, and process. Um, I think we spoke quite a bit, in fact, this time about people and process. Technology-wise, you're gonna have to make sure that you've got, um, a decent DevOps stack in place, where you're doing secure repos, um, so Gits, GitHubs, um, and that you do the necessary steps in there to control who can get to your code, who can make changes to your code. Um, and that you have the release processes, that you've got the CI/CD pipelines, et cetera, that you also incorporate security into that: control who is allowed to make changes to that, control credentials that are being used inside of there. It's one of the most common threats, or, uh, um, problems, that we discover in terms of people's code.

(46:23):

You start scanning and you're finding usernames and passwords and certificates and all sorts of other secrets scattered throughout their code. Um, and anybody who's got source code access can compromise those accounts and do bad things. So put in place a strong DevOps with a security flavor, and there's a lot of good security tools that you can incorporate into that to scan your code and look for those vulnerabilities, look for those, uh, risky, um, add-ins and libraries, et cetera, and eventually also give you those certification reports that you can use to prove to your stakeholders. So take care of those as well. But technology, people, process: do it all.
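[The kind of scan André describes, flagging hard-coded credentials in source, can be sketched in a few lines. The patterns below are illustrative assumptions; real secret scanners use far richer rule sets, entropy analysis, and provider-specific token formats, so treat this only as a picture of the idea.]

```python
# Rough sketch of a secret scanner: flag lines that look like embedded
# passwords, API keys, or private key material.

import re

SECRET_PATTERNS = [
    re.compile(r"password\s*=\s*['\"].+['\"]", re.IGNORECASE),
    re.compile(r"api[_-]?key\s*=\s*['\"].+['\"]", re.IGNORECASE),
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
]

def scan_source(text: str) -> list:
    """Return (line_number, line) pairs that look like embedded secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append((lineno, line.strip()))
    return findings
```

[Wired into a CI/CD pipeline as a pre-merge gate, a check like this is one concrete way "anybody with source access" stops being able to harvest credentials from the repo.]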

John Verry (47:07):

And it is as difficult as it sounds, <laugh>, but it is worth it. Uh, it, it, it’s a necessity, I would say.

André Keartland (47:14):

Yeah. We don’t have

John Verry (47:15):

A choice. Exactly. So give me an amazing or horrible CISO, or, if you prefer, DevSecOps engineer. Uh, you know, gimme a fictional character or a real-world person you think would make an amazing or horrible virtual CSA, excuse me, CISO, or DevSecOps engineer, and why?

André Keartland (47:34):

I'll, I'll give you, uh, uh, uh, a fictional, really bad CISO: Darth Vader.

(47:47):

I, I think he would suck as a CISO, because a good CISO is a good leader and is assembling their rebel alliance to, to defeat the forces of, uh, the dark side. Um, Darth Vader is not a good leader. He bullies the people who work for him. He's been known to literally choke an underling for disagreeing with him. Um, he regards his, um, security analysts as an army of faceless clones. And, um, he makes his decisions based on a mysterious magical force that only he can see and that he never has to defend to anybody else, which is what the worst CISOs in the world do. And then the infrastructure that he had to look after, the Death Star, um, relied on perimeter security only. If you got into, uh, the trench, you could bypass all the controls, and he had an unpatched vulnerability in the exhaust port. So no, don't hire him.

John Verry (48:53):

So I think we're at episode 115, and that has to be the most well-thought-out answer to that question. <laugh>. I mean, you had bullet points <laugh>. It was like, it was like a thesis statement. I mean, like, yeah, you deserve a PhD in, in bad CISO fictional characters. Um, thank you. <laugh>. Yeah, yeah, exactly. Plus his, his breathing was annoying. We, we have to all agree on that. Um, so, uh, if, if somebody wanted to contact you, what's the easiest way for them to do that?

André Keartland (49:26):

Find me on my website. So, um, the company is Netsurit, n-e-t-s-u-r-i-t.com. And, uh,

John Verry (49:34):

Excellent. This has been, uh, this has been fun, sir. And, and informative. Thank you, I really appreciate you coming on today. And, uh, now you've earned that whiskey. So I take it we're gonna put down the, uh, the microphone and head off and grab a glass of whiskey.

André Keartland (49:50):

It is now

John Verry (49:51):

Whiskey. Uh, what’s, what, what, what, what’s it gonna be? What’s, what’s tonight’s whiskey?

André Keartland (49:56):

Um, an easy-drinking whiskey is, uh, something called Monkey Shoulder. I'm going to hit some of that.

John Verry (50:02):

Oh, Monkey Shoulder. I, I've had Monkey Shoulder. Yeah. Uh, so, I don't know if you get it there, but one of the ones, like, I drink a ton of different ones, but one of my favorites that most people haven't had, uh, is Widow Jane. If you ever see a bottle of Widow Jane at a reasonable price, you know, that typical, now unfortunately, all good whiskeys are 60, 70 bucks a bottle, but you can pick, pick up Widow Jane in that price point. Um, it's a, it's a guy out of Brooklyn, uh, New York, uh, that, that makes just a wonderful little whiskey. One of my favorite, you know, off-name ones, you know, I mean, like, sort of like a, you know, Rowan's Creek or Noah's Mill, like in that kind of class, where most people haven't had it, but they're damn good whiskeys. So yeah,

André Keartland (50:47):

Little small batch, small batch has become very in, in fashion.

John Verry (50:51):

So, oh, I love small batch. I'm a, I'm, you know, I drink a lot of small batch. You know, the, the, the other one, like, you know, uh, you know, another favorite little one, uh, is, is one called Wyoming Whiskey. I had a chance to ski out west, uh, last year at Jackson Hole and stumbled into the Cowboy Bar there, which is the famous place. I'd never seen a Wyoming whiskey before. Kind of fell in love with that as a very simple, easy-drinking little whiskey. Um, so anyway, there's so many great ones, you know, it's like, it's like micro beers. I'm a beer drinker as well. And, like, you could drink a great micro beer every night for the rest of your life and not get through all of the great beers that are out there these days. Absolutely.

André Keartland (51:29):

Absolutely.

John Verry (51:30):

So, all right, sir, this has been fun. Thank you.

André Keartland (51:34):

Thank you, <inaudible>.