The Security Champions Podcast
Automation, Generative AI, Shift Left - the world of application security is evolving fast, and so are the conversations that shape it.
Welcome to The Security Champions Podcast, the go-to resource for insights from the front lines of application security. The podcast is co-hosted by Michael Burch, Director of Application Security for Security Journey, and Dustin Lehr, the Director of AppSec Advocacy. Each month, one of them shares a candid conversation with security leaders, engineering voices, and software experts.
From championing secure development practices to navigating real-world challenges in modern SDLCs, this show explores how teams are scaling AppSec, strategy, and culture.
New Episodes drop monthly, with even more security content at https://www.securityjourney.com/
Always remember: Security is a Journey, not a Destination.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This podcast is sponsored by Security Journey.
FOLLOW US to stay up-to-date with new content!
X (https://x.com/SecurityJourney)
LinkedIn (https://www.linkedin.com/company/7574213)
Instagram (https://www.instagram.com/securityjourney/?hl=en)
YouTube (https://www.youtube.com/channel/UCBVPnBCNcZqx_WAuCsV6BuA)
Online (securityjourney.com)
CONTACT: hello@securityjourney.com
Roger Grimes - AI and the Future of Cybersecurity
Roger A. Grimes, CISO Advisor for KnowBe4, Inc., is the author of 16 books and more than 1,600 articles, with deep expertise in host security and defending against hacker and malware attacks. A frequent speaker at major cybersecurity conferences, Roger is known for his fast-paced, insight-driven presentations packed with practical recommendations.
In this episode of The Security Champions Podcast, Roger joins the conversation to explore the impact of AI on cybersecurity, software development, and industry practices. He shares insights on the opportunities and challenges of AI integration, highlights emerging trends, and emphasizes the importance of responsible AI use alongside strong foundational security principles.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Podcast sponsored by Security Journey, Secure Coding Training for Developers and Everyone in the SDLC. Learn more at securityjourney.com.
The Security Champions Podcast is brought to you by Security Journey. We help enterprises reduce vulnerabilities through application security education for developers and everyone in the SDLC. Learn more at SecurityJourney.com.
Welcome to The Security Champions Podcast
SPEAKER_02: My name is Michael Burch. I'm your host, and today I'm joined by Roger Grimes. Really cool part here: not a first-time guest. So welcome back to the Security Champions Podcast.
SPEAKER_01: I'm very glad to be back here. Big fan, big fan of your company, big fan of the podcast.
AI Is Another Form of Software
SPEAKER_02: Yeah, we're gonna do something a little different here, because usually we start this conversation by having people tell me their security journey. But we've done that round before, so I think we're gonna pivot. The topic of the day, and we'll get into it more, is the hot topic: AI, and how it's impacting the industry in general: how people are writing software, how security is being impacted, how attackers are being impacted, kind of whatever we want to talk about there, right? But for this first part, let's do it a little different. Let's talk about the spaces we work in right now. You work for KnowBe4, I work for Security Journey. We're both about enabling education. How is this industry changing for us, based on the way the industry is changing with AI? I think that'd be a fun topic.
SPEAKER_01: Well, first of all, I want to say that I'm not AI. Whatever it is I have to do to say that: I'm not AI-enabled, I'm not an AI deepfake. And let me say I'm glad I'm not a brand-new beginning programmer coming out of school, and not because I think AI is going to replace every single job, but because every company in the world is trying to figure out how AI fits in, and that has caused a hiring freeze in a lot of cases. I don't know if everyone's being fired over AI, but there certainly is kind of a freeze. And from my standpoint of training and educating, every single talk I do now has to mention AI and AI deepfakes. I could be invited to talk to the Garbage Truck Association, and... it's kind of weird. You ready for the weird part of this? Not only am I being told, hey, we want you to speak about AI, even when it's "Roger, we invited you to talk about quantum or something," but they also want me to tell them scary stories of AI doing scary things. And if I don't, they don't feel like they've been entertained, and I'll get low marks on my presentation, on my training. Instead of getting top scores, instead of getting fives, if that's the top score, I get a lot of fours. And I found out that if I go in and scare them, and I tell the scary stories, real stories and real threats, they like it. But if I don't scare them, they don't like it as much. It's almost like when people go to a training on AI, they think they're going to a scary movie. Like: I know I'm gonna be scared, I want to be scared, and if this movie does not scare me, I'm not going to rank it as high.
So that is a bizarre thing, because when I started educating people about AI, and I've been doing that ever since OpenAI released ChatGPT at the end of 2022, I was like, I don't want to be an AI hyper, I don't want to scare people about AI, I want to give them the real thing. AI is just software. But I found I started getting bad scores, and the people next to me who were selling the AI hype and AI fear were scoring better. So now I go in and give a scary story, and then I spend the rest of the time talking about the realism of it, and that seems to hit the sweet spot. But it's just kind of weird how there's something in people's psyche about AI right now: not only is it kind of scary, they want to be scared of it or something. I don't know.
SPEAKER_02: You hit it spot on. I can't get on anywhere where it's not just doom and gloom, doom and gloom. And in the education space, we're the worst. I see it from all the educators out there, and I don't care, I'll talk about my competitors in this: it's "AI is going to cause doom and mayhem, and if you don't get a handle on it, the world will crumble." Don't get me wrong, things can go wrong with AI. They could go wrong before too, right? Nothing's changed there.
SPEAKER_01: I was at this one education conference, and there was a woman who was in love with AI. She was from San Francisco, a VC, a venture capitalist. And I told her, I said, AI is software, it's pattern-matching software right now. She exploded on me. She was yelling at me so much she started crying, and she left the room. And people were like, what the heck happened? I said, I told her AI is not human. That is what set her off. I was like, it's just software, it's just software. People are really emotionally invested in it sometimes.
SPEAKER_02: Well, and once again, we go back to the doom and gloom. You can actually solve 90% of your doom-and-gloom stuff if you just do good old-fashioned cyber, right? Like: do we have auth? Are there permissions set? Are we watching data inputs and outputs? Are we logging? If I just put the word AI in front of every one of those, I have your new modern AI strategy.
SPEAKER_01: Yeah, you know, my 16th book just came out, called How AI and Quantum Impact Cyber Threats and Defenses. And one of the big things I realized is that there are two categories of AI attacks. There are attacks from AI, which is AI doing the normal traditional thing: trying to hack your password, trying to find vulnerabilities, blah, blah, blah. And then there are attacks against AI, and those attacks may or may not use AI themselves. So: prompt injection, data poisoning, and so on. There are dozens of types of attacks in each category. Although I found some of it, like indirect prompt injection, is to me like cross-site scripting. It has kind of a similar flavor to it. But you're right, a lot of the problems we're all seeing come down to, oh, we made it overly permissive. Or, what was the other one? The SQL-injection-style attack where, oh, I can get to the system prompt. I can do this thing that gives me the system prompt on the AI. Well, that's SQL injection.
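The analogy is worth making concrete. The classic SQL injection fix is parameterization: untrusted input is bound as data and can never become query structure. A minimal sketch (the table and values are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # String concatenation: attacker-controlled input becomes query structure
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # Parameterized query: input stays data, never structure
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
# The unsafe version leaks every row; the safe version returns nothing.
```

The reason prompt injection is so stubborn is that an LLM prompt has no equivalent of the `?` placeholder: instructions and untrusted text share one channel, so there is no built-in way to keep input from becoming "structure."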
SPEAKER_02: So, no, you mean it's bad that I gave it root access to my system? I don't know what's going on.
SPEAKER_01: Yeah. One of my friends I talk to all the time said, certainly people are not going to give it full system access, root, or domain admin. I said, they're going to go out of their way to give it that and launch it without testing. I mean, that is just the nature of humanity.
SPEAKER_02: Even on the dev side, you'd think people would be more restrictive, but there's a little button. I've played around with a bunch of them, but I'll call out Codex on this one. There's a button as you go through that asks what level of access you want, and you can go full agentic. Why do people love it? Well, it will build my containers, it will run my tests, it'll give back my test reports, it automates my job. But the other side of that is, yeah, and if you're not watching, it could wipe out your subsystem, right? It could wreak havoc while it's there.
AI Problems Are Human Problems
SPEAKER_01: So I think we're gonna be going through years of these big operational interruptions. And what's funny is, just like you said, it really comes back to: hey, you should have done thorough testing, you should have figured out least privilege, you should have done unit testing. Although I would say that AI, especially with LLMs, where you don't know what the data is going to be and you don't know how it's gonna interact with other agents, is making it more complex. But I gotta tell you, you ready for the funny part? We're all taught in programming school that you need to threat model what you're building, right? Look at what you're trying to do, blah, blah, blah. I'm going to a bunch of cybersecurity companies and I'm like, hey, have you threat modeled this? And I'm met with silence. I'm just amazed that even some of the cybersecurity companies aren't threat modeling. And they're like, oh, we've got this feature and that feature, and I can easily start to see, oh, this could be used in a bad way. And: have you threat modeled this? You can just tell they're not.
SPEAKER_02: Oh, I had a great one. I was talking to somebody, and they were doing a home shop project. They're like, I'm building out this system that can go in and triage a ticket, action it, and maybe actually fix the problem. I was like, great. Have you done anything to protect how the tickets are coming in, to make sure a malicious ticket doesn't come in? And they're like... just silence, right? And that's not an advanced attack. That is the normal input coming into the system. Have you thought about how that can impact it?
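That pre-screening can start as plain old input validation. A hypothetical sketch of checks a ticket might pass before an AI triage agent ever sees it; the field names, limits, and regex are all illustrative, and a phrase filter is a speed bump against casual injection, not a complete defense:

```python
import re

MAX_BODY_LEN = 4000
ALLOWED_PRIORITIES = {"low", "medium", "high"}
# Crude screen for obvious prompt-injection phrasing (illustrative only)
SUSPICIOUS = re.compile(
    r"(ignore (all|previous) instructions|system prompt)", re.IGNORECASE)

def validate_ticket(ticket: dict) -> tuple[bool, str]:
    """Return (ok, reason) before the ticket reaches the agent."""
    body = ticket.get("body", "")
    if not body or len(body) > MAX_BODY_LEN:
        return False, "body missing or too long"
    if ticket.get("priority") not in ALLOWED_PRIORITIES:
        return False, "unknown priority value"
    if SUSPICIOUS.search(body):
        return False, "possible prompt injection; route to a human"
    return True, "ok"
```

The point is the one from the conversation: ticket text is untrusted input, so it gets length checks, allow-lists, and a human escalation path before any agent is allowed to act on it.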
SPEAKER_01: That's input validation, right? That's 101 stuff. You're right, somehow people... And let me say, I don't dog everything. Inside our company, we're using AI to do some fantastic things, fantastic automation, and some of the data we've seen is pretty incredible. We have an AI that will run your whole security awareness training program, choosing the simulated phishing campaigns. And what we find is that if you allow the AI to pick the campaigns, it's actually more successful at phishing your employees than a human admin would be. And that, to us, means increased education, because the person who fails the test didn't know the thing and has to get more education, which should translate into lower cybersecurity risk. So we're actually seeing it used in a good way that provides better outcomes. It's not all bad, but you do have to be thoughtful.
SPEAKER_02: No, and I want to caveat really hard: I'm very pro-AI, right? I call out these things because I see human errors that I find funny. That's not an AI error, that's a human error, improper use of the tool, right? But I'm very much in the camp that AI is great. One of the things I think we need to figure out, though, is fixing that human factor, so that as we're leveraging this tool that does some phenomenal work, we're not failing at that. We enable people so they can actually be effective with it. I used to say it's like a kitchen knife, right? You give that to a chef, they're going to do good work. But you give it to the wrong person, they're gonna cut off their finger. You've got to be careful, so you've got to teach them how to use it right so they don't hurt themselves, because it's still dangerous. It's a tool, but it can hurt you.
SPEAKER_01: Yeah, yeah, that's perfect. And I'm just thinking off the top of my head here: there could actually be more training involved for someone using all the AI today than for the normal coder, because again, there are these two major classifications. You have to worry about every possible traditional attack: eavesdropping, buffer overflows, social engineering. And then you have this whole other class of new things; we didn't have to worry so much about data poisoning before, right? And with the sheer amount of agentic stuff that could be coming in, your input validation has to be really good, right? I think the complexity has definitely gotten worse. And then there are some people talking about, well, we're gonna let the AI... Musk said this: AI should do its own invisible coding. It's gonna be more efficient for it to create its own language instead of writing in Python or C or whatever. I'm like, that's crazy talk. Because for sure the AI is gonna have some problems, and a human being is going to be needed to see what that problem is. If you're vibe coding and the AI is borrowing and borrowing from all of its own code, that error will be propagated throughout the system, and it's not going to be smart enough to figure out what the problem is, because it made the problem. So you're gonna need a human being who understands that code. I'm a very big believer that AI shouldn't be doing autonomous killing of human beings, and it should not be writing in invisible programming languages that we cannot interpret or understand. They're like, well, it's really efficient if it just... yeah, yeah, yeah.
But humans are not going to go away, not even in programming. We're gonna need people who are very smart and skilled at looking at the mistake the AI made, figuring it out, and fixing it.
SPEAKER_02: I'm gonna do a call-out, and I'm not sponsored, for CodeCombat, if you've ever heard of it. It's a great platform, and it's for kids; it teaches kids how to write software. So I have my middle schooler on this program, and they've gamified it to the max. You come in, you have a character, and you have to code your way through a map, controlling your character. She's been writing Python code, and when you complete a level you get, like, a sword, and the sword gives you a new function you can use on the next map. It's really well done, really, really well done for kids. And I've gamified it further myself: my kid gets to earn more screen time or other stuff, and if she puts in enough hours she gets an ice cream party or whatever. But here's what I thought was amazing: they've added an AI section for the kids, and the AI section is teaching the kids vibe coding. They'll have part of a game built, and they'll have the kids finish components and customize the game by prompting to get there. So they're integrating it into their process. Their primary purpose is to teach kids software development, but they're pivoting with the industry. They've made vibe coding a core component of how they're teaching the youth to develop applications. But here's the important part: they're not abandoning everything else they're doing. You still need the core and the fundamentals, but they're supplementing, adding, enhancing their training program with vibe coding for kids. That is the way the industry needs to go.
SPEAKER_01: My son is a big-time programmer for a really large internet firm, a really excellent programmer, and he has two boys who are probably 10 and 13 now. He said he left them in the car to go buy something in the store for like 10 minutes, came back out, and they had coded their own game. They literally coded it from start to finish and were playing the game when he came back. He's like, yeah, things are changing.
SPEAKER_02: I mean, even in middle school: literally two days ago we were picking electives for my daughter, who's going into seventh grade next year. And of the four electives she could choose from, two were software development and two were engineering, like pure build engineering. Of course she picked the software development ones; that had no influence from me. But like I said, I wasn't taking that type of class at her age. I don't think that stuff was available, or even considered something to teach at that level yet.
AI Still Needs Humans in the Loop
SPEAKER_01: So yeah, on vibe coding, my son again, a programmer working for a large company, said he thinks in three to five years his job's gone. He said it's already coding better than he codes. And I said, well, how much of your job is actual coding? And he said 50%, which I gotta tell you is higher than what I've read. I've read most places it's 20%, and the other 80% is going to meetings, looking at reports, designing things, being an engineer. So I said, hey, only half your job is coding. He said, that's the part I like. I like writing code. But, and I'm not a full-time programmer, I just don't think it's gonna completely get rid of the programmer. Is it gonna change the nature of programming drastically? Are you gonna be writing less code? Yes. But some people say you should absolutely never teach anybody Python or any language in college, and I'm like, that's crazy talk. That's like telling somebody who can use a calculator or a computer that they don't need to understand the math they're doing. Yes, certainly the computer and the calculator can do all the math faster, but that doesn't mean you don't teach them trigonometry and the quadratic equation, right? They have to understand what it's doing. So I think colleges and others need to teach you a good language or two, so you understand what the AI is doing. And again, for sure the AI is going to break, and people are gonna need to understand what happened, why it didn't do the thing, and understand coding. If you don't understand coding, you're not gonna be able to fix the vibe-coded thing, right? They're like, oh, I'll just get another AI to go fix it. It's gonna be the ultimate hive-mind thing, groupthink, right?
It's gonna keep borrowing the same library from itself, and it's just not gonna see what's wrong, because it's AI. AI cannot think like Einstein yet and figure out something that hasn't happened. It's based upon past history.
SPEAKER_02: When I hear this idea, and I've heard it a lot, of "don't get into software development," that scare-off-the-younger-people thing, I start thinking of the movie Idiocracy. I'm like, cool, but then when everything falls apart in 500 years and nobody knows how to fix anything, what do we do?
SPEAKER_01: Yeah, yeah. Now I will tell you, and you mentioned cyber attacks and hackers: I've been calling out since last year that 2026 is going to be the year when AI takes over for hacking, right? We're already seeing it explode. We've already seen that AI-enabled social engineering is more successful than humans, which kind of makes sense, because a lot of the social engineering people are in other countries, and the victim's language wasn't their first language. But Chainalysis just put out a report a couple of weeks ago that blew my mind. Chainalysis follows cryptocurrency, and there's no BS there; they're following the paths and the wallets. They said the scams that used AI stole 4.5 times more value than the scams that didn't. I was like, well, that's it. If you're a hacker, you're not gonna hack stupid. Most of the tools are AI-enabled today already, and they're just going to be more so, because the AI is stealing more money. The writing's on the wall. I think your kids, my kids, the grandkids, when they hear the term hacking, are going to think of a hacking bot, right? Because that's what's going to be doing the hacking, not somebody in a hoodie hunched over a keyboard. And you're already seeing the hackbots win a lot of these contests. Let me say, when I was a penetration tester, the tools I used would find about half the problems and I would find the other half. And sometimes it was a combination: the tool would flag a weird thing it couldn't exploit, I couldn't verify what it found, but my brain would get going, I'd start to experiment, and I'd think, ah, this might work... and that's the way I found a way in.
So I think the AI tools and hackbots are going to be able to find more things, but I think humanity is undervalued by some of these AI hypers. I think the bill will come due soon, and we're gonna find out that humans are gonna use AI like a tool, just like every other tool we've ever had. I don't think we're all gonna be sitting home, fat chunky monkeys on a couch, like some people are imagining. Why would this tool be any different from every other tool? But I do think specific jobs are at risk. We've gotta figure out what's going on with programming, and I wouldn't want to be a transcriptionist or a radiologist right now. And then there are all the jobs that are coming that we don't even know about. Just two or three years ago, we didn't have people who could say "I know how to write really good prompts" and get a job, and now that job may pay $450,000 a year. And this is what I'll tell you about programmers: OpenAI and Anthropic are still hiring programmers. Go to the job sites; those programmers are being offered insane amounts of money.
SPEAKER_02: I was about to say, yeah, if you haven't been over there, their salary range is quite lucrative for the engineering department.
SPEAKER_01: But I will say, on the cyber attack side, I am also predicting that we're gonna see more zero-days. Zero-days are already over 50% of the exploits you're seeing. I think we're gonna see a higher number of critical findings as well. Typical criticality, the share of vulnerabilities that are high or critical risk, is historically about a third. I think that's gonna increase. I think the number of zero-days we find is gonna increase, and the total number of vulnerabilities is gonna increase. We had 48,000 publicly announced vulnerabilities last year. Some people say it's gonna be over 50,000; I think it's gonna be close to 100,000. I think it's just gonna shoot up this year, with a lot of the vibe coding making all the mistakes. So more zero-days, more exploitation of zero-days, more exploitation, period. And I think the struggle is going to be: how do you get secure vibe coding? Right?
SPEAKER_02: That's great, actually. And since we're already way into this, we're gonna do a quick pause. I have to give us a break for our sponsor, and when we come back, I'd like to pull on that string of what secure vibe coding looks like.
VIBE Coding: Vision, Interfaces, Build Loops, Enforcement
SPEAKER_00: Awareness programs don't provide enough in-depth learning. Ensure your technical teams are getting the knowledge they need to build safer software. Security Journey provides hands-on secure coding training in an application sandbox that allows developers to identify, break, and fix common security vulnerabilities. Give your developers the opportunity to recognize and prevent common and emerging security issues before they become a problem. Visit securityjourney.com to try our training today.
SPEAKER_02: Welcome back to the Security Champions Podcast. I'm here with Roger Grimes, and we are talking about the hot topic: AI. More specifically, at this moment we're gonna jump in and talk about vibe coding, AI-augmented development, the impact on security, and whether we can embed security into it. Actually, Roger, since you brought that question up, I want to give you a preview of something we've been working on at Security Journey, a concept. I just published my own little book. I don't publish books, I publish pamphlets, like small books.
SPEAKER_01: I write books in the age of TikTok, so you're much smarter, but keep going.
SPEAKER_02Yeah, I need something that you know it's funny. I I I give I I have a handful. Uh this is my fourth one. They're like they're usually around 50 pages, like smaller, condensed to the point. Um, and people are like, oh, this is great. I'll read it on my flight home. I'm like, perfect. That's exactly where you need to be at. And those begins crank through real crank. We what but the one we just published was called the secure vibe coding framework. Um, and a little tongue-in-cheek of that that more of the idea that if you're using AI to write software, how can we do that securely? Like, how can we take a moment, pause for a second, and let's embed security into our processes now versus let's waiting five years to figure out how to do it, right? Um, we actually built out an acronym. I'd love your kind of like an opinion on our thought process here. So I and every good framework has an acronym. So we came up with VIBE, right? So I turned VIBE into an acronym. It stands for vision, interfaces, build loops, and enforcement. At a quick high level, the idea is vision is how do we take intent and make it a vision that the model can work with? And what I mean by that is as an engineer, where do I get my intent from? Well, a product owner comes in and tells me they want something. I get a ticket from JIRA, and then I take a fuzzy intent, which might be like, I need to be able to download a PDF on customer reports, and like, okay, that's that's something. As an engineer, I know, okay, this is gonna touch customer data. There's sensitivity. It's crossing a trust boundary because now it's leaving and going into another domain where someone can share that data, right? So we got to think about access control, trust boundaries, things that I the constraints I don't want to do. There's a lot of stuff us as engineers automatically know we want to do. The model's not gonna make all those assumptions for you. 
It might make some of them, but it won't not to the level that you have years of experience and training and and focus, right? So the idea of vision is how do I take intent and bring all my good experience and create a vision out of it that the model can actually work with with proper strengths. The I is interfaces. The idea with interfaces is how do we like build a road? And I want I want to build the road before I let the AI drive so I know it'll stay on the road, it'll follow the stop signs and everything else. And that theory is like explain the trust boundaries to the model and let it know what it's allowed to do and not allowed to do and when to stop before it takes action, right? Uh, what I what I tell people always here on this one, um, oh, I actually have another one. I got another thing I do want to bring up. Slop squad. Don't forget me. I want to bring that up. That's an attack that has to do with AI one of these times. But the idea there is, and there is a thing about it, is I don't want it to bring in third-party packages without my permission. Don't go start downloading stuff and adding it to my project, right? That's a constraint and trust boundary you should not cross. It needs to be defined. And it needs to be defined permanently. I shouldn't be prompt engineering that every time, right? Um, the B for us is build loops. Basically, biggest problem I've seen people do is they're like, man, AI is so fast. So I'm gonna change a whole lot of stuff at once and do one big PR. That's great. You lost all your speed on that big PR because whoever's reviewing that code either just isn't reviewing it or it's gonna take them forever to review it, right? So you've lost all the things. So, how can we decompose, break that up, and then do quick slices really fast that people can kind of consume, test, reverse, and like do the good practices that we should be doing anyways as developers. And enforcement is how do we build enforcement? What's our testing? 
How do we build provenance into this, all that type of stuff, right? So the idea is: let's take our good fundamentals and apply them to how we're using AI to build software. I do think there is such a thing as secure vibe coding; we just have to be purposeful. I think the problem we're running into is that people don't know what they should be doing right now. They know they want to go fast, and they get a lot of pressure from the top down: hey, AI, get in there, let's produce more. But they're not quite sure how to approach the security part of it yet.
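The "no third-party packages without my permission" constraint described above can be enforced mechanically rather than re-prompted every time. A minimal sketch in Python of what such an enforcement gate might look like; the allowlist contents and example packages are hypothetical, and a real project would wire this into CI:

```python
# Hypothetical CI gate: fail the build if anyone (human or AI assistant)
# added a dependency that is not on the team-approved allowlist.
ALLOWED_PACKAGES = {"requests", "flask", "sqlalchemy"}  # assumption: team-approved list


def parse_requirements(text: str) -> set[str]:
    """Extract bare package names from requirements.txt-style lines."""
    names = set()
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Strip version specifiers like ==, >=, <=, ~=
        for sep in ("==", ">=", "<=", "~=", ">", "<"):
            line = line.split(sep)[0]
        names.add(line.strip().lower())
    return names


def check_dependencies(requirements_text: str) -> list[str]:
    """Return the packages that violate the allowlist (empty list = pass)."""
    return sorted(parse_requirements(requirements_text) - ALLOWED_PACKAGES)


if __name__ == "__main__":
    reqs = "requests==2.31.0\nflask>=2.0\nleft-pad==1.0\n"
    violations = check_dependencies(reqs)
    if violations:
        print("Unapproved dependencies:", ", ".join(violations))
```

A check like this is one concrete instance of the Enforcement step: the trust boundary is defined once, in code, instead of being restated in every prompt.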
SPEAKER_01Yeah. I think you have some percentage of people that aren't even thinking about it, right? They're just, oh my God, this tool's phenomenal. And then you probably have another percentage that think it's handled, that the AI's got it. I remember I used to interview Python coders all the time, and I'm a big cybersecurity guy, so I'd ask, what do you know about cybersecurity, or about securing Python? And I was surprised by how many of them said, it's secure, it's just secure. I'm like, well, that's not the person I wanted to hire. I want somebody that's looked at it in a more mature way and understood that any language can be abused; nothing is innately safe. Like I remember Sun Java, which, what did that become? Oracle Java. That language was built to be safe from the ground up, and it became one of the most abused programming languages in history. At one point, Cisco said it was responsible for 80% of compromises, I forget which year, something like 2005. A language built from the start to be super safe got abused to death. I think it's the same thing here. There are a lot of people who say, well, I'm using, let's say, Microsoft's cloud, or whoever's, to build this thing, so obviously they've built in all the secure things and I don't have to worry about it. Or, I can just tell the AI to review it and tell me it's fine. Some people just think security is innately in there, and it's not. It's unfortunately not so easy yet.
SPEAKER_02I'll say this too, when I think about that: you can't put ownership on the AI, right? When I have it do this stuff for me, and this isn't even just a coding problem, this is whatever I'm using it for, whether that's ChatGPT or Claude. When I take what it does for me and I do something with it, I'm the one responsible now. I'm the one that has to deal with the consequences of whatever happens, right? You're not going to be able to sue Claude. They tell you: hey, this might be insecure, you've got to check it yourself. They tell you.
SPEAKER_01Yeah, and if you look at what's going on right now, there seem to be a lot of hints of vibe coding causing operational issues. I just saw another Amazon thing: oh, some new code took us down for 10 hours. What are the odds AI was involved in that coding? Probably pretty high. But I'm not necessarily blaming the AI. There were probably multiple human failures in there somewhere, right? They didn't do the testing well, they didn't do the code review well, I don't know, but probably multiple failures, people just moving too quickly and maybe being amazed by what it did. Which, by the way, even with our own teams: when I look at our own programming teams and what they're working on, it does seem like we went from no productivity increase, to what looks like a 30% productivity increase, to now I'm shocked by how quickly they're bringing up products. There's obviously a huge increase in productivity. As far as I know, we're not firing anyone; we're just producing more code and more features that our customers want. But I do think you have to be thoughtful about the processes you put around all of that, like you've talked about. You can't be an unprofessional vibe coder who doesn't understand the totality of what you're doing. You've got to be professional about it, you've got to test it. I love your VIBE idea, although I would work threat modeling in there somewhere.
SPEAKER_02So threat modeling is that interfaces part, right? That's the step between the vision and starting to code: what can go wrong? That's where we look into the problems we could run into. It's mini threat modeling, not a full one, but at least you're thinking about it, right?
Why Vibe Coding Requires Mature Developers
SPEAKER_01Same thing with input validation. As a mature programmer, you know you can't always trust the inputs. Before, that was something you had to worry about at the field level. Well, now it's possibly the agentic AI; you don't necessarily know what data might be coming in, and the output could even be used in a weird way. I will say that when I look at the complexity of the threat modeling now, there's been a big increase in what you have to understand and worry about, because if you just let your AI agent go, you're not thinking about all the traditional things you have to worry about: eavesdropping, input controls and output controls, availability, resource overutilization, SQL injection, indirect prompt injection, that sort of stuff. Everybody can be a vibe coder, but I think to be a good coder, you have to be a mature coder. It's going to require more of that than ever before, and it will make the programmers that do it well more valuable.
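The field-level input validation mentioned above can be sketched in a few lines. This is a generic illustration, not code from any system the speakers discuss; the field names, patterns, and allowed formats are all made up:

```python
# Minimal sketch of field-level validation for untrusted input, e.g. a
# request an AI agent forwards on a user's behalf. All names are hypothetical.
import re

REPORT_ID_RE = re.compile(r"^[0-9]{1,10}$")  # digits only, bounded length


def validate_report_request(fields: dict) -> dict:
    """Return cleaned fields or raise ValueError; never trust the caller."""
    report_id = str(fields.get("report_id", ""))
    if not REPORT_ID_RE.fullmatch(report_id):
        raise ValueError("report_id must be 1-10 digits")
    fmt = fields.get("format", "pdf")
    if fmt not in {"pdf", "csv"}:  # allowlist, not denylist
        raise ValueError("unsupported format")
    return {"report_id": int(report_id), "format": fmt}
```

The point is the same whether the caller is a web form or an agent: validate against an allowlist at the boundary, because the code has no way of knowing where the data originally came from.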
SPEAKER_02Yeah, I still see vibe coding almost as a bad term, right? It's like the difference between a coder and a software developer or software engineer. An engineer understands design, they have experience, they're solving a problem, they're not just churning code. It's the same thing here. Do you have a vibe coder who's just churning code, or do you have an AI-augmented engineer who's a professional, knows what they're doing, and all of a sudden can move really fast? I think there's going to be a big delineation there when you start looking at the quality of engineers coming through.
Why Buy What You Can Build?
SPEAKER_01Yeah. Mark Cuban said something a week or two ago that I really loved. He said, what I would tell anyone is: go into an industry, find out where they have gaps or slowness, and then code and solve that problem. He's like, that's the programmer of the future. I thought that makes a lot of sense, right? You're going to have a lot more solutions, the cost of entry is going to be lower, you're going to be able to make this amazing piece of software. But your company is going to go up and down based on how secure and good that code is. You still have to understand scaling, how to scale something. You still need to understand cybersecurity. And if someone goes in to solve a problem and start a new company, and they don't understand how to do appropriate software engineering, that company is going to get handed its butt as people tear it apart.
SPEAKER_02One of the things I think is going to see a huge impact too is the vendor industry. Let's talk about threat modeling for a second; I think it's a great example. First, there's the side of a lot of people using AI to augment existing threat modeling tools. But even more so, I see a lot more people building their own threat modeling stuff, right? Because why am I going to go buy it when I can just build it? If you're a niche tool that does one thing, you've got to watch out, because instead of buying, there are going to be a lot of companies saying, I can just spin that up in a day. I do prototyping like that all the time. I have an idea, I go on a tangent, I spend two hours, and I've got a working POC of my idea. That's going to be impactful for a lot of the industry.
SPEAKER_01Yeah, although at the same time, the stock market has just been crushing the software industry for a couple of weeks now. Glad I didn't have any major investments there. But I don't think you're getting rid of Office right away. I don't think you're getting rid of Excel. We may actually change who needs Excel and how you use it, but if someone were to make something to replace it and make it better, that person is probably going to charge for it. So I think you'll have a lot of niche projects, but if someone makes something that's truly utilized by us all, like, oh my god, that's fantastic, they're going to charge for it. It's not going to be free. And I don't think a casual vibe coder makes any complex piece of software like the ones we make. But I think you're right: if you make a niche product, something small, you're certainly vulnerable to it. Although even then, if you become the person who makes the best niche product, you can probably charge for it. If you've demonstrated that you do that thing better than anybody else, with high accuracy and success, most people aren't going to give the best of something away for free.
SPEAKER_02I agree with that too. And here's where it goes back to: at that point, you're not paying what you would pay the developers, you're paying for the insight and skill of the person that built it, right? Take threat modeling as an example. If I have somebody who is the best at the threat modeling process, who understands how to approach it, how to set it up, what the output should be, how to make adoption succeed, and I give that person an AI and let them start coding really fast, they can probably build something better than anything else out there, because they're able to express their expertise in that application. Those are the people I think are going to win. It's not just the people who are coders; it's the people with an expertise they can now express, which before would have taken a whole lot more work.
SPEAKER_01And even then, suppose I'm a person with expertise, and somehow I'm going to simplify the mortgage industry. You're going to be able to buy a house in a day or something like that. That was always the promise of blockchain, right? You buy a house, and 30 seconds later all the paperwork's done and everybody's paid. So let's make up that same fantasy for vibe coding and AI. Someone figures out a way to make mortgages happen in 30 minutes, it transforms the industry, everybody has a lot of money and capital, and everybody's happy. That person still has to design it accurately and secure it accurately. Even if you're a subject matter expert in mortgages and you're like, hey, I can do this, that person is not a security expert. That person isn't even thinking about making sure the input is clean, or interfaces, or maybe setting up MCP servers or whatever. So I think you still end up with a team. And if they say, well, I'll use the AI to check and see if it's secure, it's not there yet. It's not there yet to look at every possible combination. And again, even when you say "secure something," it takes an engineering mindset to know what that even means.
SPEAKER_02Well, and a lot of the time, this is a good example, AI is still going for the happy path, right? Let's talk about using it as a security scanning tool. Claude Code came out with their whole thing, and we watched the market tank; there was panic, right? I looked at that and thought, that's kind of ridiculous, because all they actually did was give you an interface for telling the LLM to scan code. You didn't need that to start with. A good anecdotal story: I was doing some stuff for NC State with an open source EMR project, basically an open source medical record management project, and we were doing some security testing for it. With no other tool except Codex in my IDE, I said, hey, start pen testing this. I gave it some criteria for things I wanted to look for, some direction and guidance and some limits, and said, just start looking for vulnerabilities. In about 30 minutes, it found a legitimate IDOR that allowed you to steal a practitioner's digital signature. Is that a big deal? Just a little one, right? So we went through the process, submitted it to the open source project, and went through the whole disclosure. But it didn't take any specialization. I just told the thing what to look for and let it go for a bit, and it did it.
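For listeners unfamiliar with the term, an IDOR (insecure direct object reference) is when a handler fetches a record by a client-supplied ID without checking that the caller is authorized to see it. A generic, hypothetical sketch of the flaw and the fix, not code from the project mentioned in the episode:

```python
# Hypothetical in-memory store of practitioner signatures, keyed by record id.
SIGNATURES = {
    1: {"owner": "alice", "signature": "alice-sig-bytes"},
    2: {"owner": "bob",   "signature": "bob-sig-bytes"},
}


def get_signature_vulnerable(record_id: int) -> str:
    # IDOR: trusts the client-supplied id, with no ownership check at all.
    # Any authenticated user can iterate ids and read everyone's signature.
    return SIGNATURES[record_id]["signature"]


def get_signature_fixed(record_id: int, current_user: str) -> str:
    # Fix: authorize against the authenticated user before returning data.
    record = SIGNATURES.get(record_id)
    if record is None or record["owner"] != current_user:
        raise PermissionError("not authorized for this record")
    return record["signature"]
```

The vulnerable version is exactly a happy-path implementation: it works fine in a demo with one user, which is why this class of bug is easy for fast-moving, AI-assisted development to miss.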
SPEAKER_01I can tell you what, though: I still think you're underselling the way you were guiding the tool, the guardrails you were putting on it, and what you were telling it to look for. If anyone thinks they can just turn Claude loose on a project and say, hey, I did a Claude review, my security review's over, well, that's just the beginning of the code review. Yeah.
SPEAKER_02With the caveat.
SPEAKER_01Yeah.
Vibe Coding Still Requires Testing
SPEAKER_02With the caveat that your code can be flawless. No, all I was going to say is you're right on it. What's missing is the 20 years, all the years, of experience and everything else that goes into that prompt, which the larger audience doesn't have.
SPEAKER_01Yeah. So it's certainly interesting times for coding. I have seen phenomenal stuff. I've seen friends of mine who are not programmers envision a project and come back with some really cool mock-ups really, really quickly, where I'm like, wow, that's wonderful stuff. It's going to be interesting. When I started to learn how to program, I was an assembly language programmer, putting in three-letter instructions: move this into the CX register, add one to the DX register, that kind of thing. COBOL, I think, was the primary language back then, but there was C and all that stuff too. And I remember famous people saying, anyone that doesn't use assembly language is just stupid; all these higher-level languages produce too much bulky code, you're crazy. And they did build some really great stuff in Assembler, but I was like, when you use a higher-level language, you can be faster. This is just that next iteration, right? You're using this tool, it's going to be a bloaty tool, it's going to have its own issues, you have to have the understanding, but it is for sure going to allow people to move very quickly. And I guess I wonder what the operational lessons will be. How much operational blood needs to be on the ground before we realize, oh, we need to apply a mature set of standards, of guardrails, of vision, of testing? Just because you vibe coded doesn't mean you can't do testing. You should still do testing. You can't just say, oh, it looks great, it's running, I'll throw it out there, and then bring your company down for 10 hours.
SPEAKER_02Yeah, that's absolutely true. And like I said, I go back to being very pro-AI. I don't like the companies that go out and doom and gloom it. The hard part is, I think a lot of the ones that do that are so focused on the threat to their own industry, or the way they're telling their story, that they're not ready to switch. That's why a lot of the training providers are like, no, you've just got to keep training on the stuff you did before, twice as hard. It's like, well, no, maybe we should still do the foundationals, but we should probably be training on some new stuff too, right? Because things are changing. This is going to be enabled, it's going to be adopted, and it isn't going anywhere. I don't think I've talked to a single organization that's said, nah, we're not using AI to code. Not a single one. This is the new reality, and the question is: how are we going to adapt to it to make sure we're doing it right?
SPEAKER_01Yeah, or even new problems. Just this week, I've run into three different people who said, I ran out of AI credits or tokens. I had this great solution, it was doing its thing, and they didn't even know there was such a thing as AI tokens or credits. All of a sudden, in the middle of their project, they're out of credits. What do you mean I'm out of credits? Oh yeah, that's a thing. And I saw another meme, it was great: I fired my $120,000-a-year programmer to spend $500,000 in AI token credits to build the app. That was something none of us had to think about before. Even if I say, I'm going to use agentic AI and make this solution, what's the credit burn? There's a cost. All these big AIs need to make back their money. So that's a new type of revenue calculation, cost calculation. People are like, oh, I can really build this and have great coding, but if it's backed by an LLM, there's likely to be some money you're going to have to spend.
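The cost calculation described above is easy to sketch as back-of-the-envelope math. The per-million-token prices below are placeholders, not any vendor's actual rates, and real pricing varies by model and changes often:

```python
# Rough token-cost estimator. Prices are PLACEHOLDERS, not real vendor rates.
PRICE_PER_MTOK = {"input": 3.00, "output": 15.00}  # USD per million tokens (assumed)


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a batch of LLM calls."""
    return (input_tokens / 1_000_000 * PRICE_PER_MTOK["input"]
            + output_tokens / 1_000_000 * PRICE_PER_MTOK["output"])


# e.g. an agent loop burning 50M input / 10M output tokens in a month:
monthly = estimate_cost(50_000_000, 10_000_000)
```

Agent loops multiply this quickly, since every tool call and retry re-sends context as fresh input tokens, which is exactly the surprise the people in the anecdote ran into.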
SPEAKER_02And it's not trivial; these costs add up quite a bit as you start going. That actually speaks to the overall impact. This stuff is expanding, but there has to be a ceiling on some of it, right? Look at the amount of bandwidth these companies are going through; that's why they're growing so big. Go back to why Anthropic has like a million job postings out right now: they're having to keep up with an insane demand that's spreading across the market. Because guess what? People aren't at full adoption yet. That means we're still on the up curve of how many people are going to want to use this. I think there are caps we're going to run into. We think tokens are a problem now; I almost see a blockchain-style problem coming, right?
SPEAKER_01Energy. I hear energy is the big one, right? The investors most into the AI thing are like, we need nuclear, we need power, we need wind, we need solar. They're not picking one; they're like, we need it all, because they understand what fully utilized AI means. It's power, it's infrastructure. We've got to redo our infrastructure. Do we have enough power to bring in? Or is Musk going to throw them up in space, like he's thinking about?
SPEAKER_02Yeah, well, this all gets processed somewhere. It's no longer the days when it first came out, when most people were just using ChatGPT for an instant response. These are all thinking models now; they do multiple iterations and real work for everything they produce, even for the day-to-day person using them. That wasn't a thing three years ago. It's been a huge shift. But I will say, I really enjoy a thinking model over the instant-response stuff.
SPEAKER_01Yeah, and what kind of blows my mind is that there's a good chance in the near-term future, within five years, that things are going to change: email is going to lessen, websites are going to lessen. We're all going to be interfacing with everything through our AI desktop agent or agents. You're probably not going to build a website the same way, because the majority of its interfacing is going to be an API to AI agents. And that's kind of interesting, right? I'm sure for both of us, a big part of our careers was UI, the user interface: how does this look, do I have this right? That will probably be less valued in the future. It's just kind of wild to me that in my lifetime I've seen the web come and explode, and then this one thing came along that seems like it's going to significantly decrease people going to the web. When would you actually go to the web versus interfacing with your personal AI agent?
SPEAKER_02Well, and that goes back to right now, though. I don't know the metrics, but I'd love to see them: how has it impacted web browsing up to this point? Because now you go to Google, you ask it something, and how many people ever scroll past the AI response?
AI Is Confident (Even When It’s Wrong)
SPEAKER_01The stats are not pretty. Even as a CISO advisor, people don't read the articles I write anymore, right? One of the tricks we've learned, and you'll notice this if you look at articles today, is that they have three bullet points at the top. That's what the AI picks up. If you want someone to possibly see that you're the source of what the AI picked up, you've got to have those three bullets at the top for the AI to suck up, put in its model, and attach a little citation number, so the reader might check where it goes. But I think I heard it's far less than 50% of people who ever click through to the source. And let me tell you, right now you need to go to the source. I've been embarrassed twice just this week by things I said. I asked an AI to verify something I thought was very simple, and it gave me incorrect information. When I went to the source, the sources did not say what the AI said. And I'm like, oh, I'm not even following my own advice, which is to verify what it's telling you. Thank God I'm not a lawyer using it in a legal case. I even freaked out: one of my daughters is getting ready to go to grad school, and I asked, tell me how much I'm going to have to pay for grad school. It came back with an insane amount of money. My daughter had told me to be prepared for something like $24,000 a year, and it came back with $80,000 plus. And the AI said, you may see lower numbers, but they're wrong. So I freaked out. I'm like, oh my god, I've got to come up with $80,000 a year all of a sudden. I was barely managing the $24,000, and now you're telling me $80,000. Turned out it was false. It just made it up.
SPEAKER_02Well, that's the hard part, and it goes back to this: it's fluent and confident all the time, and that has nothing to do with whether it's right or not.
SPEAKER_01Yeah, did you see that one where, if you say it's wrong, it goes, oh, I'm wrong, I'm sorry? And you can actually go back and forth: each time you tell it it's wrong, it switches its position, confidently. I'm like, that is the exact opposite of human beings. We don't give up on the initial thing we told you till death do us part.
SPEAKER_02Well, here's the worst part, and you've probably read about this: I've thought it was wrong and told it it was wrong, and it accepted that and said, you're right, I'm wrong, and it wasn't wrong. Right? It goes back to the fact that it doesn't care about truth, it does not care about facts. That is not how this system works.
SPEAKER_01Yeah. So again, I think humanity is undervalued, not only in programming but in everything. I really do. My classic example: they talk about doctors and lawyers being put out of work soon. If you ask a lawyer how much of their time is actually spent giving legal advice to clients, it's probably not a whole lot, maybe 10 or 20 percent. When you walk into a law office, let's say you're someone getting a divorce, the lawyer is immediately sizing you up, he or she, by the way you dress, the tone of your voice. Do you want to take that SOB for everything they have, or do you just want to get out? The lawyer is very quickly making all of these decisions because of their humanity, some correct, some incorrect. The AI doesn't have that, right? What do they say, that half of communication is nonverbal? The AI certainly doesn't have that piece yet, not until it starts looking at us with a face scanner or something. And there's a lot more going on in our brains. We know all the things we can't say, things that are racist, sexist, or whatever, and we innately know we can't say them. We have a lot of tokens in our brain that are there all the time, context that we bring to the situation, that's very difficult for an AI to bring.
SPEAKER_02Yeah, absolutely. Once again, I'm not doom and gloom. I don't think it's taking all our jobs, I don't think it's the end of the world. I just think the world's changing. And you know what? That's okay. We'll have to change with it, we'll adapt, and some really interesting, really cool stuff might be on the other side of this. I'm confident it's going to be for the better, long term.
SPEAKER_01As long as it doesn't turn into Skynet, become self-aware, and all that stuff. I used to tell people, don't worry about it, but it turns out we're going to allow fully autonomous weapons. No, that's like the one thing we shouldn't do. We should have a human in the loop.
SPEAKER_02Yeah, when the government asks to turn off one of the three laws of robotics, you've got to question it.
SPEAKER_01Yeah, have you not read Ray Bradbury or seen the movies about what happens? But I'm like you. Everyone says, this time is different. How many times has humanity said this time is different? I was even reading stories about how the cotton gin was supposed to put us all in luxury with nobody working, like every invention. Computers were going to give us all this luxury time, and we're still all working 40-hour weeks, if not 60-hour weeks. Maybe less than Victorian times or Egyptian times, but we're still putting in a substantial number of hours despite computers and the internet; some of us are working harder than ever. So why would AI be different?
Final Thoughts
SPEAKER_02I was about to say, I've had AI at my fingertips, and I feel like I'm putting in more work, right? Because I can now.
SPEAKER_01Because you can. Hey, that's a great point.
SPEAKER_02Yeah. All right, I think we're almost at the hour. This has been a great conversation, Roger. I love talking about the AI stuff, and I don't think it's doom and gloom. Any last words for our listeners? Anything you want to part with, some sage advice about what the future looks like?
SPEAKER_01You know, again, I think AI is going to be a tool that we all use to do more and be more efficient, but you're going to have to keep yesteryear's lessons in mind. All the same threats we had before, the same cybersecurity threats, the same considerations, apply to what you're making with AI. You still have to be smart and responsible and mature in what you're doing. And the people who realize those things are going to be more valuable than the people who don't. Absolutely.
SPEAKER_02That is some amazing advice right there. No doom and gloom: just be ready, it's changing, be a part of it. Thank you again for being a guest on the podcast; always a great conversation. And thank you to our listeners for joining us for this conversation. I hope you got something out of it, and I hope you join me for our next podcast. And as I always say: remember, security is a journey, not a destination.
SPEAKER_00The Security Champions Podcast is brought to you by Security Journey. Security Journey is an enterprise class secure coding training platform with lessons that are built on learning science principles to deliver long-term, measurable results. Learn more at securityjourney.com.