Application Paranoia
APEP86 - Is AI Killing AppSec… or Making It More Critical?
Is AI making application security obsolete—or exposing new risks we don’t fully understand?
In Episode 86 of Application Paranoia, Colin Bell is joined by Rob Cuddy and Kris Duer to challenge the growing narrative, driven in part by Anthropic, that AI-powered development could replace traditional AppSec.
The team explores whether AI is accelerating productivity at the expense of understanding, and what that means for developers, security teams, and organisations trying to keep pace.
They also discuss:
- Whether AI is changing how we think (and learn)
- The risks of “vibe coding” and over-reliance on LLMs
- Why AppSec isn’t disappearing—but evolving
- Key findings from the latest AppSec trends report, including AI adoption, API visibility gaps, and ownership challenges
And of course, a new term is born: confidence laundering.
App Paranoia EP86
colin: [00:00:00] This is Application Paranoia, episode 86.
colin: Okay. Welcome back to Application Paranoia, where we unpack application security in a world that's moving faster than we can secure it. I'm Colin Bell, and keeping me honest are Rob Cuddy and Kris Duer. Today we are taking on the idea that AI might make application security obsolete, and why thinking that might actually be more dangerous than the vulnerabilities themselves.
Kris: Yeah.
colin: So anyway, before we get there, Rob, Kris, how are you guys doing? What's been happening?
Rob: I'm doing great. And contextually, right when we're recording this, it's March and we're right in the middle of March Madness.
colin: It's the end of March. We're nearly at the end of March
Rob: nearly at the
end of it. But here in [00:01:00] the States, it's all about March Madness and the NCAA tournament. I've got a daughter who's very excited that Arizona won,
so my house is, doing awesome.
colin: Very good. Very good.
Kris: How is your bracket looking?
Rob: My brackets are looking okay. The Florida loss kinda hit one of them pretty hard. Yeah.
Kris: A hundred percent.
Rob: It's pretty, pretty fun though,
colin: So
I vaguely understand brackets, but are you out straight away if you lose in the first week?
Rob: there's all these challenges every year to try to get a perfect bracket. , It's never
happened, but the odds are astronomical. , Yeah, something like that, right? It's crazy, but. There's so many places that offer million dollar challenges for a perfect bracket and stuff like that. So people get really excited about it. Yeah, exactly. In a lot of cases, , like ESPN and them, they're free. But it's just, yeah, the, odds of picking 68 teams and all of the different games and exactly how it's all gonna lay out. It just insane.
It's [00:02:00] funny too, 'cause it's the only tournament I know about where the actual emphasis isn't on winning at all. It's making it to the final round, right? So everybody talks about the Final Four, but after that it's all like, oh, okay, you won a national title. For some reason it's about getting to the Final Four, and I think it's 'cause you win a regional tournament to get there and stuff like that. So it's pretty fascinating.
colin: And Kris, you've had a brutal winter.
Kris: and
colin: does that mean the discs just get locked away in the cupboard for the
Kris: No, I've been playing all winter. We've had to wear gaiters, which are things you put on your legs.
colin: Oh, I know Gators. Yeah,
Rob: See in Florida, they just
avoid those.
Kris: No, I hear you. I dunno why I live here, but here we are.
colin: Yeah, I suppose. Do the discs fly differently in the snow?
Kris: Very much so. They don't go as far, which is frustrating. But we're out there.
colin: Excellent.
Rob: Do you ever play in snowshoes?
Kris: Yeah, people do.
Rob: That would be funny,
Kris: but my gosh, sometimes it's a foot and a half, two feet of snow. Some of it came up to my hip.
Rob: Dang,
dang, dang. So have you ever had it where you couldn't see the [00:03:00] hole?
Kris: No, you could definitely see it. Can you see the disc when it goes into the snow? Not all the time, no.
Rob: Yeah.
colin: When it melts, it's just a big mound of discs.
Rob: there's a big pile of them that are just left there.
Kris: So you get to keep it.
Rob: Yeah. It's kinda like when they clean out the creek at a golf course and pull all the balls out. That's right.
colin: Yeah,
So I've noticed lately, and I thought I'd test you guys on it, that language is changing just as quickly as AI is changing everything else, with new terms coming up. I got really frustrated 'cause I was using the term vibe coding, but I believe it's nearly obsolete now. It's now agentic coding,
Kris: Of course
colin: So it feels like every week there's a new phrase. So what I thought I'd do, which would be fun, is I've come up with. Six terms, and some of them are real and some of them are made up. And I want you to tell me which ones are real and which ones are made up.
You might know them. The first one, prompt fatigue, burnout [00:04:00] from constantly refining prompts and getting unusable AI output.
Kris: I wanna go with that.
Rob: I feel like it should be real. Yeah. I'll
colin: Yeah, real. That one's real.
Kris: Ding. We're one for one.
Rob: Yep.
colin: Second
Rob: right. This is no longer
pointless.
colin: The second one. Cognitive buffering. Pausing before speaking to sound more intelligent.
Kris: No, that's definitely fake. Nobody does that.
Rob: Yeah,
That's AI made up.
It's too perfect.
colin: Fair enough. Latency living, feeling like your life is always slightly delayed.
Kris: I feel like that should be real, but I don't think it is.
Rob: I don't think anybody would call it that.
colin: Yeah, you're too sharp, Kris. No, that's not real. Next one: context collapse, when different audiences, work, friends, and public, merge into one online space.
Kris: Yeah. No, I feel like that's real. I feel like that's [00:05:00] exactly what somebody would
Rob: The effect is real. I, but I think, I feel like there should be a different name for it, but I can't think of what it would
be.
colin: It's real, unfortunately.
Rob: yeah.
Kris: It's great.
Rob: I feel like that's in that crowdsourcing area,
right? Like it's just.
colin: Yeah.
Kris: No, that sounds tongue-in-cheek enough that somebody would call it that for real.
Rob: collapse.
colin: Algorithmic anxiety: stress caused by trying to please algorithms on LinkedIn, TikTok, et cetera.
Rob: Yeah. People wanna do it, but again, they would call it something else.
colin: That one's real.
Rob: Yeah. See, I would expect that to be like algo fatigue or something like that, right? Where they would just combine it and merge it into something shorter. Algo fatigue.
Kris: I suppose in this world, anxiety is pretty high on everyone's list.
colin: Okay, one last one. Semantic burnout: being exhausted from overexplaining things. [00:06:00]
Kris: that I believe is real too.
Rob: yeah.
colin: that one's made up.
Rob: Yeah. But
colin: Yeah.
Rob: You sure it's not burnout from a virus scanner?
Kris: Oh boo. Do they even have those anymore? Is that a
Rob: I don't know. It's
colin: okay.
Kris: prompt fire scanning these days? Geez.
colin: Alright, so let's get into it. Rob, you shared something with us a few weeks back around how AI is actually changing how we think. Talk to us a bit about that.
Rob: The premise really came from a friend of mine from Carnegie Mellon. We were talking about the impact of AI, and he shared with me a study that's been done asking the question: is AI actually making us dumber? And the fascinating thing was, I don't know if you've ever heard of the thing called the Flynn Effect.
Is that familiar to you at all? So the Flynn Effect was put together by this [00:07:00] guy named James Flynn, who was in New Zealand; he's an academic there. He popularized this idea back in the eighties, and the idea was very simply that as younger generations came along, their IQ scores would tend to go up.
So each generation kind of being a little bit smarter than the one before it, because information advances and education and all of that kind of thing. And that was happening going all the way up into the 1990s. But then the trend started to flip. When you look now at millennials, Gen Alpha, some of these other generations, we're seeing evidence that this may have actually reversed.
And so now there's this thing called the Reverse Flynn Effect, and the study was looking at where is this coming from?
And a lot of it was theorizing that when you are leveraging AI and screen devices and all of these other things, cognitive parts of the brain don't develop as well. So things like vocabulary start to drop off.
Abstract thinking starts to drop off, [00:08:00] things like that. And so it's really interesting to see that as technology has advanced, the side impact seems to be that we are not thinking as much for ourselves. The ability to solve complex problems, to think outside the box, that kind of stuff, has been going down. And you're seeing this primarily in those 18 to 22-year-old age groups in universities, where people are leveraging ChatGPT for papers and doing research, and everybody wants instant answers. So it's a really fascinating study. But this idea of, are we kinda shooting ourselves in the foot with all of this ability to just get answers quickly, without really exercising our own thinking?
And I remember back when my kids were really small, in that two-, three-, four-year-old age, and they were in preschool. I remember meeting with one of the teachers, who told us how important it was for them to get outside and play every day, and that the brain [00:09:00] actually develops as we're using motor skills, climbing up things, and sliding and swinging and
all of those
things
colin: Discs in the snow.
Rob: A massive impact. Yeah, exactly. All of that has a massive impact on the brain. And it's interesting now to see how this is migrating over into the technology spaces: when we sit on our devices and we're playing video games, or we're just instantly searching for answers, we're missing out on some key development pieces in our brains.
colin: so we,
Kris: It was like that raised-by-wolves thing, where you don't have any language, and then you turn 12 and you can't learn language, or some nonsense like that.
Rob: Yeah, and it's interesting too, 'cause alongside of that there was a second study, one that hasn't yet been fully peer reviewed, being done by MIT. And one of the things they were looking at was just emotional connection, loneliness, things like that. And what they've theorized right now is that the more time users actually spend talking to something like ChatGPT, [00:10:00] particularly if they're using the voice mode, the lonelier they end up feeling.
Kris: Does that change when they're in a robot form factor, though? I wonder if you can fake yourself out, like the rooster that got all hot and bothered by the cardboard cutout of its mate.
Rob: Yeah.
Kris: It changes things when you have a best buddy that's a robot or something like that.
Rob: It is kind of funny, right? 'Cause they did experiment with that a little bit. And one of the things they found was that if you came into it and you trusted the chatbot already, and you had some kind of emotional attachment to it, like you would for human relationships, then you would feel more dependent, right? And so if ChatGPT kinda spoke in a neutral tone, then the effects were less severe than if it was with the voice mode. So it was really fascinating; they did play with that a little bit, but they haven't really fully concluded what's going on there. But the one thing they did say was, and I'm quoting here: emotionally expressive interactions were present in a large percentage of usage, but for only a very small group of heavily advanced voice mode [00:11:00] users that they studied. So that kind of ties those two things together.
Kris: But the laziness of not having to learn it to get to the point where it's beneficial. A crazy thought.
Rob: Yeah. But for those who have young kids, or are starting families now, this is something they're really gonna have to wrestle with: balancing that screen time, creative problem solving, letting kids work through stuff, things like that. It's fascinating where it's going.
colin: The concept I pulled from something similar was that AI, in this capacity, is making people sound more confident than they really should be about things they don't understand. And I decided I'd coin a phrase for it: confidence laundering, passing off AI-generated insights as your own.
Rob: Oh,
colin: Now I'm gonna play with that.
Rob: You should. Yeah.
colin: Then push that out there, and let's see if it catches on. But there is an aspect of that where people are becoming more [00:12:00] confident. Maybe that's a good thing, I don't know. It certainly is helping people with their self-esteem as well. And I'm thinking more of teens and such, where they're able to discuss problems in a way that they maybe can't with parents, et cetera. So there are aspects of it, but yeah, it's deadly that it could spiral in the wrong direction.
Rob: Yeah. It used to be that you only saw that if you were in an argument with your spouse, and you knew you were dead wrong, but you were making stuff up anyway to defend yourself.
colin: yeah.
Rob: right now
you can hallucinate with the best
of them, I like that confidence laundering. That's
pretty good, Colin.
I, that's pretty good. I
like
colin: I'm gonna use that.
Rob: Yeah.
Rob: There you go. You heard it here first, folks.
That's right.
colin: Alright, let's move on. Okay. This is a big topic, a big topic in a lot of ways. There's been a lot of [00:13:00] noise over the past couple of weeks around Anthropic in particular, in lots of different ways, and we can dig into this as we want. But specifically what they're doing with Claude Code, and more broadly the idea that AI fundamentally is changing things to the point where it's actually gonna get rid of application security. So they've more or less claimed that application security is dead, even before anyone has actually seen the product itself or used the product, which
Kris: There's no bubble.
colin: yeah. Yeah. Yeah. Exactly. So if AI is writing the code, maybe we don't need the same level of AppSec that we rely on now.
So that's the myth, and we've seen pushback. We've obviously written something ourselves, but very early on, as soon as that was announced, I think Veracode and Snyk both gave their point of view on this. So the question is really, and I'm gonna push this more at you, Kris, 'cause this is your world really.
Rob: We're just living in
it
colin: Are we looking at an evolution of AppSec here, or is [00:14:00] this just empty words that maybe need to manifest in a different way? Or are we seeing a way that we need to approach this differently?
Kris: Yeah, I think replacement is certainly a bit strong. I don't think that this is going to replace AppSec in the way that it exists today, but I do think evolution might be the right word.
You have to evolve as an AppSec vendor. There's no getting around it. You have to include the power of what an LLM can give you, but it's also very expensive. It's not free; you're not getting all of these prompts for no money at all.
Take one source file: it could be anywhere from 30 to 50,000 tokens in size. One prompt might basically say, give me all of the vulnerabilities in this file, and that could take you about two to three minutes just to get the answer. It's not faster, it's not smarter; it's doing the exact same job, you just don't have to code for it. But you're paying for it, even more than you would pay for, say, a deterministic vendor. But there are some [00:15:00] interesting challenges that an LLM can solve that even some of the current technologies have a hard time with.
So there's a Venn diagram, if you will: they both find this, but regular AppSec does this part better, and LLMs do that part better. And I think what you're gonna find is that the marriage of both of them is where you really wanna move to in the future. That's certainly what we're looking at.
I'd be surprised if that's not what others are looking at at the same time.
And I don't think one is better than the other, and I don't think one on its own is better than having both.
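Kris's cost arithmetic can be sketched as a quick back-of-the-envelope calculation. The token count matches the 30-to-50,000-tokens-per-file figure he mentions; the per-million-token price is a purely hypothetical placeholder, not any vendor's actual rate:

```python
# Rough cost model for LLM-based scanning, per the arithmetic above.
# The price below is a hypothetical placeholder, not a real vendor rate.

def llm_scan_cost(total_tokens: int, price_per_million_tokens: float) -> float:
    """Dollar cost of pushing total_tokens through an LLM at a given price per 1M tokens."""
    return total_tokens / 1_000_000 * price_per_million_tokens

tokens_per_file = 40_000   # mid-range of the 30-50k tokens-per-file figure
num_files = 1_000          # a mid-sized codebase
price = 10.0               # hypothetical dollars per 1M input tokens

cost = llm_scan_cost(tokens_per_file * num_files, price)
print(f"One full LLM pass over {num_files} files: ${cost:,.2f}")
```

A deterministic scanner's flat per-scan price doesn't grow with token count, which is the contrast Kris is drawing; swap in real token prices to see where the crossover sits.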
colin: So are you seeing that evolve even to the point where developers don't understand the code they're writing? That's one of the fallacies that's coming from this.
Kris: It's definitely not, at least according to Amazon, who's done a 180 on that little idea: oh, we're gonna vibe code, and then, you know what, actually we're gonna pay our senior engineers to make sure this vibe code makes sense and doesn't totally destroy everything.
Because they've had outages, they've had major outages on their [00:16:00] website from this vibe code nonsense. I don't think vibe coding is terrible, but I do think that it's not good in the wrong hands. If you want to be wrong, you can be wrong at super scale. That's pretty bad.
Rob: Yeah.
Kris: It hallucinates, when you talk to it, it's gonna hallucinate with code too. It doesn't always have the right answers.
And you need people who have expertise to know when it is the wrong answer.
And that's what's curious about LLMs in the security space. It's the same problem. You still need some understanding of what's going on, or at least have it explained to you in human language what the problem is, so that you can make your own choices, in my opinion anyway. But ultimately, even there, it can hallucinate. It can make it sound like, you know what, you made a great choice picking our tool, or however it likes to build you up. I dunno if you've seen that CEO that committed a whole bunch of just regular prompts to GitHub and thought he was the next coming of God or something like that. It's so crazy that these LLMs are making [00:17:00] people feel like they're better than they are, because they just build 'em up. They're like, that was such a great question, I love that question, or, way to go. It's crazy to me. But yeah, what it ultimately boils down to is that we're now gonna be able to see bad answers at scale.
So you gotta watch out for that. You do need to have some expertise in these different areas. It feeds back into, you have to learn. You can't just trust everything that you read, everything that you see. You have to know what you're doing. You need discipline in your chosen expertise or your chosen vocation.
You need to learn how to do this stuff, and security's no different. You can't just blindly trust that everything you see is a real problem or not a real problem. You need to know, and you need to think the problem through. It just surprises me, some of the questions we get sometimes. Someone thinks this is a false positive, and it's like, why do you think that? Because of X, Y, Z? And it's like, that's not right; what about this? And they don't think about it. Or they think this is a real problem, and it's like, let's think it through. What would actually happen? How do you attack this particular area? And what happens [00:18:00] if you do win the attack? Is it real or is it not?
You ask the questions, and people just don't want to. They wanna look at something and press the easy button, and there is no easy button in life.
colin: Yeah. So, Rob, coming a little bit from left field, I also noticed, in conjunction with this and around the same time, and you can correct me if I'm wrong 'cause you're closer to it, that in the US federal space there's been some sort of blockage on using Anthropic. So what's the impact of that? I know from our perspective, because we're a CFIUS organization, we have to ensure we don't have that in our code, et cetera.
Rob: Yeah. Yeah.
colin: What are you seeing with the customer base around that? Any difference?
Rob: A little bit, but certainly in the federal space, right? Because it was declared a supply chain risk, so you basically had to say, we're not using it. So any of the vendors that were selling to the government had to prove that they're not dependent on it, they're not [00:19:00] leveraging it, it's not in anything that they're using. And that certainly would create some angst for folks that were leveraging AI or putting it in their solutions; you have to now go attest that you're not, and things like that. But it is interesting because, what was it, a day later or so, it was a different AI agent that they decided to go work with. It wasn't really that they didn't want to use AI or leverage those things. It was more about how you could use it, unfortunately. Do we get complete control? Can we decide? Can we use it for surveillance? It was those kinds of more human issues, all those kinds of use cases. So that's really where so many of those arguments tend to come from. And I think it's a bit to Kris's point, right? A lot of times we'll put this stuff out there thinking that it's gonna behave one way, and then somebody comes up and uses it in a way that we didn't think about or didn't really plan for, and then you're trying to backtrack to deal with that. So I think they're [00:20:00] trying to avoid some of that. And I always go back to a conference I was at a handful of years ago where there was an Air Force colonel on stage in full dress blues. And he said, basically: look, we want to use AI, but you have to tell us where you are getting your conclusions from. You have to let us understand how your algorithms work, what they're doing, the sources, or we can't trust you. And when you're a nation state, there are literal lives at stake.
colin: Yeah. No, it makes perfect sense. Yeah.
Rob: Yeah.
Kris: Because of course they do. Why wouldn't they?
colin: As a final thought then, Kris, from a technical perspective, what actually changes here, do you think?
Kris: Everything and nothing, I think, is the answer.
Rob: Everything and nothing.
Kris: I think LLMs, at the very least, I don't know about frontier models, but LLMs in general, are gonna be a force accelerator. But you're still gonna need an inexpensive way to discover what's wrong with your program, what's insecure. And the current [00:21:00] scanners that are out there are deterministic.
They're gonna give you the same answer every time, for reasonably low cost compared to an LLM, where you send in, I don't know, a million lines of code and it's gonna cost you $500 per scan or some nonsense like that just to get the tokens figured out. So I still think there's going to be a place for AppSec.
I don't think that's going anywhere. They thought that LLMs were gonna replace lawyers, doctors, et cetera, and those aren't going anywhere. So I think them saying that it's going to replace AppSec is beyond premature. I don't think we're anywhere close to that at all.
But I do think that ignoring the power of what an LLM can give you is a foolish choice. You need to look at what it can do, and where it can augment the results, or augment the understanding of what those results are telling you, or even in some cases figure out, in at least general terms, how to fix a problem. All of those things they do very well, it turns out. So there are areas where it can make an [00:22:00] AppSec tool in general better. But I don't think it replaces, and I don't think it changes, the things that you still need to do in your security environment. It'd be insane to just ignore those and say, just send it to an AI, and whatever it tells me I believe completely. That's just nutty.
colin: And it's interesting, it's a very similar line to what Gartner wrote, because they also had a point of view on this. So it's all very interesting. Yeah.
Rob: So I do have a question for Kris though, just in this space where people are now wanting to, whether we call it vibe coating or we call it something else,
Kris: coating.
colin: coding.
Rob: Agentic coating, right? Where do you see that being most effective? How should people be using that today?
Kris: In my humble opinion, if you need to build a simple viewing tool that you're not gonna ship or put in production, vibe coding's wonderful. You just ask it to build you some stupid viewer for some kind of format that you maybe need better insight on, that maybe your tool produces, or something like that.
That's wonderful. I've also used it, or seen people use it, or heard of people using it, to make [00:23:00] benchmarking systems or things like that, which you don't really need to know the ins and outs of. So that's a good use of vibe coding slash agentic coding. I've used it as a replacement for Stack Overflow, so it's been pretty good for that.
Tell me how to do X, Y, or Z in some other obscure language that I don't normally use: that's pretty useful. But designing and constructing an entire framework to ship and deliver production-level code? That's insanity to me, to use vibe coding for that. You need to have discipline in your vocation.
You need to know how these things fit together, what the future considerations are gonna be, and how to construct code in a proper way. Vibe coding isn't there yet. It just isn't. It can't bring in all of those considerations of time to market, and being able to extend it, and being able to properly design this, that, or the other construct, whether you're using this pattern or that pattern for the particular solution that you're trying to build. It doesn't know all of that.
It might, if you figure out how to, I don't know, send in maybe a novel's worth of data, and then you [00:24:00] still have to adjust it. I think for major projects it's a fool's errand to go down that road; you're spending more time fixing what vibe coding produces than it would cost you just to build the stupid thing. But there are areas where vibe coding has been very useful, at least to us. We've used it for that viewing process, or a benchmarking process, or to get through a minuscule little question about how to do this, that, or the other thing in a specific language that you don't touch very often. Especially if you're dealing with five or six different languages, it becomes interesting.
But code is
colin: It reminds me, and I'm old enough to remember this, of when Microsoft Access first came out.
Kris: Oh.
colin: And suddenly you've got people that could do things with databases that couldn't do it before. But the worry is that some of those Access databases became mission critical, and that's the worry right here too. So I think you're right, vibe coding's great if you've got a spreadsheet with lots of information and you want to just look at that in a different way. I think it's fantastic for [00:25:00] that. But yeah, the worry is the person who suddenly thinks they're all-powerful with it.
Kris: Yeah, no, we don't just willy-nilly trust vibe code at all where we are. We have seniors looking at the code, regardless of who wrote it or how they wrote it, to say, yeah, this makes sense, or no, this doesn't make sense.
But we have used vibe code, or agent code, to good extent in many different use cases, for sure.
It sped us along quite a bit in certain aspects of what we're trying to do, but not for production-level, gonna-be-shipped-to-a-customer code. That's insanity to me.
colin: Yeah. Yeah.
Kris: Wildly overtrusting.
Holy moly.
Rob: But for next March, Kris, can we use it for our brackets?
Kris: You can try. It's gonna be zero, just like people's. It's only as good as what you train it on.
Rob: There you go.
Kris: Are you gonna train it on a winning bracket? You can't. Who knew Florida was gonna lose? That's insane.
colin: Yeah.
Rob: Wow. That
colin: Alright, and [00:26:00] another segment here. I want to take a step back and give a little bit of a plug. Myself and Kris are doing a sort of webinar series, and one of the first ones coming out is all around our trends report that's just been released. I wanna pick up on a couple of things from it. I know, Rob, you didn't have a chance to converse on that, but maybe we can talk about a couple of things in there.
So there's a couple of themes that came out to us, and we'll be discussing these in our webinar when it comes out. The first one is around AI acceleration versus secure code. One of the trends that we saw from the report is that 76% of people have adopted AI in development workflows.
So that's in line with what we were just discussing. And Kris, I know you were staggered by that; I'm certainly staggered by that number.
Kris: It's way higher than it should be. Way higher than it should be.
colin: But maybe it's worth answering it in the context of what you just mentioned: that you're using it, but not for everything.
Kris: Yeah, [00:27:00] hopefully. If 76% of people were using it once a month, I'd be happy with that.
colin: The second theme, which was surprising as well, is that only 20% of organizations are regularly assessing their APIs. If you really think about the explosion of APIs, only 20% say that they regularly test them. Again, I found that really alarming.
And it seems to contradict the flow of where things are going.
Rob: do you think that has anything to do with a lack of threat modeling?
colin: It could do. Threat modeling is changing as well; maybe that's a topic we can pull into a future episode, 'cause I think that's a world that is changing. But yeah, I would suggest the companies that do threat modeling probably do test all their APIs, if that's what you mean. Yeah, possibly. But there are still probably organizations that pay lip service to threat modeling, do a threat [00:28:00] model at the design phase, and never go back to it. That kind of thing.
Rob: Yeah. Yeah. And that's what I'm wondering, is do you even have an
awareness of how extensive your landscape really is?
colin: Yeah, potentially, it could be. And the third one, probably the last one we'll discuss here today, is that despite the investment in DevSecOps, around 37% of organizations are still operating in a central, security-led model. So we are not necessarily seeing that shift-left thing working well.
You guys know I'm not a huge fan of shift left anyway, because...
Kris: Fun words.
colin: They should just call it "move the problem around." That would be...
Rob: it should be called Fix Left.
colin: Yeah. But 37% of organizations still see AppSec as being the responsibility of the security team.
But anyway, I'm only raising that here because I think there's some really interesting bits of information in that report, and we'll share a [00:29:00] link to it at the bottom of this. As I said, we're about to do a webinar, so we're doing a little bit of a plug on that.
Any thoughts on that, Kris?
Kris: It's just, I don't know, I think it's the same problem AppSec has always had, where it's not really given the attention it's due. They don't really fund it the way it needs to be funded, and it's just not given enough priority in most organizations, and it needs to be. You can't just throw AI in front of everything and hope that it just works, and you can't trust the world that's coming after.
Rob: Yep,
Indeed.
colin: Okay, so we're getting close to the end, so I'm gonna ask you for one observation each on what we've discussed. So Rob.
Rob: So I think the biggest thing is you need to pay attention to what you are trying to generate, right? If you're gonna leverage AI, if you're going to invest [00:30:00] in using these engines and just giving them a prompt, think through what you would expect to get out of that. What would make sense, right?
Don't just take that answer blindly and run with it. Really think through: if you were in a conversation with an expert, another human being in front of you, what would that look like? Be willing to dive deep into those things, and don't just blindly trust what you're gonna get. Does it make sense? Does this line up? How does this fit with other things? And challenge the prompt, right? Think about it from different perspectives. The more we do that, the better this gets.
Kris: For me, I'd say refine your craft. Don't just trust LLMs to do it for you. You need to learn your craft. If you're chosen or choosing to be a developer, you need to know how.
Rob: If you're chosen,
Kris: To be a security person, you can't just hit a button and hope it works. Learn your craft.
colin: Yeah.
Rob: The [00:31:00] number of organizations that are now gonna have a chosen one.
colin: So mine is that AI is accelerating faster, but security isn't keeping pace. And also confidence laundering: people passing off AI-generated insights as their own.
Kris: Oh my God. That's one I can use throughout my day.
Rob: Confidence
laundering
colin: Anyway, thanks for listening, and thanks Rob and Kris. We are gonna be back soon; we promise we're gonna be on a more regular cadence. We've dropped off for a bit, but this is the start of us coming back with more of the same. So thank you. We're glad to be here. Thanks, Kris. Thanks, Rob.
Kris: Till next time.
[00:32:00]