Application Paranoia
AP_EP87 Platform vs Precision — Is Security Getting Simpler or Just More Abstract?
In Episode 87 of Application Paranoia, Colin Bell is joined by Rob Cuddy and Kris Duer to unpack the industry’s growing push toward security platform consolidation.
Are customers really asking for fewer tools, or are vendors shaping the narrative? Is consolidation improving security outcomes, or simply making complexity easier to explain to executives, boards, and auditors?
The team also discusses AI-generated code, customer questions from the field, SAST analysis choices, data flow, false positives, and Kris’s take on AI fear-based marketing.
Plus: NPC streaming, Second Life hacking nostalgia, golf season, proactive SCA monitoring, and a quick preview of Colin and Kris’s upcoming webinar on AI-assisted development.
Webinar: Join Colin and Kris on 6 May for a discussion on how AI is changing how code gets written, trusted, validated, and approved.
Register here: https://www.linkedin.com/events/7449460461881704448/
Application Paranoia — Episode 87
Platform vs Precision — Is Security Getting Simpler or Just More Abstract?
Colin [00:00:00]
This is Application Paranoia, Episode 87.
Welcome back to Application Paranoia, where we unpack application security in a world that’s moving faster than we can secure it.
I’m Colin Bell, and as always, I’m joined by my colleagues, Rob Cuddy and Kris Duer.
This week we’re going to focus on something that’s coming up more and more in conversations across the industry. Everything seems to be moving towards platforms. Vendors are expanding their scope, pulling more capability into a single offering, and positioning themselves as the central control point for security.
On the surface, it makes sense. Fewer tools, better visibility, simpler operations.
But at the same time, when you look at the types of issues we’re still seeing, especially around APIs and identity, it doesn’t feel like the problem is getting simpler.
So today we’re going to explore whether we’re actually improving security outcomes, or just reorganising how we approach them.
We’ll get into that, Kris has an AI deep dive for us, and we’ll also bring in some real questions from the field later on, which is a first for us.
But before we get to all that, hi Rob. Hi Kris. How’s it going? What’s been happening?
Catching Up
Rob C [00:01:16]
It’s been great. I’ve been really enjoying how the year has played out so far. Lots of cool things going on in my world.
It’s funny too, because we talk about all of these changes and the things we’re going to discuss here, and I get to see this perspective from multiple generations because of a lot of the work I do with students.
I’ve got a group of about 21 or so of them getting ready to finish high school and go on to the next stage of their lives. It’s fascinating to see how these spaces impact what they’re doing.
I don’t know if you remember, but for us it was probably a long time ago when we were choosing a school or going through admissions. It’s a complete game changer for these guys now. The way decisions are made, how quickly those decisions get made, and how much impact they can have is pretty fascinating.
Colin [00:02:12]
Even career choice is different, isn’t it? My son graduated with economics and did a master’s in it, but it’s really hard to find work in that field. I guess a lot of what they did is now done with AI.
Become a plumber.
Kris [00:02:34]
Yeah, electrician. If I had to do it over, that’s what I’d be going after. Start your own business after you get good.
Rob C [00:02:42]
Refrigerator repair. Stuff like that.
Kris [00:02:46]
AI your way out of that.
Rob C [00:02:49]
That would be tough.
One of our colleagues, and Kris, both of you know him really well, has a daughter graduating with a computer science degree. She had to do a ton of work just to find an internship.
It’s fascinating to see what’s going on even in technical fields and how diluted things have become.
Kris [00:03:12]
For me, golf season has begun.
Rob C
Yay.
Kris
Both kinds of golf. It’s wonderful to get out there and play a couple of rounds. I played 27 holes this weekend. It was awesome.
Rob C
Real golf?
Kris
I play real golf now, yeah. Four whole pars yesterday. I couldn’t believe it. More in one round than I got in all of last year.
Rob C
And you didn’t hit the windmill.
Kris
I did not hit the windmill. This is not mini golf. We’re talking golf. Going out there with your fancy set of clubs, hitting the range.
Rob C
What’s your best club to swing?
Kris
For me, probably the nine iron. The club everybody can hit with.
Colin
Play every hole with it.
Kris
Exactly. Everything’s a nine iron shot.
Icebreaker: NPC Streaming
Colin [00:04:30]
On the topic you were talking about, Rob, there’s something I came across that I really don’t understand and it disturbs me greatly.
There’s a trend on TikTok where people go live and act as non-player characters in video games.
Rob C
Oh God, yeah. I’ve heard about that.
Kris
NPCs. It’s creepy. A little bit creepy.
Colin
And they’re making money from it.
Kris
Hats off to them if they can find a way to make money doing that. Good for you. But my goodness. Wow.
Rob C
Thankfully I haven’t come across it too much. I know what you’re talking about, but I don’t know anybody personally doing it.
Kris
They’re NPCs. You wouldn’t know them. They’re not even real, turns out.
Colin [00:05:26]
So maybe that’s where we’ve ended up. We’re building AI to behave more like humans, and humans are going viral behaving like AI.
Kris
Somebody’s coded up an AI agent to do that. It’s bizarre.
Rob C
Remember when we had Second Life and everybody wanted to immerse themselves in that?
Kris
That thing was popular for a hot minute.
Colin
As someone who did a lot of hacking through Second Life, using it as a proxy, you could put an object in one of the rooms and it would fire off a SQL injection attack. If you clicked it, it would fire. The beauty was it was coming from different people’s IP addresses.
Kris
That’s wild.
Rob C
Wow.
Colin
The same person made an object in a very vibrant room that said “click me” and would increase your character’s ability for 10 or 15 minutes. Then it would decrease, and you had to go to him and spend real money to get another shot of it.
Kris
Virtual drugs.
Rob C
A drug dealer. Wow.
Colin
He made a lot of money.
Kris
That is crazy. Hats off to people that figure that out. I wish I was that clever.
Rob C
You kind of have to have a little bit of an evil bent to you, I think.
Colin
They all end up in our industry.
Main Topic: Platform Consolidation
Colin [00:07:03]
All right, let’s get into the main discussion.
One of the things I’ve been noticing over the past few weeks is just how aggressively the industry is pushing towards platform consolidation.
We’ve been seeing a sort of platform consolidation in AppSec, but this is at a much wider level. The likes of Wiz and Palo Alto are trying to become a single control plane for security across cloud, identity, APIs, and posture.
Again, they’re saying they can do everything.
Customers are overwhelmed with tools and there’s too much fragmentation. I get that premise. The promise is simple: put everything together.
But when you look at the breaches still happening, specifically around APIs and identity, it doesn’t feel like we’re solving the underlying problem. It feels more like we are shifting where the problem is sitting.
We are really moving away from best-of-breed tools to more platform thinking.
Rob, from your perspective, because you’re out with customers a lot more than I am these days, are you actually hearing that they’re still asking for this? Or is this something being thrown at them by these large companies?
Rob C [00:08:34]
It definitely comes up in conversation, but I see it mainly in two camps.
One is pure cost consolidation. There’s a belief that if we consolidate on a platform, we’re likely reducing licence costs, making things easier, and getting down to fewer vendors. Finance teams are certainly interested in it. That’s where I see it the most.
But it’s also an interesting dichotomy. People want to go in that direction, but they don’t want to get locked into a particular vendor. They still want flexibility and choice.
The other thing I think is more interesting is the assumption that if I get onto a platform, the reporting will be better. The dashboarding will be better. The analysis will be better because it’s all in one place.
We spent years talking about REST APIs, customisations, writing your own tooling, and that kind of thing. I think people are going more in the platform direction because they want information at their fingertips, and they want the tool to do it for them.
There’s this belief that having a platform will give that to them.
Colin [00:09:54]
Are we optimising operational simplicity instead of actual security effectiveness?
Rob C [00:10:00]
Probably.
We’ve seen cases where folks have said, “I don’t really want to do anything. I want to open the tool and have it tell me.”
So there’s trust in the data, trust in the visualisation, rather than going in and questioning what we’re seeing and validating it at a deeper level.
People are assuming the tools are doing the right thing. If you extend that to a platform, then the platform is doing the right thing across all the pieces that make it up.
If you’re not careful, you can optimise a dashboard instead of the data underneath the dashboard, and end up with some interesting problems.
Colin [00:10:50]
That makes sense.
I actually think we’re optimising for decision-making comfort, and I’m going to claim that term as well.
Platforms don’t necessarily make the environment simpler.
“We have one platform.”
“We have full visibility.”
Those things are good, but they are a much easier story for executives, boards, and auditors.
The underlying system hasn’t become less complex.
I like the direction we are trying to take things, where we make it for a certain persona. It’s not necessarily one tool fits all. We may embrace more around ASPM to look at risk and have a platform from that perspective. That makes sense.
But bringing everything together and saying you don’t need the underlying components feels like a bit of a fallacy.
Ultimately, we’ve not reduced the complexity. We’ve just made it easier to justify.
Rob C [00:12:02]
Yes. To me, it fits into that conversation of simple versus simplistic.
You want things to be simpler, but that’s actually really hard to do. If you’re throwing everything onto a platform and just putting it out there, that can be a simplistic approach. You have to be careful.
Colin
Any thoughts yourself, Kris?
Kris [00:12:29]
I can definitely see value in it from a management point of view.
You have to have certain expertise to use all these tools. You have to hire for that expertise or train for it. Theoretically, if you’re using one platform, that expertise is compressed.
From a variety of different angles, I can see where it makes sense.
From a security angle, I can see how it’s less than optimal compared with using best of breed. But I don’t think this is new.
Even in AppSec, using a platform to cover all bases has been a common trend over the past few years. I can’t see how that would be any different for SIEM, WAF, identity, API security, and so on.
Trying to do all things for all people, as much as you can. Jack of all trades, master of none.
But ultimately, in the long run, the cost-benefit ratio matters. I’d have a hard time getting best of breed if I had to hire for 15 products and have experts in 15 different areas, just to run the tooling, when I could have maybe three or four people run one platform across those areas.
For me, that would be a tough sell in most circumstances for a lot of companies, unless you’re huge and have money you don’t care about.
Colin [00:13:52]
That makes sense. It certainly does.
Questions from the Field
Colin [00:13:58]
To bring this to reality, we’ve also got a lot of questions we’re hearing from the field, and I thought it would be interesting to address some of those through the podcast.
Some of them hit on these themes.
There are a whole bunch we got in the last couple of weeks specifically around AI. It’s interesting to hear some of the questions being asked.
What I might do is throw some of these questions at you and see what you think. There’s still a lot of confusion out there.
The first question has multiple parts.
Is AppScan’s ability to scan code impacted if Claude, Gemini, OpenAI, and so on are used instead of traditional development processes?
Kris [00:14:44]
What do you mean by impacted? As in we won’t find anything? Because that’s not true.
We find stuff in vibe-coded things all the time.
AI is good. There’s no getting around it. But it is not a silver bullet that solves every problem humanity has ever faced.
You can’t just type in a prompt saying, “Make me my next product,” and it magically knows how to do it. It doesn’t work like that.
You also can’t type in a complex prompt and have it make completely bug-free, secure code. It’s being trained on code that’s out there, and the code out there is not bug-free or perfectly secure.
So how is AI supposed to know?
Colin [00:15:32]
That probably is the answer.
The same person asks, “Is there a difference in scanning code for review?” They also ask, “Are you seeing an uptick in bad code being generated by these tools?”
You’ve answered the point that we probably see just as much bad stuff going through.
Kris [00:15:52]
Yes, except now it’s at scale.
There’s a lot of value in AI. I’ve used AI to write simple viewers for JSON formats, XML formats, or converters that I don’t feel like figuring out how to write. It’s pretty awesome for that.
But if you want to write something enterprise-grade, with security built in, identity controls, policy management, and all of that, using AI alone? Good luck. It’s probably not going to go the way you think.
Colin [00:16:35]
The final part of that was really interesting.
The same person asks whether it is easy to tell if code was generated by Claude, OpenAI, or Gemini, because the developer will seemingly never admit it.
They’re asking us, as HCL, to tell them.
Kris [00:16:56]
That’s an interesting problem, isn’t it?
How much of this is you, ChatGPT? How much of this is Claude? Take ownership of your code.
Does it matter? I don’t know if it does.
There was a story that Amazon was pushing the vibe-coding idea, but now they have their senior developers reviewing every pull request before it goes in. If you do that, does it matter how it was originally generated?
It depends on how much time you’re wasting rejecting code from vibe-coded solutions that aren’t right, and how much you have to massage it to fit your legacy product.
If you’re building something from scratch, it’s probably easier to use something like Claude to build it than to maintain, augment, fix, or bug-correct legacy code.
Colin [00:17:55]
What we’re seeing is a lot of executives who are a little bit paranoid.
Kris [00:18:00]
Some CEOs might be too removed from the reality of what’s actually happening to truly appreciate where AI is good and where it doesn’t do the job.
There are definitely places for AI. You’d be a fool to think otherwise.
But does it solve every problem known to man? No, it does not.
We thought we weren’t going to have lawyers anymore or doctors anymore because AI would do it all. We still have lawyers. We still have doctors. Isn’t that weird?
Rob C
One of those we would be okay getting rid of.
Colin
I use Dr. Google all the time.
Kris
I’m sure Dr. Google can do surgery when you need something taken out.
Rob C
I want to know who is writing the AI that will account for an earthquake during a surgical procedure in Northern California.
Kris
You’ve got 22 minutes to finish this off because something’s coming. The big one.
SAST, Data Flow and False Positives
Colin [00:19:20]
A couple more questions.
“I want to understand better the implications of AppScan SAST analysis choices. For example, when do we get data flow analysis results, and what is the impact on false positives?”
We’ve had a few questions like this where people are not understanding the difference between different modes of scanning.
That’s a great question for you, Kris.
Kris [00:19:45]
There are benefits and challenges to both modes.
With data flow, yes, you have a proper data flow through your application, but that doesn’t necessarily mean it’s an insecure, “the world is on fire” data flow. It means this is a data flow that exists.
An example might be reading from an attribute and writing to an attribute on a web server. You have to set the attribute with something dangerous for it to be dangerous.
So just because it’s data flow doesn’t mean it has zero false positives. That’s not a thing in SAST.
And just because you don’t have data flow doesn’t mean the finding is 100% noise. That’s also not a thing in SAST.
They both have benefits.
For example, if I want to check whether a cookie is set secure, that’s easy to do without data flow. It’s true or false right there.
There are things both approaches do well. The ability to do both on the same code base is where you want to be, so you can take advantage of the strengths of both forms of scanning.
There isn’t a one-size-fits-all solution to every possible security flaw in your code. It doesn’t exist.
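As a rough sketch of the distinction Kris is describing (hypothetical Python, not AppScan's actual rule logic): the first function holds a data-flow finding, where tainted input travels from a source to a sink and a trace matters, while the second holds single-line findings that need no trace at all.

```python
# Hypothetical illustrations of the two kinds of SAST findings discussed above.

import hashlib

def lookup_user(cursor, request_params):
    # Data-flow finding: user input (source) travels into a SQL string (sink).
    # Whether it is truly exploitable depends on the whole path, which is why
    # a flow existing is not the same as "the world is on fire".
    name = request_params["username"]              # source: attacker-controlled
    query = "SELECT * FROM users WHERE name = '" + name + "'"
    return cursor.execute(query)                   # sink: SQL injection risk

def make_session_cookie():
    # Single-line findings: no trace needed. The insecure setting is visible
    # right where it is written.
    cookie = {"name": "session", "value": "abc123", "secure": False}  # flag: missing Secure
    digest = hashlib.md5(b"token").hexdigest()     # flag: weak hash (MD5)
    return cookie, digest
```

The point is simply that both detection styles are legitimate; a trace adds context for the first kind of finding but would add nothing to the second.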
Colin [00:20:55]
It’s obvious, but I think people see some findings without a trace and assume something is missing in the interpretation. SQL injection vulnerabilities without a call trace, for example.
Kris [00:21:10]
It’s easy to feel that way.
We are adding data flow to some of our non-data-flow languages over time, and we’re investing a large amount into our hybrid scanner, which does the best of both worlds in one scanner.
It can look for that single line. Maybe you’re using MD5. Maybe you’re setting something true where it should be false, or false where it should be true.
But it also does the data flow aspect.
PHP is coming soon. TypeScript, JavaScript, HTML. We’re expanding the languages, and you’ll see more coming this year too.
We know data flow is important for certain aspects, and we know it’s necessary in this new world to present a good set of findings to AI, whether autofix is used or whether the AI is explaining what’s going on with a finding.
All of that is coming and more to our engines.
Colin [00:22:14]
Very good.
That was just a few questions from the last couple of weeks, so we’ll probably do this more often. I think getting context from people is useful.
So if you have questions, please put them to us.
Kris Segment: AI Fear-Based Marketing
Colin [00:22:35]
Before we wrap up, Kris, I think you wanted to bring an angle on AI into this.
It feels like it connects directly with everything we’ve talked about. What was your angle for this week?
Kris [00:22:49]
I wanted to bring up this interesting aspect of the AI world where they’re using something called fear-based marketing.
They’re trying to scare you into thinking it’s going to be like Skynet, take over your life, remove all your jobs, and there will be universal income because there will be no work anymore.
None of that is true, people. It’s just not happening.
An example is Claude Mythos saying, “We couldn’t release this because it’s too dangerous for the world to have at their fingertips.”
We’ll get into some interesting stories around that, where they accidentally released it to a group of people on a Telegram board or something, and they used it to make a web page.
It was available. It was out there. They figured out the link to Mythos, so they already had access to this supposedly most dangerous model of all time.
The irony.
Ultimately, it needs to be called out because it preys on our primal fears, like the pandemic did, or the world ending, or whatever primal fear you want to come up with.
It generates unprecedented attention for these models.
Yes, they might be good. They might be able to find a vulnerability that hasn’t been seen in 25 years. Maybe they want to get those people involved so they can fix it. Possibly that’s true.
Or maybe it’s overblown. Maybe what they found is that you’re using HTTP instead of HTTPS. We don’t really know the details behind what they’re claiming.
I do think it’s a very smart model. I do think it will be interesting. I also think it’s super expensive. Token spend is going to go way up for people.
When we had ChatGPT 3.5, your job was going to get automated. Students were going to cheat. Google was in trouble. Misinformation at scale. If you ignored it, you’d be left behind.
None of that fully came true.
Yes, students cheat, but they cheated before and they’ll cheat again. If you ask them for an essay, have them handwrite it. You can’t ChatGPT your way into that.
Your job hasn’t been automated. We still have software developers, lawyers, doctors, and all of these things that AI was supposed to replace back in the ChatGPT 3.5 days.
So I don’t see how Mythos, the new ChatGPT, and all the other stuff is fundamentally different.
Possibly it will be better. Possibly it will work better at what it does. But we’re still here. We’re not going anywhere.
What’s even more interesting is supposedly we lost 80,000 jobs in the first quarter this year due to AI. I use air quotes on purpose, because it’s probably not AI. It’s probably bad hiring practices, or the last remnants of over-hiring from the pandemic.
Then another report comes out and says by the end of 2028, we’ll need 7.29 million jobs in the green sector.
Which is it?
Are we losing jobs, or do we need orders of magnitude more people working in this world, in this environment, for software which AI is supposed to replace?
I’ve seen other people talk about how we’re going to have to replace all the people fired because of AI, because it turns out AI can’t replace them.
So I just want to make the point that fear-based marketing is just that. The reality hasn’t hit. We don’t know what’s going to happen.
This could be the car replacing horse-drawn carriages. This could be electricity replacing people who used to light gas lamps at night.
It could be. Who knows?
It probably isn’t going to be that simple. We still have COBOL programmers.
Colin [00:27:00]
It’s interesting, and it’s probably not a popular point of view, but we are moving toward what is inevitably going to be some form of recession or downturn, or at least there is certainly a smell that it may happen, and organisations are starting to cut jobs and make announcements.
But they’re blaming AI, not the economy.
Are they letting people go because of AI, or because of non-AI-related reasons? Probably the economy. But that gets into a bigger topic.
Rob C [00:28:02]
That’s a human issue, right? The economy, that kind of thing. We’d have to admit we made a mistake, whereas with AI you can blame an entity.
The thing I find interesting in all of this is that jobs may not disappear, but they do change.
There’s this question of how easily the worker doing that job now can change with it.
When all these things come out saying your job is going to disappear, it’s partly an indictment of unwillingness or hesitancy to change, adjust, grow, and move in a different direction.
We’re going to need people, and they have to be in the mix. But there’s also a place where you, as a responsible human being, have to adjust to what’s going on around you. You have to adapt, leverage new tools, learn new skills, and figure out how to make that effective.
That’s where the real battleground will be.
The pendulum swings widely in either direction until we figure out where equilibrium is.
Colin [00:29:21]
That’s very good.
Rob Segment: Proactive Monitoring
Colin [00:29:25]
Rob, did you have something to bring to this?
Rob C [00:29:30]
Yes, it’s a little bit related. It’s more about things we have going out.
I was with a couple of customers last week, and it was interesting to see how the pace of change was really impacting their tooling.
We had just released a version of something, and right after it there was a brand new vulnerability published related to GitHub Actions.
It was like, what do we do?
Wouldn’t it be great if you had a way to monitor for those things?
What I found out is that we actually do. In our tooling now, if there is a new CVE announced and you’re using our software composition tooling, we have something called proactive monitoring.
It looks at your scans and can look at your older scans to identify whether a newly published CVE affects you.
I thought that was a cool capability that could actually be really helpful. It runs every 24 hours. It doesn’t have to be on, but if you do an SCA scan, it can be turned on by default in that mode.
It’s tracking and looking. Cool capability. Nice work, guys.
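The monitoring Rob describes boils down to matching newly published advisories against the component inventory captured by earlier scans. A minimal sketch of that idea, with made-up data and names (this is not AppScan's actual implementation or advisory feed format):

```python
# Hypothetical sketch of proactive SCA monitoring: check whether a newly
# published CVE touches components recorded in a past scan.

# Inventory from an older scan: component name -> versions in use.
past_scan_inventory = {
    "lodash": {"4.17.20"},
    "log4j-core": {"2.14.1"},
}

# A newly published advisory (illustrative placeholder data).
new_cve = {
    "id": "CVE-XXXX-0001",
    "component": "log4j-core",
    "affected_versions": {"2.14.0", "2.14.1", "2.15.0"},
}

def affected_by(cve, inventory):
    """Return the versions from our inventory hit by this advisory, if any."""
    in_use = inventory.get(cve["component"], set())
    return in_use & cve["affected_versions"]

hits = affected_by(new_cve, past_scan_inventory)
```

Run on a schedule (Rob mentions every 24 hours), a check like this lets old scan results flag new vulnerabilities without rescanning anything.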
Colin [00:30:58]
Was this something that wasn’t apparent to the customers?
Rob C
No, it wasn’t apparent at all.
Kris
Fear-based marketing is real. It’s a thing.
Rob C
For those who are super cost conscious and wondering, that monitoring does not count against your subscription. It doesn’t do anything to your licence count. It’s just there and operating for you.
Colin
Excellent. Good advice.
Webinar Plug
Colin [00:31:36]
Before we wrap up, I just want to give a quick note and a plug.
Kris and I have been doing a webinar series. We did one last month, and we’re going to be doing one on the 6th of May.
The one coming up is all about what we’ve just been talking about, but we’ll probably go a little bit more specific.
We’re going to handle the topic of where large language models start generating code, and the assumptions around ownership, authorship, and all of those good things.
We’ll be digging into how AI-assisted development is reshaping the software lifecycle, and why validating what’s produced and approved is becoming more critical, not less critical.
So we’ve talked about some of this, but we’re going to be doing it face-to-face in a webinar.
If you’re interested, join us on the 6th of May, and we’ll leave a link at the bottom of the podcast.
Register here: https://www.linkedin.com/events/7449460461881704448/
Closing
Colin [00:32:33]
That’s probably a good place to leave it.
There’s clearly a shift happening in how we think about application security, but whether that shift is actually improving outcomes, or just making things easier to manage, is still very much up for debate.
So thanks Rob. Thanks Kris, as always.
And thanks for listening. We’ll be back with more soon.