What's Up with Tech?

From Manual Alert Triage To Autonomous Security Operations

Evan Kirstel


Interested in being a guest? Email us at admin@evankirstel.com

Manual SOC work is collapsing under its own weight. After RSAC, we sit down with Dave McGinnis, who leads IBM Consulting's threat management practice, to get brutally practical about what "autonomous security operations" really means when you strip away the marketing. The headline is simple: humans can't be the bottleneck in threat monitoring anymore, and "AI-assisted" alert triage won't cut it when machines can generate more detections than teams can ever click through.

We talk through the hard parts that decide whether autonomous SOC automation helps or harms: investigation depth, evidence, and accountability. Dave explains why the new problem isn't finding a needle in a haystack, it's finding a needle in a stack of needles, and why autonomous investigation has to examine every IP, domain, email, and hash, then document the reasoning for forensics. From there, we explore how response can move past traditional SOAR runbooks toward agents that connect directly to identity systems, cloud controls, and application platforms.

The conversation also turns to people and risk. What happens to SOC roles when tier-one work fades, where domain expertise still matters, and why tuning, threat intelligence, and integration become the real jobs. Finally, we look at the uncomfortable truth: adversaries use generative AI too, lowering the barrier to sophisticated attacks. If you’re building a modern cybersecurity program, this is a roadmap for thinking end to end, not tool by tool.

Subscribe for more, share this with a security leader on your team, and leave a review with your biggest question about autonomous security operations.



More at https://linktr.ee/EvanKirstel

SPEAKER_00

Hey everyone. Fascinating chat today on the heels of RSAC. One theme that cut through the noise: the shift from manual SOC operations to autonomous security systems. A lot of confusion and questions about what that actually means in practice, beyond the marketing hype. Dave, how are you? I'm good, Evan. Thanks for having me. Thanks for being here. Really excited to speak with a real expert, an insider, an innovator in this space. Maybe introduce yourself and your journey over the years at IBM. Where within IBM do you sit, and what's your mission within the company?

SPEAKER_01

Yeah, sure. So, you know, we're sitting in the services business of IBM; we call it part of IBM Consulting. And inside of our cybersecurity services practice, I lead our threat management practice, right? So all things "find bad things and make them go away," right? That's the practice. We can shorthand it to SOC and those sorts of things. You know, we've been on this find-the-bad-guys journey for a long time, right? And it's been one of those things where we wrote the first rule to find something bad, and we're like, oh, that's a good idea, we should write another one. And that's right around the time we lost control over having humans just process everything, because we can come up with more ideas to find bad things and create alerts than we could ever handle, right? So we've been on this AI/ML journey, and we can put all the buzzwords around it, for quite a while. In fact, the first ML in our stack came out about seven years ago, with a product that really was just trying to ask: hey, is this a false positive? False positives kind of stick out, so can we create some ML that would identify that? Turns out it was also good at solving the other end of the problem, which is: hey, this is really bad, too. So really bad and really dumb are easy things to find. And we've basically been on this journey to work our way toward full stack, and that's what we're working through now. I'm sure we'll get into that in the conversation. So yeah, I've been looking for bad guys for 25 years. It's 26 technically, but you know.

SPEAKER_00

Thank you for your service. Interesting journey. And, you know, at RSAC, of course, there was lots of buzz, both real and marketing hype, around autonomous SOCs. From your unique vantage point, what's actually changing? What are people getting wrong or right about this shift?

SPEAKER_01

Yeah, I mean, RSA this year was AI fest. They could have just called it the AI conference. Last year, and I think this is very encouraging, I don't want this to come out the wrong way, last year was a lot of PowerPoints, a lot of "hey, we're gonna," right? And there were a couple of us that had a couple of things out there where you'd go, oh, that's truly unique, actually using agentic AI. But there weren't a ton, right? And this year, what 12 months can bring. We saw a lot of that. And I think the reality is we're still very early in this journey, right? We've got point solutions all over the place: pen testing, scanning and handling vulnerabilities and exposure problems, threat monitoring, kind of classic SOC things, helping process investigations and incident response. What we're doing a little differently is saying these things need to come together, right? And so there were a lot of labels out there that said "agentic SOC" or "SOC-less environments" and those sorts of things. We're still a little fragmented. It's still very siloed by discipline inside of threat management, but encouraging at the same time. Like I said, I don't want it to come out the wrong way. Some of the stuff that's out there, you look at it and go, oh, that's a really cool solution to that thing. The trouble is I've got 15 other things I have to do, too. So we're still in that kind of mode. But the good news is we think the technologies and the compute, which has always been a problem, are there, right? We're starting to see those couple of things come together.
And maybe as the conversation goes, I can tell you a little more about how we're solving that. But that was RSA. It was really encouraging to see a lot of "I wanna" and "we're gonna" actually hit a demo, which was great.

SPEAKER_00

Amazing. Yeah, and tell us a bit more about your philosophy, your team's philosophy, on how organizations should think about maintaining control, visibility, and accountability when decisions are happening at machine speed in an autonomous environment. What is your thinking there?

SPEAKER_01

Yeah, you know, that's a great question, right? If we could write rules before that could flood us, imagine when the machines are doing it. So humans just can't be part of a threat monitoring solution going forward. Technically, we couldn't do it five years ago; we definitely can't do it now. And I'm not even getting into good AI versus bad AI yet. That's a thing, right? But right now, we can flood ourselves to the point that we need this artificial, digital labor workforce that helps. Not assists. We need it to do it, right? I don't need tier-one folks staring at a pane of glass, looking at a red thing, going, uh, which one of these 12 red things should I look at first? I need to do them all at the same time, right? And then where we need to go past that is it needs to be fully investigated, fully transparent, all visibility. How did this machine make this decision? Which, if you really think about it, who's ever asked a SOC operator to do that? It's not often, right? I mean, in forensics, it's "hey, what the heck were you thinking here?" Ideally you're doing it in a lessons learned, but lessons-learned time continues to shrink, right? And so that investigation and transparency: you have to look at absolutely everything, because now you're looking for a needle in a stack of needles, right? With the needle in a stack of hay, at least it looked different. So this investigation piece means every IP address has to be looked at. Every domain, every email address, every hash, everything that sits inside of security telemetry, you gotta look at it all, right?
You know, humans, we investigate up until we get to the first bad thing. We're like, okay, it's bad, escalate, right? Or, I've done this long enough, I'm bored, I'm moving on to the next one, this one's clearly a false positive. And neither one of those is appropriate anymore, because just finding the first bad thing doesn't tell you about spread, doesn't tell you about proliferation, doesn't tell you about a zillion things, right? So there's a great use case in and around investigations, one just from the thoroughness of the investigation, but then document it. Another thing SOC analysts love to do: write reports. Not happening, right? I mean, I did it. No one writes reports. And then tie that whole thing up. Okay, you've built this bottom of the pyramid, you've done all the research, you know everything about everything. What am I supposed to do with this? Just like on the front end it was "here's 12 red things, which one should I look at first?" Now it's: okay, it's priority two, so I'm not running around pushing red buttons, but I do need to respond in a timely fashion, right? And so tie that now out to the response. If you're feeling good about how we've gotten to this point, well, why can't I just orchestrate that directly to the identity system that needs to reset something, to the cloud controls, right? Why do I need to stop anywhere? And that's kind of where we're going: we used to stop the SOC at escalate, and we can't anymore. So find a bad thing, fully investigate, document, cool, it's all there for forensics. It gets to the end. Here's three recommendations.
The first two have already been taken.

unknown

Right.

SPEAKER_01

So that's how we're looking at that problem, and solving it. And really, the next step for us is saying: well, why do I need to stop at a SOAR? Why can't I just have my agents talk to those app agents?

unknown

Right.


SPEAKER_01

And I didn't have to write the app agent, right? I just do my domain stuff. I know security, I know threat monitoring, but I'm not gonna know how to apply every patch or configuration change. So as the individual app providers build agents that do those things, fantastic, we want to tap into that. It really opens things up. We used to quarantine emails, like, okay, that's neat. Well, same concept, but go do it for every application out there, right?


SPEAKER_00

What an amazing future. And you know, as the SOC becomes more invisible, so to speak, how does that fundamentally change the role of security teams? What's your perspective there?

SPEAKER_01

Yeah, this is a fun topic, especially with my teams. You can imagine, right? Like, hey, this is cool tech that we're building, so what am I gonna go do? What we're seeing, the one fundamental for sure, Evan, is you still have to understand the domain. You can't just say, oh, this is how you investigate, and trust the machines to go do that, because every client environment is slightly different than the next. We're using the same tools, but they're configured in different ways, trying to get them to do different things. So we've got this domain knowledge and expertise. The other thing is the machines need care and feeding, right? We're not at a point where AI just goes, oh, I get it, snap your fingers and off you go. That's not how it works. You need to train, you need to tune. And I haven't even mentioned it yet, but how are we staying current? Threat intel, right? That's a whole other thing. How does that apply to a given environment? So here's what we're seeing ourselves do: we've eliminated tier one, classic SOC, from our core services, and we're working on tier two, working the AI up the stack, fully transparent, doing all these sorts of things. And as we move the people out of those roles, they're either going deep into the security threat domain and threat intel, tying out to other components; they're going into the integration side on the response; or they're leaning in toward, oh my gosh, we're gonna need a zillion agents. We're at 30-some-odd agents today; we might need 200. I would like to go build, tune, and maintain those sorts of things. So there are these upskill paths.
And that's just what we're doing with folks inside of threat, because now we have people saying, I'm really interested in untapped things like identity threat detection and response. That's always been too much data to deal with, right? I had plenty of telemetry from the network and the apps; identities, and all that behavioral data, are a whole other set of problems to solve. And so we've got folks who've picked up and said, I want to take my threat knowledge and go apply it to identity, as a classic example. We did a bunch of stuff recently there as well. So that's where we see it. Yeah, we're displacing the work, but it's the work no one really wants to do anyway, right? Who wants to stare at a console? I did, so I found ways to get away from that. Okay, but I need to trust what's watching it. I need to understand the investigations happening at tier two, and now, as I get into tier three, classic SOC, into those responses. I still need to understand that whole piece, but I don't need to do it. So, what supports all of that?

SPEAKER_00

Brilliant. And you know, every enterprise that I speak with, and probably that you speak with, is experimenting with automation in security operations, trialing, learning, on and on. What are some of the biggest mistakes organizations make when they try to automate maybe too quickly?

SPEAKER_01

Yeah, there's a couple of things, right? There are mistakes that you make, and then there are just expectations, like, well, wait a minute, I did that, right? So group one is, I really think, not looking holistically enough. What is it you're trying to solve? Are you trying to solve whatever whoever's pitching it to you is trying to solve for you? Or are you targeting your overall program?

unknown

Right?

SPEAKER_01

Are you saying, look, my threat program needs to do X, Y, and Z? These are the three big metrics, whatever they are, right? It could be mean time to resolve, mean time to identify, whatever. And it doesn't have to be a mean time; it could be something else, like coverage of your defensive detections. Okay, great. Now you know what it is. So a lot of the mistakes we see are like: oh my gosh, I'm a CISO, I've got a mandate that the whole company's going to AI and I have to be 30 percent something, or some weird thing. I've heard some weird stuff. It becomes: I need to have AI, this does this, I have that problem, I'm gonna go do that. And then you're following that same process we've been going through forever, like, here's the next security silver bullet, here's the next security silver bullet. No, no, no. AI gives us the ability to solve bigger problems, right? And that's my hope for RSA next year, that I go and see, oh, now we're talking about end to end, not just one thing. So when I work with clients who've either been early adopters in AI or, like you said, just done orchestration and automation, I've seen some really elaborate SOARs, and they're great, but they only run five percent of the time. I mean, there were 300 runbooks and like only five of them actually execute. Maybe not the best use of time, right? I get it, hey, we're heavily automated, but if only five percent of those things are actually gonna go, what are you doing with the rest? So that's problem one. Problem two is really, when you're looking at AI, I want folks to take it from the top down, right?
This is a mindset shift, right? Why are you running a SOC? What did you want it to do? Now, I don't want to hear about CVE this or MITRE ATT&CK that; those are hows, right? What is the outcome that you want that security program to deliver? And it's not just the SOC: it includes your exposure and vulnerability management, it includes your pen testing, it includes on the other side your regular response and then your CSIRT or IR or whatever. Start there, right? And I think if I were to go rebuild the program, those are the sorts of things we would go look at. Again, the pigeonhole version is: oh, my thumb hurts, I'm gonna fix my thumb. Yeah, but why does your thumb hurt? Is it okay that your thumb hurts? Because if, say, you lost your other arm, that's a problem, right? So it's strategic thinking about how to apply the solution to the problem. And I'm not suggesting you don't need multiple of those sorts of things, but who stitches them together? Because stitching it together is how you get to the outcome. That's the first question: well, why do you have a SOC? The answer is going to be something big, right? And when they say that, okay, now it starts to make a little more sense. So I just think things like target operating models have a place, and there's a reason: you get to thinking bigger than one slice of one box.

SPEAKER_00

No, these are great points, and you leave me with a very optimistic feeling talking about this vision. The flip side is that the adversaries are using these tools in the wild as well. GenAI is everywhere, and even I personally, my little business of one, have been the subject of very sophisticated attacks as a kind of influencer. Amazing, unbelievable. I luckily didn't fall for them. But how are you seeing adversaries use these tools in the wild today, as we speak?

SPEAKER_01

Yeah, I mean, this is something I say all the time: AI lowers barriers for everybody, right? Not just us defenders. It has never been an easier time to be a hacker. You don't have to know code, you don't have to know the inner workings of an application, you don't have to know how networks are set up inside of corporations. You don't have to know any of that stuff. You just have to know what you want and go after it. And clearly you're not gonna go do this using the mainstream chat LLMs, but there are plenty of others. There are plenty of others, right? There was an article, I guess a month ago now, where someone used AI to hack into their robot vacuum, right? And that was just for fun, like, oh hey, I wonder what I can make it do. And then you find out there's a back door to the SaaS service where they sell the parts and cleaning supplies that the vacuum uses. Oops, right?

SPEAKER_00

And that's only 250,000 robot vacuum cleaners, you know. That's all.

SPEAKER_01

And the guy didn't write a line of code, right? It was all in English. You don't have to know Java and C and Python. You don't need to know any of that stuff. You just need to say, hey, I want to do this. Tell me more about that. Isn't there another way around? Oh, let me go take another look, right? LLMs are so friendly, right? So I think it's an absolutely real thing. We're seeing a researcher break into a vacuum, but we're also seeing LLMs being hacked. LiteLLM, very recently, that was big news a week or so ago, right? Attackers injecting themselves into the CI/CD pipeline. They're not doing that by hand. That was something you would have never thought of, so who helped them think of that? Something to work through ideas with: try this, try this, try this again, all in the language I learned when I was learning to speak, not anything else. So yeah, I think that's the core of this, Evan. You've got an amazing amount of compute, an amazing amount of technology, and it's for everybody. Everybody can use this, right? So it doesn't just make my job easier. For sure.

SPEAKER_00

So looking ahead, RSA 2027, '28, maybe '29: what does a mature autonomous SOC look like in terms of outcomes or expectations? Give us some scenarios.

SPEAKER_01

Yeah, I love that. This is what we do all the time; we kick around, well, what is this? Current thinking is human-out-of-the-loop through this entire threat management flow, right? So go find something bad. Okay, how do we find bad things? We can scan for them, we can pen test for them, we can monitor consoles. Great, you found it. Okay, cool, investigate it. And like I said, we're already there with some of the things we're doing from an investigation standpoint. All right, no humans in the way: I have four-nines or five-nines confidence in this investigation and that these are the correct actions. Okay, now go take those actions, right? And so, yeah, we were talking about, is it SOC-less? I mean, the SOC is still going to be there. The function of "find a bad thing and make it go away" doesn't go anywhere; that stays. But how much is a human in the middle of those pieces? Like, oh, I found it. Do you need a human for that? And maybe the right answer is that 99.9 percent of the time it's augmented AI, autonomous AI, and then there's that last point one, and it's not like that point one isn't a massive amount of things, right? And then you get into the response. So like I said, I'd be thrilled to go to RSA next year and see these towers within threat starting to get stitched together, where my pen-test agents are talking to my exposure agents, which are talking to these agents, but then beyond that, into my defensive controls. The identity folks are in there. And I'm just gonna stay in RSA security land, because we could take this into AIOps and a couple of other areas.
But if we're going to RSA, I'd love to see SOCs talking autonomously to the identity programs. Hey, I need these reset. And I don't want to just give a quarantine example: all of the controls as they relate to data and application security. We've got good tech in those areas, but I want their agents talking to the threat agents. And then, if I could bundle the whole thing together, why are we filling out spreadsheets and reporting on risk? Let's do that in real time, too. We talk about continuous controls, right? We love it as an idea, but it's too hard; things change. Well, wait a minute, hold on. I'm not doing it; an agent's gonna do it, and those agents can talk about things way faster than you and I can, right? So, is that next year? Maybe a little aggressive for everybody, but I think we're gonna see pockets of it. And that's what we're working on. That's what we want to bring together, so that we have not just a threat program but an entire cybersecurity program that is autonomously working, and the humans are working at a service delivery layer: as operators, on command and control, on that tuning and prompting, and then on overall governance and risk. We just see that there's a place for this digital labor, and it uniquely applies itself to our problem set. So I'm hoping to see a bunch of that next year.

SPEAKER_00

You know, there's no rest for the weary who continue on past RSA. What are you excited about over the next weeks and months? What's on your agenda, your calendar, your radar?

SPEAKER_01

Yeah, well, we're coming to your part of the world. We'll be in Boston for Think in a couple of weeks. We've got some exciting things to showcase and some demos around the things I've been talking about: autonomous investigation and response, and how you can apply Gen AI and agentic AI to the threat problem. Hopefully we'll see you there.

SPEAKER_00

I'll have to check my email for the invite, but I look forward to it. Thanks for choosing our neck of the woods, and thanks so much for sharing the vision. It's really transformative and so exciting.

SPEAKER_01

Yeah, appreciate you having us on. Loved doing it.

SPEAKER_00

All right. Thanks, everyone, for listening, watching, and sharing the episode. And be sure to check out our TV show, TechImpact.tv, on Bloomberg Television and Fox Business. Thanks, everyone. Thanks, Dave. Thanks, Evan.