The Cybersecurity Bridge

Brian Dye, Corelight

SiliconANGLE



Brian Dye, CEO of Corelight, joins Jon Oltsik for this week's episode of The Cybersecurity Bridge.

SPEAKER_00

Hello everyone. It's me, Jon Oltsik, analyst at large at theCUBE Research, and I'm here for another episode of my podcast, The Cybersecurity Bridge. If you're one of the few people in the world who's never seen The Cybersecurity Bridge, I'm going to give you the Kevin-from-Home-Alone look: what are you doing? The Cybersecurity Bridge podcast is about cybersecurity, and in each episode we look at one area of it. We break the podcast into three sections. In the first section, we talk about the present of that area of cybersecurity. In the second section, we talk about its future. And the third is the cybersecurity bridge: how do we get from the present to the future? And I've got an old friend on. I've got a lot of old friends that participate, but this old friend is a really smart person I've known for years: Brian Dye, the CEO of Corelight. Brian, welcome.

SPEAKER_01

Jon, thanks for having me. It's great to see you again.

SPEAKER_00

And we're going to be talking more or less about network security and a lot of areas peripheral to network security. But before we do, are you ready for a cybersecurity trivia question, Brian?

SPEAKER_01

How about: I'm as ready as I can possibly be, even if that doesn't mean I'm ready. Hit me, Jon. Worst case, I get super embarrassed.

SPEAKER_00

Okay. So I think you'll get this, but we'll see. We can't talk about network security without thinking of firewalls. So the question is: who wrote the first technical paper describing firewalls? For bonus points, what year, what company, and even why they wrote it, but I'll explain. Was it Dave Brizzoto? Nigam Nitra? Bill Cheswick? Jeff Mogul? Or Gil Shwed?

SPEAKER_01

Oh my gosh. There's a segment on Wait Wait... Don't Tell Me!, the NPR show, where the guest gets to appeal to the audience. I wish I could ask the audience right now. I'm gonna go with Cheswick, but it's a complete guess.

SPEAKER_00

Okay. No, it's not. Nope, not Cheswick. I did extensive research. I spent months, months doing this. Well, mostly on Google, yeah. And as a Massachusetts native, I'm proud of the fact that the first technical paper was in 1988, by a guy named Jeff Mogul at Digital, Digital Equipment Corporation, which was in Maynard, Massachusetts, and I brought my kids up in the next town. It was in response to the Morris worm in 1988. Cheswick, I think, was instrumental in the actual implementation of firewalls, but the first paper was Jeff Mogul's. So let's move beyond that. One piece of history, great stuff. So tell me. I had Greg Bell on last year, and he is your colleague at Corelight. But tell me, what's the state of network security, and how have things changed in the last year?

SPEAKER_01

You know, it's actually really fascinating. We're in a bit of a reinvention, I would say, and there are two big forces driving it. One is that if you look at the last 10 or 15 CISA advisories, about 80% of them have a line in there, as one of their top three recommendations, saying you should do network baseline and anomaly detection, because you need to be able to find and address these perimeter-evasive attacks that are living off the land. That evolution of the threat landscape is attackers doing what attackers always do, right? Going to the weak spot in the defensive infrastructure. That's a big one. And the second big one is this AI acceleration: the attackers' use of AI, and the acceleration that leads to automation, which leads to needing great data as part of that automation program. So all of that is forcing defenders to take a look at, call it the 15-year-old security stack, kind of NetFlow, standalone IDS, and PCAP, and say, wait, that's not good enough for the modern era. So, lots to unpack there, but that moment of reinvention is probably the biggest thing.

SPEAKER_00

And so now let's talk about how that's being reinvented. I think of Corelight, and I think of Zeek and Bro, I think of Suricata. So how are those things being reinvented to address the kinds of things you just talked about?

SPEAKER_01

You know, it falls into a couple of categories. First is just raw points of visibility. I am incredibly surprised at, you know, even some of the largest banks that we talk to, which you would think would be on the cutting edge of everything. And by the way, they generally are. But even amongst that ecosystem, you'll find folks saying, hey, we've been doing a ton at the endpoint, we've been doing a ton on identity, but we haven't really been looking at the network outside of the firewall for the past eight or 10 or 12 years. So there's this recognition that, gee, I haven't actually been paying attention to this world, and now I need to make it a priority. That's been a really big thing. And what that turns into is placement of points of visibility. Because if what you're looking for is living-off-the-land attacks, sitting there at the perimeter is only a portion of what you're doing, right? Sitting at traditional north-south data center ingress and egress is not the full picture. You need to look at your secure enclaves, like your DNS subnets, your VPN concentrators, things like that. And you need to be looking at key points in your east-west traffic. So that's one big thing: points of visibility is a big change. The second big change is moving from signatures to behavioral and ML-based detection. The bane of the traditional IDS world is that those tools become alert cannons that catch the known but don't catch the modern or novel attacks. So a lot of what's happening in this NDR reinvention is you bring behavioral detection, you bring supervised and unsupervised ML, you bring more analytics compute to find the types of attacks that go beyond what signatures can do. By the way, you still need the signatures. Why? Cheap, cheerful, effective for a bunch of basic stuff. So I'm not saying to take those out of the equation. That doesn't help anybody.
But you do have to move beyond them into some of the more advanced forms of math. And then the third one is realizing that the evidence, the ground truth of what's happening on the network, turns out to be incredibly useful for a wide range of use cases. That could be compliance or fraud or API security or insider threat. It's amazing what folks are doing. But the really compelling one right now is: I need it as the fuel for my AI SOC reinvention, right? The automation programs I'm trying to drive in the SOC. So those are the big three.
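The "network baseline and anomaly detection" idea above can be sketched in a few lines. This is a toy illustration, not Corelight's actual method: a hypothetical z-score check over per-host outbound byte counts, assuming those counts have already been aggregated from flow or Zeek conn logs.

```python
import statistics

def baseline_outliers(byte_counts, z_threshold=3.0):
    """Return hosts whose outbound byte volume sits far above the fleet baseline.

    byte_counts: dict mapping host name -> total outbound bytes in the window.
    A host is flagged when it lies more than z_threshold population standard
    deviations above the mean. Real NDR behavioral models are far richer
    (per-host history, seasonality, protocol features); this only shows the idea.
    """
    values = list(byte_counts.values())
    if len(values) < 2:
        return set()
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return set()
    return {host for host, sent in byte_counts.items()
            if (sent - mean) / stdev > z_threshold}

# Twenty quiet workstations plus one host moving 500x more data.
counts = {f"host{i}": 10_000 for i in range(20)}
counts["host-exfil"] = 5_000_000
print(baseline_outliers(counts))  # {'host-exfil'}
```

The point of the sketch is the posture, not the math: the baseline is learned from the environment itself, so it can flag behavior no signature describes.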

SPEAKER_00

Yeah, and I'm just going to complain a little bit here. I think there are a couple of things that have happened in the last few years that are just hyperbole by the industry. One is that XDR is all you need, or even that EDR is all you need: you don't need NDR, you don't need to monitor the network. Absolutely not true. And the second one is that network security is all about signatures, and signatures are looking in the rear-view mirror. That's not true either. So I know I'm ranting, but it sounds like you agree.

SPEAKER_01

Oh, for sure. And look, I'll pile on a little bit. If you think about the change in customer conversations we've had over the past few years: four years ago it was a binary conversation. It was really interesting. Folks had either wrapped their head around this and said, oh wait, I have a super well-deployed EDR environment. And by the way, if you don't have EDR, you shouldn't even talk to us. You should go get a modern EDR and get it deployed. I'll say that until the cows come home. But you would get folks with a super well-deployed EDR still seeing a bunch of issues, and then they were forced into the realization of exactly what you said. Like, oh, it turns out the MITRE ATT&CK framework has an entire category of threats best observed and found from the network, and there's a reason for that. Go figure. So you would have a set of folks that deeply understood that and were among the early adopters of the NDR category. And then you had the other half of folks, where the first part of every customer conversation was: I've got a great EDR, why do I need you? And the ratio of that was 50-50 four years ago, and it's 90-10 now, right? I think there are a lot of things that go into that. But there's this recognition that, look, let's not kid ourselves: defense in depth exists for a reason. The endpoint gives you depth, the network gives you breadth. It's not a new concept. We're just doing it a lot better now.

SPEAKER_00

Yeah, not to mention, Brian, that a lot of the attacks go after devices that don't have an EDR agent, as we know. And the attackers are smart. They know that to circumvent EDR, you could manipulate the EDR itself, change the logs, build some kind of EDR-resistant malware, or you could attack elsewhere, and that's where network security comes in. So it's really important.

SPEAKER_01

Yeah, and by the way, one of our advisors is Rob Joyce, who used to run cyber for the NSA, and used to run Tailored Access Operations for the NSA back when that unit had that name. Wonderful human being on many levels. He and I did a joint session at RSA, not this past one, but the year before, and he had a great example of that. There was a published attack where the attacker got a webcam. They got shell on the Linux operating system in the webcam, and then from that shell they were doing SMB mounts and driving encryption from the webcam. And Rob summarized it well. He said: no one has that attack in their threat profile. No one's modeling for that, because it's just so creative. But to your point, that's exactly what folks do, right? They're going to bypass the defenses any way they can. That's a good, and the latest, example. And we've all heard, and this is absolutely true, that the time period from disclosure of a vulnerability to exploit of that vulnerability used to be three weeks. Now it's hours. And with Glasswing and Mythos, and not just those, but honestly the continued progress of the foundation models, this isn't going away, right? This ability to find and exploit these novel attack paths is only accelerating.

SPEAKER_00

Yeah, now before we move on to the future, I want to ask you one more question about the present. It's related to AI, and it's this: in my research, I noted that Corelight, and also the Zeek community and the Suricata community, are now building in guardrails to monitor AI traffic itself. That's probably something most people don't know about. So can you talk about that?

SPEAKER_01

Yeah, and it's really an extension of the point I made earlier: if you have ground truth of what's happening on the network, you've got this fantastic data source. And asking what the use case is for great data is like asking what the use case is for plastic. It's all of them, right? This is just one good example. So if you think about detecting shadow AI, you see that activity happening on the network. You're right, you see the foundation models, you see MCP servers, you see agents. And so it was actually very straightforward to extend our current app identification capabilities. Frankly, even before we did that, folks were using the data we generate in a threat-hunting style to go and find these things. So a lot of the way we think about the value of NDR as a category, and the value that we as Corelight are trying to provide, is: if we can give you ground truth of what's happening on your network, that is going to let you answer the question of tomorrow that you don't know about today. Today's hot topic is: help me find the new shadow IT, which is unauthorized GenAI applications. 100% true. We've also got folks saying, hey, what about post-quantum? How do I understand the encryption ciphers in use on my network? We've got tons of folks saying, hey, this net-new vulnerability, how do I find examples of those exploits? Not to mention the more mundane stuff, like, can I automate more of my compliance attestation, things like that. All of those are a function of: do I have the right data? Do I have the right coverage and vantage points for that data? And do I have the right retention of that data? So we really think about the data as the first-class citizen, and then we leverage that data to answer all these types of questions.
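As a concrete illustration of the threat-hunting style described above, here is a minimal sketch of shadow-AI discovery over Zeek-style ssl.log records. The field names (`id.orig_h`, `server_name`) follow Zeek's ssl.log; the domain list is a small invented sample, not an authoritative feed, and a real hunt would also cover DNS and HTTP evidence.

```python
# Invented sample of AI-service domains; a production hunt would use a
# curated, regularly refreshed list.
AI_SERVICE_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_shadow_ai(ssl_records):
    """Map each internal client to the AI-service SNI names it contacted.

    ssl_records: iterable of dicts shaped like Zeek ssl.log entries, with
    'id.orig_h' (client IP) and 'server_name' (TLS SNI).
    """
    hits = {}
    for rec in ssl_records:
        sni = rec.get("server_name", "")
        if any(sni == d or sni.endswith("." + d) for d in AI_SERVICE_DOMAINS):
            hits.setdefault(rec["id.orig_h"], set()).add(sni)
    return hits

records = [
    {"id.orig_h": "10.1.2.3", "server_name": "api.openai.com"},
    {"id.orig_h": "10.1.2.4", "server_name": "example.com"},
]
print(find_shadow_ai(records))  # {'10.1.2.3': {'api.openai.com'}}
```

Because the metadata is already collected for other reasons, a hunt like this needs no new instrumentation, which is the "answer tomorrow's question with today's data" point.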

SPEAKER_00

Yeah, that's a good answer that tees up our discussion of the future. And again, this is something most people may not understand: the role of the network in AI model development and application development. That's true in training, it's true in inference, it's true in data ingestion. So talk about that, and then talk about the security implications involved.

SPEAKER_01

Yeah, it's a great point, Jon. And one of the ahas that a lot of folks don't realize: because of the open source heritage we came from, and the really elite set of defenders that we serve, both in the paid community and in the open source community, we were able to get engaged in the actual model development and tuning programs at a number of the big flagship providers, because they want those models to understand the canonical security data and be able to act on it out of the box with no further vendor tuning, right? This is to everyone's mutual advantage. So if we think about the implications of that, we see folks doing a progression of AI-based automation across the SOC. And because of the breadth that the network provides, we're one of three really essential ingredients in that automation. You need endpoint for depth, you need network for breadth, you need identity to bring it together. I'm not speaking rocket science here, right? Those are pretty foundational. So when you do that, we're seeing folks move through a three-part evolution, if you will. Step one is assistance: help me do what I'm doing today, but do it faster. So we, and a bunch of other folks, do things like, hey, translate this alert into English, help me understand what I should do next, give me investigation guidance, things like that. Natural language query is a good example. The phase a lot of folks are in right now, and we're actually leading the category in this, is moving from assistance to full automation. So we will take the riskiest assets in the environment and run a soup-to-nuts investigation. And the key here is you've got to be transparent and you've got to earn trust. And of course, you have to be accurate, right?
And that is going to be one of the key pivots in how I think things need to evolve. But where I think things are going, which is going to be truly game-changing, is when you get multi-product integration and a true AI SOC that enables not just task automation but true workflow automation. And instead of having to trust an individual tool, you get triangulation across tools. So the new defense in depth, I think, is going to be cross-AI triangulation, in a really big way.

SPEAKER_00

I think so too. I talked a lot about that with people at RSA, and I think there's some consensus on it. Then there are some other people who have more of a proprietary agenda. But talk more about that, because historically I've always thought of Corelight as just a great monitoring and data system. And now that data is going to feed other models, it's going to feed other tools. Now I see a future where that kind of specialization is on the identity side, on the data side, and it's already on the endpoint and network sides. So how does that all work collaboratively and come together?

SPEAKER_01

Yeah, there are two things I think are really driving this. One is domain expertise, and two is the SIEM moving to a federated model. So our mental model of how this agentic SOC is going to work: imagine it like a table. You're going to have your individual detection and response domains, right? You've got EDR, NDR, ITDR, you can add them all up. Those are the legs of the table. The job of each leg is to be the master of its domain. So we generate the best data we can, generate the best analytics we can, and drive our own AI-based automation within our leg of the table. Then we have to be a constructive partner to the SIEM, or the AI SOC tool, or frankly the large language models, whatever folks are doing at that cross-AI layer, because there's a subset of data that's going to continue to need to be aggregated, right? The alerts, and a little bit of context around those alerts, are a no-brainer. That's the minimum threshold you need to bring in. But, you know, both CrowdStrike and Cisco are investors of ours, and I would tell you that both of them believe you're going to have to move to a federated model, at least in some way, shape, or form. Different people are tackling it different ways. The AI SOC startups are certainly starting with a federated model. So you've got the alerts and the context centrally, but now each of us, EDR, NDR, ITDR, you name it, has to be an agentic partner. And actually, we just did an announcement at RSA around CrowdStrike Charlotte and their AI canvas that enables exactly this. If you're doing an investigation in Charlotte and you want to say, hey, tell me more about this alert, or tell me more about this IP address, you don't want to have to re-ingest all the possible data and do that locally.
You want to be able to do either an MCP or an agent-to-agent call to another tool, in this case Corelight, and say, hey, tell me what you know about this IP. And we're not going to give you the world. We're going to do our own novel analysis and bring the results back to the AI analyst workbench, if you will.
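The federated, agent-to-agent pattern described here can be sketched without any real MCP plumbing. Everything below is invented for illustration: a workbench dispatches an enrichment question to whichever domain agent owns it and gets a summarized verdict back, rather than re-ingesting that domain's raw telemetry.

```python
# Registry of domain agents (EDR, NDR, ITDR, ...), keyed by domain name.
AGENTS = {}

def register_agent(domain):
    """Decorator that registers a function as the agent for a domain."""
    def wrap(fn):
        AGENTS[domain] = fn
        return fn
    return wrap

@register_agent("ndr")
def ndr_agent(query):
    # Stand-in for an MCP/agent-to-agent call into an NDR backend; the
    # lookup table and verdicts here are fabricated for the example.
    known = {"203.0.113.7": {"first_seen": "2025-01-04", "verdict": "beaconing"}}
    return known.get(query["ip"], {"verdict": "unknown"})

def enrich(domain, query):
    """Workbench-side dispatch: ask the owning domain agent for a summary."""
    return AGENTS[domain](query)

print(enrich("ndr", {"ip": "203.0.113.7"}))
```

The design point is that the analysis runs where the data lives; only the summarized answer crosses the boundary back to the workbench, which is what keeps the central layer from having to aggregate everything.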

SPEAKER_00

Brian, what you just described is SOAPA 2026. SOAPA is the model I came out of ESG with, but that's basically what you're talking about, and I'm going to write more about it. So let me ask: you mentioned Anthropic and the Mythos release, and there's been a lot of press about that this week. For those of you who don't know, it's apparently architected so it can detect vulnerabilities extremely quickly, at machine speed. What will that do? Let me throw out a little speculation: what's stopping us from understanding what it's finding and translating that immediately, through AI, into virtual patches on something like Suricata? Am I way off base here, or is that possible?

SPEAKER_01

You are not off base, and that's absolutely where we're all going. And I would add a use case to that, by the way. Because what I hope everyone sees when they look at Glasswing and Mythos is that this is doing a thing that's already happening, just way better. It's not like there's something brand new that's never been done before here. It's just done so much better that even the Anthropic folks are like, holy cow, we can't let this get into the hands of the bad folks, right? So if you look at the implications, I think there are two sides to it. There's the vuln side and there's the detection engineering side. They're related. They're both sides of the same coin, right? So on the vuln side, all of us as providers, certainly us as a provider into really high-end critical infrastructure organizations, have had deep in our ethos for a long time that we cannot be the vuln in our customer's supply chain. That's a company-ending event for us. So, for example, our InfoSec team is four times the size of our IT team, which is just a crazy statement when you think about it, right? For most folks that's inverted: your IT team is four times the size of InfoSec. So that's been a big priority. And all of us as security providers need to be super aggressive about using every tool we can to find the vulnerabilities in our own code, ensuring those are rock solid and can't be exploited. That is an IQ test. Actually, it's not just an IQ test, it's an ethics test. Let's be honest, right? That needs to be front and center. But the other opportunity here: think about the automation for the defender. Right now, a lot of folks are focusing on the highest-value thing we can do, which is that security teams are overwhelmed by the volume of alerts.
They cannot get to all of them. So this isn't a dollar-savings or headcount-reduction problem. This is: how deep into my security queue can I go? So we're all going after automating the triage and the incident response process, right? Again, highest-value thing, makes a ton of sense. Where are we going to go next? We're going to keep going down that value chain. Because if you look at threat hunting and detection engineering, the same tools that let you understand where the vulnerabilities are are also going to be incredibly effective at saying, hey, here are the types of things we see folks doing in this environment. How do we closed-loop automate both the threat hunting and the actual alert creation, right? The detection engineering part of that process. I think this is going to unlock all sorts of opportunities. And if you go back to where the evolution is here, back to your SOAPA paper, because again, you are 100% spot on here, I've read that work: we're evolving from human assistance, to human in the loop, where most folks are now or are trying to get to, to full automation. And the real key question is how we earn the trust, as defenders and as providers, to move through that maturity cycle.
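The "virtual patch" idea raised in the question, turning a freshly analyzed exploit marker into a network detection, reduces mechanically to rule generation for Suricata. The sketch below follows standard Suricata rule syntax, but the SID, message, and content string are invented placeholders; any real pipeline would derive the marker from the vulnerability analysis and validate generated rules against test traffic before deployment.

```python
def virtual_patch_rule(sid, msg, content, port="any"):
    """Format a minimal Suricata HTTP rule alerting on a known exploit marker.

    sid/msg/content come from the (hypothetical) upstream vulnerability
    analysis; this function only handles formatting, not rule quality.
    """
    return (
        f'alert http any any -> $HOME_NET {port} '
        f'(msg:"{msg}"; content:"{content}"; nocase; '
        f'classtype:attempted-admin; sid:{sid}; rev:1;)'
    )

rule = virtual_patch_rule(9000001, "VIRTUAL PATCH example exploit path",
                          "/vulnerable/endpoint")
print(rule)
```

Formatting the rule is the trivial part; the trust problem the speakers discuss lives in the analysis and validation steps on either side of it.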

SPEAKER_00

I totally agree. And I appreciate what you just said, because I think there's a little panic around Anthropic and those tools, and we should certainly have respect for what they can do and what vulnerabilities they can open. But we also have to look at the opportunity there, and you just described that. I think it's up to our industry to do that. So now let's move on to the cybersecurity bridge: how we get from here to there. One thing I noticed, again from my research, is that Suricata 8.0 came out, and Zeek 8.0 and 8.1 came out. So clearly these are active projects. Do we have the horsepower in the community to keep up with all of the things we talked about, in terms of basic security, threat detection and response, data ingestion, and the AI component as well? Are those communities robust enough to do that?

SPEAKER_01

Absolutely. When I think about the strengths of the communities, a couple of things come to mind. One: do we actually have the coding horsepower and automation not just to deliver new features, but to ensure we're staying on top of vulnerabilities? And like I said, because of the ethos of the providers, both us and the developers in the extended Suricata community, I have zero questions on that. The ethos is there. And bear in mind, what are we at Corelight doing, like a lot of other software developers? We're looking to use the AI tools to accelerate what we can do. So, Corelight happens to be really well capitalized and investing very aggressively for growth. We grew R&D by about 40% last year, which is a non-trivial number. If you can take that type of raw dollar investment and then make it 2x the throughput by using the same kind of engineering acceleration tools that the attackers are using for destructive capability, I don't think we're going to be facing a throughput or bandwidth problem. I think we're facing a moment where, when you combine the internal AI leverage and the actual staffing levels we're putting in, we're going to have a ton of calories to go make some fantastic things happen. The second thing that gives me a lot of courage and confidence here is that there is a feedback system that's been operating for the last 30 years. This year is actually the 30th anniversary of the Zeek project, right? It's a milestone year. But the reason that project has done so well, and the reason the data is so powerful, is that it's a living, breathing, evolving data set.
So there's this virtuous cycle: elite IR analysts and elite IR consultants at some of the biggest organizations in the world are facing these challenges. They find they need a new data type or a new capability in the detection models, whatever it is. They then push on the Zeek project, the Suricata project, those get added in, and the flywheel continues, because novel data enables novel analytics, and that whole flywheel keeps turning. I didn't appreciate that until, well, I've been at Corelight about eight years. Maybe the second year in, I sat down with one of our founders, Seth Hall, and had him show me what the data set looked like eight years earlier. And it was a completely different data set. It was like the two things had never met. It's kind of like watching your kids grow up. From one week to another, you can't see your kid change. One year to another, you see them change. And when they bridge from elementary to middle school to high school, you're dealing with different human beings. The open source project is exactly like that, with that flywheel of innovation. So as these elite defenders are wrestling with shadow IT and AI-driven automation and living-off-the-land detection, all these things, they're continuing to push on us, as the providers of that tech, to do better. And that flywheel, or let me restate, that flywheel and the bandwidth to execute on it, because the two things do have to go together, gives me a lot of confidence.

SPEAKER_00

Yeah, well described. Does Zeek still have that Bro logo with the eye looking at you?

SPEAKER_01

No. To be overly candid, the original name of Bro, all those years ago, was kind of wonderfully intellectual. It was a reference to George Orwell's Big Brother and the dual-edged nature of network monitoring: it can be used for good or evil. That all made a ton of sense, right up until you got to the late 2010s and early 2020s, when "Honey, I'm going to Vegas for BroCon" was no longer an acceptable sentence. The name had to change for very obvious reasons. So the project is now called Zeek, and they've actually changed the logo around. Now it's a stylized Z that, if you look at it, has arrows that are indicative of network connections. We, being the main corporate sponsor for that project, wanted to help it live up to its reputation from a branding perspective. How about that?

SPEAKER_00

Okay. Because that eye logo always frightened me. It's like a cyclops looking at you. But I digress, like a lot of things in life. That was a long time ago. So I'm going to ask you something about Corelight. On Corelight's website, I noticed and read a little bit about something called the Polaris program. Could you describe that? It seems like something you're doing that would be very beneficial for specific kinds of LLMs for network security. Am I wrong? Well, please describe it.

SPEAKER_01

No, you're spot on. And really what it comes down to is: networks are messy. So if you think about how you do malware detection, toolset detection, novel attack detection, broadly speaking, security content development, it's not easy on the network to do what VirusTotal does, for example, where there are tons of malware samples, you generate a new model, you throw it against VirusTotal, and you see how it works. I'm not saying that's easy, because amassing the sample set that VirusTotal represents is a multi-year effort and a massive asset to the community. So I'm not denigrating anything there. But the network, by definition, is a super messy thing. And because you're tracking behaviors, those behaviors can vary. Some of them are more durable: if you look at how a command-and-control toolset operates, that's a much more durable thing with a longer half-life. Some of them are very specific to how that particular network operates, and what is normal on that network is going to be very, very different. So you really need to be pretty intimate with your customers to understand what good content development looks like. The Polaris program was a response to that challenge. What we do is essentially enter into content co-development relationships with some of our key customers, where they let us put actual sensors into their environments that sit outside the production path, so they're there for the labs team to iterate on new content in a way that we get feedback and direction from the organization we're serving. So we make sure we are not just coming up with things we think are a good idea, but getting real, live direction from the defenders we're working with.
And we get really good feedback on: have you characterized this correctly? Because the key with the network is the signal-to-noise ratio. That's what it all comes down to. We often think about the difference between a false positive, real but uninteresting, and dead wrong. There are three actual levels that matter here, right? The first two have value, the third one absolutely doesn't. But we need the human feedback. And if you think about how that fits into an LLM and AI world, this is something I think all of us as vendors are learning. Actually, I was just listening to a recording of a webcast of a presentation from one of the AI detection engineers who just left Meta, a super well-staffed, very sophisticated organization, as you might imagine. They realized that simply going to the defenders themselves and asking them to label what was a good detection from an LLM-based workflow wasn't effective, because the analysts didn't agree. You could give the same set of verdicts to the common pool of analysts, and you would get a pretty wide-ranging set of views. And when you get that difference in labeling data, it really destroys the ability of a model to do reinforcement learning. So a lot of what we are driving towards is understanding and assessing the reasoning process of these models. And who do you need feedback from to assess your reasoning process? The analysts, right? Which comes back to the Polaris program. So both from a content direction and a content quality standpoint, that program has been super essential for us. And I'm really grateful to the organizations we work with that see that mutual interest. They want to see great content and be able to steer it as well. So it really is a two-way street of benefits, if you will.

SPEAKER_00

Is that aligned at all with the MITRE ATT&CK framework? It seems to me like it might be.

SPEAKER_01

It generally is, but one of the nice things about working with real defenders is that they are not constrained by that framework. I think the MITRE ATT&CK framework is an awesome asset, and huge credit to the MITRE team for putting it together; it's one of the marquee developments that have created a common language for all of us. But when you're talking to real defenders, what they're most focused on is: what are the attacks that I'm missing? Afterwards, yes, we make sure we think about how that maps into the MITRE framework and keep building that aspect of the community. A good example of non-MITRE content development is that a lot of our customers really care about the volume-versus-value ratio in the evidence. We could generate three times more data than we do today, and that's a problem if the data isn't valuable enough to pay for the storage and analysis of it. In particular, say you're putting all this data into a SIEM, and that SIEM prices on a volume basis. Now you really care about the volume versus the value of that data. So one of the big initiatives we've been driving in this program is: how do we do a really smart job of tuning and filtering the data volume while keeping it incredibly impactful? You can't simply say, I'm going to alert and only keep the context around the alert. That is a failure, a deep, deep failure, because now you're believing that your alerting is 100% accurate. Anyone who believes that is smoking something and should not be doing content development. So the question is: how do you take out only the truly known good, the stuff you have really high conviction needs no further analysis? How do you draw that line?
Between our Polaris program and a bunch of work we do with CISA on this particular front, and they were a really great co-development partner here, we've been working out how you draw that line and make sure you're getting the best value for volume. That's another good example of work that is incredibly valuable to customers but sits outside the direct structure of the MITRE ATT&CK framework.
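The "remove only the truly known good" idea above can be sketched as an allowlist-style pre-ingest filter: a record is dropped only when some high-conviction rule marks it as known good, and everything else is kept for analysis. The rules, field names, and addresses below are illustrative placeholders, not Corelight's actual filtering logic:

```python
# Each rule pairs a description with a high-conviction "known good" predicate.
KNOWN_GOOD_RULES = [
    ("internal health checks",
     lambda r: r["dst"] == "10.0.0.5" and r["dst_port"] == 8080 and r["bytes"] < 1024),
    ("corp DNS to the sanctioned resolver",
     lambda r: r["proto"] == "dns" and r["dst"] == "10.0.0.53"),
]

def keep_record(record: dict) -> bool:
    """Drop a record only if some high-conviction rule marks it known good;
    anything uncertain is kept, so alert accuracy is never assumed to be 100%."""
    return not any(rule(record) for _, rule in KNOWN_GOOD_RULES)

records = [
    {"src": "10.1.2.3", "dst": "10.0.0.53",  "dst_port": 53,   "proto": "dns",  "bytes": 90},
    {"src": "10.1.2.3", "dst": "203.0.113.7", "dst_port": 443, "proto": "tls",  "bytes": 52000},
    {"src": "10.1.2.4", "dst": "10.0.0.5",   "dst_port": 8080, "proto": "http", "bytes": 300},
]

kept = [r for r in records if keep_record(r)]
print(f"kept {len(kept)} of {len(records)} records")  # prints "kept 1 of 3 records"
```

The design point is the inversion Dye describes: rather than keeping only what surrounds an alert, you subtract only what you are confident about, so the external TLS flow survives for later analysis even though nothing alerted on it.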

SPEAKER_00

Yeah, the value-per-volume ratio is a critical metric, and we've been chasing it for years. There's been a misguided approach that more is better. I see that with threat intelligence, I see that with logs, and more is never better; more is messier. So to the extent that you can do that, or we can use AI to do that, I think that's a great advancement.

SPEAKER_01

Yeah, and the only thing I would add, and I think this is going to be part of the AI evolution, is that there aren't that many companies that treat data coverage and quality as their own first-class set of metrics. We saw this in the early days of the company: when we looked at the folks who were buying from us and asked them why, one of the biggest words they used was visibility. And to, I'll call them, the converted, that is an incredibly powerful word, because think about what the inverse is. When you lack visibility, you have a hole in your network and a hole in your entire defensive program, and that hole propagates through what you think your detection coverage is, what you think your detection volume is, what you think your MTTR is. To quote one CISO I met with: look, we had 3,000 security incidents last year. That's not what worries me. My worry is that we had 3,200 and we don't know it; we didn't know about the last 200. So this visibility word is incredibly powerful. And one of the things I think is going to happen is that as folks roll out their AI automation programs, they have to map their processes incredibly well. Mapping that process will lead you into mapping not just the people, the organizational handoffs, and the tools, but also the data you need. And is that data actually available in all the parts of your environment, for the right duration of time? Think about why people go to NDR from PCAP: if you keep even 30 days of PCAP, then you're dead blind on day 31. How many folks have a conscious understanding of the implications of that sentence? The folks who use visibility as a powerful word absolutely get that. And I think AI adoption and AI automation are going to amplify that understanding.
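Treating coverage and retention as first-class metrics, per the point above, can be made mechanical: for each data source, compare the retention you have against the lookback your investigations need, and flag any partial network coverage. All numbers and source names below are illustrative assumptions for the sketch:

```python
# Required lookback window, e.g., the dwell time you want investigations to cover.
REQUIRED_LOOKBACK_DAYS = 90

# Hypothetical inventory: how long each source is retained, and what fraction
# of network segments it actually covers.
sources = {
    "pcap":     {"retention_days": 30,  "segments_covered": 0.40},
    "ndr_logs": {"retention_days": 365, "segments_covered": 0.85},
    "endpoint": {"retention_days": 180, "segments_covered": 0.70},
}

def visibility_gaps(sources, required_days):
    """Return (source, reason) pairs where you are blind: retention too short
    for the required lookback, or network segments left uncovered."""
    gaps = []
    for name, s in sources.items():
        if s["retention_days"] < required_days:
            gaps.append((name, f"blind after day {s['retention_days']}"))
        if s["segments_covered"] < 1.0:
            pct = int((1 - s["segments_covered"]) * 100)
            gaps.append((name, f"{pct}% of network segments uncovered"))
    return gaps

for name, reason in visibility_gaps(sources, REQUIRED_LOOKBACK_DAYS):
    print(f"{name}: {reason}")
```

Run against the sample inventory, the 30-day PCAP source is flagged exactly as in the "dead blind on day 31" example, which is the kind of gap an AI automation rollout surfaces when it forces you to map the data behind each process.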

SPEAKER_00

Yes, once again, you've nailed it. Understanding what you need and how to get it: that's another area where I find a lot of people struggle. They start with the detection or the response, and they don't understand the link to the data. So Brian, we're just scratching the surface here, but alas, we're out of time. I'm going to ask you the question I ask all my guests at the end: tell me the one piece of advice you'd give to CISOs and cybersecurity practitioners in the network security domain.

SPEAKER_01

You know, to me it's all around AI automation, and the guidance would be: pick the first two processes you want to automate and map them. I think that's just a mandate right now, because, first, if you're not looking to fight fire with fire, then you're going to be outgunned worse than you already are, and nobody wants to be in that position. Second, by mapping the process, you will discover all sorts of things about your team, your organization, and your technology that have all sorts of positive side effects. And third, once you map the process, you can start to automate it. And if I could sneak in a one-A here, it would be: don't let perfect be the enemy of the good. I think of it like the Waymo analogy. If you've ever ridden in a Waymo, I was amazed; it's an awesome experience, and I loved it. People get freaked out when a Waymo has an accident, but the reality is that Waymo's safety record is way better than all of us as human beings. So don't let perfect be the enemy of the good, because we're all evolving to a world where the new defense in depth is actually AI-cross-AI triangulation. The sooner we get ourselves on that path, the better it's going to be for everybody. And the very first step is to pick the two processes you want to automate and map them out.

SPEAKER_00

And I'll just add: if you pick those processes, then you should be able to map from the processes to the data you need to follow through, which we just talked about. So, Brian, many, many thanks for participating in my podcast. I really appreciate it. This was great. I was over here nodding and smiling the whole time when I wasn't on camera, so it was really a good experience for me. Thank you very much.

SPEAKER_01

Hey, John, my pleasure. It's great to talk to you. I've always followed your work, and it's a privilege to join the podcast. Looking forward to more in the future. Well, thank you.

SPEAKER_00

You're flattering me, and I'll take it. But for my audience, thanks for watching, and stay tuned for the next episode of the Cybersecurity Bridge podcast.