Cybernomics Radio

Market Watch #2 - AI and the Future of Security Operations - Josh Bruyning and Richard Stiennon

Bruyning Media Season 2 Episode 43

In this episode, Richard Stiennon makes some bold predictions about the future of AI in cybersecurity. Artificial intelligence is transforming cybersecurity at an unprecedented pace, with large language models increasing in intelligence tenfold every 12 months and potentially reaching critical mass by 2027.

• SOC automation represents the most immediate and profound application of AI in cybersecurity
• Approximately 15 startups are developing AI solutions to replace tier-one SOC analysts
• AI will eventually enable 100% automated triage of security alerts
• The shift from tactical to strategic skills will be critical for cybersecurity professionals
• As defensive AI improves, cybercrime may dramatically decrease, forcing attackers to use more expensive human-based methods
• Major industry consolidation will likely occur as AI solutions demonstrate overwhelming effectiveness
• Nation-states will remain the primary threat actors as they can afford to develop counter-AI attack capabilities


Josh's LinkedIn

Richard's LinkedIn

Check out Josh's new book, "The Close Line," available as an e-book on Amazon now, with paperback and hardcover versions coming soon. Check out IT-Harvest for a free demo.


Speaker 1:

All right, let's get started. Richard Stiennon, welcome back to Cybernomics, and thank you for listening to this episode of Cybernomics. Every week, Richard and I go through the security market. You can call it a market watch, if you like. You can even call it Security Market Watch, which is a throwback to what this show was called before Cybernomics. So I guess now it's a segment. So, Richard, welcome. I'm so excited to talk to you today about AI, and you're going to make some predictions. Prognosticator of prognosticators. How's life?

Speaker 2:

I'm doing awesome. I am at a stage in my understanding where people always think I come out of left field with some bizarre prediction about the direction of the world, especially in cybersecurity, and it's not true. It takes me years, but when it happens, it happens, and it's pretty overwhelming. So that's what I hope we get to talk about today. And I want to, you know, set the stage.

Speaker 2:

Take you back to 1992, when I was reading an article in something called Midnight Engineering. The editor of that magazine was interviewing somebody who had a side project, that was the midnight engineering part, and he asked, what's the most critical thing that helped your business? And the guy answered, the internet. And I remember scratching my head going, huh, what's that? Within six months I had become an internet convert and had started Rust Net, an ISP, and that was my full-time job. I dropped all the automotive engineering I was doing and just went all in. This was going to change the world and I knew it. It was clear as anything that this was the biggest thing of my life technologically. And that is the feeling I've had ever since ChatGPT came out.

Speaker 2:

I've written about this. If you're a reader of my Substack, you'll know I keep saying how important this is and that everybody should pay attention to it. It's only been with us since, what, November 30th of 2022? So it's still not three years that we've had large language models that do what they do, and they have been increasing. I've written about this as well: they increase in intelligence 10x every 12 months, which is pretty phenomenal. That alone is enough for you to go, huh, I should pay attention to this. If there's something that I want to do with AI but it's not ready, well, try it again in 12 months, because it'll be 10 times more intelligent.

Speaker 1:

So try it again in three weeks, in some cases.

Speaker 2:

Yeah, yeah, cause it could be that jump of 10, 10 X that happened in that three weeks. So keep your eye on it. Well, um, I finally became aware of something that was published way back April 8th, so 10 days before we recorded this, and it comes out of, you know, one of the rationalist society thinkers, and they put together a scenario using tried and true scenario planning techniques and backing it up with all the graphs and probabilities and all the rest of that. But anyway, some super, super intelligent people got together, looked at where AI had come from in such a short period of time and where it was going, and came up with AI 2027. So their prediction for the next two years, two and a half years, and it's pretty mind-blowing when you look at it, to the point where you may dismiss it because they take it to its logical extension. You know things like the human race being wiped out by the AIs, which I know is a big fear amongst AI researchers, especially in California, if they think that way. I don't think that way, but I haven't read about that yet. But what I did absorb was what they base their prediction on, and here it is plain and simple.

Speaker 2:

The large foundational model companies, OpenAI, xAI, Google, Anthropic, Tencent, are today using large language models to help them code and come up with their strategies for what to train and how to train and what data sets to use and all the rest of that stuff. They're using AI today. Of course, everybody who touches code should be using AI today, but they're using it significantly already, and I've seen stats, you know, of up to 50% for some researchers. So what is going to happen is they are going to train AI researchers to replace the human researchers that we have, and that will occur throughout 2026. By the end of 2026, they will have accomplished that. So large language model developers like OpenAI will have automated AI bots that are doing the work, and then it just takes off, right? Because now they can iterate. They work 24/7. They can have a million workers doing this work, trying different scenarios, trying to come up with innovations. Maybe they'll come up with the next transformers, who knows?

Speaker 1:

Would it be fair to call that critical mass? Would you have hit critical mass at that point? And can we also use the S-word, the singularity?

Speaker 2:

Yeah, we try not to. Yeah, these guys are basically backfilling into the singularity. That's correct.

Speaker 1:

Yeah, maybe it's the beginning of the singularity. I've got to get Ray Kurzweil on this podcast. Yeah, we really should.

Speaker 2:

So they give a nod to Ray Kurzweil, and there's an excellent podcast with two of the authors of AI 2027. One you'll recognize: Daniel Kokotajlo, who was at OpenAI on the safety side of things. He didn't like what was going on, didn't think they were going to do a proper job of thinking about safety, so he quit. Then he was presented with the non-disparagement agreement that all the employees got when they quit, which basically says, if you disparage us, you don't get your options, we claw them back. In other words, if you say anything bad about us, we're going to take hundreds of millions of dollars from you, which is how valuable that stuff is. But he made the conscious decision to go, okay, I'm sorry, but I'm not going to sign it. And luckily OpenAI backed down, and now that clause and that practice have been abolished. So he's still got his options, he's a free thinker, he's independently wealthy, and he's in pretty good shape. So he's one of the authors and he's on this amazing podcast. And then, amazingly, Scott Alexander is on the podcast.

Speaker 2:

Scott Alexander is known to every techie. He's the author of the Astral Codex Ten blog, and he covers a wide gamut of technology and psychology. He's a psychiatrist. He seems fairly young to me, he's probably older than you, but he's been around blogging for a long time and is obviously super, super smart. And he's never been on a podcast before. So I'm writing this up, and you can follow the link from my Substack post to this three-hour podcast. It'll blow your mind, the depth at which they go. But anyway, long story short, I decided to take their assumptions, ignore their conclusions about what happens after 2027, and, for now, apply those assumptions to what's going on in cybersecurity. So that's why I'm so excited to talk today.

Speaker 1:

Awesome. A lot to unpack, especially when it comes to cybersecurity. How does that affect the cybersecurity market, and how do we go forward? Do we step carefully? Do we go all in and put all our eggs in the AI basket? We'll find out. So when I think about AI, the first thing that comes to mind is what's going to happen to people's jobs, especially when it comes to the SOC. When I think of an application that is ripe for AI, it's not necessarily the coding. Obviously there are lots of cybersecurity companies that are going to be using AI to write their code and all that stuff, but let's set that aside for a second.

Speaker 1:

But when it comes to the heart of security operations, the SOC (if you don't know what that is, it's the Security Operations Center), that's kind of the lifeblood of cybersecurity. When you hear cybersecurity, most people think of someone in a room full of computers looking at stuff on screen, logs popping up, and an alert comes up and says intruder, intruder. That's not the way it really happens. The SOC analysts are often going cross-eyed looking at the logs, and they'll see something anomalous and report it, and it goes up the chain, and if it's bad enough it gets to the CISO, and then, if it's really bad, if there's someone in the system, the security team takes action.

Speaker 1:

As you might imagine, that process takes a lot of manual effort, and it's really easy for humans to miss an event, right? So in the SOC we're seeing AI being used as a way to not miss anything, basically. And AI has kind of been in the SOC for a long time when it comes to false positives, false negatives, and correcting errors in flagging things, because oftentimes the system might flag something that's not bad or miss something that is bad. But now we're seeing a new iteration where the SOC analyst is getting the support they need. Where do you see the SOC going forward with AI? Is it going to replace the SOC analyst, or do you think there are going to be SOC technologies and SOC vendors that are completely humanless or nearly humanless? I mean, how far can we go, and what is the logical end of this technology?

Speaker 2:

Yes, I take it to its logical conclusion: the SOCs will be unmanned, unpersoned, I guess we should say. They will be fully automated, starting with, you know, the triage. We already have, I count, 15 startup vendors, most of them by far less than two years old, who are working on this and able to demonstrate in some way that they've got something, right? But they're very, very small. You know, the biggest one is ExaForce, with 46 people. So they're not, you know, selling or getting traction yet.

Speaker 2:

I am on record saying that by the end of the year they will get traction, and large enterprises will start to use them to alleviate the burden on the so-called tier-one SOC analysts, the people who look at this, you know, giant list of critical things they have to pay attention to and quickly go through them: yes, no, yes, no, yes, no. Which ones should we look at? They dig into a few of them. Some of them they go, ooh, this looks really scary, and they escalate it to the tier-two analyst.

Speaker 2:

So far, most of the talk and most of the claims from these vendors: they do not go out there and say this is the end of the world for SOC analysts.

Speaker 2:

They say we're augmenting them and we're helping with that triage problem. And none of them even use, I'm waiting to hear it for the first time, "100% triage." That, to me, is a game changer, right? If you could say every single alert generated by your SIEM (which, of course, is a collection point for all devices that are logging and alerting), every single one of them is looked at and, you know, deemed a false positive, or normal and not a threat, whatever, and the ones that are actually threats are highlighted quickly and maybe some action is taken. That is going to be happening by the end of this year, 2025. And to the point where some of those companies, 15 of which I can show you on the screen here, are still very small, but at least one of them, I believe, will be valued at over a billion dollars before the end of the year.
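(For illustration only: a minimal sketch of what a "look at every alert" triage loop could look like, assuming a hypothetical SIEM REST endpoint and the OpenAI Python SDK. The endpoint, verdict schema, and model choice are assumptions, not a description of any vendor mentioned in this episode.)

```python
# Hypothetical sketch: send every SIEM alert, not a sample, to an LLM for a verdict.
# The SIEM endpoint and the verdict schema are illustrative assumptions only.
import json
import requests
from openai import OpenAI

SIEM_ALERTS_URL = "https://siem.example.com/api/alerts"  # hypothetical endpoint
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def triage_alert(alert: dict) -> dict:
    """Ask the model to classify one alert and explain its reasoning briefly."""
    prompt = (
        "You are a tier-one SOC analyst. Classify this alert as 'false_positive', "
        "'benign', or 'escalate'. Respond in JSON with keys 'classification' and "
        "'explanation'.\n\n" + json.dumps(alert, indent=2)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

def run_triage() -> None:
    alerts = requests.get(SIEM_ALERTS_URL, timeout=30).json()  # every alert
    for alert in alerts:
        verdict = triage_alert(alert)
        if verdict.get("classification") == "escalate":
            print(f"Escalate {alert.get('id')}: {verdict.get('explanation')}")

if __name__ == "__main__":
    run_triage()
```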

Speaker 1:

Are you saying that one of them, as in we don't know which one? Or do you have your, you know, like, if you're a betting man, would you?

Speaker 2:

I do not. But let's just look at it, and then we'll decide how we could come to that conclusion.

Speaker 1:

While you're bringing that up, I have a theory about why companies don't say that they're 100% unmanned. The narrative right now, when I ask anybody, will AI replace your jobs? The narrative right now is: AI will not replace your jobs. It will make what your team does more effective. It's going to help them work faster, more efficiently, be more productive, so productivity goes up. So then the vendor coming in is seen as a revenue driver, something that's going to help you drive revenue, right?

Speaker 2:

Or cut costs.

Speaker 1:

Exactly. So, at any rate, it makes it seem like a value-added solution, right? So I think that's a business narrative. I don't think it's a prediction of what will actually happen. I think you're right: what's going to happen is it's going to replace a lot of people. But I'm really interested to see how the narrative is going to shift once we cross that threshold.

Speaker 2:

And all we have to do is ask Garry Kasparov, who I actually saw speak at a Cybereason conference, of all things, and I think Cybereason very pointedly brought him in as a speaker, because he's arguably the first person to lose his job to AI, right?

Speaker 1:

Could you walk me through that story? I'm not familiar at all.

Speaker 2:

Oh, okay. So over the years, Garry Kasparov was the last world chess champion before AI. You know, he had won multiple years and stayed the world chess champion, and he was always willing to go up against the latest computer. And finally IBM created Deep Blue, or something like that, and it beat Garry Kasparov in a chess-master kind of playoff. Over multiple games, the computer beat him more times than he beat the computer. The computer won, so he's out of a job, right? A computer can beat him.

Speaker 2:

So what's the point of being the best in the world at what he does when a computer can beat him? And since then, of course, Google has the most impressive chess bot right now, one that can beat Deep Blue, or whatever it's called, and it's just amazing. So Garry Kasparov says, look, don't think of it as me losing my job. Think of it as: now I can play chess and have AI augmentation. And so in the future you'll have two chess grandmasters playing each other, and each will have their own computer model that they can use to help them choose the next move.

Speaker 1:

That's brilliant, because it takes them out of the seat of a tactician and puts them squarely in the seat of a strategist, which is what chess is all about.

Speaker 2:

Yeah, totally, except that that never happened. That does not happen, right? And yet, you know, after one of my favorites, The Queen's Gambit, came out, chess just took off. There are more players now than there have ever been in history, and Chess.com continuously has problems keeping its servers running because there are so many people playing games.

Speaker 1:

So, okay. Does he have his job again, or...?

Speaker 2:

I don't think so. I think he's a political pundit now. He's too busy slamming Putin in Russia.

Speaker 1:

Instead of pushing chess pieces around, now he's commenting on how world leaders are pushing people around.

Speaker 2:

He has direct experience, you know, because he became a grandmaster in the Soviet era and then saw the changes, until Putin took over.

Speaker 1:

Is he, like, on RT or...?

Speaker 2:

No, he's not. Yeah, he lives in the United States now. He's not welcome in Russia. He's considered a dissident.

Speaker 1:

Oh, so yeah, he wouldn't be on RT. Yeah, they swapped him out with Jesse Ventura. Okay, anyway, what do you got for me?

Speaker 2:

Okay, so we are looking at search results on what I call SOC automation, which sounds really boring compared to the fact that it's part of the AI security world. There are 96 companies in AI security. Most of them do, you know, governance and guardrails, essentially DLP for AI, and a few of them, like Protect AI and Prompt Security, protect models, which I didn't think was a thing. Right, it's like, the people who create the models, there are only like seven companies that make models. Boy, was I wrong. There are over a million models on Hugging Face, and people download them and use them all the time, and GM alone supposedly has, I don't know, 300 models that they're using. So, yes, there's a business for protecting people's large language models internally.

Speaker 2:

Protect AI is undergoing, you know, a rumored takeover by Palo Alto Networks for between $600 and $750 million, and they've still got under 50 employees. So that makes my prediction of a billion-dollar valuation before the end of the year seem like not a far stretch, and yet nobody else has predicted that. So it's going to happen. As I just did, I sorted on headcount. These are the 15 vendors that are currently going to market with: hey, we do SOC automation, we're creating agents to replace your tier-one analysts. And ExaForce just announced yesterday a $75 million investment. I don't know what the valuation was, but probably not a billion just yet. They only have 46 people, and yet they've grown 18% so far this year. Or Dropzone has grown 34%, so they grew a third so far this year.

Speaker 1:

Do you think that number is going to drop or increase, that 46 headcount?

Speaker 2:

Oh, it's going to go up dramatically.

Speaker 1:

Do you think, even as they add more agents and robots? Do you think there's a CEO out there who purposely wants to keep the headcount low?

Speaker 2:

Yeah, so they're going to need the engineers, and right now they don't have superintelligence. Within a year and a half they will, and then they can stop hiring engineers. But right now they need salespeople, right? There's no way you can get to a billion in revenue in 18 months. You just don't do it, right?

Speaker 1:

Yeah, unless the contracts, the deals, are just massive. But yeah, until then, you've got a point, and you're going to need sales guys.

Speaker 2:

You only need ten $100 million deals.

Speaker 1:

That's one way of thinking of it. That's a good way. Those sales guys are going to be happy.

Speaker 2:

Yes, oh yes, if you're on this bandwagon. That's one of the few ways I can think of to profit from all of this, if you're not in a position to start an AI security company.

Speaker 1:

Just go into sales.

Speaker 2:

Be in sales at one of these, because people will just buy it. The demo will be so easy: hey, give me API access to your SIEM and we'll start solving all your problems.

Speaker 1:

Right away. Time to value. Incredible.

Speaker 2:

Yeah. It'll be: okay, you can't unplug that, where do we sign?

Speaker 2:

Kind of thing, right? Because the value is just incredible. So, a lot of cool companies here. I've only talked to a handful of them. Some of these, you know, just have the right DNA, the right people. Method Security, she's got people, you know, from the NSA working at it. It's pretty dang cool. Just awesome stuff. So that's where I start in my list of predictions, and they just keep flowing, right? It's all based on the idea that all these companies will be able to take advantage of superintelligence throughout this coming life cycle, right?

Speaker 1:

To do what, exactly?

Speaker 2:

Yeah, to basically deploy agents that can see everything: every single log, every single alert. And you'll be turning back on all of the alerts that you tuned out before, right? So now you want everything. You want to know who's getting email from where, and you want all of that, and you want it all in one place, and the AI will be able to parse it and determine what's going on, determine when it's an attack of any sort. You can personally get a feel for this: the next time you get a phishing email, look at the verbose view, where you can see all of the HTML that came with the email, cut and paste that into ChatGPT, and ask it to analyze it for you, and it will tell you everything. You know, I know a few people who can read that stuff and tell me why the DMARC result matters.
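(A rough sketch of doing the same analysis programmatically instead of pasting into the ChatGPT window, assuming the OpenAI Python SDK, an API key in the environment, and a raw message saved locally; the file name and model choice are illustrative assumptions.)

```python
# Rough sketch: ask an LLM to explain a suspicious email's raw source
# (headers, SPF/DKIM/DMARC results, links, embedded HTML).
# The file path and model are illustrative assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

raw_email = Path("suspicious.eml").read_text(errors="replace")  # full raw source

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice
    messages=[{
        "role": "user",
        "content": (
            "Analyze this raw email. Point out anything suspicious in the headers "
            "(SPF, DKIM, DMARC results, reply-to mismatches), the links, and the "
            "HTML, and say whether it looks like phishing:\n\n" + raw_email
        ),
    }],
)
print(response.choices[0].message.content)
```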

Speaker 1:

And all these things add up to "this is a phishing attack," but John Smith in accounting isn't going to be able to make sense of it.

Speaker 2:

Exactly. But ChatGPT can; any of the large language models can do that just so easily. Now imagine that ability to take 20 pages of text and reduce it down to what it means, and apply it to the reams and reams of logs and alerts that you're getting. And the ability, you know, let's say it's agentic: it can say, hey, I need more data, I'm going to go out and scan that endpoint, I'm going to cross-compare. Where was the owner of that endpoint? Were they logged in? Were they in the building? Were they on the road in China?

Speaker 2:

All those millions and millions of factors, easily taken care of. And it'll just get better. It's going to be good enough to invest in, and you'll be able to logically say: hey, that entry-level person I have to hire out of school and start training, who will be a good tier-one engineer in two years, that person costs $50,000. I will pay $50,000 for a single agent to do that work, and it'll be perfect, it won't make mistakes, and it'll be like buying six, because it gives you 24/7 coverage, and it takes six people to accomplish that, right? And that's where the huge revenue will come from.
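(The "like buying six" comparison is coverage arithmetic. A back-of-the-envelope version, with the work-week and overhead figures as assumptions.)

```python
# Back-of-the-envelope: why 24/7 coverage of one analyst seat takes roughly six people.
# All figures below are illustrative assumptions.
hours_to_cover = 24 * 7              # 168 hours of coverage per week
nominal_week = 40                    # hours one analyst works per week
availability = 0.70                  # after vacation, sick time, training, meetings

analysts_needed = hours_to_cover / (nominal_week * availability)
print(round(analysts_needed, 1))     # -> 6.0 analysts per around-the-clock seat
```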

Speaker 1:

Let me ask you this, Richard. You obviously understand the ISP space really well. What is this going to do in that space? I mean, we're stepping away a little bit from cybersecurity, but when it comes to telecoms and communication and the infrastructure, how is AI going to play a role in all of that? Are we going to see faster speeds in our communications? Are we going to see more secure communications? How does that affect that world?

Speaker 2:

I think the first tier will be cost optimization. So, you know, if you're a large company, you've got tons and tons of MPLS circuits, which are point-to-point circuits, and quite a lot of them you don't need. And you kick off a study today to say: you know what, what if we just went to local internet breakout, bought a bunch of Fortinet gear, connected it to local ISPs, and then VPNed all over, or used Zscaler or Cato for that purpose? You can quickly find your return on investment for doing that. Or, if you need all the direct circuits, you could re-home them. I've seen that done in the past, and it's quite magical: when it happens, you can save, you know, 50% of your costs. But don't forget, that's what all of your network engineers, actually the old guys as old as I am, that's what they were hired to do in the first place, so they've been around for a while.
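(A toy version of the kind of cost study described here, comparing MPLS circuits with local internet breakout plus SD-WAN; every figure is a made-up assumption for illustration.)

```python
# Toy ROI comparison: point-to-point MPLS circuits vs. local internet breakout
# with SD-WAN/VPN. Every number below is a made-up assumption.
sites = 200
mpls_per_site_month = 1_500           # assumed MPLS circuit cost per site
breakout_per_site_month = 400         # assumed business internet + SD-WAN license
one_time_hardware_per_site = 2_000    # assumed edge firewall / SD-WAN appliance

annual_mpls = sites * mpls_per_site_month * 12           # $3,600,000
annual_breakout = sites * breakout_per_site_month * 12   # $960,000
hardware = sites * one_time_hardware_per_site            # $400,000

annual_savings = annual_mpls - annual_breakout            # $2,640,000
payback_months = hardware / (annual_savings / 12)         # ~1.8 months

print(f"Annual savings: ${annual_savings:,.0f}")
print(f"Payback period: {payback_months:.1f} months")
```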

Speaker 1:

Are those guys going to be out of a job? I mean, I think they're already becoming a little... okay, I don't want to expose anybody, but I do know a few of those guys, and companies are really afraid of them, because they've got the password to everything and they can bring down your entire network, right? And they're close to retirement. So would AI help companies replace them? Or at least, you know, because those guys have got it. So it's a huge...

Speaker 2:

Absolutely, a brain drain. Right, right. So, you know, that guy sitting in his easy chair behind his desk, because he knows where all the secrets are hidden and has a picture in his head of a network that he or she helped build, that person could be replaced overnight by AI. That knowledge base can be. But the hard thing to replace is the person who knows what questions to ask of an AI, and you're going to need those people.

Speaker 1:

Would that come in the form of a prompt engineer, someone who specializes in prompt engineering, or is it more a strategic person who knows how to ask a prompt engineer what to ask the system?

Speaker 2:

Yeah, the strategic engineer. Okay, so we will see a shift to the strategic rather than the tactical.

Speaker 1:

That is, I think, the single craziest thing that's going to happen to the job market. Because when you go to school for computer engineering, development, software engineering, anything like that, they don't really teach you anything about the business. And so now you're going to have a whole breed of people who are just like a chess master, right? One who, instead of playing chess, is telling the model and directing the model at a strategic level on how to play and beat your opponent. And I'm really interested in that.

Speaker 1:

That's what I want to see, because I think there's money to be made for people like me who have been in the industry for, like, 10 years, right? So you're in a senior position, but you're not a director, you're not a CISO, you're not a business leader yet. But if you've got the strategic chops and you understand enough of how the business works and you know enough about the technology to be dangerous, I think there are going to be a lot of people who start their own businesses, and that's going to revolutionize the way we do this whole entrepreneur thing. You're going to have a whole breed of entrepreneurs, and there's a whole lot of money to be made in that space.

Speaker 2:

Totally agree, and they will be able to convince their current employer. If you're ever going to do that, strike out on your own, the number one way to success is to get your current employer to hire you as a consultant. Don't do it like I did, which was to completely shift my career and do something different. That way lies poverty for most of your life.

Speaker 1:

A friend of mine is going through this shift herself. She started her own consulting company helping, let's just call them legacy users at mid-sized companies, use and leverage AI. But then she's like: Josh, don't put it out on the internet that we did this podcast. You can post the clips on your page, but I won't be able to post it on mine, because I don't want my employer to know. And I'm like, you should position yourself to help your employer. They should empower you to do this, because, number one, they can get you for cheap: instead of paying you $200,000 a year, they can pay you $70,000 a year as a consultant. The only caveat is that you have other clients, which is not a problem, because you're an AI master and you're super efficient.

Speaker 2:

Yeah, yeah. My daughter got a job at Michigan State, and because her brothers, and a little bit her father, are involved in AI, she's all over it, and in particular the challenges that academia has with AI. They're very resistant to the whole concept; they're just against it. And so even the people who were supposed to be the academic experts on AI within the state school system don't want to tarnish their reputation by saying it's good or that we should do it or anything like that.

Speaker 2:

So, all of a sudden, my daughter, who lives in the world of IT, she's in their IT department, has become the go-to person if you've got questions about AI, and she's just having the time of her life going around and training people, showing them what they can do, and talking them off a cliff when they say, oh my god, my students cheat.

Speaker 1:

Yeah, honestly, those of us who've been in AI from the beginning (I was with ChatGPT from literally version one, and I remember back then blowing people's minds) are having a field day, because now we can say: yeah, I remember three years ago when you said I was crazy for suggesting that a robot was going to be able to help your kid with their homework, and you said it wasn't really going to replace a real tutor. It's like, yeah, download ChatGPT; it will run circles around any tutor. So now we're having a field day, now that we've been proven right, and in some cases too right. So let's wrap up with this; it's a good segue. What are the dangers of this technology once we hit critical mass and the robots start to... once we hit superintelligence, as we're calling it? What are some of the dangers of that?

Speaker 2:

You know, I don't want to go beyond the impact on the cybersecurity industry, but I have thought about that. So we are going to get really good at stopping cyber attacks. In other words, it's going to work for once, and there will be a reduction in overall cybercrime, a dramatic reduction, to the point where there aren't as many cybercriminals anymore. The low-hanging fruit that they go after today, the people who just don't do anything and haven't done anything, that's going to shrink and shrink and shrink, because there will be service providers that can say: hey, just give it to us, we'll take care of all of your attacks and stop them in their tracks. That is super dramatic. It's going to decimate the cybersecurity industry. There won't be as many vendors as the year before. There will be consolidation.

Speaker 2:

You know, you still need firewalls, so Palo Alto and Fortinet are still going to sell firewalls. You still need multi-factor authentication. You still need sensors all over the place. But there won't be cybercrime. There won't be news every single day of another breach, and that's going to have a big impact on the growth of the industry.

Speaker 1:

Thank God. Do you think that there's going to be a consolidation?

Speaker 2:

Once that happens, consolidation will be there, right? Yeah.

Speaker 1:

So the cybersecurity?

Speaker 2:

Buy one of everything and you just, you know, get your credit card and you'll have everything you need.

Speaker 1:

So if you were to start a cybersecurity company today, let's think of services, solutions, technologies, tools, a Swiss Army knife, you name it, what would you do for longevity?

Speaker 2:

Yeah, I would create the sensors, and any data efficiency that you can bring to the world, because that'll just lower costs for the users of AI SOC automation. And yeah, that's what I'd do.

Speaker 1:

So your thesis, basically, just to sum this all up, is that the best application for AI, and where we're seeing everything moving, is the SOC. Correct? Okay. And your prediction is that the SOC will get so efficient that we will see cybersecurity attacks reduced to near zero, like we'll patch all the holes. Well, what's going to happen with the hackers? What if they have super AI?

Speaker 2:

Right, so they will have to have it. Question is, can they afford it and is it worth the investment?

Speaker 2:

Because, don't forget, this battle, this 25-year battle we've had, is about raising the cost to the point where it's not worth it for the attacker. And we've never gotten there, because we thought we were getting there, and then Bitcoin comes along and, all of a sudden, ransomware is possible, so they can go after individuals. So we never got there. But now this is our chance, so that the only attackers will be the ones that can outspend us, and those are going to be nation-states. And it'll be cheaper, rather than come up with a new attack against somebody protected by AI, to hire a spy or bribe somebody or kidnap their family in order to get what they want, which is extremely expensive and risky, right? Millions of dollars invested to do an operation like that. So it'll be a long time before AI is better than that.

Speaker 1:

So we'll basically come back down to the human element. If you want to break in, you're going to have to trick somebody. You're going to have to James Bond your way into that organization, ask them whether their martini is shaken or stirred, probably wind up in somebody's bed, and then get the nuclear codes.

Speaker 2:

That way. Good old-fashioned police work, yeah.

Speaker 1:

Yeah.

Speaker 2:

Yeah, good old-fashioned.

Speaker 1:

Okay, all right, Richard, great stuff. We've got some predictions here, and I'm going to hang my hat and bet with you. Not real money, because I'm not a betting man, but if I were, I would bet that you're correct. It's going to be an interesting five years down the line, and we will be covering it here on Cybernomics. So, Richard, if people want to find you, how can they find you?

Speaker 2:

Yeah, head on over to it-harvest.com and you'll see all the contact forms and all the rest. Or find me on LinkedIn. I'm there all the time.

Speaker 1:

Yeah, Richard's always posting good stuff on LinkedIn, and I always see people eat it up, and for good reason. It's always really good information, very timely stuff, and sometimes you post stuff right out of IT-Harvest, which bodes well for what you guys are doing over there. If you want to find me, you can also find me on LinkedIn: Josh Bruyning, B-R-U-Y-N-I-N-G. Quick announcement: my book, The Clothesline, is on Amazon, so you can get the e-book right now; paperback and hardcover will be available later. Keep following us at Cybernomics. We'll have an episode with Richard doing our security market watch every Monday, and for interviews and conversations with CISOs, vendors, and founders, check us out every Wednesday at 7 a.m. Richard, good to see you again. Thanks. All right, thanks for listening. Bye.