Security Masterminds

Unleashing the potential of AI and Zero Trust in Cybersecurity and Data Protection with our special guest, Ian Garrett.

Security Masterminds Season 2 Episode 7

In the age of cyber threats, cybersecurity expert Ian Garrett ignites a battle against ignorance, harnessing the power of AI and Zero Trust to defend organizations of all sizes from the lurking dangers of the digital world.

Today's guest is Ian Garrett, the CEO and co-founder of Phalanx. With a background in computer science, he became an early adopter of AI applications in cybersecurity and has been making waves in the industry ever since. Ian's ability to combine AI's rapid data processing with a human understanding of nuanced threats exemplifies cutting-edge cybersecurity practices that help ensure data protection and privacy.

Don't ignore the data outside of secure places. Even drafts and email attachments can be vulnerable. Take a comprehensive approach to data security. - Ian Garrett

In this episode, you will be able to:

  • Gain insights into how AI and the Zero Trust model can reinforce your data protection strategies.
  • Learn from industry connoisseurs about typical data security blunders to be avoided.
  • Identify the hurdles in managing multicloud data and the solutions to counter these challenges.
  • Delve into the potent dangers presented by AI and chatbots and how to keep them at bay.
  • Understand the practical application and multiple influences of the Zero Trust architecture on your business.

Connect with us:

Website: securitymasterminds.buzzsprout.com

This show's sound is edited by ProPodcastSolutions - https://propodcastsolutions.com/
Show Notes created with Capsho - www.capsho.com

Ian Garrett:

But if your run-of-the-mill criminals are leveraging ChatGPT to build custom malware, then all of a sudden your signature base and all that stuff is gonna be useless. I'm Ian Garrett, co-founder and CEO of Phalanx, and we're a lightweight data security solution that does document mapping across workspaces.

VoiceOver:

Welcome to the Security Masterminds podcast. This podcast brings you the very best in all things cybersecurity, taking an in-depth look at the most pressing issues and trends across the industry.

Erich:

Automation and machine learning can go a long way in helping organizations streamline their data solutions and cybersecurity efforts. It's crucial to combine the strength of AI's rapid data processing and pattern-spotting abilities with a human's understanding of nuanced threats to reduce risk and secure our organizations.

Jelle:

Ian Garrett, the brain behind Phalanx, was spurred by an early interest in technology and a knack for reverse engineering systems. As CEO, he's channeling his extensive education and hands-on experience into steering his company, applying his research in AI applications in cybersecurity to maintain data protection and privacy.

VoiceOver:

This is episode 20, unleashing the potential of AI and Zero Trust in Cybersecurity and Data Protection with our special guest, Ian Garrett.

Jelle:

So welcome to another episode of Security Masterminds with me, Jelle Wieringa, and my good colleague Erich Kron.

Erich:

We have an incredible guest this time, Mr. Ian Garrett. This was a great discussion.

Jelle:

So yes, and they do a really cool thing with DLP, and Ian has been around the security industry for a long, long time now. And we wanted to know what his origin story is.

Ian Garrett:

My origin story for cyber really starts, I mean, there is a bit of that classic teenager, liked to tear things apart and put 'em back together again. I think I got a little bit of a taste of that. And then really it was in college, studying computer science at West Point, but it was not necessarily the computer science component, but really the extracurriculars, I guess, that got me more into cyber. So it was early on, I think I was 19 or so, and the DoD had just pushed out that they were banning USBs. So I was like, what? Why would we need to ban USBs? What can I do with a USB that maybe is less than great? That, I would say, is really what started things off. I got around to playing with that, and I was able to build a little USB tool that pretty much side-loaded Ubuntu onto laptops and was then able to pull files off of people's hard drives without having to log in. So that was fun. And a lot of our professors were also just transferring in from the Army doing a lot of cyber stuff. After that, I commissioned into the Army as a cyber officer and got a lot of great firsthand exposure there. From there, it's been taking what I've learned, and those I've learned from, and using that knowledge.

Erich:

Yeah, so Ian had more of a traditional cyber background than a lot of the people that we've talked to. You know, we've talked to people coming out of theater, we've talked to them coming out of all kinds of different approaches. But one thing we definitely have in common is I wanted to know how things worked and tended to take them apart.

Jelle:

We each have our own background, and for me personally, my love for technology is what got me into cybersecurity. I loved tinkering with things, like tinkering with USB drives and USB sticks. I'd love to tinker with technology as well, so yeah, recognizable.

Erich:

We're seeing breaches here, data loss there, ransomware here. It's all over the place, in pretty much all the verticals, I think, and with all organization sizes, which is kind of frustrating. I know some of the smaller people that I've talked to, like for example my chiropractor, are like, well, if these big companies can't protect themselves, what chance do I have? So I'm just gonna use all ones as the password to get in and look at your stuff. There's a lot of that that goes on with some of these smaller organizations. But what kinds of things can organizations do to really improve their overall security culture and where they're at with security?

Ian Garrett:

Yeah, I think you really hit the nail on the head there about smaller organizations almost feeling like they have to give up, just 'cause the amount of things they have to deal with from a cyber perspective is overwhelming. If anything, it's a, you know, there's so many of us out there, hopefully they won't pick me, which is essentially trying to do security through obscurity, which we know doesn't work. I would say the biggest thing an organization of really any size can do is to start at the basics: getting an inventory of exactly what all your cyber assets are, whether that's your network, whether that's the various devices, including employees. Right now, part of what we've been talking about is really the death of perimeter-based security as a strategy, and part of the reason for that also relates to SaaS applications. There are a lot of logins out there, a lot of room for error. So starting with what exactly is there, I think, also helps organizations make it less overwhelming, because you can actually itemize it. You put it into a box, essentially, without it being just a lot of unknowns and a hope that this one random vendor you bought some tools from covers you.
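As a hedged illustration of the "itemize your cyber assets" advice above (this sketch is not from the episode or from Phalanx; all asset names and fields are hypothetical), even a flat inventory like this puts the unknowns into a box:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class CyberAsset:
    name: str
    kind: str        # e.g. "device", "saas_login", "network", "person"
    owner: str       # team responsible for the asset
    holds_sensitive_data: bool

# Hypothetical starting inventory; real ones come from scans,
# SaaS admin consoles, and plain old asking around.
inventory = [
    CyberAsset("laptop-014", "device", "finance", True),
    CyberAsset("crm.example.com", "saas_login", "sales", True),
    CyberAsset("office-wifi", "network", "it", False),
    CyberAsset("j.doe", "person", "hr", True),
]

def summarize(assets):
    """Group asset names by kind, so gaps and clusters are visible."""
    by_kind = defaultdict(list)
    for a in assets:
        by_kind[a.kind].append(a.name)
    return dict(by_kind)

def sensitive(assets):
    """List the assets that hold sensitive data and need extra controls."""
    return [a.name for a in assets if a.holds_sensitive_data]
```

Nothing here is sophisticated, and that is the point: once assets are itemized, questions like "which assets hold sensitive data?" become a one-line query instead of an unknown.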

Erich:

Yeah, the whole security through obscurity sort of idea is one that unfortunately still does plague a lot of industries. I do think there's a lot that can be done in the smaller organizations as well.

Jelle:

So I agree that you need to know what's in your network. You need to know what assets you have; you need to know what you're protecting. But if you're a smaller organization, it's also about taking security seriously. A lot of smaller organizations actually do think they're kind of off the radar to cyber criminals and bad actors, that they can't or won't be attacked because they're not interesting enough. And I don't think that's the case, because with a lot of cybercrime today it doesn't matter whether you're an interesting organization with lots of money and lots of assets; in a lot of cases you're just being used as a stepping stone for some bigger attack or something like that. I can remember an incident where a server in a bakery, all the way in the back, covered in dust, turned out to be a server running ransomware. It was basically one of those command-and-control servers, and that baker just didn't know he was part of a larger network with many, many nodes, and that they were attacking large organizations through his computer. Basically, it's not only about whether you are an interesting target or not, because you always are. It's about understanding this and taking appropriate measures. And with cybersecurity, you just can't afford to do nothing. I think that's something a lot of small organizations today need to learn.

Erich:

Yeah. You know, and we've gotta think about who their customers are as well, because sometimes that's the target. Even the smaller organizations have a lot to offer bad actors. And if we've learned anything about supply chain attacks in recent years, that's an important lesson I think we should definitely take from this: it's not always about you. What are some of the most common mistakes you see companies making when it comes to securing their data, like the stuff that they have, and how can we address that? How can we help them with that?

Ian Garrett:

Yeah. There are two big mistakes I can think of that we see. One is that they're essentially dreaming of a perfect world from a data security perspective. Obviously, everyone puts out different policies, and let's say they have data rooms, or enclaves where they're keeping that data secure, or encryption solutions where they have a way to secure certain types of data. Essentially the thought is, you know what, all of our sensitive data lives within these places where we're securing the data. But they ignore drafts of those documents, or email attachments, or transfers in between, or maybe something sitting on a USB drive or a burned disc. And it's just kind of like, well, all that stuff also exists in the secure place, so let's just ignore the copies outside of that, and that's fine. That's probably the biggest one. Honestly, if you think about it as an attacker offloading data, and some of this is, again, learned through prior experiences, all those places are great places to look. Say they have this locked-down database, for example, and someone was like, all right, I need 10,000 rows of data out of this database to download into Excel so I can make a PowerPoint out of it. And that export is just sitting in the downloads folder, because they created their product afterwards and then just didn't delete it. You don't have the whole database, but hey, you got 10,000 rows of it, and that's not bad. And if that's all PII, that's a very expensive breach. The second biggest issue from a data security perspective, and it seems almost too obvious, is relying on other tools for data security. This goes back to itemizing all those cyber assets within an organization.
So somebody saying, hey, I got a firewall, so I'm good with data security. Or, hey, I got an antivirus, so I'm good with data security. But really you want to have defense in depth. So firewall, great; anti-malware, great; don't get rid of those. But you also want other data security solutions. If you have a database, for example, make sure you have something for that. If you have a lot of unstructured data, make sure you have something for that. And if you have a lot of IoT, or let's say industrial control systems, OT stuff, make sure you've got something for that.

Erich:

Yeah, one of the things that I've found in my career is that people find data all over their network that they had no idea was there. I've seen this happen more and more often with groups that are migrating to the cloud. They start moving stuff and they find all of this stuff back there. They find, as Ian mentioned, spreadsheets that are being used to do good things. These aren't malicious programs, but someone figured out, wow, I can actually hit this database, pull this in, and make things much more efficient. Nobody outside of their department has a clue that this is happening. And so this data is being stored there, and people trip over this stuff and go, holy cow, I didn't realize all this was going on. So the point of knowing what you have, and then trying to find it, is definitely a very valid one.

Jelle:

Yes. They need to have granular tools that look at the identity behind the data, that look at where this data has been, where it's being used, et cetera. And then you've got a lot of organizations, and Ian already pointed this out, that don't really use all of a tool's capabilities, or use the wrong tools for security. Companies that spend a lot of money on mitigation tools, on security tools, tend to implement them and then kind of forget about them. So they don't use the full range of capabilities that the tool has. They buy it, they spend the money, and then they forget about it.

Erich:

And I actually know a guy who was doing, I believe it was a cloud migration, that got him into this exercise of identifying where everything lived within the organization. And he was absolutely shocked at all of the little side databases and side spreadsheets that tied into some of these things, with data all over the place. He had no idea it was out there. He started looking at it, and he's like, man, there are duplicates of this stuff everywhere, and if we got hit, that could be really ugly for us. Do you see a lot of that?

Ian Garrett:

Yeah, so that is actually, and this is Ian's hot take for 2023: I think there's a huge issue, and we're definitely seeing it across organizations, that there are so many different silos for data to exist in that it's hard to wrangle what exists where and what to do about it. This is especially true when you have sensitive data that you're trying to secure with various data security solutions, because often what that looks like is some sort of secure enclave, going back to that data room style. So you're putting some data in there. But, for example, let's say you have Office 365, so you have OneDrive and SharePoint and Teams, but then you needed something extra secure for some sort of data, so you have an additional cloud environment that's locked down, and maybe you have an internal network with a network share drive on it. So you're not quite sure where everything is or how to itemize it. And that makes it even more dangerous, especially when it comes to some of those auxiliary solutions. You want to let people use OneDrive when it's for nonsensitive information; you don't want to ruin that productivity feature for them. But you also want to make sure they're not accidentally putting PHI or PII in there if it hasn't been properly secured for that kind of use. So I think the issue of so many siloed locations of data is going to be something organizations have less tolerance for. We're even seeing that on our end: a lot of the people we're talking to are saying, I'm tired of adding four, five, six different environments into my workplace; people don't even know where to put the data. And like you mentioned, there are a lot of locations with duplicates, and if they're sensitive duplicates, that's not good for security and it's not really helping from a productivity standpoint.

Erich:

I find it funny, because even in my personal life I struggle with data consolidation and knowing where everything is. I don't know if you've ever taken on the project of taking all of your family photos and stuff in digital format and trying to move it to one place, but it's kind of a nightmare. And I find I'm afraid to delete certain things, so I have multiple copies of it all over the place, here and there. And then you've gotta go through and de-dupe. And that's just photographs; that's not trying to find sensitive information that may be floating around within an organization, or all these little side quests, if you will, that people go on.

Jelle:

Look, storage is cheap, right? So you can get a lot of storage space. And data hygiene, not all of us are really good at that. We don't all understand the impact of making copies, or deleting something, or leaving it there. I think it's a sign of the times, almost. We have so much data nowadays that it is a problem. But then again, I also think this is where visibility, again, visibility comes in. You need to know what you have, where you have it, and what you want to do with it. And if you've got that sorted, then yeah, it becomes easier. It's still a hassle, but it becomes easier. So since we're on the topic of how you handle data and where you store it: we also see a lot of organizations that use third-party data storage, and we asked Ian what his recommendations are when using third-party data storage.

Ian Garrett:

Right, yeah. That's a huge issue as well: supply chain attacks, essentially, through being able to offload data through somebody else's system. And I think the biggest thing organizations can do to mitigate the risks associated with that is really understanding where the data is stored at any given point. For example, sometimes you need to send data to that third party and they need to obtain positive control over it. And that's the most dangerous version of it: you no longer have control over that data. But there are a lot of cases where that's not the case, and the easiest solution maybe is to attach something to an email and ship it off. Again, you lose control of it that way, and it's also sitting in someone's email server for who knows how long. That's why business email compromises are so expensive; I always like to say they end up being shadow file servers. Thinking about how to really deal with that: for one, again, figure out solutions that let you share data without giving up control of that data. So maybe it's a transfer solution that just lets them view it. And if you have that already, make sure you're using the access controls on it so they can't download it. Because, again, a lot of it isn't malicious. It's not that they're taking screenshots of your data and trying to hold onto it. Usually it's just done out of, hey, this is the most productive thing for me to do, and I don't feel like dealing with the other stuff; they're not paying me for security, they're paying me to get my job done, so let me do it. So if there's any way to essentially enforce controls on that, obviously that's the best case. But the more realistic path is, again, going back to policies, trainings, and a focus on having everyone understand where the data lives, so that it doesn't live at a third party if it doesn't have to.
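To make the "share without giving up control" idea concrete, here is a minimal, hypothetical sketch of the permission model a transfer solution might enforce (the grant names and actions are invented for illustration, not taken from any specific product or from Phalanx): every requested action on a shared file is checked against the scope the sender granted, so a view-only recipient simply cannot download.

```python
# Hypothetical grant levels for a file-sharing solution.
# Emailing an attachment is equivalent to granting "full" forever;
# a scoped grant keeps the sender in control.
PERMISSIONS = {
    "view_only":    {"view"},
    "collaborator": {"view", "comment"},
    "full":         {"view", "comment", "download"},
}

def is_allowed(grant: str, action: str) -> bool:
    """Deny by default: an action is allowed only if the grant covers it."""
    return action in PERMISSIONS.get(grant, set())
```

So `is_allowed("view_only", "download")` comes back `False`, while `is_allowed("full", "download")` is `True`; an unknown grant allows nothing. The point is the default-deny shape, not the specific levels.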

Erich:

Yeah, we see the third-party thing happen a lot these days, right, Jelle?

Jelle:

I just look at it like it lives everywhere, right? There is not one single copy. You can't make sure there's only one single copy unless you have the tools to do that, which a lot of organizations don't have in place. So just assume there are multiple copies, data is everywhere, and assume it's already lost by the time you create it. That's basically the mindset you have to have nowadays. But I do think there's a lot you can do if you look at security by design. Just think about who's going to need access to that data and be able to edit it. Where are they going to do it from? When are they going to do it? Why are they going to do it? And try to build that into the way you design your security around your data, whether it's in your hands or through contracts with your third parties. You can do a lot with contracts, although a contract never, ever gives you the guarantee that they won't be breached, so always assume your data can be lost and do security by design anyway. We wanted to switch it up a bit and change gears and see what Ian thinks about things like artificial intelligence, things like ChatGPT. They've become so popular so quickly, and there are actually a lot of risks involved with that.

Ian Garrett:

Yeah, so I guess to put a little context on where I'm coming from, perspective-wise, on the whole AI side of the house: a lot of my PhD research has been focused on essentially the intersection of cybersecurity and artificial intelligence, both on leveraging AI for cybersecurity tasks and also the other side of the house, the vulnerabilities within these models. Because of that, when this whole AI wave began, I was very skeptical, not of the capabilities themselves, but of the way it was being adopted, because I was seeing a lot of people just slamming the I-believe button on it. So it goes from people saying, oh, AI is nothing, to AI is everything. And knowing how it's built out, knowing all the different types of it, you really do see that people toss it all into one big bucket. It's best to think about it more in terms of what kind of task the AI is out there to perform, and what its capabilities truly are and what its constraints are. That being said, I think people have really jumped into the latest iterations of ChatGPT, and I only say that because, prior to ChatGPT, OpenAI had the other GPT models available. But people have really adopted the latest iterations because they've shown they're at least capable enough to start handling a lot of the busywork we were doing. I'm definitely impressed with what's out there, and to say that we don't leverage some of those capabilities would be a complete lie. But I do think there's a lot to consider when using those capabilities, beyond just, again, smashing the I-believe button and saying, hey, this is what it gave me, this is good enough.
For example, if someone doesn't have expertise in a certain field and is just pushing content out of these systems, that's very dangerous, because they're not necessarily always a hundred percent accurate. So that's the biggest thing that I think is a little worrisome: that people are willing to just go for it. And on the flip side, I'm curious to see how people exploit that, from a social engineering perspective particularly.

Erich:

I love how Ian kind of couched that as "it's not always a hundred percent accurate." You know, that's not wrong, but we've found that in a lot of cases it's not very accurate at all. We had that recent story here in the US with a federal attorney going to federal court and citing a bunch of cases that he got through ChatGPT. The judge said, okay, give me more detail on those. So he went back and asked it for the details, which it gave him, and which he provided to the judge. It turns out, though, that none of those previous cases actually existed. So it is a very dangerous sort of thing if you want to trust it a hundred percent. I think that's something a lot of people really don't understand. And if you think about it, Jelle, how often do you question your Google search results?

Jelle:

Actually, that is a good question, Erich. I think that people rely on the internet and everything that's on it way too much as it is. And I like the one with ChatGPT where you ask it what happens when you break a mirror and it returns that you get seven years of bad luck, which, yeah, some people believe. So given that AI and ChatGPT are everywhere and everybody can access them, is it easier for people to engage with ChatGPT?

Ian Garrett:

So I believe that's actually really extensible to most tools out there: a lot of the technology exists today, and this definitely extends to the cybersecurity realm. There's a lot of great tech that exists that just isn't user-friendly or easy for people to navigate. So there's a high barrier to entry, knowledge-wise, and it becomes difficult to use. That's why you'll see certain types of roles still engage in using that technology, but you won't see mass adoption until there's an easy way to access it. Going back to OpenAI: GPT models existed prior to ChatGPT, but if there had been a ChatGPT for those models, we would've seen a lot more of this type of activity and adoption, maybe not as widespread. But going back to why now versus then: now we have Bard, and there are tons of side solutions for every possible niche for leveraging large language models, in a way that just didn't exist five years ago or so. So, taking this into the cybersecurity world, I would love to see that type of easy access to cyber tools. A lot of the time cyber tools are created by technical people who are like, I understand how to do the tech stuff, and that's great, and then it leaves everyone else being like, I don't really know how to use this tool to its maximum effectiveness 'cause it's so hard to use.

Jelle:

So I think that the conversational part of ChatGPT is really great. It allows us to have a very natural conversation with the AI, ask it questions, and just give it commands, and that makes it accessible to everyone. But it also gives you the responsibility to know what you're doing, right? That's where the whole role of prompt engineering comes from. You actually have to fine-tune your question in order to get a good reply, a good response, out of it. And then you maybe have to re-ask or fine-tune your question again. And that's something where I think that, even though it's really accessible, your average Joe doesn't know how to do that. So that's how you get mis- and disinformation: they ask a question, they get an answer, and they don't really think about, have I asked the right question? So accessibility is great, but we need to start learning as a society how to use these tools responsibly. A lot of organizations have all this new technology. They have APIs talking to other APIs. They have users coming back into the office after Covid. There are a lot of places where zero trust, zero trust being the never-trust-always-verify concept, can come in handy. So what are some of the key challenges that organizations face when implementing Zero Trust in these environments?

Ian Garrett:

That is a great intersection with my current world. For one, obviously, as we all know, Zero Trust is quite the buzzword. But I think it's important, because going back to what I was saying about what I believe is the death of perimeter-based security: it's one thing to say, hey, this doesn't work anymore, and leave it at that. You usually want to say, hey, this isn't working anymore, and here's a new solution to replace it. The zero trust architecture is that strategy, but obviously you can't just go out, swipe your credit card, and buy some zero trust. So, thinking about how we actually implement it, we always say it really comes back to tying identity to various cyber assets. Someone can fight me on that, but so far that's been the easiest way for me to distill a buzzword down to something actionable. We've focused specifically on doing zero trust on the data access component, but, again, going back to itemizing all your assets, you want to be able to do it for everything: your network, your devices. And that goes into adopting AI solutions into your organization as well. Or even not AI, just machine to machine, leveraging APIs: you want to make sure that everything accessing anything has some sort of identity associated with it, and that it can only access the stuff that identity is supposed to access. Whether that's an AI reaching into your database, let's say, and performing some actions with it, whether that's a standard machine-to-machine connection reaching out via API, or whether that's your actual human workforce. I think it's easy to say that even if you go with role-based access, people have too much access.
I mean, you think about some of the leaks that we've seen: there are a lot of people who just didn't need to see that, but there wasn't an easy way to segment what they're doing for their role, because they do need some of it. They don't need all of it, but they need to be in some of the spaces some of the time. How do you separate that more? I think better cyber capabilities, as we're growing and maturing as a cyber industry, will get us to being able to segment that in an easier fashion. And I honestly think 2020 and Covid fast-tracked the need for zero trust to grow from a concept to an actually implementable item. Zero trust was essentially a nice-to-have for about 10 years, because it was like, that's cool, but hey, what I'm doing works, it's already in place, and zero trust was expensive. After 2020, even if people are coming back to office environments, with all the other changes, going to SaaS applications, cloud storage, IoT, all kinds of other stuff, we were essentially operating with duct tape on our perimeter-based security. And Covid just kind of blew that out of the water. So I don't think there's any going back to the way things were before, and I think everyone knows it. That's why with Zero Trust it's like, okay, we need something different. Zero Trust has been sitting on the shelf; let's actually figure out how to turn this thing on.
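The "tie identity to every cyber asset" idea above can be sketched as a default-deny policy check that treats humans, service-to-service API calls, and AI agents identically (a minimal, hypothetical illustration of the zero-trust principle discussed here, not Phalanx's implementation; all identities and assets are invented):

```python
# Each identity, whether a human, a machine-to-machine service account,
# or an AI agent, gets an explicit allow-list of "asset:action" scopes.
# Nothing is trusted for being "inside" the network.
POLICY = {
    "svc-reporting":      {"orders-db:read"},                 # machine-to-machine API
    "ai-summarizer":      {"wiki:read"},                      # an AI agent
    "alice@example.com":  {"wiki:read", "hr-share:read"},     # a human user
}

def authorize(identity: str, asset: str, action: str) -> bool:
    """Deny by default: allow only what this identity is explicitly scoped to."""
    return f"{asset}:{action}" in POLICY.get(identity, set())
```

Under this model the reporting service can read the orders database but cannot write to it, and an unknown caller gets nothing, which is the "never trust, always verify" behavior in its smallest form. Real deployments attach this check to authenticated tokens at every access point rather than a dictionary, but the decision shape is the same.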

Erich:

Yeah. How many years have we heard Zero Trust as a buzzword? It's very nice to see that it's actually happening now.

Jelle:

I think that a lot of organizations, even though they want to do Zero Trust, don't actually understand what it is yet. You see that happen a lot with things that come out of marketing: it's a really cool term, it's been coined by lots and lots of big vendors, but they kind of forget the how-to part. They tell you what it is in theory, and then they tell you, hey, figure it out on your own. And that's where Zero Trust is at the moment. A lot of organizations really believe in what it can do, or what it potentially can do, and they're now looking for ways to actually do it. That's one of the bigger issues in cybersecurity in general, I think. We see it with things like security culture. We talk about security culture a lot, but the difference is we're not only talking about it as a marketing term; we actually have documentation. We help our organizations, students, and customers with how to actually do it: what are the steps you need to take? We make it practical, and I think that's something the Zero Trust community can learn from. Go help people.

Erich:

Yeah. There's a big difference between having a great idea and having it be actionable. And I think that is where we're heading with this: it's becoming actionable as opposed to just conceptual.

Jelle:

And the more we help organizations do that, the better it is. Security relies a lot on being able to detect a threat and respond accordingly. What role can Zero Trust play in this, and how can it help make the AI-based solutions we see a lot of nowadays in cyberspace more effective?

Ian Garrett:

Yeah, so I think the best way to think about it is from a couple of different perspectives. The first one is if you're using AI in the way people are now getting used to it, with the large language models. The best way to approach that is to go back to how I mentioned that AI is really going to shift how we interface with technology in general. Think about that even for other cyber tasks you're doing. If you're able to implement that across the workforce for various tool sets, it would be awesome to be able to say, hey, I'm looking for this, this, and that, and have it say, hey, this is what we're seeing across all those data stores. Otherwise, I would say just use it as a coach or a buddy for when you need additional help: hey, I need to do these tasks, what are some recommendations? I think it performs well in those kinds of cases. The other side of the house is thinking about AI from a non-large-language-model perspective, and that's the one thing I hope doesn't get lost in this: there are a lot of other types of AI out there that are all very important for various business functions. Thinking about all the data that's getting captured for cybersecurity, there's a lot of opportunity to leverage some of these more analytical styles of AI to really data mine. And I know data mining and big data are no longer the hot topics they were five or ten years ago.
But I think there's actually so much we haven't done with that from a cyber perspective. Especially as a lot of these models become more accessible to people that aren't AI/ML experts, I think we want to go back and revisit what we can really do with just the massive set of logs we have, some S3 buckets out there just full of logs. What can we do with all that data? I think we're able to really mine it and find more insights than we realized we were sitting on.
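Even very simple statistical mining can surface the kind of insight Ian describes sitting unused in log buckets. A toy sketch with hypothetical authentication log records, flagging users whose failed-login count is an outlier against the population:

```python
# Illustrative sketch: basic statistical mining of authentication logs
# to surface outliers. The log records and threshold are hypothetical.
from collections import Counter
from statistics import mean, stdev

# Hypothetical log records: (user, event)
logs = [
    ("alice", "login_failed"), ("bob", "login_ok"), ("alice", "login_ok"),
    ("mallory", "login_failed"), ("mallory", "login_failed"),
    ("mallory", "login_failed"), ("mallory", "login_failed"),
    ("bob", "login_ok"), ("carol", "login_failed"), ("carol", "login_ok"),
]

failures = Counter(u for u, e in logs if e == "login_failed")
users = {u for u, _ in logs}
counts = [failures.get(u, 0) for u in users]
mu, sigma = mean(counts), stdev(counts)

# Flag any user whose failure count sits well above the population mean.
suspicious = [u for u, c in failures.items() if sigma and (c - mu) / sigma > 1.0]
print(suspicious)  # ['mallory']
```

Real log mining works over far larger data with richer features, but the principle is the same: let the statistics point a human at the records worth reading, instead of sifting everything by hand.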

Erich:

Yeah. You know, that's a big problem, Jelle, where we generate so many logs and so much data and metadata these days. And like you said, storage is cheap, so what do we do? We just pile it all in one spot, even through a SIEM or something that's trying to keep up with it, but we're talking about a lot of data being captured, and data is moving faster while more people access it. So using these as tools to help deal with that ridiculous amount of data, to find those nuggets a little easier and in a way that's less prone to errors than a human looking through logs, I think that's a great way to use this. I've sifted logs quite a few times in my career, and it's mind-numbing work. You end up overlooking stuff you should have spotted just because you unintentionally tune out.

Jelle:

Yeah, I was lucky, I was in management, so I had my teams do that. AI can really help you do that quickly and efficiently, and the one thing we don't have in cybersecurity is time. So we need speed, we need velocity, and that's again where AI steps in: it can help us detect issues way quicker so that we can handle them. So we talked about using AI for anomaly detection, and it's a fascinating topic, because it's something where a lot of organizations can, in a very simple way, get way more value out of their data. So we wanted Ian's take on it.

Ian Garrett:

I would definitely say humans are really good at finding patterns, but the reality is there's just so much data that needs to be processed that even if you were to plop it all in a room and bring a bunch of people together, the speed at which attacks happen and the speed at which things change make it just not feasible for a human. So I do agree that AI/ML systems are pretty good at anomaly detection, but there's one area I think we haven't explored enough. A lot of these systems are based on supervised learning: training on known data in the hope of finding patterns that the unknown stuff will be caught in. But that still leaves a lot of room for error when it comes to really new stuff. So I think there's a lot of room to bring in unsupervised learning, which tries to find patterns without a label, without being able to say this is malware, this is not malware. It's more like, hey, here are some groupings of these things, and then maybe layering on components after that which say, okay, this one is malware because of this, and this one is not. I think that's especially important with the development of, and easy accessibility to, these large language models. Going back to making technology more accessible: that also makes it easier for people to develop malware. And what I think a lot of people don't consider is that a lot of the anti-malware and anomaly detection out there is really based on known things. Even if it's not completely signature-based, it's often heuristic-based, and so a large nation-state that can afford to spend $5 million developing some malware is probably not going to be caught.
But your run-of-the-mill criminals just pulling stuff off the shelf are probably going to get caught. If those run-of-the-mill criminals are leveraging ChatGPT to build custom malware for pretty cheap, though, then all of a sudden your heuristic-based and signature-based detection and all that stuff is going to be useless to you. So I think we need to develop better anomaly detection, leveraging some of these other types of AI that I haven't seen a lot of solutions use, to really start capturing those kinds of anomalies, because I think there's going to be a greater amount of unseen, custom malware because of these large language models.
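The unsupervised approach Ian describes can be illustrated without any labels at all: represent each sample as a feature vector and score how far it sits from the bulk of the data. This is only a minimal sketch; the sample names, feature values, and threshold are all hypothetical, and production systems would use proper clustering or density estimation rather than a single centroid.

```python
# Illustrative sketch of unsupervised anomaly detection: no malware/benign
# labels, just "how far does this sample sit from the bulk of the data?"
# Feature vectors are hypothetical (e.g. entropy, import count, section count).
from math import dist

samples = {
    "doc1.exe":  (0.20, 0.30, 0.25),
    "tool2.exe": (0.25, 0.28, 0.30),
    "app3.exe":  (0.22, 0.35, 0.27),
    "odd4.exe":  (0.90, 0.95, 0.88),  # far from the cluster of typical samples
}

# Centroid of all samples (per-feature mean).
centroid = tuple(
    sum(vec[i] for vec in samples.values()) / len(samples) for i in range(3)
)

# Score each sample by Euclidean distance from the centroid; large = anomalous.
scores = {name: dist(vec, centroid) for name, vec in samples.items()}
threshold = 1.5 * (sum(scores.values()) / len(scores))
anomalies = [name for name, s in scores.items() if s > threshold]
print(anomalies)  # ['odd4.exe']
```

The "layering on components" Ian mentions would come after this step: once the outliers are grouped, a second stage (rules, a classifier, or an analyst) explains why a given grouping looks malicious or benign.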

Erich:

Yeah, that's a great point. I mean, being able to leverage this for evil, not just good: we're upping the game here constantly, but this is something we've been doing since almost the beginning of the internet, continuing to evolve on both the offensive and defensive sides. So of course bad actors are going to use this to make more difficult-to-detect malware, but then we're also going to be improving the models to detect that more difficult-to-detect malware. It's kind of a running loop, and sometimes we're ahead of the game, sometimes we're behind. What do you think, Jelle?

Jelle:

What strikes me though is that the good guys are looking into AI a lot, trying to figure out how to leverage it, and I'm pretty sure that the bad actors are doing the same. But if you look out there in the world, I haven't really seen too many attacks. Let me put it like this: I actually thought there would be way more by now, because the technology is so accessible to everyone. So why wouldn't bad actors use it more? And then it comes to mind: maybe they don't have to. Maybe the current attacks are way too successful, so why would they evolve?

Erich:

Everybody has heard of AI doing what it's doing, but how do you see it impacting the world on a large scale in, let's say, the next decade?

Ian Garrett:

I think what we're seeing right now with ChatGPT and a lot of the innovation surrounding those types of AI/ML capabilities is similar to what we saw when the first iPhone came out. When the first iPhone came out, the apps around it were things like the bubble-popper app, or the one where you blow on your phone and steam comes out. There was a lot of stuff that was just fun, silly games, because there was a lot of technology with a lot of capability, but we just hadn't figured out how to make it productive. If you look at the apps on your phone now, it's the banking app, the productivity app, cloud access; none of the fun stuff is out there anymore. We don't have any more soundboards, unfortunately. I think we're going to see the same with a lot of the improvements in AI/ML over the next 10 years, and honestly, seeing how fast-tracked everything is going, it's going to be way less than 10 years. We're going to see it move from a lot of this "gee whiz, look at this cool stuff I can do with it" to being really baked into our productivity in a way that's less sexy but is just a way of life we can't unsee anymore. And I think that goes back to the human-machine interface, where in the next 10 years we're going to replace a lot of the ways we interface with technology with being able to just communicate with it in a more common language, rather than us having to learn the inputs of the machine. It will take our language and transform it into the inputs of the machine.

Erich:

Yeah, it's more about being able to easily communicate with it. And I think that's been an ongoing issue that's kept people out of coding and out of other things: having to learn that new language. So the better we get at the communication, the better off we are.

Jelle:

I think the original idea behind technology is to make our lives easier, and using AI we can make access to technology easier, right? AI can really help technology, quote unquote, understand better what we want and how it can help us. That's what I see with ChatGPT: the whole conversational interface behind it allows us to leverage AI to leverage technology. That's the evolution we'll be seeing in the future. If we can create laws and legislation that focus on that part and help us control and shape it, then there's no reason why our journey in AI can't be a magnificent one. The one thing that sticks out to me throughout this interview is the need for identity. Identity is a key issue when you look at security, but also at data in general, and I think that's somewhere organizations can really turn to things like identity and access management tools to boost their security. If they have that implemented, there are a lot fewer worries to take care of. The other thing is that AI in itself is a great technology, but it comes down to us people to use it in the right way, to use it ethically and morally, and to make sure that when we implement it, we think about the outcome.

Erich:

That's a great point. This has been a fascinating discussion. I, for one, welcome our new AI overlords; had to throw that in there. Okay, Ian is such a smart guy, and he has so much background in AI and in data protection. I think we can learn a lot from what we've heard here today. This was very enlightening, and frankly, I'm going to walk away from this thinking about it quite a bit. It's something that's been in the back of my mind, but when it comes to data protection, yes, identity is key. Knowing where your stuff is is key. And then tying those together and saying which identities are allowed to interact with this data set, that's going to be a very important part.

Jelle:

So Erich, today's episode was really interesting, and Ian was a really great guest. What did you think?

Erich:

Really enjoyed it. Glad we had him on. He's just another one of these great people we've had on here, these security masterminds we've been able to introduce.

Jelle:

I agree. So Erich, thank you for doing this episode with me once again. It was a blast. We're going to wrap it up, so just go ahead and say goodbye, Erich.

Erich:

Goodbye Eric.

VoiceOver:

Coming up on our next episode of Security Masterminds,

Nicole:

I love talking to people, I love helping people and I love things that just make sense from a business perspective. And I love pulling people together and helping them win. And that's literally what I do every single day.

VoiceOver:

We invite you to join us with our special guest, Nicole Dove. You've been listening to the Security Masterminds podcast, sponsored by KnowBe4. For more information, please visit KnowBe4.com. This podcast is produced by James McGuigan and Javvad Malik, with music by Brian Sanyshyn. We invite you to share this podcast with your friends and colleagues, and of course, you can subscribe to the podcast on your favorite podcasting platform. Come back next month as we bring you another security mastermind, sharing their expertise and knowledge with you from the world of cybersecurity.