Security Insights - Cybersecurity for Real-World Workplaces

AI vs. AI: The Future of Cyber Security

August 17, 2021 Ivanti Season 1 Episode 14

Host Adrian Vernon, VP of Product Management Chris Goettl, and Chief Security Officer Daniel Spicer talk about the future of artificial intelligence and machine learning in cybersecurity, including:

  • The beginning of AI
  • How AI and ML benefit organizations 
  • How AI and bots are used to attack organizations
  • What kinds of AI will we need to stand up to those attacks? How do we get Skynet to go up against Ultron?
  • The human element behind AI and the danger of over-automation

  • New episodes publish around the second and fourth Thursdays each month.
  • For all show notes, resources and references, head to Ivanti.com/SecurityInsights
  • Join the conversation online on LinkedIn (linkedin.com/company/Ivanti)

Adrian: Well hi everyone, welcome to another edition of the Ivanti Insights podcast. I'm Adrian Vernon, and today I'm joined as always by Chris Goettl, our Vice President of Product Management at Ivanti, and Daniel Spicer, our Chief Security Officer at Ivanti. Guys, great to have you here.


Chris:  Thanks Adrian.


Daniel: Thanks for having us.


Adrian: All right now Chris, last time we were here, you were Senior Director of Product Management, now you're VP of Product Management, so congrats on the recent promotion.


Chris: Thank you for that, yes.


Adrian: And then Daniel, on your side, you recently stepped into that Chief Security Officer role, so again, congrats there. And guys, I just feel like, boy, am I just running in the mud here? I'm not sure, but it's great to be here with you guys. So today we're talking about artificial intelligence and machine learning. These have been hot topics in security in recent years, but did you know that AI was first developed back in the 1950s? And do you guys know why we didn't see AI implemented even before that time? Any guesses? Ooh, stunned silence.


Daniel: Processing power. 


Chris: Yeah, I think that was probably the biggest factor, it wasn't capable at that point.


Adrian: Exactly. As I understand it, prior to 1950 computers lacked one of the key prerequisites for intelligence: they could not store commands. They could be told what to do, but they couldn't remember what they did. And Chris, I'll tell you, my wife might say that about me today, right? That's how I am at home. So what's really changed in the last 70 years? I think that's what my wife would say. But obviously since then, applications for AI and machine learning have exploded. I saw a recent stat: Fortune Business Insights predicts that by 2027, artificial intelligence will be a 270 billion dollar market. That's billion with a B, so it's serious business. Folks are starting to get on board, so why don't we get into it. Let's focus on cybersecurity and how artificial intelligence and machine learning fit into the equation. Daniel, we're going to start with you: what are some of the benefits that organizations realize when they implement AI or machine learning solutions?


Daniel: So just at a general level, one of the things that we deal with in technology is the volume of data. We've gotten to the point where we generate so much data, so much information, that it's not actually possible for humans to go through all of it. And it's gotten to the point now that even the generic algorithms we want to write, we can't write fast enough. Machine learning and artificial intelligence are a way for us to start handling that data volume. And there's no data volume like security data volume, right? There are constant attacks, shifts in the landscape, and it moves so quickly that it's just a great application.


Chris: Another aspect of that is data that you can't really work through on your own. It's very difficult for a company to have people go through everything and try to identify what they're looking for. An example would be insurance fraud: you can't have your insurance claims adjusters go through every single claim and try to identify which ones are valid and which ones are not. A lot of people try to present fraudulent claims. They'll grab an image of whatever item they're trying to claim right off the web and say, oh yeah, this is what I had, when it wasn't really real. So machine learning can be used to identify a variety of different things and pull together all of those data points; it's a big data problem. But there are also some simple solutions that could be done; it's just that they're so difficult for a human to go through across all those different data points. It's a lot easier to have a machine learning algorithm do those repetitive tasks for us and identify when something fits a pattern or a behavior. So it's actually used quite a bit, in a variety of different ways that most people don't even realize.


Adrian: Okay, now let's flip the perspective a little bit. Just like in Star Wars, where there's the power of the Force, there's always a darker side. So what about the dark side of artificial intelligence and machine learning? Daniel, are cybercriminals also using this technology to their benefit, and if so, how are they doing it?


Daniel: There are actually a bunch of different ways, and the one that I find the most interesting, one of the earliest places where we see machine learning model versus machine learning model, the battle of the AIs, is actually in phishing emails, right? For a very long time we've had the vendors sitting at our mail gateways collecting large numbers of different examples of email to help better detect potential phishing emails. And then a few years ago, we had certain banking botnets, we're talking Emotet and TrickBot. You know, they may be defunct now, but one of the functionalities they introduced was stealing mail data. And they collected a vast number of messages, so they now have the same kind of data set that some of our defenders have, in order to identify and create more sophisticated, more convincing phishing emails. And so this is really the first place where you're seeing the bad guys and the good guys, both with their machine learning assistants, going at each other.
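
At its simplest, the defender-side filter Daniel describes is a text classifier trained on a large corpus of labeled messages. Here is a minimal sketch of that idea, assuming scikit-learn and a tiny made-up set of example emails (real mail gateways train on millions of messages):

```python
# Minimal sketch of a phishing classifier: TF-IDF features + logistic regression.
# The inline "corpus" is purely illustrative; real gateways train on huge labeled sets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been locked, verify your password here immediately",
    "Invoice overdue, remit payment via the secure link below",
    "Lunch meeting moved to 1pm, see the updated calendar invite",
    "Quarterly report draft attached for your review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# Score a new message; a real gateway would quarantine or flag above a threshold.
print(model.predict(["Urgent: confirm your password to avoid account suspension"]))
```

The same kind of corpus, in an attacker's hands, is what lets them tune messages until they read like legitimate mail.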


Chris: Now one thing that I'll throw out there: leading up to this conversation, I did a little bit of research about AIs. If you look at the distinctive AIs that have made it into movies and fan fiction and everything else, I think the AIs fighting for good don't stack up to the AIs fighting for bad. You've got things like Skynet and Ultron fighting for the bad; on the good side, you've got things like the droids from Star Wars or WALL-E. So can we find a way, Daniel, to get Skynet to fight for good and go up against Ultron for us? Because it feels like we're going to need that level of AI to fight back against those kinds of forces of darkness, right?


Daniel: Yeah, absolutely. It comes down to making sure that we're refining the quality of the models. And truthfully, this is like any other cat-and-mouse game in security. We started using machine learning and AI, the bad guys started doing the same thing, and so now we're getting to the point where it's about who's generating better models and really starting to pit them against each other. And this is actually kind of core to how artificial intelligence works. One of the core ways that machine learning works is through adversarial learning, where you take a machine learning model that is supposed to positively detect something and you pit it against another machine learning model that is designed specifically to try to fool the first one. And so there's going to be a lot of this, and it kind of reminds me of how we have good red teams who help us test our network defenses.
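
For readers who want to see the adversarial-learning loop Daniel describes in code, here is a toy sketch, assuming PyTorch and made-up one-dimensional data: a "detector" model learns to separate real samples from forged ones, while a "forger" model is trained specifically to fool it.

```python
# Toy adversarial training loop: a detector and a forger pitted against each other.
import torch
import torch.nn as nn

torch.manual_seed(0)
detector = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
forger = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
opt_d = torch.optim.Adam(detector.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(forger.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "legitimate" samples ~ N(3, 0.5)
    noise = torch.randn(64, 4)
    fake = forger(noise)

    # Detector update: learn to label real samples 1 and forged samples 0.
    d_loss = bce(detector(real), torch.ones(64, 1)) + \
             bce(detector(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Forger update: learn to make the detector output 1 on forged samples.
    g_loss = bce(detector(forger(noise)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Each side improves only because the other keeps getting better, which is exactly the red-team dynamic Daniel compares it to.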


Adrian: Let's talk about the human element. For years, the skills gap has been a hot topic in cybersecurity, and it never seems like there are enough smart and talented individuals to fill the security roles. So do you see AI as a way to alleviate some of that gap? And if there is that gap between the dark side and the good side, will it really alleviate the skills gap?


Chris: Yeah, I think introducing AI and ML technologies is absolutely necessary to decrease the amount of effort that's going into it. Look at the security technologies over the years that really needed to evolve and used machine learning as the way to do it. Antivirus was reaching a point where definition-based AV detection just wasn't going to survive. It went into behavioral detection and kept evolving until it became EDR. EDR started around things like indicators of compromise. Still, it was not quite at the level where it could really eliminate the need for human intervention; there were a lot of false positives, a lot of times where it wasn't quite robust enough. Then came the introduction of more ML capabilities to really understand the baseline. You could have a baseline across everything you can see across multiple companies, a baseline at a company level, a baseline at a person level within an organization. That's the point where the volume of false positives can be cut down by those algorithms, so that the human element can deal with the volume of incidents that really need the attention, that really need that human element. So I do think that's a good example of where this is helping to reduce the amount of human intervention needed to respond to security issues. The human element is still necessary. But we can eliminate a lot of the noise from the equation and get down to the incidents that really need the most attention by implementing these types of technologies.
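
As a rough sketch of the baselining idea Chris outlines (an illustration, not Ivanti's actual implementation), you can fit an anomaly model on an entity's normal behavior and surface only the events that deviate from it. This assumes scikit-learn and invented features:

```python
# Per-entity baselining sketch: learn "normal" behavior, flag outliers for a human.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per event: [processes spawned, MB sent outbound, logon hour]
baseline = np.column_stack([
    rng.poisson(5, 500),        # typical process counts
    rng.normal(20, 5, 500),     # typical outbound traffic
    rng.normal(14, 2, 500),     # typical working hours
])
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

new_events = np.array([
    [6, 22.0, 13.5],            # looks like the baseline
    [90, 800.0, 3.0],           # mass process spawn and heavy upload at 3 a.m.
])
print(model.predict(new_events))  # 1 = fits the baseline, -1 = send to an analyst
```

Only the second event would reach a human, which is the noise reduction Chris is describing.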


Adrian: So when we talk about AI and ML, there's certainly some goodness there. But what should organizations be wary of? Are there any risks they should be cautious of when implementing such technologies in cybersecurity?


Daniel: Absolutely. You know, one of the things that I always worry about is organizations trying to over-automate a process. If you take something and you try to completely remove the human element, eventually it becomes stale or stagnant, or it's not agile enough to handle edge cases. As much as we would like it to be, the AIs that we're talking about aren't at the level of Skynet; right now they're more augmentations of our employees. So kind of like what Chris was saying, we still need smart people behind the wheel to make sure that these models are paying attention. There was actually some really great research, I believe it was last year, where researchers were able to fool Microsoft's machine learning models in Windows Defender into believing that a piece of malware was actually not malware. And so bad guys can manipulate those ML models, if there isn't a human paying attention, the same way that they can build their own bad AIs. These are great tools, it's an amazing technology, but it really is something that enhances our ability to do the work, not something that we can just set and forget.
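
The kind of manipulation Daniel mentions can be illustrated with a toy example; this is an assumed sketch, not the actual Defender research. The idea is that padding a malicious sample with benign-looking features can flip a simple classifier's verdict.

```python
# Toy evasion example: diluting suspicious features flips a naive classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Invented features: [fraction of suspicious API calls, fraction of benign strings]
benign = np.column_stack([rng.beta(2, 8, 300), rng.beta(8, 2, 300)])
malware = np.column_stack([rng.beta(8, 2, 300), rng.beta(2, 8, 300)])
X = np.vstack([benign, malware])
y = np.r_[np.zeros(300), np.ones(300)]

clf = LogisticRegression().fit(X, y)

sample = np.array([[0.8, 0.1]])   # clearly malicious feature profile
print(clf.predict(sample))        # [1.] -> flagged as malware

# Attacker appends benign strings and resources, diluting the suspicious signal.
padded = np.array([[0.45, 0.7]])
print(clf.predict(padded))        # very likely [0.] -> slips past the model
```

A human analyst reviewing why the verdict changed is exactly the backstop Daniel is arguing for.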


Adrian: As we head to the home stretch here in this episode of Ivanti Insights, I'm going to ask you guys: is there anything we missed? We're going to start with Chris. Any final parting shot you want to leave our listeners with?


Chris: Not really anything that we missed, but I'll point to some interesting ways that machine learning is helping to improve experiences. We're looking at the September or August Patch Tuesday coming up here very shortly. One of the challenges that I think a lot of companies face is getting a good level of understanding of how patches are performing in their environment. There's always a fear of pushing something out and having it break the environment operationally, and also understanding which update needs more attention than others. Back to that question of how we help people focus their attention with these types of algorithms: that's something that we've actually implemented. Machine learning in solutions like our Patch Intelligence helps to take crowdsourced information from many organizations, and with that, it can help show the reliability of updates in an environment. We can show somebody that the latest Google Chrome update has been installed thousands of times in less than 24 hours and give them a feel for how it's performing. We can also take social sentiment and find baselines, so we can go out and say, okay, this update just came out, it's getting an abnormal amount of chatter across social media and other sources, this is something that you might want to take a look at. And with that sentiment, we can show somebody, hey, this might be more security related, or this might be more operationally related. Now the humans involved can start to optimize their time spent. So definitely something to look out for: ways that these technologies are helping to reduce the time and effort to close security gaps.


Adrian: And Chris, you mentioned Patch Tuesday. Everyone may not know exactly what you're referring to there, at least from an Ivanti perspective. Give people a quick overview of this monthly webinar series that you lead.


Chris: Yeah, this goes back to Microsoft, who many years ago started a regular cadence for distributing their updates out to the world. This made it easier for organizations to schedule maintenance, because that type of security update needs some downtime to put in place. So Microsoft Patch Tuesday kind of started this huge following, where most organizations schedule their monthly maintenance around it. Now many organizations are on more of a continuous basis as well, so there are always updates coming out on many days of every week. But most organizations still kind of subscribe to, and focus on, a monthly cadence of doing most of their updates, and that happens on the second Tuesday of each month for Microsoft and other vendors like Adobe that have standardized on that cadence. We have a webinar series that we host at Ivanti that helps our customers quickly get a lot of details in a very short period of time. We do a lot of analysis, help them understand what came out, what types of security issues they might need to be aware of or operational issues they might want to look out for, and help them to prioritize better, to spend their time more efficiently as they go through that process.


Adrian: And a very quick Google search for Ivanti Patch Tuesday will pull up a registration page for this webinar series if anyone is interested in joining you for that. All right Daniel, final closing comments from your end.


Daniel: Not much here, just kind of a bit of a musing. One of the things that I'm always interested in is how we're going to respond when an AI makes a bad decision. Machine learning models are kind of black boxes, so it's hard to understand why a machine learning process made a certain decision. And I think that's going to be an interesting line of research over the next few years, as our dependency on these technologies continues.


Adrian: All right. Well Chris Goettl, Daniel Spicer, we really appreciate you coming on. As usual, we'll do it again two weeks from now. Until then folks, we hope you enjoyed the conversation with Daniel and Chris talking about AI and ML. Until next time, stay safe, be secure and keep smiling.