What's Up with Tech?

AI: The New Playbook for Enterprise Risk Management

Evan Kirstel

Interested in being a guest? Email us at admin@evankirstel.com

Nick Kathmann, CISO at LogicGate, brings over 25 years of cybersecurity experience to tackle the paradox at the heart of modern risk management: security teams drowning in data while GRC teams are starving for it. This fundamental disconnect has long hindered effective enterprise risk management—until now.

AI is emerging as the bridge between these two worlds, combing through massive datasets to identify patterns and relationships that humans might miss. Drawing from his extensive background in highly regulated environments, Kathmann explains how AI can transform incident data, near misses, and control failures into actionable intelligence that helps organizations calibrate their risk tolerance and prevent threats before they materialize.

The conversation explores how cyber insurance is evolving through AI-powered underwriting that evaluates security control effectiveness with unprecedented precision. We also examine the governance challenges organizations face when SaaS providers unexpectedly enable AI features without proper opt-in procedures, creating what Kathmann colorfully describes as product teams "running with scissors" to meet market demands.

Perhaps most valuable is Kathmann's practical framework for implementing AI governance—understanding that different AI use cases require different risk evaluations. Whether you're enabling AI in an SAP instance, using GitHub Copilot for engineering, or building custom LLMs, each scenario demands consideration of data sensitivity, potential biases, and intellectual property implications unique to that implementation.

Looking ahead, Kathmann offers an optimistic view of AI's impact on GRC professionals. Rather than replacing compliance officers, AI will likely increase demand for human expertise by making risk data more accessible and actionable. The technology will serve as a co-pilot, handling routine tasks while enabling humans to make better-informed decisions about high-impact risks. For organizations ready to transform their approach to risk management, the journey begins with mapping connections between processes, controls, and risks—then implementing modern platforms capable of turning this complex web of relationships into strategic advantage.

Which aspects of your risk management program would benefit most from AI enhancement? The future of GRC is here—are you equipped to leverage it?

Support the show

More at https://linktr.ee/EvanKirstel

Speaker 1:

Hey everybody, today we're diving into one of the hottest topics in enterprise risk management: AI. From risk mitigation to compliance, everyone knows the rulebook, the playbook, the blueprint if there ever was one, is being rewritten by AI. So I'm really happy to have an innovator in this space today at LogicGate. Nick, how are you? I'm good. How about you? Good, thanks for joining. New England representation here, both in the Boston area. Before diving into all things AI and risk, maybe introduce yourself, your background, and how do you describe LogicGate these days?

Speaker 2:

Yeah, so my name is Nick Kathmann. I am the CISO here at LogicGate. I've been in tech, and in cybersecurity specifically, for going on over 25 years. I started very early in my career, back when I went to school for what was called Information Assurance, which was the original DOD-sponsored cybersecurity track within computer science. I've been working in mostly large organizations and highly regulated environments since then: everything from Verified by Visa and Mastercard SecureCode, to the world's largest SAP and Epic hosting providers, to some of the world's largest storage clouds that were white-labeled all around, so most of the storage clouds you've ever heard of. So mostly really large, really complex, really regulated environments. And now here at LogicGate I'm trying to help our customers grow and mature in their risk management and GRC journey.

Speaker 1:

Brilliant. I love that you're a practitioner and not just a marketing type. No offense to us marketing types. So let's start with the big picture. I mean, where do you see AI adding the most value now for folks like yourself in risk and compliance in the field these days?

Speaker 2:

I mean, it's really a data sciences problem. I think if you look at the security side of the house, we're swamped with data. We've got so much data. Just look at what most security organizations are paying for their SIEM costs and their data storage costs and you'll see. There's data coming in from all over the place, and every one of your tools is giving you data all the time. But on the GRC side, on the risk side, a lot of times they're starving for data.

Speaker 2:

So how do we bridge that gap? Every component within your cybersecurity program is linked in one way or another. One of the big questions I always get is: how do you set your risk tolerance? Well, look at your incident data, not just confirmed incidents where something bad happened, but also near misses. Say a phishing attack was successful, they got credentials, they got in, but our multi-factor stopped it. That was a near miss that your layered defenses caught. This is data that feeds what your phishing and social engineering risk should be at, and what level your risk tolerance should be, so you can keep it underneath that.
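To make that concrete, here is a minimal sketch, with entirely hypothetical weights and thresholds, of how confirmed incidents and near misses might roll up into a single indicator that gets compared against a risk tolerance:

```python
# Illustrative only: not LogicGate's method. Near misses get partial weight
# because they show the threat is active even when layered defenses caught it.

def phishing_risk_indicator(confirmed_incidents: int,
                            near_misses: int,
                            attempts: int) -> float:
    """Score 0..1: confirmed incidents count fully, near misses partially."""
    if attempts == 0:
        return 0.0
    # e.g. credentials were phished but MFA blocked the login -> near miss
    weighted_hits = confirmed_incidents + 0.4 * near_misses
    return min(1.0, weighted_hits / attempts)

RISK_TOLERANCE = 0.10  # hypothetical board-approved ceiling for this risk

score = phishing_risk_indicator(confirmed_incidents=1, near_misses=12, attempts=40)
if score > RISK_TOLERANCE:
    print(f"Phishing risk {score:.2f} exceeds tolerance {RISK_TOLERANCE:.2f}: act now")
```

The point of the sketch is that near misses still move the number, which is exactly the data most GRC programs never see.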

Speaker 2:

So where AI is really big is combing through massive data sets, finding patterns, finding components. And this, I think, is where the industry will lean: we're going to take all of this data that we have on the security side, on the tool side, on the program side, and really start to feed it into custom models and custom prompts, where it can go through, find those patterns, and say the last couple of times your company had these control failures, in conjunction with these risk indicators, in conjunction with this other event, is when you had something bad happen related to that risk. So that risk is actually rising, the risk level is going up, and you should probably take action to stop it now. That's really the biggest thing I can see AI doing for us right now: combing through the massive amounts of data and making sense of it.
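As a toy illustration of that pattern-finding (again, not LogicGate's actual models), here is a sketch that flags a risk as rising when control failures that historically co-occurred before incidents show up together again:

```python
# Hypothetical event shapes: (day, control failures observed, incident occurred?)
from collections import Counter
from itertools import combinations

history = [
    (1,  {"edr_gap", "patch_overdue"},                 False),
    (2,  {"edr_gap", "patch_overdue", "mfa_disabled"}, True),
    (9,  {"patch_overdue"},                            False),
    (15, {"edr_gap", "mfa_disabled"},                  True),
]

# Count which pairs of control failures co-occurred on incident days.
incident_pairs = Counter()
for _, failures, incident in history:
    if incident:
        incident_pairs.update(combinations(sorted(failures), 2))

today = {"edr_gap", "mfa_disabled"}  # failures observed right now
for pair, hits in incident_pairs.items():
    if set(pair) <= today and hits >= 2:
        print(f"Rising risk: {pair} preceded {hits} past incidents")
```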

Speaker 1:

Yeah, great point there. We're also seeing cyber insurance play a core part in risk mitigation. How is AI changing the way underwriting works?

Speaker 2:

Yeah. I mean, they're using it for the same reasons, just the other way around. For a long time, cyber insurance was really just you filling out questionnaires, and they would kind of rate you based on that. There wasn't a lot of underwriting data yet for them to understand what the risk is to this organization, in this region, and so on. The data the underwriters had just wasn't there yet, because at that time most companies had no idea they were actually compromised. If you read the Verizon breach reports from, you know, 10 years ago, most companies were breached for 18 months before they ever realized it. With ransomware, that's gotten shorter, because now, instead of just stealing your data and persisting for a really long time, attackers immediately hit you with extortion, trying to get money out of you faster.

Speaker 2:

So I think now there's more underwriting information, so they know what works and what doesn't work.

Speaker 2:

And one of the things you'll see in cyber insurance specifically is that most of the underwriters will now have a technical security engineer assigned to them as well.

Speaker 2:

So as they're reading the questionnaires, they can understand: did you actually implement this control? Does what the company is telling me make sense? And then, from the AI side, it's really taking all that data and looking for inconsistencies, looking for any patterns that don't make sense, and then giving it something like a scoring rubric. As an example, I was speaking to some underwriters and their security engineers a while ago: certain EDR companies perform extremely well, and certain ones don't. So your rubric score, based on which provider you used, can reduce your assessed risk, but it also factors in what other layers of controls you have on there and how they work together. So I think AI is really helping them the same way we're looking at risk management: going through the data and coming up with those patterns, those indicators, that they can really write policies against.
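Here is a hypothetical sketch of that kind of rubric; the vendor scores, control layers, and weights are all invented for illustration:

```python
# Invented claims-data scores per EDR vendor; real underwriters would
# derive these from loss history.
EDR_PERFORMANCE = {"vendor_a": 0.9, "vendor_b": 0.6}

def underwriting_score(edr_vendor: str, control_layers: set) -> float:
    base = EDR_PERFORMANCE.get(edr_vendor, 0.3)  # unknown vendors score low
    # Layered controls compound: each present layer trims remaining exposure.
    for layer in ("mfa", "offline_backups", "email_filtering"):
        if layer in control_layers:
            base += (1.0 - base) * 0.25
    return round(base, 2)

print(underwriting_score("vendor_a", {"mfa", "offline_backups"}))  # 0.94
```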

Speaker 1:

Yeah, great insight. So you mentioned the data problem. What other big vulnerabilities or blind spots are you seeing out there in enterprise AI adoption today?

Speaker 2:

I mean, AI is going to completely change the way we do business in this world. It affects everything from your FP&A to your product, to engineering, to how you do security, to how you do marketing, and the use cases are very different. And then a lot of times, what you'll see is your company has a whole bunch of SaaS services, a whole bunch of applications they're using for support, and then, without an opt-in, those vendors will just turn on the AI functionality.

Speaker 2:

Now you're kind of chasing it, because you've already done your TPRM assessment against them. You understood everything that was risky about that vendor, but now there's a whole new risk lens you have to evaluate against. And I'll tell you, with most of these companies, when you start asking them the AI questions, like how do they run their AI program, how do they do AI governance, it's like looking at a deer in headlights. They don't know. The product teams are running very, very quickly, I almost call it running with scissors, to get the AI functionality in there and keep up with market demands, and they're not always stepping back and asking, is what we're doing appropriate? And then companies on the receiving end are trying to figure out whether what the product teams did is appropriate.

Speaker 1:

Yeah, of course, you don't know what you don't know. So what does a successful AI governance framework look like in practice? Where are people falling short?

Speaker 2:

I mean, the biggest thing is people looking at it as single-threaded, you know, AI is AI. But it's like the Internet. It changes everything.

Speaker 2:

So really, start by understanding what use cases you're trying to achieve. How you assess the risk of, you know, we enabled AI in our SAP instance, is very different from how you assess the risk of, I'm going to build my own LLMs for use in my products, or I'm going to implement Copilot for engineering, or I'm going to implement Copilot for end users. You have to really step back and ask: what is the use case? And then a lot of it comes down to what you've been doing in risk management for a long time: data sensitivity and impact. On the marketing side, if we're going to use an AI tool to help write copy and build images and sheets and product briefings, for the most part it's taking data you're giving out to the internet for free anyway, so the data sensitivity isn't that important. But there could be a lot of issues around things like bias and cultural sensitivities, the AI putting out things it shouldn't have and upsetting people. That risk case, that risk acceptance, that governance use case is going to be very, very different from, I'm now sending all of my source code out to a third party's AI. Now I'm taking my sensitive IP and putting it somewhere it could potentially be exposed. That's a completely different use case that you have to evaluate.

Speaker 2:

So really, I would say, it comes down to understanding the data, the use case, and then all of the specifics around that. From there, it's almost just treating it like any other type of risk: the prevalence, the impact, the likelihood, and then trying to understand the technical details around the AI usage. And that's where you really have to start digging into the acceptable use policies and AI governance of the companies and the products that you have.
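A minimal sketch of that "different use cases, different evaluations" idea, with made-up use cases, dimensions, and 1-to-5 ratings:

```python
# Hypothetical per-use-case risk profiles along the dimensions discussed
# above: data sensitivity, bias exposure, and IP exposure.
AI_USE_CASES = {
    "marketing_image_gen": {"data_sensitivity": 1, "bias_exposure": 4, "ip_exposure": 1},
    "sap_ai_features":     {"data_sensitivity": 5, "bias_exposure": 2, "ip_exposure": 3},
    "copilot_source_code": {"data_sensitivity": 4, "bias_exposure": 1, "ip_exposure": 5},
}

def evaluate(use_case: str) -> str:
    dims = AI_USE_CASES[use_case]
    worst = max(dims, key=dims.get)  # the dimension that dominates the review
    return f"{use_case}: focus review on {worst} (rated {dims[worst]}/5)"

for uc in AI_USE_CASES:
    print(evaluate(uc))
```

The design point mirrors the conversation: the marketing case is dominated by bias and cultural sensitivity, while the source-code case is dominated by IP exposure, so one blanket AI policy would misjudge both.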

Speaker 2:

We've seen some where they went in and immediately turned on a new AI feature in software we already had, and it was an opt-out feature, and they changed their terms to say any data you put into it is now licensed to the company, to third parties, and for all public uses. I had to cut that product out of there. Whereas others will take the complete opposite approach. They'll say, you know, we are not training a model on your data, we do not store the history or retain any of this. So really: understand the data, the implementation, the use case, and make the distinction from there.

Speaker 1:

Yeah, great point. So we've been talking about AI generating compliance concerns. Let's look at the flip side: how can it alleviate compliance issues and concerns? What's the big upside or potential there?

Speaker 2:

This is the part that really excites me, because now it can go through the data the same way we're looking at risk. Everything in cybersecurity is interrelated in one way or another, same as business, same as life. Your risks are intertwined with your controls, your controls with your tech stack, or your people, process, and technology, however you put that.

Speaker 2:

So each one of them should have some evaluation of: is the control being deployed, and how effective is the control? A KPI and a KRI kind of thing. The key performance indicator: is the control actually out there, on every system? The key risk indicator: is it actually working?
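A quick sketch of that KPI/KRI split, using hypothetical asset telemetry: coverage tells you whether the control is deployed, efficacy tells you whether it works where it is deployed:

```python
# Invented asset records; real data would come from EDR and asset inventory.
assets = [
    {"name": "web-01", "edr_installed": True,  "edr_blocked_last_test": True},
    {"name": "web-02", "edr_installed": True,  "edr_blocked_last_test": False},
    {"name": "db-01",  "edr_installed": False, "edr_blocked_last_test": False},
]

deployed = [a for a in assets if a["edr_installed"]]
kpi = len(deployed) / len(assets)  # KPI: is the control out there everywhere?
kri = sum(a["edr_blocked_last_test"] for a in deployed) / len(deployed)  # KRI: does it work?

print(f"EDR KPI (deployed): {kpi:.0%}, KRI (effective where deployed): {kri:.0%}")
```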

Speaker 2:

But then all of that ties back to policy statements, to incidents, to vulnerabilities, to assets, to everything across the board. So you build out, I almost call it, if you've ever seen one of those graph databases where you click on a node and it spiders out, think of all the data coming into your GRC program in that context. This control I'm looking at, this control failure, is related to this risk, this risk to this outcome. These are the control failures against it on this asset, which has these vulnerabilities, and it's tied to this application with this much ARR, which is tied to these third parties, et cetera. Getting all the data in and building that web of relationships around it, the matrix around it, is what AI is going to be very, very good at: combing through all of the data, finding those relationships, and building it out so that you, as the GRC practitioner, get the full context and can make that distinction.
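As a rough illustration of that spidering graph view, here is a sketch using networkx as a stand-in for a GRC graph database; all nodes, edges, and the CVE placeholder are hypothetical:

```python
# pip install networkx
import networkx as nx

g = nx.DiGraph()
# risk -> control -> asset -> vulnerability / application, as described above
g.add_edge("risk:ransomware", "control:edr", kind="mitigated_by")
g.add_edge("control:edr", "asset:web-01", kind="deployed_on")
g.add_edge("asset:web-01", "vuln:CVE-XXXX", kind="exposed_to")
g.add_edge("asset:web-01", "app:billing ($2M ARR)", kind="supports")

# Everything downstream of a control failure, in one traversal:
for node in nx.descendants(g, "control:edr"):
    print("impacted:", node)
```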

Speaker 2:

The example is, you know, prompting. If you say, hey, build me a business plan, it'll go out and build you a business plan. But if you say, hey, build me a business plan, it should be for SaaS software, you're going to get better results. Add, and I want to get to a billion-dollar valuation within five years, and it gives you better results still. As you keep feeding it more and more information, you're going to get a better and better business plan.

Speaker 2:

The same is true with GRC. If you only ask, how is this control evaluated, it's going to go out and just find stuff on the internet and say, this is how I would do it: according to NIST, according to OOS, according to whoever, this is how you should evaluate that. But if you start giving it all the extra contextual data, now it knows how that control is working in your company, in your organization, with your tech stack. And that's where most people fail: they immediately go to, the Verizon breach report says this, or Mandiant says this, and across the industry, this is the likelihood of it happening to us. Now, with all of the data, GRC with AI can come in and say: in the context of my business, it's this, not just generalities from what researchers are finding.
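A small sketch of that contextualization step: the same evaluation question asked generically versus grounded in organization-specific GRC data before it goes to a model. The data fields and prompt wording are illustrative assumptions; any LLM client would slot in where the string is printed:

```python
# Hypothetical organization-specific context pulled from a GRC platform.
org_context = {
    "control": "quarterly user access review",
    "tech_stack": "Okta + Salesforce",
    "recent_failures": "2 missed reviews in Q3",
    "linked_risk": "unauthorized access to customer data",
}

generic_prompt = "How should this control be evaluated: user access review?"

contextual_prompt = (
    "How should this control be evaluated: user access review?\n"
    + "\n".join(f"- {k}: {v}" for k, v in org_context.items())
    + "\nAnswer for this specific environment, not industry generalities."
)

print(contextual_prompt)  # pass to your model of choice instead of printing
```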

Speaker 1:

Great points. So if you were to give one piece of advice to organizations just starting on this journey, maybe just adopting AI in GRC today, what would it be? How would you suggest they get started the right way?

Speaker 2:

Yeah, I mean, it's a data sciences problem, and the first part of data science is getting the data, and getting it into a modern platform that will let you actually analyze it and build those relationships. So step back and say, okay, here are all the processes we have. We have a user access review that happens once a quarter. Why do we do that? What control does that process implement? And if I have this control, what is the risk associated with it? Then really step back and ask, okay, what are we trying to stop? And build out that big tree from there.

Speaker 2:

And then, as you start building out all this complexity, all these data hierarchies, you'll start to understand: here is the data I now need in order to understand where the risks are, where the controls are, where the control failures might be, how this contributes to the bottom line of the business, how this contributes to this product, how this contributes to pipeline.

Speaker 2:

So just start by understanding the problem. Even on a whiteboard, draw out that flow, then identify all the data points you want to bring in, and then get a modern platform that can really start to bring this stuff in, that has AI built into it, to start making sense of it. The first step is understanding the problem and scoping the problem, a whiteboard, a piece of paper, a charting tool, Visio, whatever can do that really well. Then figure out, okay, now that I understand the scope, how do I get the data to support it? And then put it into something where you can actually analyze it and get insights.

Speaker 1:

Brilliant. So you were named a Leader in the Forrester Wave recently, I was just looking. Congratulations on that. What do you think was your secret there, your special sauce? When you talk about your platform, what's unique about LogicGate?

Speaker 2:

Yeah, I think the big thing for us is, one, ease of use.

Speaker 2:

Both our power users, who are like the admins of the platform for the customer, and the end users, who a lot of times are doing the assessments, the third-party risk assessments, the privacy assessments, whatever, always rave about how easy and modern the platform is and how well it works.

Speaker 2:

It's much easier to use than a lot of these older platforms, remember the ones with 17 different shortcuts and stuff like that you had to memorize a long time ago? And because we built our platform on a graph database, we can model those really complex relationships really, really well. It's a lot more flexible to build in, and to map that data, all those connections we were talking about before, than it would be in a traditional SQL database. And I think that's where we really shine: helping bubble up all of that context so you get a truly holistic view, instead of just, I understand my third-party risk, I understand my vulnerabilities. It brings everything together into one view.

Speaker 1:

Brilliant. So, last question, a little bit controversial: people are worried about losing their jobs to AI. Specifically in GRC, do you think there's going to be some displacement of humans who are doing risk management today, or is this giving them sort of superpowers? What do you think the human impact is moving forward?

Speaker 2:

Yeah, I mean, with AI you kind of have to break it out into multiple, not use cases, models maybe. So you have co-pilots, which is where it's helping you: it's helping a human with a bunch of information, it's helping a process. Then you've got agents, where it's actually making the decision for you. I don't see agents making high-impact or high-business-value decisions anytime soon; the technology is still just not quite there. On the co-pilot side, where it's giving you a lot of that context and building the relationships so that a human can be in the loop, I think that's where the value is now. As for whether it gets strong enough to actually take over fully:

Speaker 2:

You know, it eventually gets to the point where it can run specific agents for lower-risk things.

Speaker 2:

You know like third-party risk management is a prime example of this.

Speaker 2:

It can go through your contract, your SOC 2, your CAIQ, your ISO, your vulnerability report, your pentest report, et cetera, and really come down and say: okay, this is a low-risk vendor, it has access to data that's not really sensitive to the company, and from a resilience perspective it's not going to take down the company.

Speaker 2:

If this vendor goes down, whether due to a cybersecurity event or an operational event or whatever, that's where an agent can come in and say: okay, we've evaluated all of this, and based on our scoring rubric and the sensitivity involved, we can just go ahead and approve this as a low-to-moderate-risk vendor. But as it gets into the higher tiers of risk, I don't see that happening. In the short term I almost see it driving the hiring of more GRC people, because the data will finally be there so you can actually make those decisions and find the risks before they become a problem, rather than the other way around. Now, I can't predict what will happen 20 years in the future as AI becomes a lot more powerful, who knows? But as of right now, it's a tool in the toolbox, and it's definitely going to be an assistant for a long time.
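A minimal sketch of that low-risk-only agent pattern, with invented thresholds and evidence fields: auto-approve only when the stakes are low and every piece of evidence clears the bar, otherwise escalate to a human:

```python
# Hypothetical evidence scores (0..1) derived from SOC 2, CAIQ, pentest, etc.
def triage_vendor(evidence_scores: dict,
                  data_sensitivity: int,      # 1 (public) .. 5 (regulated)
                  resilience_impact: int) -> str:  # 1 (none) .. 5 (takes company down)
    low_stakes = data_sensitivity <= 2 and resilience_impact <= 2
    evidence_ok = all(score >= 0.7 for score in evidence_scores.values())
    if low_stakes and evidence_ok:
        return "auto-approve (low/moderate risk)"
    return "escalate to human reviewer"  # higher tiers stay with people

print(triage_vendor(
    {"soc2": 0.9, "caiq": 0.8, "pentest": 0.75},
    data_sensitivity=2,
    resilience_impact=1,
))  # -> auto-approve (low/moderate risk)
```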

Speaker 1:

That's great. Well, good news for compliance officers everywhere. Thanks so much for joining, really insightful view behind the curtain at how the magic is made. Thanks so much, Nick. Appreciate your time.

Speaker 2:

Thank you for having me. I enjoyed this.

Speaker 1:

And thanks everyone for listening and watching. Also check out our new TV show, Tech Impact TV, on Bloomberg and Fox Business. Thanks everyone, bye-bye.