ClearTech Loop: In the Know, On the Move

From Tool-Driven Cyber to Adaptive AI Defense with Ryan Lutz

ClearTech Research / Jo Peterson Season 1 Episode 35


Cybersecurity has become a tool-driven industry. Organizations buy platforms, stack controls, generate alerts, and ask humans to stitch it all together under pressure.

In this episode of ClearTech Loop, Jo Peterson sits down with Ryan Lutz to explore what changes when AI becomes part of the security workflow. Not as another console, but as an adaptive capability that helps teams interpret signals faster, prioritize more intelligently, and respond with more consistency when the volume is too high for humans to manage alone.  

The conversation focuses on three real-world themes: why the SOC is the best initial use case for AI augmentation, how leaders should think about the inherent exposure that comes with more AI and more code, and why Ryan's research on AI malware matters for building adaptive defensive responses.

Subscribe to ClearTech Loop on LinkedIn:
https://www.linkedin.com/newsletters/7346174860760416256/  

Key Quotes 

“Cyber is a very tool-driven industry… with the implementation of AI being generative, I think that we’re going to see AI being used more in a way that’s adaptive.” — Ryan Lutz  

“In a setting like a SOC analyst… you have a ton of information coming in… millions of possible attack vectors… it’s very applicable to use AI… to generate a response very quickly and more efficiently.” — Ryan Lutz  

“How should the CISO be thinking about AI adoption… from an organizational governance perspective, because you don’t want to be the Department of no.” — Jo Peterson  

Three Big Ideas from This Episode 

1) Adaptive beats tool-driven 
AI helps security teams move beyond tool sprawl by accelerating interpretation, prioritization, and decision-making in high-volume environments.  

2) The SOC is the natural first use case 
SOC work is overwhelmed by inputs and possible attack paths. Ryan explains why AI can rank what matters, accelerate analysis, and suggest response paths quickly and efficiently.  

3) Governance must guide adoption without killing innovation 
More AI and more code create more exposure. The leadership job is balance: govern the use and guide adoption without becoming the “Department of No.”  

Episode Notes / Links 

🎧 Listen: In Buzzsprout Player
▶ Watch on YouTube: https://youtu.be/-2mxfnCexjQ   
📰 Subscribe to the Newsletter:
https://www.linkedin.com/newsletters/7346174860760416256/  


Jo Peterson:

Hi, I'm Jo Peterson. I'm the Vice President of Cloud and Security for Clarify360 and the Chief Analyst at ClearTech Research. We're here today with ClearTech Loop; we're on the move and in the know, and we're visiting with Ryan Lutz. Hi, Ryan!

Ryan Lutz:

Hello, Jo. Thank you for having me.

Jo Peterson:

Oh, I'm so glad that you're with me. Y'all, Ryan is an AI cybersecurity researcher, and he's working on his master's in cyber at The Citadel. But Ryan's had some great hands-on experience in AI. He was a data engineer for AgFirst Farm Credit Bank, and while he was there, he created locally hosted AI solutions for 750 employees. He wrote an AI governance framework placing controls to regulate AI efforts. He authored an AI security research paper outlining that whole process, and he created a website for documentation and to supply AI to 2,500-plus association members and 80,000 farmers. Y'all didn't know that farmers were using AI, but they are. They are. Hey, Ryan. Well, anything you want to say about your time doing that work for AgFirst Farm Credit Bank? What was one of the most interesting things you learned during that role?

Ryan Lutz:

Definitely. Well, I'd say one of the most interesting things about that role was, you know, using existing technologies to solve problems in new ways, implementing AI in a way that had never been done before. Providing AI to farmers is not something that people normally think of, so that was very exciting.

Jo Peterson:

But, you know, it's interesting. People don't think that farmers use that much technology, but I had a chance to work with a farm equipment manufacturer many years ago, and they were very far ahead in terms of the technology in their equipment. They had built technology into their equipment so that farmers could see things like what the precipitation was going to be in the field, all kinds of things. We just don't generally think of ag as being that tech-forward, but it definitely is.

Ryan Lutz:

I know, it was surprising to see, and I'm glad that I had the opportunity to work there and complete that project.

Jo Peterson:

Yeah. Well, let's dig into the show questions. So let me start with the first: how can cybersecurity professionals leverage generative AI to sort of break out of that traditional tools-and-tech mindset, and maybe drive more innovative thinking and execution in their security programs?

Ryan Lutz:

Jo, that's a great question. And, you know, I think the real consideration there is that cyber is a very tool-driven industry. We have firewalls, scanners, which are all reactive, and that's great, but with the implementation of AI being generative, I think we're going to see AI being used more in a way that's adaptive, so we can make decisions and simulate real decision-making without, you know, the exposure and the vulnerabilities of a real person trying to create an attack there.

Jo Peterson:

Yeah. And you know, you're entering the workforce, you have some hands-on experience, but you're still relatively new in your cyber career. One of the things I'm hearing from some of the seasoned cyber professionals, and I'd love to get your opinion on this, is that AI is, especially in SOC situations, helping their teams level up. It's not only pointing to the problem; it's ranking the problem and then giving them a remediation path. Any thoughts there?

Ryan Lutz:

Well, I mean, one of the great things about AI is that it's able to take in a lot of information without, you know, having a physical person sit there and parse through things. So in a setting like a SOC analyst, where, you know, you have a ton of information coming in, you know, millions of possible attack vectors, it's very applicable to use AI in that situation, because we're able to, you know, parse through this and generate a response very quickly and more efficiently than having someone sit there and, you know, physically do that.

Jo Peterson:

But do you feel like it's going to give less seasoned folks in the cyberspace kind of a leg up? That it's going to help them advance?

Ryan Lutz:

Definitely. I think that, you know, one of the great things about AI is that it's able to really apply to anyone. People who don't have as much experience are able to use AI in more of an educational or informative way; like you just said, putting in information and getting a response where you're able to find a solution you normally wouldn't have seen is very helpful. But more experienced people are able to do the same thing while saving time. So I think there's an advantage on both ends of the spectrum, for those who are more experienced and those who are less.

Jo Peterson:

What would you say to your peers that are worried that AI is going to take their jobs?

Ryan Lutz:

That's a great question, one that I've been asked a lot. And I'd say that it's like any other technology. When the internet came out, when computers were starting to be a thing, it created more jobs; look at how the tech industry is today. So those who think their job is going to go away may have some merit there, but I think that overall it's just going to improve stability and efficiency within roles. It's not like those jobs are going to go away; your tasks and day-to-day might change, but there's still going to be a need for that role. And with regard to that, the increased job market within AI itself is going to be there. So there's definitely work there.

Jo Peterson:

To just do it right? Because who's going to manage non-human identities? Come on now. So, speaking of a balancing act: CISOs think about this sometimes, and I'm wondering how you're thinking about it. How can organizations embed security and privacy controls into AI model development without sort of slowing down innovation?

Ryan Lutz:

So, you know, first and foremost, I think it's important to understand that there is a concern, because with more code and with more tools, there is going to be more exposure to risk. That's just inherent; that's just how things are. But the implementation of AI from a defensive standpoint is where things become very interesting, and that's part of the research that I'm doing at The Citadel at the intersection of AI and malware. We're actually creating AI malware. And the interesting part is that people look at that in a negative sense, you know, you're creating something that has the potential to harm or do bad. But because we're in a controlled, academic setting, we're able to produce this in a way that is informative, in a way that protects people, and at the same time create a defensive IDS or IPS, an adaptive response to this concern. Because we're there and in control of all aspects of it, we're able to really mitigate any risk. But back to your question: I think it's really just important to understand that this is a necessary cost. As AI becomes more and more embedded into our daily lives and our ecosystem, that risk is going to be there whether we like it or not. But there are also ways to combat it, with the implementation of AI itself.

Jo Peterson:

Well, first of all, what you're doing at school sounds super cool. I mean, geez Louise, that sounds cool. Okay, last question: how should the CISO be thinking about AI adoption, the two sides of the coin, in terms of its use to secure against emerging threats, but also from an organizational governance perspective? Because you don't want to be the Department of No.

Ryan Lutz:

Definitely, that's a great question. My previous experience lies in, you know, finance and defense, so I definitely understand how there are roadblocks and hierarchy in getting things done. But I think it's important to consider the fact that AI is coming, AI is here, and it's going to be more adopted as time goes on. It's one of those things where it's not the cure for everything, it's not the silver bullet; not every facet of work needs AI. But at the same time, competitors are implementing it, attackers are implementing it. So I think from all aspects it's important to understand and consider how AI can be used in a positive way.

Jo Peterson:

Great answer. Well, we'll have to have you come back and talk MCP servers, or securing NHIs, or something. I'm sure you're going to be learning lots of cool stuff. Thank you, everyone, for joining, thank you for your time, and we'll see you all later. Thank you.