ClearTech Loop: In the Know, On the Move
ClearTech Loop is a fast, focused podcast delivering sharp, soundbite-ready insights on what’s next in cybersecurity, cloud, and AI. Hosted by Jo Peterson, Chief Analyst at ClearTech Research, each 10-minute episode explores today’s most pressing tech and risk issues through a business-focused lens.
Whether it’s CISOs rethinking cyber strategy or AI reshaping risk governance, ClearTech Loop brings clarity to a shifting landscape—built for tech leaders who don’t have time for fluff.
We cut through hype. We rethink assumptions. We keep you in the loop.
From Reactive to Predictive in AI Security with Jen Waltz
Cybersecurity has been trapped in a reactive cycle for years: a new threat emerges, a new tool gets purchased, and teams get overwhelmed by alerts.
In this episode of ClearTech Loop, Jo Peterson sits down with Jen Waltz (Chief Information Security Officer at IMAJENATIVE) to unpack how generative AI can fundamentally disrupt that cycle—shifting the focus from managing tools to achieving strategic outcomes.
The conversation goes beyond “faster alerts” and gets practical about what’s changing right now:
- Moving beyond alert triage into predictive threat hunting, including simulating adversary behavior and generating TTP playbooks—especially when paired with threat intel and MITRE ATT&CK data (see the sketch after this list).
- Upskilling SOC teams by using GenAI to reduce menial work, provide clearer remediation paths, and support more anticipatory defense postures.
- Embedding security, privacy, and governance early so “secure-by-design” becomes a business enabler, not a speed bump.
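For the technically minded: below is a minimal sketch of what pairing a playbook prompt with MITRE ATT&CK data could look like. The ATT&CK STIX URL points to MITRE's public dataset, and the playbook prompt is the one Jen quotes in the episode; everything else (the helper names, the model choice, the OpenAI-compatible client) is an illustrative assumption, not something the episode prescribes.

```python
# Sketch: ground a GenAI "TTP playbook" prompt in real MITRE ATT&CK data.
# Assumes the `requests` and `openai` packages; model name and helper names
# are illustrative, not from the episode.
import requests
from openai import OpenAI

ATTACK_STIX = ("https://raw.githubusercontent.com/mitre-attack/"
               "attack-stix-data/master/enterprise-attack/enterprise-attack.json")

def ransomware_techniques(limit=15):
    """Pull ATT&CK attack-pattern objects whose descriptions mention ransomware."""
    bundle = requests.get(ATTACK_STIX, timeout=60).json()
    hits = [o for o in bundle["objects"]
            if o.get("type") == "attack-pattern"
            and "ransomware" in o.get("description", "").lower()]
    return [(o["name"], o.get("description", "")[:200]) for o in hits[:limit]]

def build_playbook_prompt():
    # The quoted instruction is Jen's example prompt from the episode.
    context = "\n".join(f"- {name}: {desc}" for name, desc in ransomware_techniques())
    return ("Generate a TTP playbook for a ransomware group targeting the "
            "manufacturing sector in Q4 2025, based on recent advisories.\n"
            "Ground every step in these MITRE ATT&CK techniques:\n" + context)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": build_playbook_prompt()}],
)
print(resp.choices[0].message.content)
```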
Jen also gives a clear governance warning: as AI adoption accelerates, organizations must guide usage with approved tools and acceptable-use controls—especially to reduce the risk of sensitive data being dropped into consumer AI chat tools like ChatGPT.
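What might those acceptable-use controls look like in code? Here is a deliberately tiny sketch of a prompt gate that enforces an approved-tool list and flags likely sensitive data before it leaves the org. The tool names and regex patterns are hypothetical stand-ins; a real deployment would lean on a DLP engine and the organization's actual policy.

```python
# Sketch: a tiny acceptable-use gate that checks outbound prompts before
# they reach any AI tool. Tool names and patterns are illustrative only.
import re

APPROVED_TOOLS = {"internal-copilot", "hardened-llm-gateway"}  # hypothetical names

SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def check_prompt(tool: str, prompt: str) -> list[str]:
    """Return a list of policy violations; empty means the prompt may go out."""
    violations = []
    if tool not in APPROVED_TOOLS:
        violations.append(f"'{tool}' is not on the approved-tool list")
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            violations.append(f"possible {label} detected in prompt")
    return violations

print(check_prompt("chatgpt", "Summarize: jane.doe@corp.com, SSN 123-45-6789"))
# -> ["'chatgpt' is not on the approved-tool list",
#     'possible email detected in prompt', 'possible ssn detected in prompt']
```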
If you’re responsible for security operations, AI strategy, or governance, this episode offers a grounded path for how to adopt GenAI without losing control.
👉 Subscribe to ClearTech Loop on LinkedIn:
https://www.linkedin.com/newsletters/7346174860760416256/
Key Quotes
“Cybersecurity has long been trapped in this reactive cycle… generative AI… can fundamentally disrupt the cycle by shifting the focus from managing tools to achieving strategic outcomes.” — Jen Waltz
“The CISO is no longer this superhero defender of the perimeter. You have to become a business strategist…” — Jen Waltz
Three Big Ideas from This Episode
1. GenAI can break the reactive cycle—if teams target outcomes, not tools
Jen frames GenAI as an opportunity to move beyond buying more technology and instead shift security programs toward strategic outcomes and anticipatory defense.
2. Predictive threat hunting becomes practical with TTP playbooks + MITRE ATT&CK context
Rather than only prioritizing alerts, Jen describes prompting GenAI to simulate adversaries and generate playbooks—then connecting that to threat intel and MITRE ATT&CK data to anticipate attacker evolution earlier.
3. AI governance is a leadership mandate—and the CISO role expands
Jen argues the CISO must operate as a business strategist balancing innovation enablement with risk governance. That includes guiding internal AI use with hardened, approved tools and clear controls—without shutting down creativity.
🎧 Listen: Buzzsprout player above
▶ Watch on YouTube: https://youtu.be/EEf0eRdCfzg
📰 Subscribe to the ClearTech Loop Newsletter:
https://www.linkedin.com/newsletters/7346174860760416256/
Resources Mentioned
- MITRE ATT&CK Framework: https://attack.mitre.org/resources/attack-data-and-tools/
- NIST Cybersecurity Framework (CSF): https://www.nist.gov/cyberframework
- ISO/IEC 27001 (ISMS): https://www.iso.org/standard/27001
- ISO/IEC 42001 (AI Management System): https://www.iso.org/standard/42001
Jo Peterson: Hey everyone, thank you so much for joining. I'm Jo Peterson, Vice President of Cloud and Security for Clarify360 and Chief Analyst at ClearTech Research. And I'm here today with Jen Waltz, Chief Information Security Officer for IMAJENATIVE. Hi, Jen.
Jen Waltz: Hi, Jo. It's great to see you again. Thank you so much for having me back on the show.
Jo Peterson: Thank you for being here. In case you haven't joined before, this is ClearTech Loop; we're on the move and in the know. Let me tell you a little bit about Jen before we dig in and start asking her some questions about AI security. Jen has more than 15 years of experience in both IT and security. She's a lawyer, y'all, a lawyer, and she's held roles at Equinix, Unisys, and Microsoft, so she's like the triple threat. So this should be an interesting conversation. Am I right? Look, she's flexing her muscles now. So Jen, how can cybersecurity professionals, in your opinion, leverage generative AI to break out of the traditional tools-and-tech mindset we see in security all the time, and maybe drive some more innovative thinking and execution in their security programs?
Jen Waltz: So Jo, first and foremost, I love the thought leadership that you have here on ClearTech Loop. You really bring the heat regarding cybersecurity and AI thought leadership, so thank you for having me. The one thing I thought about with this question is that cybersecurity has long been trapped in this reactive cycle: a new threat emerges, a new tool is purchased to counter it, and then the team becomes overwhelmed by all these alerts. So generative AI, or GenAI, is offering this once-in-a-lifetime opportunity to fundamentally disrupt the cycle by shifting the focus from managing tools to achieving strategic outcomes. And the way I look at this is, you have to move beyond alert triage to predictive threat hunting, right? We've got to move beyond simply using AI to prioritize SOC alerts. You can prompt GenAI to simulate some type of adversarial behavior, like, quote, "generate a TTP playbook for a ransomware group targeting the manufacturing sector in Q4 2025, based on recent advisories." What that prompt does is enable proactive threat hunting, moving from reactive defense to more of a predictive posture. And when integrated with threat intel platforms and MITRE ATT&CK data, GenAI can help anticipate attacker evolution before it reaches any type of production environment. In addition to that, it also automates the cybersecurity grunt work. This is one of the things we're seeing with how jobs are changing as a result of generative AI. Companies are using generative AI to reduce all the toil and trouble in drafting security policies aligned with frameworks like NIST or ISO, right? They're using it to write remediation scripts for common CVEs, they're generating tailored incident reports for executives, and they're building user-friendly, contextualized security awareness content. So this is allowing security teams to really focus on complex, high-value activities, such as designing a zero trust network architecture or managing cyber insurance negotiations.
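As one hedged illustration of the "grunt work" automation Jen describes, here is a sketch that turns raw SOC alert data into an executive-ready incident report draft. The alert fields, prompt wording, and model are assumptions for the example; the episode's point is the workflow (draft with GenAI, review by humans), not any specific API.

```python
# Sketch: turn raw alert data into an executive-ready incident report draft.
# Field names and prompt wording are illustrative; analysts still review the
# output before it ships anywhere.
import json
from openai import OpenAI  # assumes an OpenAI-compatible setup, as above

def incident_report_prompt(alerts: list[dict]) -> str:
    return (
        "Draft a one-page incident report for a non-technical executive "
        "audience. Lead with business impact, then timeline, then remediation "
        "status. Avoid jargon. Source alerts:\n" + json.dumps(alerts, indent=2)
    )

alerts = [{"id": "A-1042", "severity": "high",
           "summary": "Ransomware beacon from a manufacturing-floor host",
           "status": "contained"}]

client = OpenAI()
draft = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": incident_report_prompt(alerts)}],
)
print(draft.choices[0].message.content)  # human review happens after this step
```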
Jo Peterson: I like that. I got a couple of things out of that that I want to highlight. You brought in the MITRE ATT&CK framework, which is proactive by nature, not reactive, and that's great. And the other thing you talked about, which led my mind that way, was maybe the ability to upskill SOC technicians, right?
Jen Waltz: Yeah,
Jo Peterson: Instead of... you know, what I've seen, for the folks that are using this, and I'd love to get your thinking, is that AI provides a remediation path. So it'll suggest, right? Instead of just finding an alert and saying, okay, good luck, go fix it.
Jen Waltz: Yeah,
Jo Peterson: Not only will it suggest a path forward, but, depending upon the complexity of the prompt and how it's been utilized, it will come back and prioritize. So I think it's a great way to upskill maybe some of the more junior folks in the SOC. What do you think?
Jen Waltz: I love that. So think about it. As people are using generative AI, I'm not quite sure if they think their prompts are going to break it or something, Jo, because you can ask it the most complex things, and it will break it down for you, like a math tutor. Think about...
Jo Peterson: One way to think of it: go to school. Going to school, well, you brought up something else, and that takes me to the next question. What I'm hearing you talk about, also, as an underlying theme, is balance. So how can organizations embed security and privacy controls into AI model development without slowing down innovation? Any thoughts there?
Jen Waltz: Yeah. Oh, I love these questions, because they basically build on each other, right? So the first question is, how do you execute? You have to shift from tools to outcomes. You have to stop thinking of AI as just another SOC add-on and use it as a strategic copilot, and I know people are hearing that word a lot, a copilot that augments human intelligence and enables proactive security. So now you're going to have to embed privacy and security into AI development; you've got to secure AI. "Move fast and break things," right? That's the way we've been taught in IT, but that approach is incongruent with responsible AI, so organizations have to move deliberately, secure things, and embed privacy, security, and governance into the AI system development life cycle, the AI SDLC, to build this secure-by-design culture. And so you have to implement a formal AI governance framework. You're going to have to realize it's going to have to be the five musketeers: security, privacy, legal, compliance, and data science. That's going to give you your acceptable use policy and AI risk classification tiers, it's going to give you review processes for how you put a model into deployment, and then also reporting and audit. And then you have to look at every input into a GenAI system as a potential threat actor. Imagine when quantum computing is fully here; it's going to be able to guess passwords and other things. So you're going to have to look at how you scrutinize data for poisoning, bias, and PII leakage. You're going to have to make sure that you are using reputable sources, and then you still have to fact-check. I don't know if people do this right now, but I use at least four different AI tools to vet each AI. Does that make sense, Jo?
Jo Peterson: Absolutely.
Jen Waltz: Because you can't rely on just one source as the truth. People will prompt anything into the tool, and then it becomes knowledge, right? They will falsify information to put into it, and you have to realize there are bad actors putting bad data into these AI tools. So you're going to have to look at how people are using backdoors or embedding logic bombs, and then you also have to look at things like software bills of materials, vulnerability scanning, and license risk assessment. You're going to have to put so much due diligence on third-party vendors and partners using generative AI. I hope that answers your question, because it's so much more: it's secure prompt engineering, it's catching model drift, it's thinking about how you create more room, right, so you can out-think an adversary. If that makes sense.
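Jen's habit of vetting one AI with several others can be sketched as a simple cross-check loop: ask the same question of multiple models and treat disagreement as a cue to consult a primary source. The model names below are illustrative stand-ins for her "at least four tools"; any mix of OpenAI-compatible endpoints would work the same way.

```python
# Sketch: pose the same question to several models and eyeball the spread.
# Model names are stand-ins, not a recommendation from the episode.
from openai import OpenAI

MODELS = ["gpt-4o-mini", "gpt-4o", "gpt-4.1-mini"]

def cross_check(question: str) -> dict[str, str]:
    client = OpenAI()
    answers = {}
    for model in MODELS:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        )
        answers[model] = resp.choices[0].message.content
    return answers

answers = cross_check("Is CVE-2024-3094 exploitable on a stock Debian 12 host?")
for model, answer in answers.items():
    print(f"--- {model} ---\n{answer}\n")
# Disagreement between answers is the signal to go check a primary source.
```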
Jo Peterson: It totally does. So first of all, "incongruent" is now my new word of the day. It's a great word, incongruent. But you said something else, and it made me think: maybe AI is not only helping us rethink the way we're doing work, but maybe it's helping us break down silos, because we don't have a choice, right? It's forcing these groups together. You mentioned five different groups that all need to be involved, and I was thinking to myself, gosh, she's right, because these people all have to talk to each other, whether they want to or not, and agree on what the go-forward is going to look like. And the go-forward in lots of ways, right? Not only from a framework perspective but from a governance perspective: what's allowable, what's not, for example. It's going to take everybody getting to the table and thinking together about what's going to work for the organization, which is kind of a cool thing.
Jen Waltz: Absolutely. I mean, secure by design is a business enabler. You have to think about trust and safety. And when you're shifting into the AI SDLC, you have to look at it as accelerating structured, auditable innovation: security, privacy, trust, and safety have to be built into the CI/CD pipelines, into all DevSecOps workflows. And then as you're evaluating partners, you've got to look at the ecosystem. Can this partner secure AI, not only inside the org but across their other partners? If that makes sense to you.
Jo Peterson: Right. Perfect, perfect. I mean, pardon me, but I'll have to have you back, because MCP servers are a great example of a new tool that's come into the security arena that has a chance to be a game changer, right? And it's doing all the things you're talking about, particularly in the supply chain. So it's a different way, but that'll be a different conversation. Let me ask you the last question. Be thinking about AI adoption in terms of not only its use to secure against emerging threats, which we touched on a little bit, but from an organizational governance perspective. What do you think?
Jen Waltz: Oh, I love this question, because the CISO is no longer this superhero defender of the perimeter. You have to become a business strategist, and you have to be thinking in lockstep with the CFO and the chief revenue officer about balancing innovation enablement with risk governance, and it means managing several mandates simultaneously. First, you're going to have to use AI to secure the enterprise. So you have to think of AI for security in terms of human capability: reducing MTTD and MTTR, enhancing efficiency, and reducing burnout. AI is there to lift those people out of menial tasks and give them better tasks to do, and have them lean into it, because it's an expensive proposition when you hire people; you want to keep them engaged and keep them building for you. Then you use them and their knowledge, combined with AI and machine learning, to enable predictive defense, using telemetry to detect subtle attack patterns. This is going to help the CISO shift the company's posture from reactive to anticipatory, allowing for preemptive asset hardening. Then you have to also translate tech to business, right? These executive-ready reports that translate technical vulnerabilities into business risk are going to help the CISO communicate crisply and more effectively at the board level about these imperatives. The second mandate is to secure the organization's use of AI. I see people will have to build, and not necessarily buy, AI, in my opinion, Jo, and have it hardened in their environment, because you would be opening yourself up to risk by allowing your employees to just go on ChatGPT and put information in there. Do you remember when, before SharePoint got started, there used to just be these knowledge bases that you would go to? I remember how you built off the knowledge base, how you got to being able to share information like SharePoint. This is what I see happening: each company is going to have their own bespoke use of AI, hardened down in their environment. Because from an acceptable use policy standpoint, you're going to have to be able to say what can be used and how you treat PII and source code, and you're going to have to have a list of approved tools that are only for your environment. I know that might seem limiting, but with all the risks that are out there with mobile devices and everything, again, we're talking zero trust. You have to have risk review processes for not only vendors but also your employees. And at the same time, you don't want to block people's creativity and innovation; you want to guide it. You want to offer safe alternatives and training so that business units can still use these amazing GenAI tools responsibly.
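For reference, the MTTD and MTTR figures Jen wants AI to drive down are just averages over incident timestamps. A minimal sketch, assuming MTTD is measured from occurrence to detection and MTTR from detection to resolution (definitions vary by team), with made-up incident records:

```python
# Sketch: compute MTTD/MTTR from incident timestamps. Records are examples.
from datetime import datetime
from statistics import mean

incidents = [
    {"occurred": "2025-09-01T02:00", "detected": "2025-09-01T06:30",
     "resolved": "2025-09-01T11:00"},
    {"occurred": "2025-09-14T09:15", "detected": "2025-09-14T09:45",
     "resolved": "2025-09-14T15:45"},
]

def hours_between(a: str, b: str) -> float:
    return (datetime.fromisoformat(b) - datetime.fromisoformat(a)).total_seconds() / 3600

mttd = mean(hours_between(i["occurred"], i["detected"]) for i in incidents)  # time to detect
mttr = mean(hours_between(i["detected"], i["resolved"]) for i in incidents)  # time to resolve
print(f"MTTD: {mttd:.2f} h, MTTR: {mttr:.2f} h")  # MTTD: 2.50 h, MTTR: 5.25 h
```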
Jo Peterson: Yeah, I mean, such a good encapsulation of both sides of that question. So thank you so much for the thoughtful response, and thank you in general for sharing your thinking today. Guys, it's been fun, and Jen just made it more funner, if that's a word, I don't know. So we'll see you all next time. Thanks for joining.
Jen Waltz: Thanks, Jo.