ClearTech Loop: In the Know, On the Move
ClearTech Loop is a fast, focused podcast delivering sharp, soundbite-ready insights on what’s next in cybersecurity, cloud, and AI. Hosted by Jo Peterson, Chief Analyst at ClearTech Research, each 10-minute episode explores today’s most pressing tech and risk issues through a business-focused lens.
Whether it’s CISOs rethinking cyber strategy or AI reshaping risk governance, ClearTech Loop brings clarity to a shifting landscape—built for tech leaders who don’t have time for fluff.
We cut through hype. We rethink assumptions. We keep you in the loop.
ClearTech Loop: In the Know, On the Move
AI Security Is Still Software Security with Nicolas Moy
AI is being embedded into enterprise tools faster than most organizations can govern it.
Productivity platforms, security systems, and development workflows now include AI capabilities by default, often without a single approval moment or clear ownership model.
In this episode of ClearTech Loop, Jo Peterson sits down with Nicolas Moy, Founder and CIO of LifeMark Financial and vCISO for Security Engineering at Halyard Labs, to talk about what AI security looks like in practice when it is treated as software, not as a separate discipline.
Nicolas shares how security teams are already using AI today to accelerate policy development, reduce operational noise, and support threat modeling earlier in the design and build process. The conversation also explores why governance is struggling to keep pace with employee behavior, especially as sensitive information enters AI systems without clear visibility into where data goes or how it is reused.
Rather than framing AI security as a future problem, this discussion focuses on what CISOs and CIOs are dealing with right now, and why accountability has to keep pace as AI compresses timelines across security and technology decisions.
If you are navigating AI adoption across security, development, and governance, this episode provides a grounded perspective on how to approach AI without losing control.
👉 Subscribe to ClearTech Loop on LinkedIn:
https://www.linkedin.com/newsletters/7346174860760416256/
Key Quotes
“For AI, it’s similar, it’s software, but there’s some new evolutions to it.”
— Nicolas Moy, CISSP, CCSK
“If my employee puts this confidential information into an AI chat system, where is that being shipped out to?”
— Nicolas Moy, CISSP, CCSK
Three Big Ideas from This Episode
1. AI security accelerates familiar risks rather than creating new ones
Treating AI as software brings it into existing security and DevSecOps practices earlier, rather than isolating it as a separate problem.
2. Governance is lagging behind real employee behavior
AI tools are being used inside normal workflows faster than policies and controls were designed to handle.
3. CISOs and CIOs must engage together earlier
AI security sits between architecture, data governance, and risk ownership, requiring shared accountability across roles.
Episode Notes / Links for Suzie to Fill Out
🎧 Listen on player above.
â–¶ Watch on YouTube: https://youtu.be/MBVbyAE33e0
đź“° Subscribe to the ClearTech Loop Newsletter:
https://www.linkedin.com/newsletters/7346174860760416256/
Resources Mentioned
- OWASP Top 10 for Large Language Model Applications
https://owasp.org/www-project-top-10-for-large-language-model-applications/ - OWASP AI Project
https://owasp.org/www-project-ai/ - ClearTech Loop Episode: AI as a Digital Co Worker with Timothy Youngblood
https://www.buzzsprout.com/2248577/episodes/18509846-ai-as-a-digital-co-worker-with-the-experience-of-an-intern-with-timothy-youngblood
Hey everyone, thank you so much for joining. I'm Jo Peterson. I'm the vice president of cloud and security for clarify 360 and I'm the chief analyst at Clear Tech Research. And we're here today with clear tech loop. We're on the move and in the know. And I've got a great guest with me today. I've got Nick Moy,
Nick Moy:Hi, Nick, Hey, how's it going? Thanks for having me on
Jo Peterson:Hey, thanks for being here. Y'all don't know Nick, you should. He is the co founder and CIO of Lifemark financial. Nick has more than 15 years of experience in both IT and security in the financial industry, and he holds several technical certifications. And you all know this, financial institutions, and understandably so, spend the most money on cyber security, right? It's the largest portion of their you know, security is the largest portion of their IT budgets in that vertical in particular. So we're here to talk today about AI security, and Nick is part of a very tech forward organization. And I want to sort of get his thinking on a couple of thoughts that have been floating in my mind, and I know floating in some of y'all. So first question, Nick, how can cyber security professionals really leverage generative AI to kind of break out of what's been happening for a while traditional tools and this tech mindset and and drive more innovative thinking and execution in their security programs.
Nick Moy:Yeah, yeah. No. Great question. Jo, there's, you know, AI's is obviously rapidly growing, and we're kind of at an interesting point where everyone wants to figure out how they can best adopt it into their day to day life and day to day workflow. And we're seeing a lot of progression within AI, you know, it's not, you know, I think there's still a lot of kinks to be worked out, just because it's so new. I mean, it was like three years ago that I think chat jet GPT three is first model went open and available people start testing with and since then, there's been so much progression. You know, there's so much money being poured into this within the sector. So, you know, when we think about how that kind of ties in with security professionals and how they can use this and their day to day life, you know, I think that there's still some areas for improvement within what AI is capable of doing, but some of the things that I've seen actually working today is on the JRC side of like developing procedures. You know, when I have, when I've done this before without, you know, AI's help. It's always alright. I need to get a, you know, a template that's pre existing, and kind of see what I want to take away from this piece of together and build something new out of this that is custom fit for the organization that I'm with, with, you know, AI, you can technically say, Hey, this is the organization, this is the industry. These are the regulations that are applicable to us. Help me come up with this type of policy. And that's one easy way where it can kind of generate that content and apply it right there. And then you can take that and modify, like, I never say, you know, take exactly what you get from, you know, chat, GPT or whatever, and spit it into the organization. But it's a good starting foundation where you don't have writer's block, and you really just can start being more efficient with your time and kind of work from that. So that's one area, you know. Another thing is on the SEC op side of things, right? So how you're, you know, diagnosing different issues you're, you know, as data is coming in, if you're you have a lot to weed through. And that's one thing that SEC ops people have to deal with all the time is, you know, just all the different signals coming in. You know, they have to look at the different logs and everything. And so it's easier if you can pop some of that into, you know, an AI system and have the AI system kind of call out things that are, you know, red flags immediately, and, you know, higher level priority items for you to kind of focus on. So that's one area that I've kind of seen it, you know, being more applicable and reliable as far as the security side of it goes. And then my personal favorite is on the on the threat modeling side of things. So I'm a security architect by trade, devsecop specialists, and so we do a lot of threat modeling and a lot of risk analysis, you know, as we're well prior to build out and then as we're building out software and building out cloud environments. And so that's another opportunity where AI can really kind of help out with, you know, design implementation or different risk scenarios that need to be talked about when you're going through that process.
Jo Peterson:So I not just love the idea that you know it suggests a remediation path. I can remember being a young engineer always having to go to my supervisor going, excuse me, is this problem? So. Excuse me, now we're it's a way to help train the more junior on the team, I think, and show them a path like, Okay, well, you want to prioritize this, because this is a real problem, for example, versus what do I prioritize? I've got so many logs to look at, right? So that's kind of a cool thing. Yeah, but great answer. Thank you. So next question, fairly or unfairly, security gets known as the Department of No, fairly or unfairly. It's just, you know, it's like the like, the like the cowboy with the good white hat, the cowboy with the black hat, right? And we're the black hat bearing, you know, gear cowboys. So how can organizations embed security and privacy controls into AI model development without sort of slowing down that innovation and excitement that the teams want to keep going?
Nick Moy:Yeah, yeah. So that's a really great question. And I think that this is something, again, just because of how new this type of technology is being, as far as rolled out and introduced into the public, I think that there's still a lot of things to be found and discovered. You know, with when it comes to vulnerabilities and how do we catch these things? I was looking into this couple of months ago, and one thing that I really liked is what OWASP is doing. They have their AI project, AI project that they're working on, and they're putting out a lot of really good resources that I think will help cybersecurity leaders with a starting place, and that's really what we need, is just to get that 30,000 foot overview and try to figure out all right, here's here's a baseline roadmap I can start. Here's some things, some call outs on risks that could be applicable to my organization, and some remediation strategies I can put in place, and security controls I can put in place to prevent these things from happening. And I think what they're doing over there, it's either very aggressive, they're working on it every month, and there's more and more, you know, white papers, cheat sheets, different, you know, open source tools that they're even releasing as well, that you can see how it kind of fits into the model as your organization's kind of looking to pump out, you know, AI, and, you know, to the people within the company, so, so, yeah,
Jo Peterson:you make a really good point, because we're so accustomed to traditional security frameworks. And AI doesn't necessarily behave that way. And if you bump up a, you know, if you know, if you bump up a traditional security framework that you may have deployed at your organization, I believe that there's going to be gaps that exist in each of the domains where it comes to AI, right? So that's a really good point. Is to keep educating yourself. Keep checking these trusted sources like OWASP or any tools that they can, you know, share to help you on your journey. Because nobody knows this cold and we're all sort of learning, right, right, right? We just are. I think this next question is particularly good for you because you're in a CIO role, but you've done lots of security, right? So that gives you an extra edge. I think, how should a CISO and or CIO be thinking about AI adoption in terms of its use to both secure emerging threats and then also from an organizational governance perspective? Yeah.
Nick Moy:So this is, I think, I think this is a pretty big concern that, you know, people need to start thinking about, with all these different AI tools being embedded and so many different things that we have been using for so long, it's been like a new feature roll out. Hey, buy my product because we were AI, you know. But I think, you know, a lot of people, maybe still are missing the thought of, okay, well, where's that data going? If my employee puts this confidential information these, you know, maybe it's a customer's info or trade secrets or whatever pertaining to a project that they're working on, and they pump it into an AI chat system. Where is that being shipped out to? What type, you know, where could that be exploited, right? And so there are some new tools that are coming out to kind of catch that, and really, you know, govern what should be vetted and allow to the employees and to the contractors within an organization. Nudge security. I don't know if you guys have heard, you know, if anyone here has heard of that one, but they look like they're doing something pretty cool, as far as you know, providing that oversight and that granular control over what exactly employees will have access to. From an AI perspective, what tools should they be using and what have you so I kind of like that. From that. Get a day off site. It kind of reminds me of, like, a, like a, like a CASB tool or something like that for the cloud. You know what? I mean, yeah, for sure. But then I think from an architectural perspective, we have to think about this too, you know, especially if you're an organization that's custom, that's building out a custom LLM, you know, or MCP or something like that. Like, where's this thing actually be connected to what are the third parties that may have access to this? Just really thinking through some of that stuff and making sure that, like you would do a traditional threat model on patient development, or you have your different controls on a dev psychops pipeline, kind of think through that as well. For AI, it's similar, it's software, but there's some new evolutions to it, and so just kind of build off of that framework.
Jo Peterson:That's a good way to think about it. Yeah, well, great answers to some evolving questions and questions that are in folks' minds. So thank you for taking time out of your day to visit and everyone, thank you for taking time to join and we will see you next time.
Nick Moy:Thanks, Jo.