ClearTech Loop: In the Know, On the Move
ClearTech Loop is a fast, focused podcast delivering sharp, soundbite-ready insights on what’s next in cybersecurity, cloud, and AI. Hosted by Jo Peterson, Chief Analyst at ClearTech Research, each 10-minute episode explores today’s most pressing tech and risk issues through a business-focused lens.
Whether it’s CISOs rethinking cyber strategy or AI reshaping risk governance, ClearTech Loop brings clarity to a shifting landscape—built for tech leaders who don’t have time for fluff.
We cut through hype. We rethink assumptions. We keep you in the loop.
AI Risk Is Mostly Not New with Michael Machado
AI has technically been around for years. What changed is not the math, it is the front door. Suddenly anyone in the organization can touch it, feed it data, and make decisions with it, and that shift in accessibility is rewriting business risk.
In this episode of ClearTech Loop, Jo Peterson sits down with Michael Machado, CISO and Chief Data Officer at Hyland, to break down a calm, practical truth CISOs need right now. Most of the AI risk conversation is not new. The disciplines are familiar. Visibility. Data movement. Accountability. Audit trails. Resilience. What is different is the speed and the number of people who can now participate in risk without realizing it.
The conversation focuses on leadership realities, not hype:
- Why there is risk in doing things and risk in not doing things
- How AI accessibility changes governance pressure inside enterprises
- Why governance should start with business goals, not tools
- What it means to build an AI governance model that is multi-department by design
- Why CISOs should measure adoption and value signals, not only exposure
👉 Subscribe to ClearTech Loop on LinkedIn:
https://www.linkedin.com/newsletters/7346174860760416256/
Key Quotes
“Waves of technology come. Cloud, SaaS, mobile, AI. The muscles we flex are things we have seen before.”
Michael Machado
“There’s a risk of doing things and a risk of not doing things, and it’s important to strike a balance when we’re having that conversation.”
Michael Machado
Three Big Ideas from This Episode
- AI risk is not new, accessibility is. The models are not the headline. The fact that anyone can use them is. That changes how quickly data moves and how quickly decisions get made.
- Governance starts with the business problem. Tool-first governance creates policies that look responsible and work poorly. Start with the objective, map the data movement, then apply controls that support the mission.
- Measure value analytics alongside risk analytics. It is not enough to track what could go wrong. CISOs also need visibility into whether tools are being used and whether adoption is producing meaningful value.
Episode Notes / Links
🎧 Listen in player
▶ Watch on YouTube: https://youtu.be/D-1-Ny4hlF0
📰 Subscribe to the ClearTech Loop Newsletter:
https://www.linkedin.com/newsletters/7346174860760416256/
Resources Mentioned
- An Evolution of Defensive Security Operations: From Simple Detection to Modern SOC Automation: https://www.linkedin.com/pulse/evolution-defensive-security-operations-from-simple-modern-machado-unctc/
- NIST AI Risk Management Framework: https://www.nist.gov/itl/ai-risk-management-framework
- ClearTech Research Insights: https://cleartechresearch.com
Jo Peterson: Thank you so much for joining ClearTech Loop. We're on the move and in the know. I'm Jo Peterson, Vice President of Cloud and Security for Clarify360 and Chief Analyst at ClearTech Research. And I'm here with Mr. Mike Machado. In case you don't know Mike, he's a CISO, Chief Data Officer, mentor, entrepreneur, advisor to startups, and investor. How's that for an intro, Mike?
Michael Machado: Thank you so much, great to be here. Thanks for having me.
Jo Peterson: Mike, thanks for coming on. In case you guys don't know about Mike, he's a veteran of cybersecurity, and what I personally find so interesting about him is that, given his advisory and board experience plus his hands-on role as a CISO, he gets a really unique, holistic look at cybersecurity through both a business and a technical lens. Mike, do you think that's a fair description?
Michael Machado: I think that's fair, and that's certainly very complimentary, thank you. To be expansive in our views, all CISOs have this business element to the role as well. I certainly wouldn't claim any specialness or uniqueness on my part, but I do find that the adjacent things I'm involved in, product building, data-related work, advising companies, supporting go-to-market, all help inform a broader perspective in how I approach the core work of being a CISO and defending organizations and defending customers.
Jo Peterson: That's a good explanation. It colors things, right? Getting to see how different people go about things colors your thinking, I think.
Michael Machado: Absolutely. Understanding adjacent technical domains, and the goals and objectives of peers and partner organizations, creates both a technical understanding and a human empathy, all of which I think are valuable in doing the work of security, which is never just the work of security; it always has elements and threads that go into other teams.
Jo Peterson: Yeah, the best-run organizations have many voices at the table from different domains. That sort of group thinking really helps solve the business problems at hand. So I was super interested in your advisory work. What's the most fun thing about doing advisory work?
Michael Machado: Seeing the companies I'm advising take a combination of my ideas and other people's ideas, lots of people contribute, it's by no means just me, and really move the needle within their organizations to solve real problems for their customers. I have a big customer focus, so seeing tangible things get into the hands of customers, things that make their lives, the lives of their business, and the lives of their own customers better, is very rewarding. It's also just rewarding to help people, to be a contributor to their journeys.
Jo Peterson: Yeah, I can see that. And I can particularly see that if you're advising startups; it's like watching your little baby birds fly, right?
Michael Machado: The ideas are certainly my baby birds. The startups, obviously, are their own baby birds. But it's great to add fuel to these rocket ships, absolutely.
Jo Peterson: Oh, that's a good way of saying it. Well, folks, in case you're new to the ClearTech Loop podcast, we take a hot-take approach to podcasting, and we're really focused on just a few things: cybersecurity, cloud security, and AI security. Now I know that's a lot, but those are our areas of focus. In each weekly episode, we ask our guests three focus questions. What we want to do is quickly educate our listeners about the security landscape in that moment, from both a risk and an opportunity standpoint. So it's more of a soundbite approach. Mike, the first question I have for you: from a CISO's perspective, how should you be thinking about cyber risk in terms of dollars and business impact?
Michael Machado: There are tangible and intangible impacts, right? That's the first part of it. Sometimes you can quantify tangible impacts in terms of dollars; other times you can't, or you can get there through steps, like a time-is-money type of calculation. In my experience, and in how I approach it, I'm always looking to tie security to the opportunity and growth side of an organization, in addition to the risk, cost, and efficiency side, and to drive toward informed and balanced risk decisions within a company. Any organization that's thriving and healthy is going to have a tapestry of business risks, and security is just one facet of business risk. It's definitely not our job, or I think we should not make it our job, to try to eliminate all forms of security risk. If you eliminate all forms of business risk, you essentially have a very stagnant, not very productive organization that's just standing still. So sometimes I say business equals risk, and I think it's important to strike a balance. That said, to get to the heart of your question, it's wonderful when we can quantify an actual dollar amount, or an actual time impact, or something of that nature. Often in areas like product abuse and online fraud, there's a very direct line to organizational loss, which is a really nice metric when you're able to get it. Other times we might use something like Fermi estimates to get an order of magnitude, as opposed to a specific amount. I'd also say it's really important to tie the conversation about risk to an organization's business goals, or, if it's not a business, the nonprofit's goals, the organization's core goals, because there are all sorts of paper cuts and tangents we could get lost on. Having the risk conversation focus on where a particular organization is trying to get to, the important motions that company is trying to take, and the steps forward, I think, is important.
Jo Peterson: That's a great answer, full of useful information. So it wouldn't be a podcast in 2025 if we didn't mention the word AI, right? It just wouldn't be. There's so much shadow AI happening in organizations, and what I heard you say before is, hey, some of it's going to happen, and you don't necessarily want to block innovation. That said, how do you balance that and communicate the value of secure AI adoption to the organization?
Michael Machado: Yeah, great question. Since this is a hot take on security and AI, and not just a generic security-and-AI podcast, my hot take is that it's mostly not new. In one respect, AI has been around for more than a decade, maybe a couple of decades at this point. So AI itself is not new, even though its adoption has obviously accelerated greatly from what it was just years ago. But there are a couple of elements I think about. One is that there's a risk of doing things and a risk of not doing things, or we could say a benefit of doing things, and so I think it's important to strike a balance when we're having that conversation. What are the productivity gains? What are the accelerators to our business goals? And then what's the downside of doing things? Try to strike an informed balance. I mentioned this in the previous question, but I really do think part of our work in security, an important part, is bringing information above the waterline so that the right people at the right levels within an organization can make informed risk decisions. I don't tend to think it's all about embracing the risk or avoiding the risk. I think it's more about having our eyes open, and whichever decision we make being an informed decision that's happening at the right levels of the organization. That being said, there are functional things to look out for. But going back to my earlier point that it's mostly not new: waves of technology come, right? Cloud, SaaS, mobile, AI. Now AI is the latest one, but a lot of the motions, or the muscles we would flex, are things we've seen before.

Establish visibility and get our arms around the problem. Understand the security- and privacy-related impacts, and balance those against the various operational or organizational gains. Do we have audit trails and forensics capability to be able to investigate when something's gone wrong? That's very important. Do we have resilience in what we're introducing? Is it fragile, or is it robust? Do we have meaningful written organizational policies that speak to what we want to achieve as an organization with regard to mitigating, minimizing, and controlling certain adoption and use of technology? There are analytics tools we can now start bringing to bear that help us understand the risk side, things like outcome cases or sentiment analysis when generative AI is being used. For example, what's it being used for in the organization? Which organizations are using it? What data is moving from my organization into these applications? But there's also value analytics: I'm spending money on this license and that piece of software; is it being used? Is it being properly adopted throughout my organization? These are some of the ways I think about AI risk, in old ways as well as new ways.
Jo Peterson: And that's a nice holistic way to take a look at it. I suppose, for me, the thing I think about is that AI is like fast food now. It's ubiquitous. Yes, it's been around for a long time, but now the average Joe can go and use it, right? And it wasn't that way before.
Michael Machado: Pervasive, absolutely.
Jo Peterson: And it's easy to get to and easy to use. So let's say you and the CIO are working on a governance framework together, and you're putting your heads together about what the organization needs. What are a few things organizations should be thinking about from a tactical perspective to get that framework moving? Because I feel people get stuck sometimes.
Michael Machado: Yeah, fair enough. I've been there. We've all been stuck, and we've all wrestled with incorporating emerging technology into existing security, privacy, governance, and risk programs. The very first thing in my mind is that it's a multi-department effort, because it's a multi-expertise effort. I would never think that only the security department has the car keys for determining AI risk governance, nor that any individual department is the sole voice in the room or the sole subject matter expert, because there's so much coming in: things that come in from the regulatory aspect, things that come in from technical weaknesses in the tools, value chains we need to understand, human elements related to our people inside the organization as well as our customers. All of these are moving quickly and converging, in my thinking, into a multi-expertise, multi-department approach. That's kind of the only way to tackle it. Throwing another lens on it, I think it's important to determine: are you using AI, in the sense that you're using generative AI and that sort of thing? Are you building with AI, meaning incorporating large language models, foundation models, and specialty models into your own products and services? Or are you building AI itself, as a builder of foundation models, creating a large language model that you put out into the world? Those all have different elements to them. There's overlap, of course, but there are other things that don't overlap. For example, if I'm using a generative AI application built by someone else, third-party risk management comes into play.
Data sharing is always implied in using generative AI. We're typing prompts, which is submitting information; we're perhaps uploading files, things of that nature. So we want to make sure we treat it like any other form of data sharing for the organization. Do we have our contracts in place? Do the contracts have the appropriate terms: security terms, privacy terms, things of that nature? Then there's the question of whether we're building with AI. If we're building with AI, we want to make sure what we're incorporating uses models for which we have a good sense of transparency, reliability, safety, security, privacy, even the reputation of the creators of the model. What do we know about the data set going into this model, and the testing being done on it? Do we have our own means and capabilities to test the model itself, as well as the application we are putting the model into? It's important to understand that there are attacks on, or weak points in, the data that goes into a model; someone can try to poison the data, things of that nature. That matters whether you're putting the data in yourself directly or a vendor is doing it in the application you're using. There are also attacks, or abuse cases, for the models themselves. Can we jailbreak a model? Can we get it to return something the creators of the model wanted it to filter and not return? Is it not filtering anything at all, creating outright safety risks? Often when we're consuming models, we might be consuming them from cloud vendors, Bedrock and Azure and things of that nature. So like any other cloud service, there's the question of whether we're configuring the service we're using with security, privacy, and safety in mind.
And then, of course, if we're actually building models ourselves: where are we sourcing our data from? Do we have legal rights to that data? Is it a trustworthy, unbiased data set? Have we cleaned it up so it's well suited to incorporating into our model and our application? This is not an exhaustive list by any means, even though I just said a lot of things. But I do think the core questions, to what degree are we a user or consumer incorporating others' AI into our tech, and to what degree are we building AI from the ground up, are important things to think about when developing our own AI governance framework and building that muscle.
Jo Peterson: I think that's fair. What I got out of that, beyond all the good tips you shared, was: what is the business problem you're solving for? It always comes down to that, right? What are we trying to advance within the organization, and what downside are we trying to protect the organization from?

Michael Machado: It's no different than how I described security and security risk at the beginning of our conversation. AI is just another form of risk, and it's really about how we want to balance our decisions related to this technology to get the maximum benefit, with respect to, to your point, our organizational goals, and the minimum downside to those goals.

Jo Peterson: Great response, thank you. With that, we'll leave you for now. Thank you so much for joining us, and thanks, Mike, for coming on.
Michael Machado: I appreciate it. Thank you for having me. I enjoyed our conversation. Have a great rest of your day.

Jo Peterson: Me too. Take care. Bye.