The Signal Room | AI in Healthcare & Ethical AI
Welcome to The Signal Room, your go-to podcast for expert insights on ethical AI, AI strategy, and AI governance in healthcare and beyond. Hosted by Chris Hutchins, this show explores leadership strategies, responsible AI development, and real-world implementation challenges faced by healthcare AI leaders. Each episode features deep conversations covering healthcare AI innovation, executive decision-making, regulatory compliance, and how to build trustworthy AI systems that transform clinical and operational realities.
Whether you are an AI strategist, healthcare executive, or AI enthusiast committed to ethical leadership, The Signal Room equips you with the knowledge and tools to lead AI transformation effectively and responsibly.
Join us to learn from industry experts and healthcare leaders navigating the evolving landscape of AI governance, leadership ethics, and AI readiness.
Follow The Signal Room and stay updated on the latest trends shaping the future of ethical AI and healthcare innovation.
The AI Shutdown is Here: Why Most Projects Will Fail in 2026 | AI Governance & Ethical AI Strategy with Andre Samokish
In this timely episode of The Signal Room, host Chris Hutchins speaks with AI governance and privacy expert Andre Samokish about the looming AI shutdown in 2026 and why most AI projects will fail without strong governance, ethical frameworks, and strategic leadership.
You’ll learn:
- The critical difference between privacy governance, AI governance, and cybersecurity—and why conflating them creates dangerous blind spots
- Why governance isn’t a project blocker but the essential pathway to moving fast, safely
- The three pillars of true AI literacy for both technical and non-technical teams
- How to embed privacy by design and ethical controls into AI product workflows before launch
- The most common failure modes in data collection, model deployment, and organizational culture that threaten AI success
- Practical tools, certifications, and communities to build your AI governance knowledge today
This episode is a must-listen for healthcare AI leaders, strategists, and executives aiming to navigate AI transformation with responsible, ethical leadership and avoid project shutdowns.
Connect with Andre Samokish and learn how to future-proof your AI initiatives with robust AI governance and strategy.
About The Signal Room: The Signal Room is a podcast and communications platform exploring leadership, ethics, and innovation in healthcare and artificial intelligence. Hosted by Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants. Leadership, ethics, and innovation, amplified.
Website: https://www.hutchinsdatastrategy.com
LinkedIn: https://www.linkedin.com/in/chutchins-healthcare/
YouTube: https://www.youtube.com/@ChrisHutchinsAi
Book Chris to speak: https://www.chrisjhutchins.com
If you collect garbage, you'll have garbage. Think about the implementation stage, when a company is not developing AI but implementing or buying services. Think about how you're going to use the tool, and talk to the legal team to understand the risks, because some regulations even say there are prohibited AI activities you need to be aware of. Your implementation, your use case, can be very tricky and very harmful for your organization. So why not just go over those lists of high-risk activities, talk to the legal team, and understand in the early stage what you can and cannot do with those tools? If you have a problem you're trying to solve with AI, it's better to understand that in the early stage.
Christopher Hutchins: Andre, welcome to The Signal Room. It's great to have you.
Andre Samokish: Oh, thank you, Chris. Thank you for having me here. I'm so excited to do this conversation. Yeah, I'm happy to be here.
Christopher Hutchins: It's been a little bit, you know, I think we met out in Las Vegas at the Put Data First conference, right?
Andre Samokish: Yeah, that's correct. I love in-person events. You can meet experts and talk across different areas of expertise, because that one, Put Data First, was mostly about data security. But I also met a couple of privacy experts, and of course, everyone was talking about AI implications and how AI works together with privacy, security, and all related data.
Christopher Hutchins: That was good. That was one of the things that kind of struck me about it, because I've been to conferences and, you know, AI is certainly a topic, but it was really cool for me because the biggest concerns that people are talking about these days are really around governance. And the healthcare sector in particular, we're a little behind the curve on that one. A lot of folks like yourself were really talking about this stuff with a lot of passion. And you know, for me it was very, very comforting to know that people like you are really engaged in thinking about the security and privacy aspects of it, and kind of pushing us a bit on that side so that we are very thoughtful about how we use it. It's one thing to have a good intention, it's another thing to execute, right? So I'm very excited about having a conversation this morning. But let's start with the basic premises that people probably don't understand very well. Explain for us the role of a privacy and AI governance manager, as if to a manager who's not an expert.
Andre Samokish: Yeah, great question, because a few years ago we just had privacy governance, right? Data governance. But then a new topic came along, AI, and now there is demand for experts in privacy and AI governance, because both of them touch data, and data touches all these industries. And as a manager, for me, the most important thing is to support the business in achieving its goals. They want to build new AI tools, or implement and buy services from third-party vendors, and those initiatives are moving so fast now. The role of privacy and AI governance experts is to keep up and also be proactive. How can you be proactive? You need to think ahead a little bit and, of course, build processes and controls. I don't want to scare people with the word controls, because controls mean you can go ahead with your business initiative once you have them in place where required. And you can feel safe if you implement those things upfront, which is what we call privacy by design. Implement processes, controls, and policies on time, in the early stage, and then you can be sure that when you're launching your product or feature, you're already done. And it's not just about complying with requirements, which is very important; it's also about building trust with clients and being as transparent as you can. Because when people start using new tools, the first concern they have is what will happen with their data once they enter the service, or even touch the website, right? They want to feel safe, and trust is something you need to build. So the role of a privacy and AI governance manager is to help the business, first of all, understand what controls and processes need to be in place to make sure data is used in a safe way, and then to follow those controls on an ongoing basis. And I would highlight one more role of the privacy manager, which is to educate people on privacy and AI governance.
It's not something where the main goal is just to solve problems. It's mostly about being proactive, building trust with clients, and even using this as an advantage. When you have strong AI governance and privacy controls in place as a business owner, that becomes part of your value proposition.
Christopher Hutchins: You mentioned governance in the sense of controls, and I think that's one of the bad raps that governance has had in times past. But it's now not something that's intended to slow you down. It's the only way you can safely speed up. I mean, I was just working on developing a website and I do not code, but the technologies are so advanced. I built a pre-roll business website myself, and it's moving between screens and all kinds of things. So I can only imagine the types of risks that could be taken if people are not engaging someone like yourself with this expertise to help them understand where to put your guardrails. Don't get ahead of yourself, because we're going way too fast at this point.
Andre Samokish: Yeah, definitely. And a simple example of those controls would be a properly worded pop-up message telling users that they're interacting with an AI system. That's an example of a control. It's nothing scary; just try to design it as early as you can in your product development, and you're okay to go. It's something that will help you. And just as you're saying, it's not about stopping. That's why I feel like one of the roles of the manager is to help people shift to a culture of privacy and governance.
Christopher Hutchins: Yeah, I mean that touches on areas that people are confusing. You know, control versus governance, that's one. But what are some of the other ways that people might be confused about what this really means? What are some of the misconceptions people might have about privacy and AI governance?
Andre Samokish: Maybe I would highlight one misconception: that governance is something that should be handled just by the legal and privacy team. But I feel like we really encourage product people, marketing teams, cross-functional teams, to be proactive and to contribute as much as they can. Talk to our teams earlier so we can discuss, and even if there are some risks, that's manageable. When we say risk, it can be mitigated. Again, we can put some controls in place. A risk doesn't mean you should stop; most likely you can mitigate it. I feel like privacy and AI governance is work for all cross-functional teams.
Christopher Hutchins: Yeah, I think it's an interesting challenge too. Organizations are pretty sophisticated and they've got a lot of different verticals, and there are going to be some people you need in the room that really don't spend a lot of time thinking about this kind of technology. But we need them to because their subject matter expertise really needs to be represented to make sure that we are understanding all of the areas where we could have risk. It's important people understand that you're not there to control things, you're there to support them as a manager that's responsible for this stuff. Maybe you could talk a little bit about an initiative that you're working on that excites you. I don't know if people think about privacy and security and governance and think that might be exciting, but I kind of think it is because we've been having fun conversations since we've met.
Andre Samokish: Yeah, I feel like privacy and AI governance sometimes gets treated as something very complicated, for non-experts and even for privacy experts, even for legal teams. It's overwhelming, with new regulations and complicated topics. So one of the initiatives that excites me is implementing educational trainings, talking to non-experts and cross-functional teams about simple, fundamental things. What is personal data? How is AI even related to personal data? What are the main principles of handling personal data? I mean, I could talk about just one principle, data minimization. I could need a whole day to talk about data minimization. You cannot collect data just in case. You need to know why you need that data, and most importantly, you need to explain to your customers and get their consent on how you're going to use that data. Talking to your cross-functional teams now saves a lot of time later, including when you need to do future assessments. Set up 15 minutes. I don't think you need a couple of hours of meetings every day talking about privacy. Just 15 minutes, maybe once a week or something like that. It should be a very simple conversation about the most important things. And that shift, that education, doesn't happen overnight. It takes time, and those efforts really are required. I like implementing trainings. I'm also very excited about automation, because again, it's about simplifying privacy processes: implementing tools that help you automate and move from Excel sheets to a traceable workflow, sending notifications in assessments, setting up tasks, involving people through one tool to collaborate on data privacy and AI governance. There are a lot of similarities in how you handle data privacy and AI governance processes. So yeah, these two things I feel are very important for privacy experts to keep in mind.
Christopher Hutchins: You mentioned something that I think is really worth pausing on for a second. Historically, in my experience over the years, people dealing with data and analytics and data warehousing tend to go for, "I want to pull in everything." Can you talk about that just for a second? Because this is a shift that has to be made, and organizations have to think about this differently now. If they're going to start using AI capabilities on their data, they have to really think about things differently. You can't just have these new programs or solutions attached to the whole data ocean and think everything's going to be okay. There have to be some conversations and ways to handle this. Maybe just talk a little bit about how you support teams and help them think through that effort. It seems like a pretty significant shift in how they think and how they manage their data sets.
Andre Samokish: Yeah, great question. Because I feel like now every team wants to onboard some tool, and most likely that tool will have AI capabilities, because it's speeding up processes and everything. But I feel like everything starts with a simple question. Why do you need the tool? What will that tool be doing for us? And from a privacy side, I would ask how they're going to use our data, and whether we want to permit them to use that data. Is this mandatory, or can we turn it off? So it's just a simple conversation with the business owner of the process, or whoever is going to implement that initiative, to talk about the value of the tool and whether it's really solving our problems, issues, or challenges. And okay, now let's talk about data. Do we have controls in place for that? If not, can we implement them quickly in parallel with your initiative? Are there any risks? Are there any high risks, and is there a way to mitigate them? And again, talking about risks is just something we need to work on. Of course, if there is high risk and there is no mitigation, there might be a question about stopping that initiative. It's a conversation with the business in the early stage.
Christopher Hutchins: Yeah, and I think you're touching on probably one of the more important aspects of the governance piece of this. It's really what responsible and ethical AI means. Talk a little bit about that. How would you describe that in your words? What does it really mean?
Andre Samokish: I feel like we touched on this topic a little bit, but with ethical AI there are many subtopics and things to talk about. I would highlight being very transparent with your clients and customers. Even if it's not a requirement, think about what you can disclose on your website. Put more information out for customers so they are aware of your technologies. If you have an AI component, maybe you can explain the logic behind the AI. How transparent are you with your customers about your business and whether you have controls in place? Let them know. Let them know what certifications you have. Are you following the major certifications? How do you care about safe data and AI use in your business? Just make that publicly available so people can go read it. People read more now, I'm sure, and they want to know about your business and its AI component, because there are different opinions about safe use. Safe use of your data, what AI can do, how scalable it is. And then they make decisions: do I need to use this tool or not?
Christopher Hutchins: You're touching on a great and important topic. You mentioned the explainability piece of it. There's a translation needed, because as a layperson you're not going to understand it. It's not like you can turn over the code and someone's going to be able to understand what you're doing with it. There's really a translation aspect to it. So how do you think about translating abstract principles into concrete controls?
Andre Samokish: Yeah, I feel like, first of all, you'll be translating that for different audiences. For example, your employees who have some expertise, right? They have expertise in tech, most likely about data in general, not just personal data. So you can talk to your teams and help them understand that transparency and explainability are very important, and involve them in creating the documentation about how it works. That helps them actually improve future products, because now they're involved and understand the importance. They're not just reading what you created; they're working with you to develop these controls. Another audience would be your visitors, for example on your website, your future or current customers. Then you need to make sure you're talking to them in the same language. Not professional or tech language; explain things simply, in plain language, so they understand how your system works. And I feel like collaboration with the legal team would be very helpful, because they have the regulatory language component and they can explain it to you. You can ask legal all your questions first, and then translate that to make it more accessible to your audiences. Everyone will benefit from that.
Christopher Hutchins: Right. Yeah, I think the tendency, at least in the last year or so, is that everyone's going at such a breakneck pace, moving so fast. If you're not involving the legal experts, you probably have a risk right now that you don't know about, and it's a good idea to go talk to the legal team. Get them involved. It's just this really important thing: don't lose sight of it right now, because I think this is going to be the year of governance for sure. And I don't think it's all going to be rosy. I think there are going to be some bumps. So I'd encourage people to really hear this point, and Andre's absolutely right. If you aren't even working with your legal team, there are all kinds of different reasons that things fail. Where do you see the biggest current failure modes? Data collection, model design, deployment, organizational culture?
Andre Samokish: Great question, because every piece is important. You start maybe with data collection, and I'm not saying anything new when I mention that if you collect garbage, you'll have garbage. But I also want to highlight the case when a company is not developing AI; they're implementing or buying services. Then just think about how you're going to use that tool, and talk to the legal team to understand what the risks would be. Because some regulations even say that there are prohibited AI activities you need to be aware of. So that implementation, your use case, can be very tricky and very harmful for your organization. So why not just go over those lists of high-risk or prohibited activities, talk to the legal team, and understand in the early stage what you can and cannot do with those tools. If you still have a problem you're trying to solve with AI, it's better to understand that in the early stage. Maybe you need to look for another vendor or implement controls, right? Start implementing right now, because some controls really take time to implement. It's not happening overnight. For example, some certification, or registration in a system or database, whatever it may be, will take time. So yeah, I would say sometimes it all comes down to communication and collaboration between teams. If the product or business owner comes to the privacy and governance team and shares ideas while it's still in the idea stage, then it's easier to manage.
Christopher Hutchins: You're hitting on something that we really need to think about a little bit differently than we have before. Inside of a company, there's generally an intent to speak the same language. So you and I have talked a little bit about AI literacy before. But what does it mean? What does AI literacy mean for non-technical staff versus technical staff? They really do have to be able to communicate and understand each other. It could be even worse than a language barrier if it's not given some attention.
Andre Samokish: Definitely. And AI literacy, I mean there are different definitions, but I would say there are at least three components. First of all, it's about the technical side of AI: how it works, what the capabilities are, what the performance indicators are, what metrics you can measure performance on, what the accuracy is, and the other technical things you need to understand about the logic behind AI. The second component is, of course, regulation. You need to know what you can and cannot do with AI, and what controls need to be in place. We talked about this a lot today. You cannot avoid the governance piece; it's mandatory. And it's also an opportunity to scale your business and add to your value proposition that you have such good governance in place. You want to offer it as a benefit to your clients. And the third component is ethical AI use. The ethical use piece is being mentioned more often now, because it's about how people feel about the use of their data and about having transparent AI systems. So, these three components. Since you mentioned technical and non-technical staff, both would benefit, because technical understanding is a requirement, and technical personnel can explain that side to, for example, the privacy team. And to non-technical staff, of course, we can explain the importance of understanding the technical side. It's mutually beneficial.
Christopher Hutchins: We've talked a little bit about data usage things here, but let's talk about some of the lessons you've learned, including privacy awareness and training, since you've kind of opened this topic. How do these lessons apply to AI literacy?
Andre Samokish: Great question. Trainings are a very big topic, and I feel like sometimes businesses need to shift and just think about it. You're using so much data and you're serving customers. You have so many regulations, you have so much governance in place. How much time do you dedicate to educating people on what you have? Just compare how much education you provide to what you have in place. And education doesn't happen quickly, not overnight, especially when people are facing data privacy or governance training for the first time. It's something undiscovered, something unknown. And that's why you sometimes feel resistance from cross-functional teams. Another thing I feel is that it's now almost the responsibility of privacy experts to make those materials, that privacy content, interesting to people. That's where good design comes into play, because you can think in different ways about how to design your content, your slides, maybe short videos, and it should be on an ongoing basis.
Christopher Hutchins: You know, it's increasingly important as the regulatory landscape evolves towards explainability and transparency. And at the moment, it's not really the federal level that's moving; some states are moving at a much faster pace. Some of the first cases are in California and Texas, for example; they've put some things in place for healthcare specifically that require transparency and explainability in order to actually achieve informed consent. And there are pending lawsuits about this now. So if you're not designing with that in mind, you really need to take another look so you don't end up having to pull something out of production and rebuild it. That's the risk you take if you're not thinking about those things. Fifty states with fifty different approaches is quite possible. It's happened before. So I'd urge people to make sure they're watching for that.
Andre Samokish: Yeah, and one more thing. I just want to add one more thing on this question that popped into my mind. Some time ago, I believe privacy trainings were something that happened once a year, or just onboarding security and privacy trainings. And I feel like sometimes those trainings are mostly about security and less about data privacy. So naturally, people forget about data privacy. There is not huge retention of knowledge about data privacy. So I feel like a shift is required from just onboarding trainings to ongoing privacy and AI governance education. It's a similar challenge educating people about AI governance. It should be small pieces of information on an ongoing basis, well designed. Think about involving designers who will help you, again through cross-functional collaboration, to design your content well, and think about how to deliver it in an interesting way.
Christopher Hutchins: Right. Yeah, I think there's got to be a shift in thinking about this. AI models are evolving and they're being retrained constantly. So we can't really treat this as a checkbox, like we could in some instances in the past, when the policies just didn't change that much, because the outputs that the organization was accustomed to really didn't change that much in terms of how they were being produced. Most workflows don't change overnight, and they never have. But today that's not necessarily the case. So I think it's a really important concept that you're talking about there, because it's got to be thought of differently. We've got to have some kind of process to keep feeding the information, to make sure we stay current and people understand this stuff and know what the risks are.
Andre Samokish: Yeah, that's a big topic, definitely.
Christopher Hutchins: What do you think are some of the common misunderstandings that you encounter? Like "AI is magic," or "They will handle it, we're going to worry about that later," or "It's the vendor, they've got this." What are some of the other things that you run into?
Andre Samokish: I feel like we talked about this a little bit. Even if the vendor is saying everything is okay, go over everything again and involve your teams. Maybe use that for educational purposes. Just do it again and make sure that what they're saying holds up. And if you're planning to use that vendor for the long term, go a little deeper and start with the business owners. Involve responsible people in the process, communicate with the vendor, and sometimes along the way you can uncover something. For example, the business has plans to use the tool for different use cases, and how can you know that without collaboration? How can legal or the privacy team handle that if they do not know the business purposes or plans for those tools, or the specific implications? So yeah, we technically cannot handle data privacy and governance by ourselves. And I feel like when you're collaborating with teams, it shows that you have great communication inside the company. That also adds to your product value proposition: everything is clear even for non-experts. When they talk to prospects, they can mention it and sound well educated about privacy. Privacy is very often about data. Since you have a lot of data, most likely there is some personal data, and people want to know about it. Why not add that to your expertise? We're happy to share.
Christopher Hutchins: We talk a lot about the technology and the data, but maybe let's talk a little bit about the role of tools and platforms like OneTrust versus human process and culture. That's an area that I think is not getting quite enough airtime right now. We're talking a lot about the cultural stuff and the human impacts. I'd love to hear your thoughts about that.
Andre Samokish: Yeah, it's a great question. And I feel like automation, similar to AI, serves one purpose: to help, to save time, to simplify very predictable tasks so you can focus on more strategic things. I love automation, and in particular I work with, and have a certification for, OneTrust. It's very good practice, and it says something about your data handling maturity level. You have one tool, one source of truth about data. For example, about vendors: you have one profile for a vendor, with all risks, all your communication, and all the mitigation measures you implemented, in one tool, in one profile. You can generate reports, and you can have dashboards to see how many risks you have. What is the overall privacy situation? I've found it's very hard to do without a tool, and it definitely saves time. Even non-privacy experts, if they have permission, can go to a specific vendor and take a look. Can we work with this vendor? Do we want to work with them in the future? What is the risk level? How responsive are they in mitigating those risks? It's just great to collaborate on data privacy and AI governance together. Just keep in mind that with a tool like that, you will most likely need to customize a lot of things, because every business is unique and has different goals to cover. So once you've set up the tool and customized everything, you can really enjoy the workflow and automated tasks, and your assessments will go more easily. You will have visibility into what's going on with your privacy and AI governance situation. You can share your practices with potential clients, generate reports, share a certification, whatever. So it's very convenient, and I feel like it really says something about strong leadership support for how you handle your data and AI governance. One thing to consider: I would really encourage businesses to have some data management tool, whichever one you select.
Christopher Hutchins: We're kind of getting towards the end of our time. There are a couple of topics that are really related, but it occurs to me that there are probably some folks listening that are really not quite sure how they're going to embed governance into product and engineering workflows so it doesn't feel like a blocker. That's one side of it. But also there are some skills that I think people should be thinking about and prioritizing as well. So maybe you could talk about those two things. First, how do you embed the governance? But then what are some of the skills that you think need to be developed or acquired if an organization doesn't already have them?
Andre Samokish: Yeah, I feel like almost every business will face this challenge of how to build AI governance. But there is one easy component. If you have privacy and data governance in place, you can transfer many controls from there to AI governance. For example, you can handle your assessments using questions from your previous questionnaires. You can use the same data management tool to handle AI assessments, vendor assessments, and internal assessments. And talking about skills, it's very clear you need technical skills and data privacy skills, like getting certifications from established organizations. But I also feel we're missing something people don't think about enough: communication skills and project management skills. How to communicate with cross-functional teams in the early stage, how to embed a privacy component and an AI governance component into the project scope and project plan, and how to use project management tools to create visibility and communication through shareable documents, dashboards, and everything else. Just to simplify life for non-privacy experts and help them. Project management skills would be very helpful. And communication helps every team, because we will be talking, and it's okay to talk about initiatives at the beginning and give feedback. Communication skills: how to give and request feedback, and how to request help from experts in different fields. It's all about communication, and it's not just something you're naturally born with. These are skills you can learn. Take some courses, get some education on that component. It will help, because cross-functional collaboration really will save time and reduce those blockers.
Christopher Hutchins: That leads into probably one of the biggest and most important points of our conversation. For people who are listening and want to learn more, where would you encourage them to look? Are there communities, certifications, resources? Where would you point them?
Andre Samokish: I'll share my experience, though there are many more resources you can learn from. I got a couple of certificates from the IAPP, the International Association of Privacy Professionals, and they have an AI governance certification as well, so it's not just about privacy. Also, since we talked about automation and implementing tools, I think OneTrust has a lot of educational materials; some of them are free to access, and maybe you can start there. I also personally like how much educational material you can find on platforms like LinkedIn or Coursera. There is so much information, so just try to focus on the specific area you want to work in. Maybe you want to be a security expert; that's a bit different from privacy, even though the two fields work closely together. It's funny, because at conferences I sometimes hear, "Hey, what do you do?" "I do privacy." "Oh, you're a security guy." There are some similarities, but this is where we need to start educating people on even the simple things. What is security? What is privacy? What data are you responsible for? And I think those are great questions to help people understand, for example, why an AI system needs their data and where on a business's website they can read how that data is going to be used. It's about shifting to a culture of safe data, and for businesses, understanding that this can be part of your value proposition. When you have good AI governance or privacy governance, or both, in place, feel free to emphasize it in your value proposition, because it works.
Christopher Hutchins: Andre, this has been a fantastic conversation. I've learned some things. Thank you. I always enjoy speaking with you. Listeners, you'll find all kinds of good information in the show notes, including everything you need to know about how to reach out to Andre if you have questions or just want to have a conversation and glean a bit of his wisdom. I'm sure he'd love to hear from you. But that's going to do it for this episode. Andre, thank you so much for being here. I can't thank you enough. It's been fantastic.
Andre Samokish: Thank you so much, Chris, for having me here. Your questions were right to the point, and I'm happy that you're curious about this; I'm sure our audience wants to know more. And of course, there are experts from different fields who really want to become privacy experts and need somewhere to start. I'm sure they will find an application for their expertise in the privacy field, because it touches so much, and it's also a global, international field. Once a business has a presence somewhere in Europe or wherever, privacy becomes international and global. There's so much to learn about privacy, and now AI governance. So I'm happy to share my knowledge with people who want to reach out. I'm happy to talk.
Christopher Hutchins: Excellent. Thank you so much. That's it for this episode of The Signal Room. If today's conversation sparked something in you, an idea, a challenge, or a perspective worth amplifying, I'd love to hear from you. Message me on LinkedIn or visit SignalRoomPodcast.com to explore being a guest on an upcoming episode. Until next time, stay tuned, stay curious, and stay human.