Top Voice Podcast with Michael J. López
Each week, I sit down with leading voices in business, leadership, and transformation to unpack the issues that matter most. Together, we explore fresh insights, bold ideas, and real-world stories from the people shaping how we think about change, culture, and what's possible.
As a LinkedIn Top Voice myself and an expert in change and transformation, I bring a unique lens to every conversation—connecting each episode to powerful, science-backed strategies that help individuals, teams, and organizations navigate change with confidence.
Whether you're leading a company, driving culture shifts, or simply looking to level up in your career or life, these episodes are designed to challenge your thinking and expand your perspective.
Subscribe now to join the conversation and never miss an episode.
The Judgment Gap: Why Lawyers Won't Be Replaced But Must Evolve with Colin Levy
Michael speaks with Colin Levy, General Counsel at Malbek and adjunct professor at Albany Law School, about how AI is disrupting the legal profession and what Levy calls the “judgment gap.” They discuss AI’s growing role in performing tasks formerly done by lawyers, the need to verify AI outputs that can be plausible but inaccurate, and how legal advice is shifting toward more complex, solution-oriented recommendations. Levy explains how AI may change what junior lawyers are asked to do, why client counseling and emotional intelligence remain essential, and why legal education should better prepare students to use AI. He shares tools he uses daily—including Malbek AI, Perplexity, Claude (including Claude for Work), and Gamma—and emphasizes experimenting with clear goals.
Timestamps:
00:24 Welcome
01:38 Meet Colin Levy
04:09 AI Disrupts Law
06:26 Trust and Verification
09:20 Training Junior Lawyers
13:48 Clients and Transparency
16:19 Existential Threat Debate
19:45 Emotional Intelligence Edge
24:17 Learning Judgment and AI
27:17 Daily AI Tool Stack
29:02 Key Takeaways
30:48 Career Influence Story
32:24 Where to Follow Colin
33:20 Closing
Connect with Colin:
https://www.linkedin.com/in/colinslevy/
https://x.com/Clevy_Law
https://www.colinslevy.com
https://a.co/d/06HSUhbs
Michael's National Workforce Study on Change Management:
https://www.michaeljlopez.coach/research
Welcome to the Top Voice Podcast, where each week I sit down with leading voices in business, leadership, and transformation to unpack the issues that matter most. Together, we explore fresh insights, bold ideas, and real-world stories from people shaping how we think about change, culture, and what's possible. Hello and welcome to the Top Voice Podcast. It is April 14th, the day before tax day. We're not talking to an accountant, but if you are talking to one today, make sure you get your stuff done. We're here with Colin Levy to talk about AI and the legal profession and what he's affectionately referred to as the judgment gap. I'm excited to dig into that with you, Colin. It's a conversation we started several months ago when we met, so I'm going to give you a chance to tell your story here in a bit. Before I do, for those of you tuning in live, please let us know where you're tuning in from, and leave a question or a comment; we love to take those during the conversation. And for those of you listening on your favorite podcast platform, please be sure to subscribe. Colin, let's jump in. AI is taking over everywhere; it's become part of everything. We were just talking about how we use it before the show. But before we get into our own strategies, tell us who you are, what you do, and how we got to this point in the conversation.
SPEAKER_01: Sure. Well, it's a pleasure to be here with you. Again, my name is Colin Levy. I am general counsel of a company called Malbek. We're in the contract management space; we sell a solution that helps large companies manage everything having to do with their contracts. As GC, I manage the legal function. Before that, I was a transactional attorney for a large number of different types of companies, primarily in the tech space. And prior to law school, I was actually a paralegal creating e-discovery databases for a large law firm, which probably helps explain my long-standing interest in technology and my desire to always be learning and experimenting with tech. In addition to my role with Malbek, I'm also an adjunct professor of law at Albany Law School, where I teach courses on technology and law practice. And in general, I'm on LinkedIn and other platforms helping people learn and grow alongside me about AI and other types of technologies, because the best way to get up to speed, so to speak, is to learn and experiment. So I've focused a lot on helping others learn, and it's been an exciting, ongoing journey. Ironically, before I had anything to do with law practice, I was not a huge fan of technology. I saw it as clunky and, frankly, intrusive. So I've come a long way from those days.
SPEAKER_00: Your own personal journey. We can talk about that if you want; it's interesting. You mentioned experimentation. Listeners who tuned in last week might have heard me talk about a large study I just released called Rethinking Change Management. It surveyed a thousand people across working America to understand what they really need around change at work. We won't go into it in depth today, but since you mentioned experimentation, one of the big stats from that study is that 43% of respondents said the most important thing they need in the world of AI, particularly, is a chance to use it, a chance to experiment. So let's start with that, because the legal profession, of course, is built on decades of information. And yet it's also built on what you call judgment, and AI is giving people the opportunity to experiment with both of those things. Let's start at the beginning: how is AI starting to disrupt the legal profession? What are you seeing? How is it showing up?
SPEAKER_01: It's showing up in a large number of different ways with varying impacts, but perhaps the most impactful way is that it's starting to perform tasks that used to be performed by lawyers. So it's starting to press this question of what it means to be a lawyer, what it means to practice law. That used to be fairly simple to answer: you would provide advice and information to those who needed it to help them resolve legal matters. Now AI is helping with that by providing greater access to information and resources and helping find those answers more quickly. But it's also allowing for the development of broader, longer-lasting solutions for clients. So clients who are aware of AI are starting to wonder: okay, if AI can help you answer specific questions, what about actually building things for me? What about creating solutions to help solve larger issues or problems? Which again goes back to what it means to be a lawyer. So I do think AI is being disruptive in the sense of asking whether it's time to redefine what it means to be a lawyer, and what the tasks a lawyer does are, or should be.
SPEAKER_00: Yeah. I'm not a lawyer, but I know people who are, who went to law school. Even back at the beginning, we had the LexisNexis database and other systems that were part of the legal infrastructure, housing information where people would spend hours, days, weeks finding the right case, the right information. I'm assuming the first thing that makes life easier for a lawyer is getting to it quickly. But to your point, you still have to be able to discern that it's the right information at the right time.
SPEAKER_01: Right, absolutely. That's actually an increasing challenge, frankly. Before, the challenge was finding things that were right on point by searching databases. Now, with AI, it's a question of not always taking what AI provides you at face value, because as with any other type of technology, it can make mistakes. And that's where the judgment piece I talked about before, and keep talking about, comes into play: when AI gives you an output, you need to be able to discern whether it's exactly what you need, and whether it's accurate. That means double-checking and verifying your work, because an inherent characteristic of these tools is their ability to give you plausible but not necessarily completely accurate answers. A lot of people mistake that for a flaw when it's simply inherent to the tools, which goes to figuring out, as someone providing professional services to others, whether the information you're providing is, A, accurate and, B, what the client is looking for.
SPEAKER_00: Yeah. How do we do that differently? Compare the two experiences. Before AI, I'm researching a case or a precedent, working with a client on a particular topic; I compile a bunch of information and then sift through it to come up with a recommendation. Now I'm doing it differently: bigger, faster, stronger. But has the part where I'm making a recommendation changed at all?
SPEAKER_01: As a lawyer, you're still being asked to make recommendations and provide advice, but now the recommendation or advice is perhaps a bit more complex and comes in different forms. It isn't necessarily just, well, the answer to that question is X or Y. It's now, well, to answer that question, we probably should be doing this or that, and here's how we can get there, and AI can help with that, or we can build something to get us there. So the way in which advice and recommendations are given, and the form those recommendations take, is changing.
SPEAKER_00: Yeah. I was just having a thought here. I'm a consultant, you're a lawyer; I'm sure we could spend most of this show telling all the great consultant and lawyer jokes out there, but we won't do that. The thing that's happened in the consulting world is that many of the entry-level positions are being reduced. There are fewer of them, where entry-level consultants were doing the blocking and tackling: the research, the deck building, the writing, many of the foundational tasks that senior-level consultants or partners didn't want to do. Is that trend the same in the legal world? How is it disrupting the pipeline of lawyers, between those who have been here for a while and those coming up now in this new world? Is it changing any of that at all?
SPEAKER_01: It is. I think what's happening is not that there's necessarily less need for entry-level lawyers, but more that they're being asked to do different things, because technology and AI can help do the things they used to do manually. Which also goes to the questions of how we're training lawyers to do work, what we're assigning them to do, and, frankly, how we can train lawyers to use AI in ways that are helpful without completely delegating everything to AI or some other tool.
SPEAKER_00: And what would that look like? Give me a couple of examples, because I'm curious. Again, putting my consultant hat on, I think about the work I might delegate and the things I want a junior consultant to do themselves, because I want them to earn the skill of a given task: research, deck building, storytelling, all big parts of my profession. What does that look like for you? Give me a scenario, or talk to me about a young lawyer coming up. What would you want them to do or not do?
SPEAKER_01: Yeah, so what I would not want them to do: say I give them a task like, hey, can you research this issue and come up with a report on how to resolve it? Perhaps it's a contractual matter, in my world, and there's a problematic clause that we need to somehow find a way to agree to without agreeing to completely. So I need you to research how we get from an outright no to a path forward. What I would not want them to do is basically present that scenario to an AI tool and then just read back to me what the AI told them and say, here, this is what I think we should do. Instead, I would like them to think about the scenario, think about how they would approach it, and then ask the AI: assume you are this person and you need to research this; how would you suggest going about it? Then take that information, do your own research to verify whether it's actually accurate and doable, and apply your own knowledge. As a new lawyer, if nothing else, you have a basic understanding of doctrine. So apply that knowledge to the output and figure out: does this make sense? Is this a way forward? What should we do here? That's really how I think it's best to work going forward, and I've always felt this way about technology more generally: have it augment and supplement how you work, as opposed to taking over everything you do.

In that scenario, the AI is augmenting what you're doing and helping you find a path forward by providing another perspective, but it's not simply handing you the answer, because you're not accepting what you get from the tool at face value and repeating it verbatim back to the partner or whoever you're working with.
SPEAKER_00: Yeah. It's really well said that it's the process of giving you another perspective, not just giving you the answer. I've seen several articles on lawyers who outsourced a legal argument to AI and got in trouble because something was wrong or something was missing. And I think that leads to a conversation about the client relationship. So let's talk about the other side of this, because I would imagine it's changing expectations for clients. Are some clients happy to have AI become part of it because it reduces billable hours and makes things faster? Are some clients skeptical? I saw an article the other day that asked, would you read a book if you knew it was entirely written by AI? And there's AI music. So what's the client experience like on the other side of this?
SPEAKER_01: Just as on the lawyer side, I would say it's fairly mixed. There are some clients who are happy about AI because it provides avenues to get work done at lower cost. On the other side, though, it's likely raising the issue of, well, if I'm paying a human, a lawyer, to do something, I want some transparency. I don't mind them using AI, but I would like to know how they're using it, so that I have a better idea of how this work is actually getting done and what tools are being used. Not every single task, but if you're using AI, what are you using and how? In addition, there's likely also a contingent of clients who are aware of AI and think there should be some degree of usage, but also ask how much you're relying on it. So we're in this interesting period right now where AI, despite seemingly being around for a while, is still fairly new. And more importantly, despite the seemingly day-by-day progress of AI and its capabilities, we're still very early in the journey. AI tools right now are probably as bad as they're ever going to be; they're continuing to improve. We're probably all going to look back on this period in 10 years or so and think, wow, we've come a long way from those days. On the one hand, that's perhaps a slightly scary thought. On the other hand, it's exciting in terms of the progress we're able to make now, what technology is allowing us to do, and the doors it's opening.
SPEAKER_00: Yeah. I had a conversation with Phil Ledgerwood a couple of weeks ago; the title of the episode was How to Face an Existential Threat. He's in the software world, and of course every AI tool now is writing its own software. Where do you put the moment we're in on the existential-threat continuum in the legal world? This has been a sacred space, where lawyers held a unique position in companies and as individual practitioners. Your comment about where we're headed sparked that idea for me. I'd be curious: how much disruption do you think is really happening here?
SPEAKER_01: I think there's a fair amount of disruption happening, but it's not all completely obvious to everyone. On the broader existential question, I would say a couple of things. First, I don't think it's helpful to think in such stark us-versus-AI terms. It's more a matter of thinking about what this means for how I should do my job and what tools I should use, as opposed to, oh my God, this is going to completely take over my job. Because what we're seeing with AI, at least currently, is that it's imperfect, as I mentioned earlier. And because of that, it can't completely do everything we do ourselves; there's just a lot of context and a lot of factors at play, and we're not perfect, and the tools we create aren't perfect either. There's also an undercurrent in some circles of AI taking over and just running the world for us. I don't really see that happening either, because of, A, the imperfections we've already seen, but also the fact that the world is a very complex place and humans are very emotionally driven. Emotions are complex and unpredictable, and AI doesn't handle that particularly well, because right now these tools are based on large language models, which are trained on large but limited amounts of data and find patterns in that data. Emotions often don't present the same pattern over and over again, certainly not on an individual basis. So it's hard to predict what someone is going to do in a certain scenario when they're feeling one way, when they might face the same scenario another day in a completely different mood and react completely differently.
SPEAKER_00: It's interesting that you say that, because in the world of change, where I spend my time, I always say that all of this stuff sounds great until you get real people involved, and we don't necessarily respond the same way to the same stimuli, the same moment, the same pressure, the same stress, the same excitement, pick your experience. So in the world of lawyers, it seems like it's about reserving this piece of your time, skill, and effort for the human side of what's happening: less about the research, beyond checking and verifying information, and more about how I handle my client, help them manage this experience, help them navigate a complex issue. How much of that is experience-driven? Are you trained for it? Does it happen over time through trial and error?
SPEAKER_01: It's funny you should ask it that way, because I don't think law school, generally speaking, in the US at least, does a great job with what one could call the client counseling piece of lawyering, which is a huge part of it. It requires lawyers to essentially be counselors, therapists, advisors. What I mean by that is that lawyers have to see through the emotion, and not ignore it, but address it in a way that allows them to best help their client, and also help their client help themselves in some ways. I think humans have this inherent need to be with humans and have a human element in the mix; if there isn't one, it's difficult, because there's this almost indescribable emotional need humans have to be with others. So I do think lawyers as a whole will need to focus more on that going forward, and be better trained to handle it. I will say that when you first graduate from law school, you almost forget that humans are emotional; you're just looking for the facts. In fact, that's often hard to get from people, because they tell these stories and they're very emotional a lot of the time, and you're trying to cut through it. But if you do that too bluntly, the client is going to view you as a complete jerk, and they're not going to want your help because you're not relating to them.
SPEAKER_00: Yeah.
SPEAKER_01: In any way.
SPEAKER_00: Is there a difference, as you describe clients? In the legal world, clients mean a lot of things: individuals, small companies, corporate entities. Does the AI experience, and this reserving of the unique part of client management, change if I'm a family law attorney working on divorce cases versus working in a corporate entity dealing with other companies?
SPEAKER_01: It absolutely does, in that in family law you're dealing with individuals in very emotionally fraught situations a lot of the time, whereas when you're working for a company, technically your client is the company. Nevertheless, your client is made up of individuals. So the bottom line is that it does change with the specific area of law you're working in, but either way, you have to have a high degree of emotional intelligence to work with people, because in large part, to do your job, you need the help of others, whether that's your individual clients in family law or, in a corporate setting, the other employees. And they're more likely to help you if they can emotionally connect with you and feel there's a connection there, because you've built trust and rapport.
SPEAKER_00: What you're describing, again, I'll compare to the consulting world. It's different, but the same in some ways: when I no longer have to spend time building decks or designing workshops, or I do design a workshop but I'm designing it differently, what it gives me the ability to do, and where my value sits, is what I describe as high-leverage moments. Those are the moments where you recognize, based on your skill, experience, judgment, and perspective, that what this team or person needs right now is X, not Y or Z, and you've got the experience to deliver it. I think that's what you're describing as judgment: what does somebody need in this moment? So how do we build that in the legal world? Is it simply trial and error? Is there training for this? Do you just have to cut your teeth in the moment? One of my favorite phrases is that good judgment comes from experience, and experience comes from bad judgment: you kind of have to mess it up sometimes in order to finally figure out how to do it right.
SPEAKER_01: Yeah, certainly there's a large degree of experimentation and, to some degree, fumbling about. But you can do that safely in some ways; with AI, perhaps in a sandbox or some other controlled environment. There are also a number of different resources out there, online education courses and other things, to give you ways of thinking about AI and interacting with it. I would also say the best piece of advice I can give to folks working with AI is to have a clear idea of your intent. In other words, what is your goal in using it? Is it just to learn and experiment? Fine. Or do you have a more specific goal in mind, something you're trying to achieve? Let that guide your usage of the tool, because, A, it will reduce the overwhelm of using whatever tool you're using, and, B, it gives you something to focus on, so you have a way to judge whether it was successful or not. The more amorphous your idea of success or output is, the harder a time you'll have figuring out, well, that was neat, but was it what I wanted? I don't know. So I do think there's a need for clear intentions. And frankly, speaking through my lens as an adjunct professor, I think legal education as a whole has a strong need for further education and courses on AI, to give students the ability to interact with these tools before they enter the real world, so to speak, and start using them in an actual professional context.

Because at the end of the day, I think the goal of education, like legal education, should be to help law students and would-be lawyers be prepared for the world they're entering, rather than having to fumble about and figure out, well, I'm in this world, but this is not the world I was expecting to be in.
SPEAKER_00: Yeah. What you're describing is the difference between a career based on knowledge or information and one based on wisdom or experience. It's not enough to know what the answer is; it's about understanding the experience of arriving at the answer, or how to choose between multiple potential answers. All of those things really summarize this judgment conversation. We're getting toward the end, and these shows always go so quickly, but I would love to hear: how do you use AI on a daily basis? What are you using? You can share your stack, not for attribution; we're not endorsed by any company, but it would be great to learn a little about what you're doing with it day to day.
SPEAKER_01: Yeah, I use a variety of different tools. I use Malbek AI for contract review and analysis. I use Perplexity for research, or for things I'm missing. I use Claude for Work and the general Claude chat a lot of the time for brainstorming, outlining ideas, helping me think things through in a more systematic way, maybe creating a first draft of something that I then work with and take forward. Those are probably the three types of tools I use on a daily basis, and obviously there are a ton of others. I also use Gamma for creating presentations; I find it very helpful, very quick, actually pretty accurate, and relatively low cost. So those are the tools I use. But I would say to anyone looking at tools they want to use, it really is a very context-specific exercise. I use the tools I use for the type of work I do and the content I create, whereas others are likely using other tools because they're trying to accomplish other things and have other goals in mind.
SPEAKER_00: Yeah. We talked about this; I'm a huge Claude for Work fan. I think it's just a powerful, powerful tool, but there are many others that all do some amazing stuff. Colin, we've talked about a lot here, and I always like to give our guests a chance to summarize. In the world of the legal profession and AI, and this conversation about how we reserve what's left for us as people and as lawyers, and focus on judgment, what are the two or three things you really want people to take away from this conversation?
SPEAKER_01: Number one, always remember that we're imperfect, and those imperfections actually give us opportunities to remember our humanity and focus in on it. Number two, we need to understand that emotional intelligence is never going away, because we're emotional and, many times for better or for worse, make decisions based on emotion, and that is something AI can't handle very well. And number three, in terms of learning about AI, the best thing you can do right now is experiment. Literally just go into a tool and experiment and see what happens.
SPEAKER_00: Those are all great things, and I loved your advice about experimenting with a goal in mind. Pick something you're working on and experiment around a focused goal, because otherwise you're just wandering in the woods without the right direction in mind. I think that's really great. Colin, as you know, we have a closing tradition on this podcast, which I stole from another podcast I love: the last guest leaves a question for the next guest, and it always works out in a really fun way. The question for you, and this is interesting because we haven't really talked about it: who or what has most influenced your career path and the path you chose in the legal profession?
SPEAKER_01: Interesting question. Funny enough, there is someone I once worked with who told me that law school perhaps wasn't the best idea for me. This person definitely doesn't know this, but in some ways that made me angry and frustrated, and it also really drove me to want to prove them wrong. I'm always seeking to improve, and I think that statement, even though it was hurtful and not particularly constructive, helped drive me to do what I do. So I'd say that was the one person, statement, and situation that has stuck with me for a long time.
SPEAKER_00: Yeah. And what a great encapsulation of the difference between AI and judgment: having the answer, being emotional about the answer, and making a different decision because you have the agency to choose a different path. That's beautifully put. Whatever motivation got you here, you're doing a great job and making an impact, Colin, and we appreciate you for it. Where can our listeners and viewers follow you, learn more about you, and follow your writings and posts?
SPEAKER_01: Sure. LinkedIn, obviously, is a great place, under my name. Also my website, colinslevy.com. That's C-O-L-I-N-S-L-E-V-Y dot com. I'm also on Instagram under clevy_law. Those are probably the best places to find me.
SPEAKER_00: Great. We'll put all those links in the show notes, and we always do a clip or two; we'll tag you in those and make sure we get a chance to share with our audience. Colin, thank you so much for an incredible conversation. I know the world of law is undergoing a lot of transformation, and I think it will be great for our audience to follow you, because you're at the cutting edge and the leading edge of this conversation, helping us navigate this unique world of AI and the legal profession. For those of you who tuned in today, thank you for listening and watching; we appreciate you. For those of you listening on your favorite podcast platform, please do subscribe. It's the best way to support the show. And as I mentioned at the beginning of the show, I've just released a new national study on change management across the U.S., surveying a thousand workers, called Rethinking Change Management. It's a really dramatic look into what corporate America needs in the world of change, and what employees need specifically; we talked about a few of those findings here. To download it, go to michaeljlopez.coach/research; we'll put the link in the show notes as well. Colin, thanks again for a wonderful conversation, and thank you all for being a part of the Top Voice Podcast.