Conversations for Leaders & Teams

E89. Responsible AI for the Modern Leader & Coach w/Colin Cosgrove

Dr. Kelly M.G. Whelan

0:00 | 34:36

Explore how leaders and coaches can adopt AI without losing the human core, turning compliance and ethics into everyday practice. Colin Cosgrove shares a practical arc for AI readiness, concrete use cases, and a clear view of risk, trust, and governance.

• journey from big-tech compliance to leadership coaching 
• why AI changes the leadership environment and decision pace 
• making compliance human: transparency, explainability, consent 
• AI literacy across every function, not just data teams 
• the AI leader archetype arc for mindset and readiness 
• practical augmentation: before, during, after coaching sessions 
• three risks: reputational, relational, regulatory 
• leader as coach: trust, questions, and human skills 
• EU AI Act overview and risk-based obligations 
• governance and accountability

Reach out to Colin on LinkedIn and check out his website: Movizimo.com


Support the show

BelemLeaders–Your organization's trusted partner for leader and team development. Visit our website to connect: belemleaders.org or book a discovery call today! belem.as.me/discovery

Until next time, keep doing great things!

Colin’s Background And Mission

SPEAKER_00

So, welcome to Conversations, where today we have Colin Cosgrove, an AI coaching consultant and founder of Movizimo, working with purpose-led organizations to harness the power of AI ethically and effectively. With over 17 years in global tech, including 11 years at Google, Colin brings deep expertise in scaling innovation responsibly. He led high-performing teams through major regulatory shifts, including the EU's 2017 shopping judgment, and helped launch Google's first AI ads products in Europe in compliance with EU law. Welcome to the show, Colin. What's new in your world today?

SPEAKER_02

Thanks so much for having me, Kelly. I'm excited to be here and really looking forward to this conversation. What's new is continuing to get deeper into the whole area of AI in coaching, and how in particular we can support leadership development in that space. So that's a big area of focus for me right now. I'm launching a new community in the next couple of weeks, so people can get in touch with me at the end; I'll give details on how you can find out about that. But yeah, it's all happening.

SPEAKER_00

Excellent. Well, we met... well, first of all, I am wearing green. You're from Ireland, so I am wearing green on your behalf.

SPEAKER_01

Appreciate that. That was uh one of my top requirements.

SPEAKER_00

Of course. We met about a year ago in a training about the metaverse, and it was so interesting. But as I read in your bio, you are in such an interesting niche. So I would love to hear: how did you land there?

From Compliance At Google To Coaching

SPEAKER_02

Yeah, I mean, maybe I can take a little bit of a step back in terms of some of my earlier work, and share a little bit about how I got into the whole AI and compliance field initially. Because a few years ago, I actually thought topics like data privacy and bias were really hard to get to grips with, and even, dare I say it, boring to some extent. But through my work as a leader at Google, I started to get exposed to some interesting related topics in this space. I became more and more fascinated by how big tech was perceived externally, from a competition point of view, for example. I was also always really inspired to take a user-first approach in all my work, and the role of compliance around technology in protecting that is actually so important. So I eventually moved myself onto a team where regulation and governance were critical; it was essentially the strategic function of the team. And you mentioned I had involvement in helping lead a compliance function for the shopping ads business, ensuring compliance with a big shopping ads directive in Europe. I was also involved with leading the rollout of Google's first AI ads product. One of my key learnings from all of this was: these topics are never easy, but we can make them easier. And personally, I feel in many respects we have a duty to make them easier. That's a mindset I've tried to bring forward since leaving the tech world and launching my own solo leadership coaching practice. I've tried to apply a lot of that in areas such as the growing development of AI in coaching, and how coaches can support AI and leadership as well, with the ultimate goal being to support the development of agile, literate, and tech-savvy leaders in this new AI reality we're in. So I guess that's a little bit of the journey, I suppose.

SPEAKER_00

Yeah, well, thank you for sharing that. It is an interesting journey, and one that many people probably don't consider; when they think of AI, or tech, they're never really thinking about the back end so much, but that was really the work that you were doing. And so I would love to hear more. You mentioned leaders and the work you're doing in leadership development. So when you think about the current environment that leaders are operating in, what is important now when it comes to the work that you're doing? What are you seeing, and what's important there?

The New AI Leadership Environment

SPEAKER_02

Yeah, sure. So we're now in this environment where AI has really changed the game. It's no longer just that VUCA environment, which many of us know: volatile, uncertain, complex, and ambiguous. We really have algorithmic added to that as well, because AI is changing the game. And we have this dual scenario where data and output arrive instantly, but there's a lag with our judgment and meaning-making, and how we align with our values and our purpose. And this could be at both an individual and an organizational level. So these are really important questions. Then, as well, there are all the various regulatory frameworks governing AI. In Europe, for example, we have the AI Act, which can confuse, but really must now be part of the measurable mission, and no longer just consigned to compliance departments. Leaders need to take on board a lot of these topics. So there are a lot of different dynamics at play, including many I haven't addressed here, but those dynamics are there for the modern leader. Really, within that context, it requires the leader to create their own form of compass, with the right supports to help them navigate that. And yeah, I'm happy to share a little bit of my thoughts on that as well.

SPEAKER_00

I'd love to know, because in Europe it's a little bit different. So let's just say a leader comes on board; this might be something new. Well, it's probably new across the board for leaders to have this compliance, right? So even helping them understand why this compliance is so important, I imagine, is something you would be able to help leaders with. Is that correct?

Why Compliance Protects Human Rights

SPEAKER_02

Yeah, absolutely. And I think the point on the why is critical here, because we can all have arguments around which legislation is better or worse, and there can be a lot of restrictions that certain legislation provides, but ultimately it's all there to protect our fundamental human rights as citizens, regardless of the jurisdiction we're in. And how that manifests for a leader in the work environment is that they're handling their teammates' data, or, for coaches, their clients' data. So how are we handling that responsibly and with care in this new world, with more and more AI coming into the relationship between leaders and teams, and coaches and leaders? How are all of the various parties being transparent about the use of that technology? And what does that look like in the contracting, whether that's one-to-one meetings or a meeting between a coach and a leader? How is that transparency provided? And how literate is the leader? In other words, how well do they understand both the benefits and the risks of the systems they're using, and can they easily explain that to the people they're working with? Because these are all mechanisms I firmly believe the modern leader needs to take on board to be trusted in this new world.

SPEAKER_00

Now, would that fall under like if I'm thinking of a division in a corporation, would that fall under tech? Would it fall under HR? Something in between? Do you happen to know?

Literacy Across The Organization

SPEAKER_02

Yeah, good question. I think there will always be those functions around tech, HR, and compliance, and governance will have specific responsibilities with regard to technology and how it's deployed, but I think it will permeate at every level. So I don't think it will be okay in the future for the leader to say, you know, I'm not sure about the data I'm using here, whether it's compliant or not. It's really on everyone to be literate and understand. So I think we're shifting from this world where all of this sits in one box in a different part of the organization, to the leader actually taking on this responsibility and the understanding of the compliance of the tech they're using, and being able to ask the right questions back to their providers to make sure they're really standing up for what's right. If that makes sense.

SPEAKER_00

So, what are maybe some of the ways that leaders are taking advantage of AI? Like in a good way, hopefully, right?

The AI Leader Archetype Arc

SPEAKER_02

Yeah, it's a great question. And to take a step back, firstly, I think where leaders really need to spend time, and also be supported, is in exploring what their real relationship with AI is. What I mean by that is: what is their mindset starting point, before we even get to the technical relationship? So I created this concept of the AI leader archetype arc. A lot of A's in there; I need to think about that. But it essentially looks at this idea of how we navigate between zones of certainty and zones of curiosity. So, what do I mean by this? The zones of certainty start from that point of, you know, I know what's right. On one end of the scale that could be complete disdain, just slamming the door on AI: it's not for me. Or, say, more confident dismissal, or even skeptical tolerance, which is: okay, I'll follow along, but I'm not really sold on this. So there are all of those areas, and, by the way, you could apply this on the other extreme too: somebody on the other end saying all forms of AI are amazing, and I'm just fully bought in. But then, if we expand that arc out a little, within the zones of curiosity it's around things like cautious openness: maybe there is something here; how might I explore this a bit more? Moving into genuine interest: what could this actually add to my business or to my practice? And how I help people practically leverage this is really spending some time on: where do you see yourself in this arc? What are the reasons for seeing yourself in that position? And how do you feel when you are there?

And just not jumping too quickly ahead, really exploring that relationship, and then eventually thinking: okay, what might it look like if you were moving from cautious openness to genuine interest? What could that open up for you? So I think that can be quite helpful for understanding, firstly, what that relationship is, and trying to uncover whether there are any hidden biases or experiences that might be influencing that perspective. Then, secondly, it's more practical: thinking about it from a productivity point of view. I've done a lot of work with coaches on this topic specifically, and I think about it from the perspective of before, during, and after the coaching session: what are some of the typical use cases where you could responsibly get some AI support? Some of those might be brainstorming a new coaching plan for your client and getting some AI companions to help and support you in that journey, or, after the session, supporting your reflective practice: how did I do as a coach, and what were some of the insights? There are various tools that can enable that. And during the session, you have AI transcriptions and others as well. But all of that with a responsible lens. So as you're thinking about integrating more and more of this tech into the journey, really think about: how transparent have I been on this with my team member? How do they feel about this? Having that open conversation not only builds trust, which is really going to be at a premium as more and more AI comes into the mix; from a leadership point of view, that's really what I'm going to be measured on a lot. So it builds trust, but it's also ethical, responsible, and, in many cases, legal.

Again, in Europe, you are obliged to be transparent and to explain the systems that you're using.

unknown

Right.

SPEAKER_00

Yeah. And to go back to what you said about your arc, it all comes back to what I was thinking about while you were talking: it always goes back to readiness with people, right? Are they ready to even move into it? And if we move too quickly, if we don't jump back like you did with the question and talk about that readiness and where they land on that arc of yours, I think we do them a disservice when we're working with leaders. It's about really understanding where they are in that moment, and not missing that.

SPEAKER_02

It's a great point, exactly. And that's the purpose, really: to meet people where they are and move forward from that point. Or move back, by the way, for that person who's fully bought into AI: what might they be missing by moving too fast? What questions have they not asked? So yeah, it's so critical to go through that process. And I think, coming back to some of the fundamentals for all of us who are coaches out there: you can't just tell people this stuff. For that longer-term buy-in to happen, it has to come from within, and ideally be supported by a coach helping you through that as well.

SPEAKER_00

Yeah, definitely. Let's see what else we have here. What are some of the risks leaders must navigate during this process?

The Three Rs Of Risk

SPEAKER_02

Yeah, well, we've touched on some of this maybe, but to bring it together a little bit, I think of three Rs, really, in this regard: reputational, relational, and regulatory. So what do I mean by those three? Reputational: the more you use AI without those topics like transparency and explainability, the more it undermines the integrity and the trust of the relationship with the people you're working with, and that can ultimately lead to some reputational harm or damage. Relational: that trust is so important in forging those relations, especially when we think of really human-led areas like coaching, where the success of coaching has been built on that human-to-human relationship over time. So if we let AI mediate too much, or come into those forms of communication without human oversight being very present, that trust can quickly get eroded. And then finally, regulatory: non-compliance is essentially a legal and operational risk if you and your teams are not using it in the right way. So those are the risks, and then how we can move forward out of that is what I would call less about the stick and more about the carrot, which is risk curiosity. In other words, what do I mean by that? It's not fear of what could go wrong; it's a genuine curiosity about what is hidden. So it's shifting that dynamic a bit. It's just asking those questions: What data is this AI tool learning from? Who is accountable for it? Where might some bias appear, and how do I address that?

So it's asking those questions to uncover, not because you have to, but because you're curious, and that curiosity is going to be so critical among the skills leaders require in the years ahead.

SPEAKER_00

That's amazing, because when I think about leaders, they already have so many responsibilities, and now they have this other compliance measure that they not only have to understand and perhaps train their people on, but then perhaps they have to be this coach as well. So, leader as coach. How does leader as coach work in this dynamic? How would you see that panning out in the workplace?

Leader As Coach In An AI World

SPEAKER_02

Yeah, I think it's a critical point. And it comes in a context where there are genuine fears among people, whether I'm a leader or a coach, around what parts of my role are going to become automated, and the fears that surround that. But to the point you just touched on here, I genuinely believe that for both leaders and coaches, responsibility is increasing, not decreasing, in this new environment. And the skills that really matter as AI becomes more involved in these decisions, whether that's deep listening, powerful questioning, or transparency with trust built in, these are very human skills, and skills we associate with great coaches, right? So I definitely think we're moving that journey forward, where we're adding extra layers to what's required of coaches, and that's adding a really important dynamic to how leaders continue to develop. Because a lot of the old-school leadership tactics of command and control are moving to one side; it is those human coaching skills that will really need to come to the fore.

SPEAKER_00

Are you finding in your practice, either within organizations or with coaches, that they're using bots to kind of supplement different coaching sessions, or however you see them maybe being used? I guess that's the question, in your world.

AI In Coaching vs AI Coaching

SPEAKER_02

Yeah, so we're seeing it happen, and in that whole area of AI and coaching there's a lot going on. I split it into two: you have AI in coaching, and AI coaching. AI in coaching is where you're using techniques to augment or support the coaching process, whereas AI coaching is where you actually have, in some cases, fully fledged coach bots taking on the role of the coach. So you have the whole spectrum and everything in between. But I think, typically, in how coaches are using it at the moment, there's a lot of anecdotal evidence that using AI to augment the coaching journey in strategic ways can be quite effective. Whether that's taking notes of my session while I stay a lot more present with the person I'm coaching, which has real benefits, or saving some time on pre-session prep and admin, so that I can be more focused on the growth of my client. So yeah, it's definitely coming in more and more, but the argument I would keep making in all of this is that it should be done hand in hand with the client, and with the teams where this is being introduced, because that buy-in is so critical and so important to establish that trust. So indeed, the takeaway here is that coaching isn't just about development conversations anymore; it's about how it can be a strategic partner in this new responsible-AI world that we're all operating in as leaders.

SPEAKER_00

Gosh, there's so much there. There really is so much to unpack. A couple of years ago, I ran a survey on AI, and I'm getting ready to deploy it again, because I want to see what the difference is from the first time. Even though AI has been around for a long time, the whole boom a few years ago was like, let's just see where people are. And I'd love to now see the difference between a couple of years ago, when people were really afraid of using AI in a lot of different components, and what it could look like now. So that's going to be interesting. But before I give you the last word, is there anything else that's really important that you feel we didn't talk about that you would like to add?

Make Space For Open Dialogue

SPEAKER_02

No, I think what is important is that people feel safe and comfortable having these conversations. Because what can happen, especially within organizations at times, is that there's a rush to adopt, because that's what's been cascaded and what is needed, and then there can be concerns that just aren't highlighted because the forums aren't there. So really it's just a call to organizations in particular to make space to have these conversations, because in this rush for efficiency and competitiveness, and to lead innovation within whatever sector you're in, there is a risk that that part gets compressed. And it can be counterproductive in the end, right? Because people hide their fears, and maybe they don't use the tool they're supposed to use in the way they should, because of whatever fears are underlying there. So I think it's about ensuring that open human conversation is happening. And again, going back to those points of transparency and explainability: these need to be principles that are modeled at every level within the organization.

SPEAKER_00

You know, I am curious about one other thing, because the EU does have the compliance. Do you see a future where it could be so stringent that it outlines what is okay to use and what is not okay to use, as far as tools and whatnot?

Inside The EU AI Act

SPEAKER_02

Yeah, so there are various different pieces of legislation. How the EU AI Act approaches it is more of a risk categorization. It informs the types of systems that would fall into various risk buckets, and depending on where they fall, that informs the type of governance that's required. So to some degree there is some prescription in what that looks like, and then it's really for the system developers to figure out: well, what does that mean for me, in terms of where my tool sits? Now, the global context to all of this is that there is a recognition within Europe that there needs to be a balance, making sure there's also competitiveness on topics around innovation and AI. But where I'd be coming from, really, is that it's not an either-or question here. It is possible to innovate responsibly. And bringing it back to some of those practical use cases of, say, AI in coaching, or AI in other forms of L&D and leadership development: when you're looking at the use cases and the tools you're bringing in, you should look in tandem at those questions. Is it being done safely? Are the right processes in place around transparency and explainability? These aren't things that need to come after and add two months, five months to the project. It can be done in tandem quite easily, once the structures are in place and those investments in responsible design and practice are made.

SPEAKER_00

And then I guess one more thing, because you keep piquing my interest and curiosity here. As far as an organization goes, where does the accountability lie? Let's just say I have a team and I find that they are not abiding, that they're non-compliant. Is it up to the organization to define what's going to happen? When somebody is non-compliant, is that already written into the overall compliance?

Governance And Accountability

SPEAKER_02

There is, and there will be, a responsibility to have an AI governance framework that sits within the broader AI strategy. That should clarify who the accountable persons are for ensuring compliance of AI within the working relationships. But again, back to that point: it has to happen at every level. So yes, there should be those broader structures and accountability, but ultimately, from a legal point of view, in terms of who is accountable and who is responsible, the system providers bear the greatest responsibility when it comes to something actually going wrong. But the deployer, in other words, say, the organization using the tool, also has their responsibilities, and that's where they need to put policies in place and ensure that the relevant teams have all the right trainings within the organization. And again, going back to some of the principles here: this is not just about training the data team because they're working closest with the systems. It's making sure that all parts of the organization have the relevant level of training for their business unit. So it needs to be this cross-team, cross-functional strategy.

SPEAKER_00

All right, Colin, I have loved learning from you today: learning about the work that you're doing, the good work that you're doing with your clients, with organizations, and with coaches. And I would love to give you the last words. So what say you?

SPEAKER_02

Thanks so much. Yeah, I mean, look, my last words here: for leadership in AI, and for coaches who support leaders, it's not just about getting better and faster. A lot of the tech is doing that more and more. It's about how we ask better, but crucially more responsible, questions that are human-centered by design. So in all of this context, the responsibility of human leaders, and of the coaches supporting those human leaders, continues to have a greater role. So yeah, absolutely, Kelly, I've loved being here. And if anyone would like to reach out to me further, you can find me on LinkedIn. My website is Movizimo.com, and I have a newsletter there, so people can sign up to the various different initiatives I run through there as well.

SPEAKER_00

Excellent. Well, you heard it here, folks. Colin Cosgrove, AI coaching consultant. Thanks for being with us, and until next time, you keep doing great things, and we'll see you soon.

SPEAKER_02

Thanks so much, Kelly. Appreciate it.

Closing Reflections

SPEAKER_00

Very interesting. I mean, we probably could have spent a couple of hours talking about it and really unpacking it. There is just so much around compliance, especially in your world where you're practicing. And we'll see what the US does, as far as where we go with that. But it is just amazing. I mean, like I said, with that survey, people were so nervous about even stepping into using it. And what does this mean? And you'd see it online: is AI going to take over coaching, and are coaches going to be no more, and all that. And it's like, no, there's always going to have to be that human connection with people. We can augment, and we can help clients in different ways, but it's another tool.

SPEAKER_02

Absolutely. And we're seeing that as well with so many other, I guess, human disciplines coming back. You know, yoga, and people going back into that space a lot more. So I think coaching is a little bit similar. There's great value in that human connection piece, and I think even when it went from in-person to online, people felt something was missing with that shift.

SPEAKER_00

Right, and it's second nature now.

SPEAKER_02

Totally.

SPEAKER_00

It's just, I am rarely face to face, like in the same room with people; it's very different. However, I was virtual before everybody seemed to go virtual too. I was using Zoom a lot prior to COVID, so it was already part of what I do and how I work, very easy for me. I hadn't worked with teams virtually until COVID. But yeah, you just have to go slow. Meet them where they're at, like you said earlier.

SPEAKER_01

Exactly.

SPEAKER_00

Yeah, yeah, yeah, 100%.