The Coaching Cafe Podcast
The latest thinking from Australia's leading Organisational Coaching specialist, Open Door Coaching.
Released weekly on Tuesdays, the Coaching Cafe is presented by Dr Natalie Ashdown, an MCC-accredited coach.
Enjoying the podcast? Check out our social media, our website, or leave a 5-star review to help spread our reach!
Part 2: AI and Ethics.
As AI becomes more embedded in the workplace, questions are no longer just about capability—they are about ethics, responsibility, and professional practice.
Following our exploration of AI and coaching last week, this week we turn our attention to the ethical implications of using AI in coaching and leadership.
- What does it mean to use AI responsibly in coaching conversations?
- Where are the boundaries between support and over-reliance?
- How do we ensure that coaching remains human-centred, confidential, and ethically grounded?
These are not future questions—they are current realities.
As coaches and leaders, we are required to exercise sound judgment, maintain trust, and uphold professional standards. The introduction of AI into our work challenges us to think carefully about confidentiality, bias, decision-making, and the role of human insight.
Join Natalie and expert panelist Lu Ngo, Head of Digital Skills Programs, Australian Institute of Management (AIM) as we explore the ethical considerations of AI in coaching practice.
You’ll walk away with:
✔️ Key ethical considerations when using AI in coaching and leadership
✔️ How AI intersects with the ICF Code of Ethics and Core Competencies
✔️ Risks related to confidentiality, bias, and decision-making
✔️ Practical guidance for maintaining ethical coaching practice in an AI-enabled world
Transcripts can be found here:
Thanks for listening! If you enjoyed the podcast, please leave us a 5-star review wherever you listened; it helps promote the podcast to streaming services and other listeners. And give the podcast a follow!
Watch the webinar of this episode on YouTube or read the blog by visiting our website.
Want to join us live every Friday? Register Here!
[Music] Well, it's a very good morning, a good afternoon, and for some of you a good evening to you all. Welcome to the Coaching Cafe. My name is Dr Natalie Ashdown, and I am joined on the line today by the Head of Digital Skills Programs for the Australian Institute of Management. Welcome to you, Lu Ngo, lovely to see you today. Thanks for having me again, Nat. Great to be here. And it's a welcome back; you were so popular last week that we need to carry on our conversation, which I'm very excited to do. Today we're talking about AI and coaching ethics, and what leaders and coaches need to consider when it comes to AI and ethics. It's a big topic and we've got lots to discuss. We have people dialling in from all around the world: Texas, Louisiana, KL, down the road in Hobart, over in Perth, and Tokyo is represented too, so welcome to you all. It's wonderful to see you. Before we begin, let us acknowledge the traditional owners, the custodians of the lands on which we all meet today, and their continuing connection to the land, waters and communities of Australia, and to the lands from all around the world where you're joining us today, either live or on your favourite streaming service. The podcast is going crazy, so thank you so much. We pay our respects to them and their elders past, present and emerging, and to elders from indigenous communities all around the world. I have welcomed back Lu; there are so many things to discuss today, so I would really like to do a bit of an introduction and then hand over to our expert. We want to turn our attention today to the ethical use of AI: in particular, what should leaders and coaches be aware of when it comes to the ethical use of AI, and what does good practice look like to mitigate the risks? For those of you that are new, we always say welcome. We are all about creating shared learning experiences and having thought-provoking conversations.
And if you are here to earn ICF CCEs, then yes, they are available at the end of the session if you are listening live. If you are returning, welcome back. You know that we love this time; it's our time out for our own professional development. Please feel free to interact with us via the chat box, and we'll pick up your questions and comments as we go along. So welcome to our community, welcome to our thought-provoking conversations, and welcome to one of the biggest topics going around right now, not only in coaching but globally. Let's first recap from last week. We talked about how AI is being used in coaching at the moment: in content writing, which is pretty obvious; in virtual coaching assistants, which are growing in sophistication; in data analysis, which is being used for metrics and reporting but also for themes, with deeper conversations and better outcomes being driven through sophisticated data analysis; in scenario simulations, which are becoming very popular, using AI agents for coaching; and of course in self-coaching and goal attainment. If you pick up last week's recording, we showed you a few examples of these, and Lu gave us insights into a number of the different ways that AI is being used in coaching. So this is my challenge to us all, and it's my personal challenge as well: that we do not keep our heads in the sand like an ostrich. We acknowledge that we're not experts by any means in the field. What we really are trying to be is explorers and experimenters, and we really want to keep up, because we need to. And one of the areas where we do need to keep up is the ethical considerations, which is why I welcomed Lu back today. When it comes to ethical considerations and the boundaries, the International Coach Federation has done a lot of work in this area.
You can go to coachfederation.org and look for "Embrace the future: ICF's AI standards for coaching". So they have released standards. These standards are being updated, because, for those of you that can see the screen, they were introduced in late 2024, and so much has happened since then. But they are putting out standards for the use of artificial intelligence, particularly for introducing it into coaching and coaching frameworks. What we want to have today is a more personal leadership and coaching conversation about what we should be thinking about as leaders and coaches. So alongside the International Coach Federation's standards for the introduction of artificial intelligence, we want to have a more personal conversation, and that's where I thought, let's bring back Lu from the Australian Institute of Management and really get your insights, Lu. With that in mind, I'm going to stop sharing so we can really dig into our conversation together. Fantastic. Let's do it. Okay, so Lu, a big question, and I think: where do we begin with such a huge topic? Perhaps we can begin with, Lu, when you think about the ethical considerations, what should leaders be thinking about when it comes to AI? Can you kick off the conversation for us? Well, actually, firstly, I forgot last week: can you introduce yourself and tell us why it's so important to have you on the line with us? Tell us about your role, and then perhaps let's direct the conversation to what leaders should be thinking about when it comes to AI. Thanks, Nat, for having me back. This is really exciting. For those of you who I haven't met yet, nice to meet you all. I am Lu from the Australian Institute of Management. I currently run the Digital Skills portfolio that is just kicking off at AIM. My role mainly focuses on research, really understanding what Chief People Officers care about, to then inform our training.
It also has to align with organisational challenges and where we're all heading, particularly in Australia. But I think these concerns are actually global. We have found in our content and our research that a lot of the things we talk about are pretty much global: the concerns for leaders everywhere at this very moment are the same, especially with the workplace and the workforce changing so much. So I would probably preface this by saying that I'm not an expert in digital skills, because no one actually is; it's such a broad space. But what I am an expert in is understanding the needs of organisations in learning and in training their staff, and understanding what leaders are struggling with the most, especially executive teams, to then help them with solutions, especially in digital skills, which is a big topic at the moment, like Nat just mentioned. Yeah, and just for those of you who don't know, Open Door Coaching is now part of the broader Centria brand, and AIM is our larger sister company now as well. So, when you think about leaders, and you think about AI, and you think about ethics, what are some of the ethical considerations that come to mind? Yeah, this is going to be a long one, because I think we'll have to set the scene first. Just yesterday we ran a Responsible AI Leadership webinar at AIM, and, as I mentioned, we had over 400 attendees. We ran a little poll in the session to understand what leaders in Australia are currently most concerned about when it comes to AI, and then we'll go into the ethics part of it. The top two concerns came out clearly: the first is workforce skills and capability.
And the second is managing risk and governance, which is related to the ethics we're going to talk about today. A lot of organisations are in the early adoption phase, and usage is evolving. So when it comes to ethical considerations, we have to start by understanding how AI is being used in the first place. There's a lot of what we call shadow AI use, where your team goes about using AI without you knowing, or without mentioning it at all. And that's where the risk and the ethical considerations come from, because I'm sure people dialling in today would not be unfamiliar with the recent Deloitte case. Not that long ago, there was a really big government report that Deloitte delivered, and AI actually hallucinated the sources. When you click into the report and go through the references, some of them don't exist. I think that was about a $400,000 project. So you can imagine, at that scale of a problem across organisations, if this is happening, and there's also shadow AI use, and you're producing not only internal reporting but also work for clients, what does that mean for us in terms of ethical considerations? Well, it means we don't know if we are doing the right thing. We don't know if we're producing high-quality outputs. We don't know if we are properly checking throughout the process of producing that output, because there are many steps to get to a client output. Another concern would be privacy, because a lot of organisations are starting to adopt AI, and I'm sure in coaching too. Client data, internal information, where is that going? In Australia, we are no strangers to the recent data breaches at some really big names like Optus; I was actually one of the people whose data was breached. So that is a very big concern: especially now that AI is being incorporated everywhere, what does that mean for our information?
So leaders think about the same thing for their organisation, and the larger the organisation, the larger the risk. Yes, and I can definitely relate to what you're saying there, Lu, because I was in a seminar just this week with Chief People Officers, and the big discussion around the table was exactly what you're talking about: the risk, the governance. Are we uploading company information and company data? If we are, where are we uploading it to, and what are we using it for? There was a lot of discussion around putting boundaries in place. I hadn't heard the term shadow AI before, but I can definitely relate to it, because the discussions I was having were around those boundaries. If people are using it, we don't want to restrict them or hold them back, but what are we putting in place in terms of risk and governance? Very interestingly, I asked the question about who's responsible, and a couple of the organisations at that round table discussion said the board, which was so good to hear. Meaning the board is taking responsibility for the ethical use of AI; that's how seriously some organisations are taking this. Yeah, it really is. And you know what, I think another thing that boards and executive leadership teams should really think about is fairness and bias. This is a big one, and I want to separate it from the ones we just talked about. We've been having a learning week here at Centria this week, and I'm proud to say our team is developing all these digital skills solutions and courses. I took one of them this week, the AI for Productivity course, and it was really eye-opening to hear different perspectives in the room, one of which was hiring and applying for jobs. So when it comes to fairness and bias in organisations nowadays, this is only one part of it, right?
Let's just say the first step you take into an organisation is when you go through the recruitment process. A few years ago, I don't remember exactly which year, Amazon used AI in hiring to screen through all the profiles based on historical data. Now, this was actually not fair to the candidates, because most of the people in the historical data were male, and so most of the female candidates were screened out. If you go through this case study and read it a bit more, it's quite fascinating. They introduced it and then had to take it back, because it obviously introduced bias and unfairness into screening and recruitment. And then if you think about the future of work, there will be a lot of AI usage that leaders introduce into organisations, for things like assigning a stretch project to a team member, or appraising their future potential in the organisation based on historical data. Is that going to be fair? Are you actually capturing all the data? And if we're just basing decisions on historical data, what does that mean for people who come from backgrounds that just didn't exist in the organisation before? Things like that raise concerns and questions around the fairness and the bias that might happen if we start adopting AI without thinking it through; I think that would fall into the data considerations, your data set and your data structure. So it's a very big topic, and I often think about Gen Z, who are really struggling to get into the workforce. This is a very big concern for them, because most of them are saying, I can't even get through the screening; AI just screens me out. So on the one hand, it really helps to speed up the process for organisations.
But on the other hand, is it fair? Is it actually helping everyone collectively? That's another big question that leaders should consider. You're listening to the Open Door Coaching Coaching Cafe podcast. For more information on programs run by Open Door Coaching, head to our website at opendoorcoaching.com.au. Now back to the podcast. Yeah, thank you, Lu. Someone has asked whether the recording will be available, and it will: on our podcast and also on the blog. There are so many questions I have, and I'm taking notes already, so thank you so much for your generosity in sharing. You've given us some really thought-provoking things to think about already. What I was thinking is that it's so important for coaches and leaders to be aware of this, because this is the context within which we are coaching. We need, as coaches, to be able to ask questions about ethics and the ethical use of AI if the leader has brought that topic up. So if it's counterpart- or coachee-led, we need to be able to ask those quality questions that come from knowing the context and the landscape, what's going on in the world, and keeping up with AI. And that's what the different examples you've been sharing do: they generate questions for me, Lu. Questions like: to what extent do you think the process is fair? To what extent is the process equitable? How would you rate the equity or fairness of the process, if that's the topic we're talking about? How are we getting the best candidate? How are you ensuring the best candidates? It's the knowledge that you're giving us, Lu, I think, that informs the quality of our questions if we're coaching others or if we're leading as well.
Yeah, absolutely, especially as leaders. If you're coaching a team member, oftentimes nowadays, even in my own conversations with my team, I'm starting to adopt some of the coaching practices that I learned from you. And I find it really useful to think about, for example, if we used the GROW model that you taught us, what are the options that we have, and if one of the options that comes up is AI, what are some of the follow-up questions we have to ask back? Because oftentimes the solution nowadays is, oh, let me use AI for that. Sure, it's great, right? We're all starting to get on the same page, but what does that actually mean, and in which contexts can we use AI? Mm, and you talked about augmenting last week, but just quickly give us a snippet of what you mean by augmenting. Yeah, so last week we touched on augmenting coaching practices, and I understand it was probably a relatively new concept to a lot of us here. So to recap, what I meant by augmenting is that you don't just use AI here and there for admin tasks; you don't just grab it and go when you need it or when you feel like, oh, maybe it can help me here. Instead, you zoom out and take an audit of everything you do in your coaching. Yeah. You think about your coaching as a business, and it often is a business, right?
So you go through all the processes and the steps you have to take: mundane things like capturing client data, contacting the client, booking time. I know a lot of coaches are using tools to book time and save that back-and-forth, for example; that is adopting AI for productivity. Whereas augmenting is when you think about the whole process of setting it up, following up, and then continuously developing exercises, so that outside of the coaching session you can keep the conversation going and allow the participants, or your mentees, or however you call them, I'm not really sure of the terminology, but anyway. Yeah. So the people that you coach can then continue on, and it also helps you capture the data. But then, because we are talking about ethics and privacy, I do have to say this is where it comes in. When you're thinking about augmenting your coaching, you also need to think about what I would call your decision blocks: at each decision point, what are the key things you have to consider throughout the process of augmenting your coaching? If you're introducing AI throughout the process, what would that mean for the data and privacy of your client? What are the things that you will never introduce AI into when it comes to your coaching? And if you ever talk to a vendor about certain steps of the process where they can help you, and that vendor uses AI, what are some of the questions you can ask them? What should be in the contract to make sure you protect your clients? Yeah, there's so much there, and those are all great questions. So I'm really hoping that coaches and leaders who are listening might listen back and write down all these questions, because you have turned to a very important topic: the use of client data.
We can use AI to review transcripts, and last year I showed our audience how to do that, actually. A funny thing to share with you there, Lu: I actually got AI to rate a transcript which for me was clearly rubbish, because I created the transcript and I was not doing good coaching. I was jumping into the space and cutting the person off, so I produced quite a rubbish transcript on purpose. I uploaded that to AI this time last year, so a lot has happened in the year, mind you, and the AI chatbot rated my coaching really highly. I said, "Ooh, I think you might be mistaken," and it said, "Oh no, Nat, congratulations on your coaching." So things have evolved so much. But we can use AI to review transcripts. We can use AI to listen to calls. We can use AI to improve our coaching practice. We can use AI as an agent to practise a roleplay. Some people take notes and can use AI to store their notes. So, Lu, what practical advice do you have for us as leaders and coaches when it comes to thinking about all those different uses and the ethics of all of that? Oh, I'm smiling, because this could be a whole webinar in itself. I know. I'll have to be brief. Just a few things I picked up on from what you said before. First of all, when you uploaded the transcript into AI a year ago, you have to remember, let's just talk about the foundational knowledge: generative AI is basically built around LLMs, large language models. They're trained by the major, what we would call frontier, model makers: OpenAI for ChatGPT, Anthropic for Claude, and Google for Gemini. So they have been trained on a certain set of data. Most of these models, when you talk to them, especially if you think back to last year or the year before, are so positive.
They love everything you produce, to the point that everyone starts saying, why is it just complimenting everything I say? You kind of want it to challenge you a little bit, but that's hard, because whatever you say to it, it will respond with something like, that is a great idea, I think this is fantastic. And it still does this; it's a little less annoying now, but it still does it. So what is the data set it was trained on, and how was it trained? That's the big question. We actually don't have a lot of insight into that, especially as non-technical people; if you don't dive deeper into that world, you probably don't know. And it's something to be mindful of, because are these models actually trained on data that would help coaches in particular? First of all; and second of all, if you think about it, coaching is so personal. You wouldn't find a lot of actual real coaching data they could train on. That's probably why it came back to you and said, that was great. That's one. And second, there is something called natural language processing in AI, where it keeps training to mimic human voice and understand us better. Obviously it's made headway in a year, so it's probably better at that now. But when you think about getting it to rate a transcript or a conversation, even one you type out, there is a big question: does it actually know the human context? Probably not. Coaching, I think, is really personal, really niche and unique. So any open model like ChatGPT, for example, wouldn't be able to do what you needed it to do in that sense. The second thing I will say is actually really interesting.
When you upload anything, you have to know that if you don't have an enterprise plan, it is going into the AI's data. Remember, I told you large language models get trained on data. So if you're uploading anything that you produce with your client, actual data, that's going to the internet and it's going to live there forever. Be really mindful of that. I also had this conversation with our team and other colleagues at Centria recently about using voice chat with AI, or even uploading audio; that I would recommend you don't do. Yes, that's right. But coaching, that is what we need: I think coaches need the feedback on the actual conversations. So the practice you mentioned is really good if you can roleplay with it. But anything like a coaching conversation that you need it to rate, I would be a bit hesitant about, and think about what it actually means, because obviously it cannot replace coaches, that's for sure. Also, there might be particular organisations or AI models that can really help with coaching specifically; that would be the other thing to consider. But then again, it comes back to the question we were just talking about: data privacy. How am I going to actually use this? What level of information do I want to disclose? Those are the questions to ask. I think they're very good questions. And Lu, again, I think about those questions in the context of us personally as coaches and as leaders, and then also as excellent questions to be asking of leaders, because they inform the coaching conversation when the time comes or if the opportunity arises. So, yeah, there's a lot to think about in this space. What I like is that we can tap into people like yourself and explore these things.
For me, it's that we do need to be having the conversation; I think that's the most important thing. Definitely. Yeah, I think, especially when you were introducing the new AI standard by ICF earlier, with, if I remember correctly, the actual considerations and boundaries, this is really great, right? I think coaches need to keep the conversation going and continuously upskill themselves to understand how it actually works, and what the key considerations are beyond this, because this is an introduction. I think it's called a standard, but the conversations you're going to have, especially around coaching practices between different cultures, are something to share and keep going. Yeah, absolutely. And Lu, because you have been so involved, well, you are leading the field here in the development of courses, can you tell us quickly about AIM's AI Essentials for Business and Responsible AI Leadership courses? Just give us a 30-second overview, because we're right on time. You said you've done the course yourself and have attended more courses this week, and I need to book myself in as well, but tell us quickly about the courses that AIM has developed. So they serve two different audiences. AI Essentials is more for individual contributors up to middle managers who want to upskill themselves and understand the basics of AI. This is where a lot of people struggle, because they don't even know where to start: what does AI mean? And artificial intelligence keeps evolving; how do I keep learning to keep up with it? That's the idea.
And then the Responsible AI Leadership course serves a slightly different audience of senior leaders up to executives, and we do customised solutions for this as well, to help senior leaders in organisations really understand the considerations we touched on a little earlier today, because this space is continuously evolving and a lot more organisations are adopting AI. So they serve slightly different audiences and different purposes, but the key here is continuous learning. Even for us, developing the courses, we see that the insights change all the time, so we have to continuously update and support our facilitators to deliver the most up-to-date insights possible to help our learners. Hopefully I didn't speak too fast; I'm mindful of time, but that's the gist of it. Yeah, absolutely. And of course, you can reach out to us at support at Open Door Coaching and pick up a discount on those courses, still available this week. So thank you, Lu. I'm hoping we'll be able to get you back to share even more, because we do want to be explorers and experimenters, and we do want to keep up and understand the world and the context within which we're coaching. I want to thank you so much for the last two weeks, because you really have opened up our minds to what's possible in that regard. So thank you so much. My pleasure. Thanks for listening to this episode of the Coaching Cafe Podcast. You can watch the full video of this podcast on our website; I'll put a link in the show notes. We'll see you at the next Coaching Cafe. (upbeat music)