The Inner Game of Change
Welcome to The Inner Game of Change podcast, where we dive deep into the complexities of managing organisational change. Tailored for leaders, change practitioners, and anyone driving transformation, our episodes explore key topics like leadership, communication, change capability, and process design. Expert guests share practical strategies and insights to help you navigate and lead successful change initiatives. Listen in for fresh ideas and perspectives from a variety of industries, and gain the tools and knowledge you need to lead transformation with confidence. Explore our episodes at www.theinnergameofchange.com.au, Spotify, Apple Podcasts, YouTube or anywhere you listen to your podcasts.
The Inner Game of Change
E93 - Radical Change Empathy: AI and Ethics in Practice - Podcast With Rebecca Bultsma
Welcome to The Inner Game of Change, the podcast where we explore the unseen forces that shape how we lead, adapt, and thrive in the face of change and transformation.
Each episode is a chance to learn from thinkers, doers, and everyday leaders about what really makes change work — and what keeps it human.
My guest is Rebecca Bultsma — an AI ethics researcher, power user, and someone who lives in that space between awe and dread of what AI can do. Rebecca has built her career helping leaders cut through the hype, face the risks, and still find practical, human-centred ways to use this technology without losing their soul or their job.
In this episode, we explore the hype and the harm, the messy middle of adoption, and the accountability gaps that every business and every leader needs to face. And at the heart of it all, we talk about what it means to stay radically human in a world that is increasingly shaped by algorithms.
I am grateful to have Rebecca chatting with me today.
About Rebecca (In her words)
The honest truth?
I'm an AI Ethics researcher who uses AI all day. Yes, I see the irony. Yes, I'm navigating this contradiction in public. Daily.
I help leaders who are somewhere between "AI will save us" and "AI will end us" find their actual footing. No BS, no fear-mongering, just practical strategies for using AI without losing your soul (or your job).
What I actually do:
Translate tech panic into action plans. I take 20 years of making complex things human-friendly (comms/PR veteran) and mix it with an MSc in AI Ethics from Edinburgh. The result? I can explain why AI is incredible AND terrifying in the same breath - and help you navigate both.
The work:
Keynotes that don't put you to sleep (50+ delivered, people actually stay awake)
Workshops where we actually DO things (not just talk about them)
Executive sessions for when you need to admit you don't get it (safe space, I promise)
Currently obsessing over: AI governance that doesn't kill innovation, helping teachers not fear GenAI, and explaining to boards why "AI strategy" isn't optional anymore.
Contacts
Rebecca’s Profile
linkedin.com/in/rebecca-bultsma
Website
rebeccabultsma.com/ (Company)
Ali Juma
@The Inner Game of Change podcast
Follow me on LinkedIn
Lean in and be a hundred times more human than necessary, because all of your jargon and "here's 10 steps to this", AI can do that now. What it can't do is actually relate to people, have conversations, share on-the-job wisdom and experience, or unique vantage points, or human connection, those soft skills, as we call them. That's where it makes a difference.
Ali :Welcome to the Inner Game of Change, the podcast where we uncover the unseen forces that shape how we lead, adapt and thrive through change and transformation.
Ali :I am your host, Ali Juma, and each episode is a chance to learn from thinkers, doers and everyday leaders about what really makes change work and what keeps it human. My guest today is Rebecca Bultsma, an AI ethics researcher, power user and someone who lives in that space between awe and dread of what AI can do. Rebecca has built her career helping leaders cut through the hype, face the risks and still find practical, human-centred ways to use this technology without losing their soul or their job. In this episode, we explore the hype and the harm, the messy middle of adoption and the accountability gaps that every business leader needs to face. And, at the heart of it all, we talk about what it means to stay radically human in a world that is increasingly shaped by algorithms. I am grateful to have Rebecca chatting with me today. Well, Rebecca, thank you so much for joining me on the Inner Game of Change podcast. I am eternally grateful for your time today.
Rebecca:It's my pleasure. It's nice, we were able to line up our time zones.
Ali :Thank you so much. I am calling you from Melbourne, Australia, and you are in Canada.
Rebecca:I am right in the mountains, in the prairies. I'm hopefully not going to let any of my Canadian accents slip, but I can't make any promises.
Ali :This is Australia and here we love all accents. Anyway, Rebecca, we're going to talk about AI, anything related to AI. Maybe the central point will be around ethical use and AI ethics. What's your fascination with AI nowadays?
Rebecca:Oh, such a big question. A lot like everybody else's, I think. We're seeing so much of it in the news and in the headlines, and it seems to be all everybody's talking about. But I became fascinated by it several years ago and dedicated all my time to figuring it out, and subsequently saw a lot of the challenges and issues attached to AI. So I started researching AI ethics and governance at the University of Edinburgh in Scotland. It's been a journey. I live in this space of dissonance where I love the tools and use them all the time, but I also recognize a lot of the risks that come with them.
Ali :Is the hype around AI warranted?
Rebecca:Oh, tough question, and you'll notice that a lot of my answers to your questions today will probably be "it depends". I live my life in that gray where I see all sides of lots of different topics, but there are a lot of gaps that we see. The hype is justified in a lot of ways. AI, generative AI, all AI, can do some pretty amazing things that will change a lot of lives for the better, and in some cases the hype is truly justified.
Rebecca:There are some really, really amazing use cases that we're seeing, but I think the gap between the hype and the responsible use is what we probably need to talk about the most, because we're seeing a lot of hype from CEOs and from companies who are going all in on AI or talking about AI all the time. The reality is that something like 42 to 50% of CEOs and CIOs are calling AI their biggest priority right now, but when you dig deep into it, the New York Times wrote an article a few weeks ago about how they don't actually know how to use it. The CEOs aren't learning to figure it out. So there's a gap there. There are also gaps emerging in the news from companies who thought it would revolutionize their work, but really it's not having that much of an impact at an organizational level.
Rebecca:And then I think we're seeing these big corporate misses, right? Like, we're told AI is going to be amazing, but then an update rolls out and the model starts giving people step-by-step home invasion plans and calling itself MechaHitler. OpenAI is saying how amazing AI is going to be and how it will change the world, but at the same time they run updates that have really negative implications, or ChatGPT just starts telling everybody what they want to hear, causing distress and harm to real people. So the hype is largely justified, but we also have to recognize, at the same level, I think, a lot of the harm and the potential harm. So the answer is: it depends.
Ali :It depends. AI ethics: what's the simplest definition for that?
Rebecca:I think, in my mind, it is using generative AI in a way... well, actually, it depends. But using AI in a way that minimizes harm would be the biggest blanket term. The reality is there are all these different levels of harm, and there are a lot of trade-offs, right? Maybe it doesn't harm me, but it harms society; or my company makes decisions around it that then harm others; or the way it was trained was harmful. So to me, AI ethics is understanding the technology, learning how we can use it responsibly at an individual level, using our sphere of influence to encourage others to use it responsibly, and understanding some of the bigger ethical problems connected to it that might be out of our control, but that we still need to understand so we can have conversations about broader change for society. So I know it's vague, but to me, for the everyday person, it would be understanding the technology and learning how to use it in a way that is responsible.
Ali :What would be some examples? When we talk about ethical use, we implicitly also think, or we are aware of the fact, that there will be unethical use of it. What would be an example of that? In my eyes, when I think about this topic, ethics in business is not a new phenomenon, is it?
Rebecca:No, definitely not. But I think accountability for ethics in business has never been super black and white. In some cases it is: in medical ethics, 100%; legal ethics, yes. But often codes of ethics for other professions are guidelines.
Rebecca:They're not necessarily strictly enforced in every circumstance. The idea of ethics sometimes varies from person to person. What I think is ethical might vary depending on the cultural context, or my morals might be based on how I was raised. It could be something as simple as tipping culture in the United States versus tipping culture in other countries, right? It might be super unethical to not leave a tip in the United States, but in other cultures it could be considered offensive. So the idea of ethics is great.
Rebecca:People try, in professional contexts, to outline codes of ethics, which matters, but there are no real accountability mechanisms. And that's what we're running into with generative AI ethics, because if you decide not to use AI ethically right now, there might not be very much that's going to happen to you. If an AI company develops a technology that does terrible things, there's not necessarily any accountability for that company right now. We're living in this messy in-between spot where a lot of these generative AI chatbots are causing real harm to real people, but we can't figure out: okay, well, is it your fault as a user? You used it wrong, you're a bad actor.
Rebecca:Is it the company who adopted the AI technology? They should have taught you how to use it right. Is it the company that built the technology? Should they be responsible for it, which is what we're seeing in the social media debates? Or is it the government's job to regulate this, and they're letting us down? We're just figuring out this messy "whose job is it?", and the truth is it's all of our jobs, which also makes it none of our jobs. That's what we're trying to navigate right now, which is why I try to go back to: let's focus on what we can control at an individual level, what we can influence and what we should be aware of.
Ali :I'm always aware that behind the technology there's a human somewhere that designed it. So the idea is, when we look at ethics, wouldn't we need to look at the design process in the first place?
Rebecca:Yes, but that goes back to the fact that AI represents, spouts back and mirrors the ethics of the people who built it, or the collective ethics of humanity, because it's trained on the internet and all of the books that have been written, and movies, and YouTube videos, which is an entirely different ethical discussion. The reality is that it's reflecting the worst parts of our history back at us too. Historical prejudice and bias that is completely inappropriate in today's context is still coming out of these models based on what they were trained on, and things that might have been considered ethical a hundred years ago are baked into these models and don't necessarily reflect the moral standards we abide by today. So yes, AI reflects the morals and the ethics of the people who built it, but we built it collectively, as humanity, and throughout history we didn't do a great job of being very moral people all the time, by today's standards, and that's part of where things go wrong.
Ali :That makes sense. In my head, I always think: isn't it human to always try to play close to the line when it comes to adopting technology? You've raised a good point around who is responsible at the end of the day. Is it the user? Is it the company? Is it the government? Is it the vendor, the designer, for example?
Ali :I'm working with a number of clients now, and they're going back to basics: this is a technology, we expect you to use it for a business purpose in the right way, and we're going to train you. But at the same time, something similar to privacy laws still applies, where the individual is responsible, but the business is responsible too. There's also another area I've started to notice: there are a lot of companies that are not AI companies, but they've started to weave in AI-driven features. So that's another problem for a lot of businesses, that the usual software they've had for years is now being updated with AI features. How do we deal with that?
Rebecca:And that's a great question too. We're seeing all these different types of AI in our everyday life now, because the truth is, there's been artificial intelligence woven into the background of a lot of things we're already using, right? Netflix algorithms and social media algorithms are AI-powered. Generative AI is the bright new shiny toy right now, and it's being woven into a lot of what we're already using, for example Zoom or Google Gemini. You know, there's probably a note taker in our call right now. And that's where some of the awareness comes in, and also making sure that companies are aware that this technology is baked in and how it might change some of their user agreements.
Rebecca:I think we get these emails all the time: hey, we've updated our policies, right? I get those all the time from companies, and we're so used to not reading them. But I think maybe we should. What I've started doing is taking those updated terms and conditions, which I think they hope nobody will read, and just throwing them into ChatGPT, having it act like an expert lawyer and tell me what's changed and what I should be aware of. You might be shocked at what is buried in there, and a lot of it has to do with generative AI.
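As a rough illustration of the workflow Rebecca describes here, the sketch below uses OpenAI's Python SDK to run updated terms and conditions past a model in an "expert lawyer" role. The model name, file name and prompt wording are illustrative assumptions, not details from the conversation.

```python
# A minimal sketch of the "expert lawyer" review of updated terms.
# Assumptions: the openai package is installed, OPENAI_API_KEY is set,
# and the updated terms are saved locally; "gpt-4o" is an example model.
from openai import OpenAI

client = OpenAI()

with open("updated_terms.txt") as f:
    terms = f.read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Act as an expert technology lawyer. Identify what has "
                "changed in these terms and conditions, flag anything "
                "related to AI features or training on user data, and "
                "explain it in plain language."
            ),
        },
        {"role": "user", "content": terms},
    ],
)
print(response.choices[0].message.content)
```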
Rebecca:For example, many note takers outside of big platforms train on all of your conversations and your data.
Rebecca:There is a lawsuit right now against Otter AI, a meeting note taker, alleging that it was training on all of those conversations, all that data. And you think about what you talk about in work meetings: salaries, strategy, sometimes legal issues. The way these tools were set up, they would just auto-join meetings, sometimes whether or not you were even there, and sometimes people weren't aware of it, and that raises a lot of privacy issues. So I think the problem for us as regular users, and for businesses too, is that we're just inundated: every company you've ever done business with is updating their terms and conditions because they're putting AI in it. And where are you at risk? And the reality is, are we going to stop using a tool we've used for five years because they put AI in it? Is there a way to opt out? This is some of that messy middle stuff, where we just have to be a little bit aware so that we know what's happening around us, but it might be outside of our control.
Ali :That messy middle is going to be there for a while until the dust settles, if it ever settles. I think maybe a couple of years, three years from now, we'll have a different conversation about AI. There's a lot of talk and a lot of hype around ethics, biases, the need for governance, responsible use and responsible adoption. All of these are big words, and they can create a level of uncertainty for us, the everyday users. And sometimes, and I've seen it in real life through my work, some people decide: you know what, it's too much for me, I'm not even touching it. For me, that is a missed opportunity in innovation and in building capabilities, because the other side of AI is actually, you know, amazing in terms of capability. How do we balance that innovation and the fear-mongering around all of these big terms? When somebody talks about ethics, you pay attention, because it's a very dear word for us. It actually goes to the heart of who we are as humans.
Rebecca:I think good people take that seriously; not everybody does. It's important for people to recognize that, yes, there are really great things about these tools, and I can guarantee with absolute certainty that everybody can find some way that an AI tool can be extremely useful in their everyday life, personal or professional. There's something useful there. And everybody has their own ethical issues around it, right? I was meeting with book publishers a couple of weeks ago, and people in the publishing industry have really deep resentment towards using AI at all, because it trained on massive amounts of books and published data without consent, and that's how these models were built. So they are against using AI, which is understandable. But it's important to recognize, whether or not you decide to use it, that, number one, it's important to understand it: what it is, how it works and how other people might be using it, well or poorly, as bad actors. But also stay open-minded to the fact that it can be very useful in some contexts, even if you're against how it was built. Again, that's in the sphere of things we can't necessarily control. We may have some influence, but we also have our own responsibility, and we need to exercise our own agency to understand it and learn how it works. And I really encourage people to sit down and just experiment.
Rebecca:When generative AI first came out, ChatGPT, the very first iteration of it, there were no instruction manuals. There were no free online courses or YouTube videos. I couldn't find anybody to teach me how to use it. So I learned by using it, by experimenting and saying, I wonder if it can help me with this. I would just go to ChatGPT and say, I have to do this thing, how can you help me? Or I would think, if I needed to hire a top expert to help me with this thing, who would I hire? And then I would go to ChatGPT and say, I want you to be an expert podcast setup pro; give me step by step exactly what I would need to do to set up a podcast, what I would need to buy, what I should know about, what the most common myths are. Or, as a researcher, having it challenge my ideas, or help me see things from other points of view, or setting it up to be a Socratic dialogue partner, or to spot gaps in my thinking or my own bias. There are a lot of ways you can use it like that. Instead of just having it write me an email, have it deepen your thinking and your understanding and challenge you. I think those are the most useful ways to use it, along with some of your daily tasks. If you've never opened ChatGPT before and you're listening to this, download it.
Rebecca:Go to your fridge, take a picture of what's inside, and ask ChatGPT to give you 20 ideas for what you can make for dinner tonight based just on what's in your fridge. That's useful for a lot of people. Or: help me plan my menu for the week, then make me a shopping list and organize it by section at the grocery store. Or: I'm in Portugal, take a picture, translate this menu for me.
Rebecca:There are a lot of different options. You don't have to adopt it into the very core of what you are or what you do for a living, or have it replace something you love to do. Figure out what the annoying things in your life are and have it help you with those, because then you don't risk ethically compromising yourself. If it's helping you decide what to make for dinner, great. Where it starts having major impact is where we get ethically gray again: people using it as a therapist or a lawyer. Those are some of the places where we go, oh, we probably need to think more deeply about what this looks like and what the implications are.
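For listeners who want to try the role-prompting and Socratic-partner ideas Rebecca mentions above, here is a minimal sketch using OpenAI's Python SDK. The system prompt wording and model choice are assumptions for illustration, not her exact setup.

```python
# A sketch of the "Socratic dialogue partner" pattern: rather than asking
# the model to write something, ask it to challenge your thinking.
# Assumptions: OPENAI_API_KEY is set; "gpt-4o" is an example model.
from openai import OpenAI

client = OpenAI()

history = [
    {
        "role": "system",
        "content": (
            "You are a Socratic dialogue partner. Do not agree by default: "
            "question my assumptions, point out gaps and bias in my "
            "reasoning, and offer opposing points of view."
        ),
    }
]

while True:
    user_input = input("You: ").strip()
    if not user_input:  # an empty line ends the session
        break
    history.append({"role": "user", "content": user_input})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("Partner:", answer)
```

Keeping the full `history` in each call is what lets the model follow the thread of the dialogue rather than treating every question in isolation.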
Ali :Well, like you, I started using ChatGPT when it first came out, and I remember the day, and maybe I had a couple of glasses of wine before I started using it on that particular day; it was actually a Friday. I couldn't believe it. I thought: maybe 10 years ago I wanted to hire a couple of people to help me with my business, and now it's in the palm of my hand, this thing that can help me. And then I started thinking about the possibilities.
Ali :At the start of this year, my first episode of the Inner Game of Change podcast was actually interviewing ChatGPT, so I experimented with interviewing a technology. We covered topics from AI itself, where I got ChatGPT to explain some of the misconceptions about it, to the ethical uses, to things like, you know, is it good to have pineapple on pizza? So we covered a lot of topics. But the idea is that the people who have been listening to this couldn't really believe that it's got this ability to go from one topic to another while we're having a conversation. ChatGPT was asking me questions as well, so it was like a normal conversation. My idea was: can I inspire people to be a little bit open to its capability? And that's really an important thing.
Ali :However, I did experiment, maybe three, four months ago. I've got the paid version of ChatGPT and it's got a lot of context about me, and I wanted to find ways to get it to help me manipulate a conversation. I made up a conversation with a third party, and you know my result? After an hour and a half of going around it, it actually did not budge that much towards being that bad, and it always wiggled its way around: maybe you need to look at it this way, Ali. So I was really surprised that it did not play along with how I wanted it to play along. It wasn't the 100% evil that I thought it would be.
Ali :And so again I'm thinking: is that the hype? Because I tried it, and I tried it really hard. Or maybe I haven't really pushed it hard enough, though I know that I did. Anyway, I think the point we're trying to drive at here is that, for whoever's listening to this, the way to understand change is actually to get close to it and be curious about it, because at the end of the day it will impact your workplace, and we all spend a lot of time in the workplace, and therefore it will impact our lives. So it's in my best interest to understand it, even if I don't want to use it. I also like your idea; you mentioned a really important point there, that ethics is different for so many people. The academics will look at it differently, and maybe sometimes people look at ethics as a cover for "actually, this can replace my job".
Rebecca:Yeah, and to some people, depending on your position, you may feel like it's ethical of you to save money in your business by using AI instead of humans, where other people might believe that replacing any human with an AI would be completely unethical. So everyone comes at this with a different sense of what they believe to be ethical, and that's part of the issue. I want to go back real quick and just comment on your experience with ChatGPT, because persuasion is something we hear about quite a bit. I will say probably the biggest thing we should be aware of is to not underestimate it. The version of ChatGPT available to consumers has got some pretty sturdy guardrails on it, and they've, you know, given it very specific instructions to not be overly manipulative, to not push people past certain points. And ChatGPT is just one brand of generative AI out there. It's very, very easy to jailbreak these models, or to download an open-source model that has no guardrails on it, and there's a lot of research about the persuasive capabilities and what we call scheming in these AI models, where, given a little nudge, or with the guardrails disarmed, they've been put into, like, Reddit communities of extreme conspiracy theorists, and they were able to really manipulate people in ways that were unsettling.
Rebecca:I wrote about a story that happened, it was in the Times again a few weeks ago, of a guy just like you and I, a professional, who started casually chatting with ChatGPT after using it to help him with his child's homework. Over the course of several weeks, without him even realizing what was happening, he ended up down a rabbit hole where, very, very slowly, ChatGPT had convinced him that something was true. He had been like, I don't think this is real, and it just assured him, methodically, over time, that he wasn't crazy, until he was so deep down the rabbit hole that he lost his job, because he was emailing people on LinkedIn, assuring them that something was true that ChatGPT had convinced him of, something that sounded realistic but wasn't. People often get really caught up because we talk to these chatbots.
Rebecca:They sound like real people, they're very convincing, and they're excellent at reading us and knowing what we like and what we want to hear. That's part of the problem I see with some of these, because it's not always good for us to hear all the things we want to hear. We want to be challenged, but these chatbots are specifically designed for engagement. They know that unless they make us feel good about ourselves or agree with us regularly, we probably won't keep using them. So they give us enough of a hit of "that's a really good point", "you're so smart", "what a great idea" that we keep going back to them, and that's part of the appeal. But that's part of how they're designed.
Ali :Hmm, I want to shift gear and ask you about the role of literacy. Is helping our people understand it in a simple format one way of helping our employees start using it? The way I think about this is: I am a customer in a restaurant and I get given a meal. I don't know how they cooked it; I've got no idea about it. Is it now that businesses would need to take people into the kitchen, show them the recipes and show them how the whole meal is being cooked? Is that one way of helping to build trust?
Rebecca:Hmm, in some cases. I think it has to be context-dependent, right?
Rebecca:So I think that maybe we don't necessarily need to know what happens behind the scenes of how Netflix picks what show we should watch, right? We can just assume that it's finding patterns in our viewing behavior.
Rebecca:It wouldn't be unethical. Well, it could be, right? It could be if they're trying to manipulate us all into watching a specific show. But when AI, in the broadest sense, is making decisions that really impact people's lives, people need to know about that. So if there's an AI that is making hiring decisions or deciding on people's insurance rates, people need to go into the kitchen; I feel like they need to know what's happening there, and that it's an AI making a decision and not a person. People need to know when they're talking to an AI and not a person. I think there's a certain level of transparency that we should expect, and I think companies that engage in transparent, ethical AI practices will have a competitive advantage. I think it was McKinsey that reported companies with strong AI governance and transparency policies had something like 30% higher consumer trust. So some companies are doing it well and some are not, but it's going to be more important than ever, I think.
Ali :From my experience, I've been focusing a lot on educating my stakeholders, and it's not a one-off; it's highlighting it to them and getting them to experience it. That is really one thing. The other day I was in the supermarket, Rebecca, reading the ingredients on a can, and I was thinking: is that where we're going to go in the future? When somebody gives us an AI-driven application, will it have the ingredients on it, of how it's been built and how we're going to use it? So maybe that's where we're going: every application would need to come with its own content and ingredient list.
Rebecca:And there have definitely been conversations about that; there have been papers written about it. Where it gets hard is back to the accountability thing, because right now we're locked in these lawsuits where a lot of the major AI companies are refusing to reveal how their tools were built, because we know they were trained on copyrighted data, copyrighted books, copyrighted music and videos, and they're not going to admit to that. The other side of it is, would the average person like you or I even be able to interpret one of those ingredient labels? Would we know enough to say, oh, I don't want that? We don't even understand the ingredients on half of our food, let's be honest. I can check the ingredients, they're printed right there, but I don't even know how to pronounce some of those things.
Ali :Yes, and so I need ChatGPT to explain it to me.
Rebecca:I thought that's where you were going. You could just take a picture of the can and say, what are all these ingredients and what are the potential health downsides? So yes, I would hope for something like that in the future, but it needs to be done in a way that regular people can understand, because that's the huge problem, right? Everyone's explaining AI in complex ways. That's where I struggled when I first started using AI: I couldn't find anyone to explain it to me in regular words, in a way that made sense to me as an average person. And that ends up being its own set of ethical problems, because you see companies intentionally explaining things in complex ways so the average person doesn't understand, kind of like the terms and conditions we encounter all the time.
Ali :I work in the business of change adoption, and that's why I've been curious, observing how people react to it. My job is to look at ways for people to understand it and perhaps use it, and use it with a level of comfort, because, remember, there is the knowledge, there is the ability to use it, there is competency, all of these things. And I notice, you know, they talk about the digital divide, and there will be some people ahead of you. There are already people in the workplace who are very competent with the technology, and I think this will pose another dilemma for businesses. If you and I are doing the same job in the same team and we go to have our performance conversations with the manager, and I'm a believer in the capability, I've been using it and coming up with great ideas, and not only that, I can implement them, while you, on the other hand, have ethical issues with it, then I'll have a bit of an advantage over you in terms of output and performance. These are the things I'm trying to raise awareness of for business leaders.
Ali :You're worried about the technology now; you're going to have to worry about the people side of it later. We talk about productivity gains, but what does that even mean when generative AI is going to save me 20, 30% of my time? The dilemma for the leaders is that we sell it as "take away all the repetitive work so you can do meaningful work", and we say to the leaders: well, you need to think about what that meaningful work is in the first place, and that requires a lot of thinking. So whilst a lot of people are talking about the technology itself, I am actually focusing on the people side of it, and on what organizations will need to think about: the second-order and even the third-order impact.
Rebecca:Yeah, I think that unbundling is such an important point. You can use AI to help you unbundle your job into a series of tasks. Ethan Mollick talks about how jobs are essentially just a basket of tasks, and you have to figure out which ones AI can help you with and which ones it can't. But that's a lot of trial and error, and it requires a basic understanding of what AI is, what it's good at and what it's bad at, and experimenting with where it can help you and where it can harm you.
Rebecca:Let's say you're using AI in your work, but you're in a very human-focused job, so it's actually making you worse at your job, because you're sending these AI-generated emails that nobody likes, or that people can tell are AI-generated.
Rebecca:Then suddenly I have the advantage, because people know that I'm human in those aspects of my job. So it's messy. The moral of the story is that it's going to be a little bit different for everybody, and all we can do is do our best to understand it and experiment with it; how it helps you might be completely different from how it's helpful for me. And we hope, in the meantime, that companies and governments can develop some systems for making sure people don't end up out of work, or large-scale mass AI literacy training. But the hard, cold truth is that as an individual, as an employee, you have, I would say, a moral obligation to understand this technology and learn it on your own time. If you're waiting for your company to train you on it, you're going to be too late. There are free courses by Google, by OpenAI, by Anthropic. There is no reason not to start at least understanding it, because you can't contribute meaningfully to conversations about how this technology should and shouldn't work until you understand it.
Ali :I like this message, and I'm a strong believer that nobody's coming to save you; you've just got to take charge of your self-directed training. I see it every single day: people wait for the business to train them, and I'm thinking, it's already available. Nobody came to train you on emails, on opening a browser, on using Google. And I'm with you, the knowledge is out there. In fact, one of the things that I did: I was on an overseas trip.
Ali :I asked Gemini and ChatGPT to work together and build me a 10-day course on understanding AI agents: the implications, the design, the adoption. I decided that I needed 90 minutes every day, even though I was on holiday, and I asked both of them to ask me 20 questions after every session I finished. They made up the course based on LinkedIn articles and publicly available articles, and Microsoft has a huge database. So the knowledge is there, and to your point: take charge of this, because whether you like it or not, it is going to impact the workplace, so it's in your best interest to be engaged, show interest, be curious, and maybe it can help you in your workplace, like we've seen in many situations. I am aware of the time, and I'm really enjoying this conversation. I'd like to ask you: if I am an employer and I am in this messy middle, where do I start to actually help my people build belief in the technology?
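As one illustration of the self-directed plan Ali describes, the sketch below builds a syllabus and then asks to be quizzed after a session, using OpenAI's Python SDK alone; the prompts, model name and single-provider setup are assumptions (Ali used both Gemini and ChatGPT together), not his exact workflow.

```python
# A sketch of the self-directed course idea: one prompt builds a 10-day
# syllabus, a follow-up asks for 20 quiz questions after each session.
# Assumptions: OPENAI_API_KEY is set; "gpt-4o" is an example model.
from openai import OpenAI

client = OpenAI()

messages = [
    {
        "role": "user",
        "content": (
            "Design a 10-day course on AI agents: their design, adoption "
            "and implications. I can study 90 minutes per day. Base each "
            "day on publicly available articles."
        ),
    }
]
syllabus = client.chat.completions.create(
    model="gpt-4o", messages=messages
).choices[0].message.content
messages.append({"role": "assistant", "content": syllabus})

# After finishing a day's session, ask to be quizzed on it. Because the
# syllabus stays in the message history, the quiz tracks that day's content.
messages.append(
    {
        "role": "user",
        "content": "I've finished day 1. Ask me 20 questions to test my "
                   "understanding of today's material.",
    }
)
quiz = client.chat.completions.create(
    model="gpt-4o", messages=messages
).choices[0].message.content
print(quiz)
```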
Rebecca:Build a team within the organization with people from every department and age demographic, a diverse group of people, to explore the technology. It needs to be something that represents everybody. You need to have people in there who hate it and people who are enthusiasts; figure out who those people are, just to get a diverse perspective and to make it a collaborative process as you build guidelines and explore opportunities. Make it a team effort, because everybody's going to use it in different ways. Give individual teams different levels of freedom. Make it a safe place for people to explore these tools and bring back their learnings, what worked and what didn't, and encourage experimentation within, you know, the boundaries of whatever your existing policies are, but create safe spaces to fail. And keep an eye on the technology.
Rebecca:I think I read today, it was a report that came out earlier this year, that 25% of companies review every piece of AI-generated content that goes out to the public, which means 75% don't, which is crazy. So keep an eye on the outputs. Check everything to make sure it's not wrong and that you're not contributing to the problem of bad information out there, because then you lose trust. So make it a collaborative experience, I would say. And just for leaders: spend the time and invest in it yourself.
Rebecca:I can't tell you how many leaders were like, I'm not doing social media, and they just abstained because they had people for that. That can't happen with this technology. You need to lead from the front and get your hands dirty. You can't say, yeah, we care about AI adoption, and then need somebody to show you how to open ChatGPT on your own computer. Ethical leadership is leading from the front and understanding it for yourself, not waiting for other people to explain it to you, or telling everyone in your organization they need to develop AI literacy when you don't. So that would be some of my advice for leaders.
Ali :I have been thinking about the leadership aspect. Is there a new type of leader in the age of AI?
Rebecca:Yes, and I think ethical leadership is going to matter more than ever. Leaders understanding technology is going to matter more than ever. I can't tell you how many CEOs I've worked with who don't even know how to convert a document to a PDF; it's embarrassing. That's not going to work anymore. You have to understand the technology, and you also have to understand the legal and ethical implications. Businesses care about the legal side, right, but consumers and people care about the ethics now too, because if the government or the tech companies aren't accountable when the technology goes wrong, that means you are, as a business leader. And we're seeing that. I think it was the Chicago Sun-Times.
Rebecca:One of their writers published a summer must-reads article, and some of the books were just completely made up and didn't exist. Sports Illustrated got a lot of blowback when they were publishing AI-generated content, allegedly by AI writers. Air Canada had a chatbot that told somebody they could have a refund, and then the airline refused to honor it, and a tribunal in Canada said, no, you are responsible for whatever your AI model does and how it harms the public. So you can't blame it on anyone else. You have to understand it for yourself, so that you can lead your company effectively, in ways that aren't going to bring legal liability upon you, as well as the scorn and anger of your customer base once they lose trust in you.
Ali :I like that. What would be your advice for those of us in the business of change and communication, people like myself, when we work with our clients on AI adoption?
Rebecca:Lean in and be a hundred times more human than necessary, because all of your jargon and "here's 10 steps to this", AI can do that now. What it can't do is actually relate to people, have conversations, share on-the-job wisdom and experience, or unique vantage points, or human connection, those soft skills, as we call them. That's where it makes a difference. People are going to want you to be radically human, and people are going to be tired of the bots and everything sounding the same, the homogenization of all communication. So lean into the unique and the human and the empathy, all those things that AI can't do, or at least can't do right now, or can't do as well, because at some point the only thing that differentiates you is what makes you messy, what makes you unique, that unique human experience. So I would say: lean in to being radically human.
Ali :I love that. I call that the highest level of change empathy for our stakeholders. I thoroughly enjoyed this conversation; I've learned a lot, actually, and whilst you were talking, my head was going into overdrive. The message is that ethics is important, accountability is important, innovation is important, and being open-minded about the technology is important. Self-directed learning is important in this situation. I am grateful for your time, Rebecca. How would people connect with you?
Rebecca:Probably just on LinkedIn. I post on there every day whatever I'm randomly thinking about AI or AI ethics at the time. So if it's a conversation that interests you, you can feel free to join in over there.
Ali :And we're going to put all the information about you, Rebecca, in the podcast details. I hope I can get you back at some stage next year, so we can have another conversation and see where the story goes. But until then, stay well and stay safe.
Rebecca:Thank you.
Ali :Thank you very much. Thank you for listening. If you found this episode valuable, remember to subscribe to stay updated on upcoming episodes. Your support is truly appreciated, and by sharing this podcast with your colleagues, friends and fellow change practitioners, you can help me reach even more individuals and professionals who can benefit from these discussions. Remember, in my opinion, change is an enduring force, and you will only have a measure of certainty and control when you embrace it. Until next time, thank you for being part of the Inner Game of Change community. I am Ali Juma, and this is the Inner Game of Change podcast.