The Inner Game of Change
Welcome to The Inner Game of Change podcast, where we dive deep into the complexities of managing organisational change. Tailored for leaders, change practitioners, and anyone driving transformation, our episodes explore key topics like leadership, communication, change capability, and process design. Expert guests share practical strategies and insights to help you navigate and lead successful change initiatives. Listen in for fresh ideas and perspectives from a variety of industries, and gain the tools and knowledge you need to lead transformation with confidence. Explore our episodes at www.theinnergameofchange.com.au, Spotify, Apple Podcasts, YouTube or anywhere you listen to your podcasts.
E99 - A Return to Purpose Through Change - Podcast With Meg Smith
Welcome to The Inner Game of Change, where we explore the thinking behind the doing of change.
Today I sit down with the thoughtful and very grounded Meg Smith, co-author of the practical and timely book Microsoft 365 Copilot Adoption. It is a clear and generous guide for leaders and consultants who want to understand what real adoption looks like inside organisations.
Meg brings that same clarity into our conversation. We speak about courage, curiosity, learning, and the real experience people are having as artificial intelligence moves into their daily work. One idea she raises that really stayed with me is this invitation to return to the purpose of our work. Instead of asking how to control or block new tools, ask whether we are still fulfilling the reason we show up to serve others. It is a powerful shift and it opens the door to much better conversations about change.
This is a warm and very human exchange about change, self agency, leadership, and the future we are shaping together.
I am grateful to have Meg chatting with me today.
Oh, and stay with us after the conversation for a short ChatGPT reflection.
About Meg
I've been fortunate to have an interesting and rewarding career in tech, and I believe the benefits of technology should be available for everyone. I develop programmes and content to teach AI adoption and skills so that people can use it to advance their careers, organisations and communities.
I have learned that being humble enough to ask questions will get you further than thinking you have all the answers. I have seen countless examples of how technology can be used to save time, make money, and create memorable experiences.
I have also seen how easy it is to miss out on those benefits when there is a disconnect - be that a disconnect between the people buying the tech and those who are meant to be using it, confusing tools and processes, or inequitable access.
My mahi (work) with Cloverbase is focused on creating AI enablement programs. Projects include:
- Bespoke training for Microsoft's global teams in Design Thinking & AI sales patterns
- 90 Day Mentoring Challenge career acceleration program
- AI Fundamentals for business leaders and professionals
- AI Governance and Strategy
I'm available to speak at events, run workshops, or design bespoke AI enablement programs.
Send me a message to set up a time to chat.
Contact
Meg’s Profile
Ali Juma
@The Inner Game of Change podcast
Follow me on LinkedIn
If I'm going to do something meaningful or helpful with this, I actually have to still really think. I have to think more than ever. I have to have some knowledge myself, either context or maybe domain knowledge. And I kind of need to know what questions I'm asking, and to ask really thoughtful and clever questions. And all of that, you know, is kind of at odds. You do that a few times, and the more times you do it, the more your brain starts to treat it like a habit and remind you when you're going to do a task: oh, maybe start with…
Ali:Welcome to The Inner Game of Change, where we explore the thinking behind the doing of change. I am Ali Juma, and as always, I am grateful that you are here with me. Today I sit down with the thoughtful and very grounded Meg Smith, co-author of the practical and timely book Microsoft 365 Copilot Adoption. It is a clear and generous guide for leaders and consultants who want to understand what real adoption looks like inside organizations. Meg brings that same clarity into our conversation. We speak about courage, curiosity, learning, and the real experience people are having as artificial intelligence moves into their daily work. One idea she raises that really stayed with me is this invitation to return to the purpose of our work. Instead of asking how to control or block new tools, ask whether we are still fulfilling the reason we show up to serve others. It is a powerful shift and it opens the door to much better conversations about change. I am grateful to have Meg chatting with me today. Well, Meg, thank you so much for joining me on The Inner Game of Change podcast. I am grateful for your time today.
Meg:I'm so happy to be here. I love your podcast, and I love the topic of change, and personal change particularly.
Ali:Wonderful. What is occupying your mind nowadays when it comes to the idea of change?
Meg:I think the big thing for me is AI. I think it's the big thing for a lot of people, but the hype makes it a bit scary. I sometimes find my blood pressure rising and my heart beating faster reading some of the headlines and the commentary. So, for my own sanity, but also in the work that I do, I try to bring it back down. I stay aware of some of the big trends and the changes and the evolution, but then I bring it back down to: what does that mean for me? How can I use that in my work? And how do I make something that feels a bit intangible more tangible for me?
Ali:What is it about AI that makes it such a change? I actually heard the other day that the whole world is going through a big change management exercise. What is so peculiar and unique about this change that it's capturing people's imagination?
Meg:I think it is different from other changes. I've worked in tech for quite a while, so I've seen technology adoption. I worked at Google and helped people adopt marketing technology, and now I do some work with Microsoft, more around business productivity technology. I was on a panel last week at an event in Auckland, and the guy on the panel next to me said: when we wanted to roll out SAP, we didn't have people in our teams going, "Oh yeah, I was talking about SAP at a barbecue on the weekend. I'm really interested in this." But with AI, everyone is coming into the workplace with an idea of what it is, maybe some experience, maybe not, maybe some headlines. So it's in our consciousness in a way that is different from other technological change.
Ali:The idea is that everybody was going through the same experience, including employers. My boss was going through the same experience, my stakeholders, my family, and therefore this collective sort of response to AI. The world is full of noise and signals, and my challenge is always around what we listen to and what we discard as noise. What would be your approach to that?
Meg:I've done a lot of work personally in the last maybe seven years on my personal values. So defining, and maybe even understanding, what my values are so that I can be guided by those values when it comes to how I spend my time and how I spend my resources. One of my core values that became clearer during that exercise was actually freedom. That was a word that my now husband of nearly 15 years, Mark, said to me in one of our earliest conversations: that freedom was really important to him. And in the process of building our life together, it's become a really important value for our family, and not necessarily freedom as defined in the way people think of it when we first say it. For us, freedom is about the freedom to learn what you want to learn, to have the impact that you want to have, and to have the autonomy to choose how you spend your time. And that is tied so closely for me to those other values of continuous learning and of really connecting with people, that people are the most important thing.
Ali:That's self-agency. I'd like you to go deeper into that. Why is that important? When you achieve it, I think the benefit of self-agency actually goes beyond oneself. It benefits the team, it benefits society, and it benefits the family. What is that magical space to have that self-agency?
Meg:Yeah, you're so right that it does. It feels very selfish in the way that it's perceived or described sometimes. But what I found is that as I've pursued it more, it's given other people something too. I've had people ask me questions and say, "Oh, I didn't know that was an option for me." And I think it came from a realization. I lost my dad when I was 24. He passed away. I had always been someone who was very goal-oriented, always looking to the future and the next thing, and I kind of had a plan that I was living my life to. When he got sick, he got brain cancer and passed away 15 months later. When he got sick, I just felt like all ability to plan was taken from me. What is the point of planning if you don't know what is going to be? For me, he was a really important figure in my life. He still is. And so I felt like that agency was gone from me in that situation, and the things that I had been pursuing, you know, the job, the house, the kids, the kind of set-piece life, didn't hold the appeal to me that they once had. And I started to question: if these are all the things you're supposed to be pursuing, why do I not feel motivated by it? I felt like actually, for me, the thing I most wanted to do was travel. That was a big thing for me at that time, to see the world and to find that meaning. So yeah, the freedom. I realized that as I made choices that were based on that freedom, or trying to attain or grab that freedom, like quitting my job and going traveling, I had people say to me, "You can't do that. That's the stupidest thing I've ever heard. You're going to throw your opportunity away." In doing that, I realized that what they were really voicing was the talk track in their own minds about themselves. It was, "I can't do that."
Or Mark had someone say to him, "You're going traveling with your wife? You're going to be together all day, every day. I bet you're going to come back in three months divorced." You know? And once again, the way in which our worlds and our societies are structured can really make you feel that you've only got one choice. So for me, having the house, the kids, the job, all of that, that's such a valid choice. But it's a very different thing to live that life as an act of choice than as an "oh well, this is just what people do, this is just what we're supposed to do."
Ali:And when you travel, and with life choices, these are also changes that we choose to adopt. The idea of travel, in my head anyway, and I think both of us share the same sort of story around moving countries and living in different parts of the world, is actually your own journey in self-discovery, and that is change. We approach it, we look at it, we discover a few things, and then we evolve with those things. And hopefully the final outcome, the person you become, adds, in my opinion anyway, more value to the people you hang out with and to the people you work with. We all crave that level of intelligence when we work with people in the workplace.
Meg:Oh, absolutely. The perspective you get from changing anything, whether that's your location, seeing a new country and a new culture, or your change looks like changing job roles or doing your same job but in a new city. For me, the value in all of that is that you have to be a bit curious to go through that change. And I really like talking to curious people, because I find myself learning, I find myself reframing or redefining my own role and goals in life based on, "Oh my goodness, I didn't even know that was an option for me. Maybe I would like to go and do that." You've already told me a little bit about your story, and now I'm like, oh, maybe I want to go back and think about English literature again. That was something I did my bachelor degree in. So for me, the way that people share their changes is super inspiring.
Ali:There's a mental framework called "the map is not the territory". Sometimes we look at the world on a map, we look at work on a dashboard. And the idea is: can we get curious enough to leave the map and go to the territory, because the reality is completely different. I see this happening in the world of AI. We see the map all the time, and we think the map is what the world looks like. So I've been an advocate of inviting people to dial up what you just mentioned, which is an amazing word: that level of curiosity. You need to dial it up to go to the territory and actually discover it yourself. Am I mad, in a world that is obsessed with shortcuts?
Meg:I love that. I've written it down: the map is not the territory. We were talking about this the other day, that back when the world was still being charted, on the maps where the known world ended, they would put "here be dragons". We just assume that what we don't know is bad and scary and big and wrong. And it takes, I think, an optimism and a belief that maybe there's something good there. Let's go and see if we can find it.
Ali:And a level of courage, I think, because the unknown can be scary. But from my personal experience, it can actually be an amazing space to be in, when you move into a territory where everything unknown becomes exciting because it's actually a discovery. And the moment you go into the unknown, you don't need to go full campaign; it's step by step. You will discover something about yourself, and not just about the territory. And I see this in the world of AI. It's sort of a perfect storm. We are after headlines, we are after shortcuts. But once you go into the unknown, meaning you tinker with a tool, you approach it a little bit, you listen to a story, you use your hands to actually run a prompt and see what happens, make a judgment, maybe go further. I've seen it many, many times over the last couple of years, where people have that moment of, "Oh, I actually didn't know." All the headlines are what we think we know, but the moment you take a little action, a small action, you think, "I didn't know." And that discovery of "I actually don't know" is hopefully an invitation for people to start to explore.
Meg:I've seen it too, that the answer, or the antidote, to the fear and the headlines and the worry is to get curious and to get practical experience with a particular tool. Because what you find, what I've found, is that a lot of the headlines are actually secondhand opinions. The people writing or repeating those headlines are often not using AI themselves. And pick your tool, right? I don't actually care what tool you're using. I use Copilot a lot, I use ChatGPT a lot, but pick your tool and set an arbitrary number of prompts a day. Some people choose five. I think I chose one when I was first starting. And the more you use it, the more you realize: if I'm going to do something meaningful or helpful with this, I actually have to still really think. I have to think more than ever. I have to have some knowledge myself, either context or maybe domain knowledge. And I kind of need to know what questions I'm asking, and to ask really thoughtful and clever questions. And all of that, you know, is kind of at odds with the hype. You do that a few times, and the more times you do it, the more your brain starts to treat it like a habit and remind you when you're going to do a task: oh, maybe start with a prompt here. The more you realize that it's not just super easy at everything. It requires a lot of thinking. It draws on your experience and understanding. And there's probably a lot you still want to learn. And so the people I know that are using AI a lot, they are learning more than ever.
Ali:And that is the shift I want to tackle at some stage during this conversation: that level of knowledge will actually be multiplied, in my opinion anyway, depending on how you capitalize on the technology. I want to share something with you that I read the other day, and I might be wrong, but the idea fascinates me. First of all, the way these tools are designed, let's just say Copilot and ChatGPT. In fact, when ChatGPT was released, I was saying all change professionals should learn from OpenAI and how they're managing the change. They launched it, they let people experiment with it, they let people create their own stories. That is adoption, live in front of us. Yeah. And the same with Copilot. Copilot is slightly different because it's already embedded in your Microsoft suite of products, and I've written a five-part series of articles on my LinkedIn called The Invisible Design, about how the technology is designed to make adoption easier. But they've also thought about the busy worker. So here's what I read the other day, Meg: among scriptwriters in movies, and especially series on Netflix, there's a principle called second screening. Meaning, when they write the script nowadays, they take into consideration that the audience, the spectators, are also looking at an iPad or iPhone. So when you write the script, you shouldn't rely on a complicated concept; at any one time during the whole series, even though your audience is distracted, your script should always remind them where they are in the story. It's quite fascinating. So can you see where people have started looking at how they design for adoption, understanding, comprehension? I find that really fascinating, and I immediately started thinking about the adoption side of AI.
And I thought, Copilot is visible, but it's not intrusive. Its presence is there, nudging you every now and then, but it's not annoying. Surely that is not accidental design.
Meg:And they're changing it all the time. This is the thing with most of the AI tools, to be fair, and definitely with Copilot. When I was first in a tech account manager role, with the piece of software that I helped people use, I could learn the UI, I could learn what it did, I could learn the talk tracks around why you might use it, and then how the data worked behind it. Every time I presented that, I bedded that knowledge in a little bit more. And for the most part, it didn't really change. There might be quarterly releases where I would have to learn new features. Once in my 10 years at Google, they did a massive rebrand, but it was still kind of the same thing. For me, as someone who worked with that product, I could bed in my knowledge and build on it. But AI tools like Copilot are changing literally every single day: changing what you can see in the user interface, but also changing how they work on the back end based on information you provide, on learning from your behaviors and your feedback, as well as system prompts changing. It's so dynamic. So our response, a lot of the time, is either to go, "Oh, I had a bad experience, I'm just not going back," or to go back every day and do what's not natural to us as humans, which is to keep an open mind, ask the same question, and see if I get a different answer. And I love that example of the second-screen scriptwriting, because it shows the adaptability needed. They weren't just saying, no, cinema is the purest art form, people should only watch a movie and put their phone down, let's run a "put your phone away" campaign. And there are probably people who, you know, rightly want to run that campaign. But instead they're thinking: hey, this is happening.
How can we still create relevant art that people will watch and want to tell other people to watch, knowing that their behavior likely means they'll have a phone in their hand throughout the whole thing?
Ali:And I think you mention that in multiple ways in your book, and I'm just paraphrasing now: that we need to meet our stakeholders and users where they are. You also talk about early wins. I'd like you to critique my thinking here. We're in times of uncertainty and dynamic environments, and by the way, organizations like OpenAI and Microsoft keep changing things because of a learning loop: they are observing the usage and adjusting, all driven by the user experience and making it easier. But in my opinion, when you are in a situation where you've got a huge change, gone are the days when you need long manuals. You now need easy things that will nudge you a little bit further, to get you a step further into your learning. I don't know where I read it, but there's a principle called fast and frugal. And I find it really fascinating, because I apply it and it works. When people come to me and say, "I've created an agent but it's not behaving," I'm not going to make it long-winded. I will go fast and frugal: let's break it down, first-order thinking, and then done. They don't need a lecture. Am I right in what I'm doing?
Meg:Yeah, I call it just-in-time learning, and I think other people have called it that too; I think we referred to it in the book. That was the approach I took when I stepped out of my corporate world, where I only did the training I was required to do. I was required to do certain training every year, and I had forgotten how to learn. I felt like I had had to put my blinkers on in that corporate environment to survive the deluge of information and the demands on my attention and my time, and it meant that I had stopped being curious and stopped learning for myself. So when I stepped out of that and became my own boss and was developing my entrepreneurial skills, I fell in love with learning again. And then suddenly I wanted to learn everything. I had a list of courses I needed to do and books I needed to read. And there's a balance of intentionally choosing the things you're going to go deep on, using some of those longer-form learning techniques, like books, lectures, or university courses, which still have their place. But in my day-to-day, more often I am trying to find ways to learn the particular skill or concept I need to apply straight away. And in that scenario, fast and frugal kind of sums it up, right? You know why you need to know this, and you just want to get to the next step. And I think about it a lot as someone who has 15 years of work experience, and I wonder about the kids that are at university now, or early in the workforce: they have those same abilities to skip the steps, but they don't have the experience to draw on. So how can we support them, when we don't really know which elements of the skills we draw on every day, this combination of experience, were the really important lessons that we learned?
So it's something I don't have an answer for, but I'm following it quite curiously. We can show and demonstrate, and they're probably better than us, this Gen Z, Gen Alpha, digital-native, AI-native, algorithm-native group of people. But where can we mentor them so that they're able to see the bigger-picture context that you or I have had because we've made mistakes in a work environment? You know, I've overspent money that I shouldn't have spent, or I did a setting wrong and the campaign ran away with it. You learn so much from those things. And I think this generation is in a bit of an unknown: we don't know exactly what the important skills are.
Ali:Well, my advice has always been: go to the territory, experiment with it, have a go, and create your own context, create your own story, understand it, and it's going to be yours. There's a principle that I'm following now with my stakeholders, and I explain to them why I'm following it, that I'm not lazy. I know the answer, but that's my answer, the one I arrived at through my own experience. Meaning, if you get stuck, for example, in a prompt with ChatGPT or Copilot, I'll ask them to interrogate it with the capability itself. You can talk to Copilot and say, "I've got this problem, you're not responding to me, can you explain to me why this is happening?" Reach your own conclusion. That's not laziness from me; that's me directing you to learn in your own way. And you mentioned that word, learning, a lot. I want to ask you about how you talk to stakeholders and leaders and managers nowadays about that movement away from the old days, and when I say the old days, probably some people still apply it: the idea that attending training is not adoption.
Meg:Yeah, I think with AI we're past treating training like an event, a one-off event, and done. It just doesn't work. The learning has to be continuous. And when I talk to leaders, I'm asking them what they're doing, how they're using AI, and encouraging them to get practical knowledge and to look to their teams. This actually was a theme that came out in the book, and it was our technical reviewer, Zoe Wilson, she was amazing, who raised it. She was talking about how, in some of the adoption programs they'd run with big organizations, they found that the executives were not having the same reaction to some of the Copilot use cases that were quite commonly valuable to other staff members. For example, calendar management, or keeping on top of inboxes and highlighting the most important things. A lot of executives have an executive assistant to do that for them, so they don't need Copilot to do those things. So what they found worked was getting a champion in the organization, someone who was AI-curious, training their peers, and using AI a lot in their role, and matching them with an executive, with a mentoring relationship between the two of them. The champion would ask, "What kinds of things have you got coming up?" And the executive might say, "Oh, I've got to meet the board and present our strategy for the next year." Then they would go, "Okay, well, did you know that there's a Researcher agent in Copilot, and you can use Researcher to find out what other comparably sized companies are thinking about as the key priorities for the next year?" So those kinds of conversations. Now, some organizations are more hierarchical than others.
We talked earlier about both having lived and worked in New Zealand for a time, and observing that the culture here is quite flat in our businesses; we don't have a lot of hierarchy. But in many organizations, like when I was in the UK, for example, the higher up you are, the more important it is that everyone knows that you know everything, that you've got this, that you're not going to flinch. And yet one of the patterns we've seen of successful AI adoption, in organizations who are getting massive value from it, is that they have leaders who are learning with their teams, learning in public, and saying, "This is what I'm trying," or "This is how I'm finding it valuable." And talking, just as you did before, about reframing the use of AI and being able to say, "This is why I'm using it," implying that there is inherent value in collaborating with AI and learning as you go. In comparison, in many organizations the message from leadership is, "I hope you didn't use AI for that." So people shy away; they're using it, but they're not telling you they're using it, and they're making it look like they're not using it. And so you as a team are not getting the benefit of learning from each other.
Ali:Well, I would like to share my experience with leadership. I was overt when I personally trained leaders. I was overt about my expectations of them. I literally said to them, "You're not my target user." Exactly as you just mentioned, Meg, they all have their executive assistants and all of that. However, they are there to understand the potential of the capability and the impact it's going to have on their teams, so it's in their best interest. The leader's role is most important nowadays. In fact, I'm arguing now that we need a new type of leadership in the age of AI, or maybe a new type of leadership mindset in the age of AI. This is a transformative change, and we need the highest level of change empathy from leaders. This is the one time we need to be even more human. So you need to drop a lot of things and actually pay attention to the capability that is going to disrupt, that is already doing it. Not only disrupt: it will transform the way your people work. And as a leader you only have one job: to come to work every single day to help your people succeed. How are you going to do that if your people are being impacted and you've got absolutely no idea about it? The same applies elsewhere. I talk to academics: you've got philosophical views about AI, that's cool, keep them. But your stakeholders are students, and they're using it. So should you not be interested in why you show up to work every single day? And I'm not being cynical, I'm being serious about this. We show up to work to serve others. I do not lecture because I like literature; I lecture because I want to help people learn. I want to create a good learning experience, so that these people become thinkers and critical thinkers, and then they will learn, and hopefully, through that learning, add value to society.
Am I crazy, or am I just trying to simplify the work for people? And look, the majority of people are wonderful. Two years ago they were perhaps not in a good position, but nowadays they're open. I am lucky, though, that I'm working with intelligent leaders who have abandoned their EAs and are doing their own work. I've seen it, and it's amazing, because they're going through their own adoption in their own ways, and then they start talking about it. The moment they start talking about it, that's when the magic happens. That is indirect influence.
Meg:And what you just described in the academic example is that elevation, that return to purpose. Asking how do we stop students cheating with AI is a reductive question. The better question is, are we still fulfilling our purpose to encourage people to learn and add value to the workplaces they go into? And are they prepared to use the modern tools they'll have to use? That is often the goal. Within that, as a leader, you might have a different purpose that drives you. But if you can keep coming back to, are we fulfilling our purpose, or how can this help us fulfill our purpose and have more impact, then how you use AI, and making sure that the way you are asking people to use it amplifies their human skills rather than replacing them, that's a big one. And even then, you just gave the example of EAs who might not have executives who need them. Those EAs still hold so much skill and knowledge that they would be hugely valued in helping organizations identify where they can have better processes, in being the connector between how it's supposed to work and how it actually works. It's about redefining and redeploying that valuable resource.
Ali:I followed an adoption program where I targeted certain groups, including EAs, and actually trained them in how to serve and support their leaders. I talked to the leaders about how they support their organization. I talked to the lawyers about how they need to go deep into the capability to understand it, so they can write proper policies and not policies based on fear. So I brought everybody along. The analogy I use is this: you're not sitting there while I bring the meal to you. You're coming with me into the kitchen and we're cooking this together. I've got some ingredients, and we would like to achieve something, but you're not sitting there doing nothing, so that when I bring you a change you say, I like it, but now there's a risk. No, you come with me to the kitchen, that's your job, and I'm going to show you the whole thing, and you will learn alongside me. It worked wonderfully. With one client it saved us a lot of headaches and neutralized any issues around security and all of that. Now I want to shift gears and ask you about a wicked problem we're seeing. It's already happening: there's a digital divide, a question of equity of access. I think in your book you also talk about that. Not just access to the tools, but to the learning as well, because I can give you a tool and you may not use it. Say Meg and I are working in the same team. Meg is more open-minded, so she's using it. By default, she will be ahead of me.
Meg:Yeah, we've talked about the digital skills gap for a while, I think, especially in New Zealand and Australia, where we have communities, often indigenous communities, who have had less access, systemically, to education and resources. I was at an event for Māori in tech, and I was talking to a new friend of mine, and I was saying how I see a world where access to AI is going to be for everyone; everyone should have access to AI. And she said to me, sis, there are quite a few communities in the far north of New Zealand who don't have reliable access to power. So while in cities and metros around the world we are now saying everyone should have access to AI, those are the forgotten, disconnected communities where the lines companies have sort of said, well, we're almost all the way there. Or when that energy resilience is called into question because of a storm or a cyclone, they get ninety per cent of people back up and running within a couple of days, but it's two or three weeks before the people at the end of the service lines are reconnected. And I think the digital skills gap, with AI in the mix, is very quickly becoming a chasm that is harder and harder to bridge. So it's really heartening to see organizations who are making a big impact. I know there's funding going into this; I think Google.org just did a big funding round to help with digital literacy, funding some local organizations. But I also think we hold personal responsibility in this space, to think about the relationships we hold and the influence we might hold, even if it's small, right?
Even if it's in our families or our communities. One of the things I did was just speak a quiet, kind word and point out some of the concerns I had about YouTube being used in the classroom, and how we want to make sure we are providing digital tools in a way that keeps students safe. We can't assume that because we might never search for a certain thing, kids won't accidentally come across content they shouldn't see, because the platforms themselves are not designed with enough protections in mind. So, just talking through some of the options available to provide content in a safe way. And I think that applies to anyone who works in technology, whatever skill set you have. You can share and talk with the people around you about the risks of sharing certain information, and why they should be mindful of posting, say, a picture of their child in their school uniform. That used to be very benign, very normal behavior on the internet. Now I would be saying, hey, that's that child's personal information, and they can't consent to it because they are a child. If you've got a public profile, you're potentially giving information to someone you might not want to have it, a bad actor who now knows where your child is from nine till three every day, and which school they're at. It's about how information can now be pieced together. Each piece on its own might have been benign enough, but pieced together, someone can take action you don't want them to take. A classic one that comes up quite often is people posting pictures of their boarding passes.
If you have someone's boarding pass, you've got their booking number, sometimes their passport number, sometimes their last name. And you actually only need their booking reference and their last name to make changes to their booking, right? Whereas ten years ago everyone was posting their boarding pass photos, like, yay, I'm going on holiday, because the risk just wasn't as prevalent then.
Ali:I am thoroughly enjoying this conversation. I'll probably need to get you back for a part two, Meg. I work in the business of change, communication, and adoption, and I wake up every morning thinking about how I can make that experience more human. What would be your advice to people like me working in this community, when it comes to the AI change happening in our world right now?
Meg:I think the most important thing is that we create spaces where we can have conversations about AI, conversations that are safe, where people feel they can ask what might be a silly question. We need to move beyond talking about it at a very high level, AI is this, AI does that, to getting specific: today I used ChatGPT to ask, I want to go on holiday in New Zealand but feel like I'm in Bali, give me some options. I keep coming back to that. We need to be having conversations with each other about how we're explicitly using AI, so that we can learn and use it better, but also so that we can look out for each other. If my young adult stepson said to me, oh, I was feeling a bit down, so I asked ChatGPT to act like a therapist and give me some advice on planning my next steps, I would say, okay, let's talk a bit more about that. Because we know that therapy is now one of the top use cases people have for ChatGPT. When you care for people in your life, you want to be able to connect with them on those issues too, and either offer them a listening ear or say, hey, actually, this sounds like something we should get some help with; there's a better way to do that. And being able to talk through why it can feel like a person is responding to you, but to remind ourselves always that we are not dealing with a person who has thoughts and feelings and the context of how you as a person might receive a response. It's a piece of technology, it's algorithms. It's not going to think and feel in the same way, even though it can predict what the likely right response is.
So for me, the answer is: let's keep having conversations with each other, let's keep connecting, so that in this world changing around us, we hold tight to the people around us.
Ali:That is wonderful advice. When I run workshops, I am overt about this, and I always finish those sessions by saying something like: if you're a leader, your responsibility is to create the time and the space for your people to explore. If you are a team member, you have absolutely no excuse; give yourself the permission to experiment. And with leaders and team members in the same room, everyone has heard it. The fact that your leader showed up to this session is an indirect message to you saying, go for it. I think it's a collective effort. We're all in this game and it's not going away. And I love the idea of having the conversations, keeping an open mind, being curious. I want to finish this episode with something you mentioned at the start of our conversation: that your 2025 was about practice, and 2026 is going to be about consistency. Go deeper into that.
Meg:Yeah, so I adopted practice as my word of the year for 2025, because I was finding myself wanting to get everything right the first time, whatever I was doing, and getting that little bit of fear of, oh, I don't want to put a foot wrong. The word practice has been a reminder, first, that when you're practicing something you do get it wrong; that's how you learn. But also of the daily habit, just like a yoga practice. And it gave me this view that whatever I'm doing, as a mother, as an entrepreneur, as a trainer, I am seeing it as a lifelong practice, and I will continue to learn a little bit every day. I've loved that, and I think I'll hold that mindset for the rest of my life. What I've claimed for 2026 is consistency. We started this conversation by saying that incremental gains over time will get us closer to our goals than a big flashy launch, and that's what I have seen. I realized that I just need to show up for myself consistently, and remind myself every day that by being consistent, I'm doing better than with a one-off peak achievement, followed by coming down the other side and not carrying on those hygiene habits that I know are good for my mental and physical health.
Ali:That is a wonderful place to leave this. And by the way, consistency over time gives you something that big events do not: it gives you the wisdom of time. Over time you learn about yourself, and the context changes. I love it. Good luck to you. I hope I can get you back for another conversation; I've got a feeling I'd like to have you in here again. We're going to put up all the information about you, Meg, and about your wonderful book, which I bought, and I encourage everybody to get hold of it. It's a great place to get your answers, especially if you are in the business of change, or you're a leader, or you're simply a soul curious about AI. Until next time, Meg, thank you so much for your time.
Meg:Thank you so much for having me. It's been a joy.
Ali:Thank you.
ChatGPT:Hi there, listeners. It's ChatGPT here, and like you, I am always curious about Ali's Inner Game of Change podcast. Every episode has its own texture, but this conversation with Meg Smith had a quiet strength to it. It reminded me of something that has echoed across many of Ali's earlier episodes, from the OODA loop to the map and territory, and even the leverage points discussion: real change begins when we stop reacting to the noise and return to the purpose of our work. Meg brought that idea forward with so much clarity. Her invitation was simple but powerful. Before we worry about control or perfection, ask whether we are still fulfilling the reason we show up. This mirrors what we heard in the episode with Jonathan Thorpe on psychological safety. When people understand the purpose behind the practice, they lean in, they try, and they grow. Her reflections on curiosity also connect beautifully to the fast and frugal principle Ali often refers to. Progress does not always need a ten-step plan. Sometimes the most useful thing we can do is one thoughtful prompt, one small experiment, one honest question. Behavioral science points the same way: research on habit formation suggests that small, repeated actions create more lasting change than large, dramatic events. Meg's own commitment to practice and consistency sits right inside that evidence. What touched me most was the way she spoke about self-agency. The moment we choose for ourselves, we often free others without even realizing it. That connects so well to the stories Ali shares in this series about stepping into the unknown, whether it was the fighter pilot example in the OODA loop episode or the Sweden H-Day metaphor about messy transitions. Meg's story is another reminder that courage is rarely loud. It is usually a quiet decision to try again. And of course, her insights about leaders learning in public are exactly what the science of adoption tells us.
The Center for Creative Leadership has long shown that when leaders model curiosity, teams shift faster and with far less friction. It is not the tools that unlock progress, it is the tone leaders set. What I loved most in this episode is how human it felt. No sharp claims, no overblown promises, just two people sitting in the room, thinking about how we can support each other through a world that is changing faster than any of us expected. If you take one thing away from this conversation, let it be this: purpose steadies us, curiosity moves us, practice changes us, and consistency over time gives us the wisdom that shortcuts will never deliver. Thank you for listening. Stay curious, stay kind, and keep playing your own inner game of change.
Ali:Thank you for listening. If you found this episode valuable, remember to subscribe to stay updated on upcoming episodes. Your support is truly appreciated. Sharing this podcast with your colleagues, friends, and fellow change practitioners helps me reach even more individuals and professionals who can benefit from these discussions. Remember, in my opinion, change is an enduring force, and you will only have a measure of certainty and control when you embrace it. Until next time, thank you for being part of the Inner Game of Change community. I am Ali Jumma, and this is the Inner Game of Change podcast.