
Total Innovation Podcast
Welcome to "Total Innovation," the podcast where I explore all the different aspects of innovation, transformation and change. From the disruptive minds of startup founders to the strategic meeting rooms of global giants, I bring you the stories of change-makers. The podcast will engage with different voices, and peer into the multi-faceted world of innovation across and within large organisations.
I speak to those on the ground floor: the strategists, the analysts, and the unsung heroes who make innovation tick. From technology breakthroughs to cultural shifts within companies, I'm on a quest to understand how innovation breathes new life into business.
I embrace the diversity of thought, background, and experience that informs and drives corporate renewal and evolution, from both sides of the microphone. The Total Innovation journey will take you through the challenges, the victories, and the lessons learned in the ever-evolving landscape of innovation.
Join me as we explore the narratives of those shaping the market, those writing about it, and those doing the hard work. This is "Total Innovation," where every voice counts and every story matters.
Brought to you by The Infinite Loop – Where Ideas Evolve, Knowledge Flows, and Innovation Never Stops.
Powered by Wazoku, helping to Change the World, One Idea at a Time.
6. Nathalie Nahai: Ethics in the Age of AI
Nathalie Nahai’s background in human behaviour, web design and the arts offers a unique vantage point from which to examine the complex challenges we face with the rise of generative AI today.
Described as “a rare polymath with deep expertise in tech, marketing and psychology”, Nathalie draws upon a rich background in human behaviour, web design and the arts to offer a unique vantage point from which to examine the complex challenges we face today. Having studied psychology and worked as a web designer early in her career, her frustration at the lack of a comprehensive framework through which to understand online behaviour led her to write what has since become an international best-seller, Webs of Influence: The Psychology of Online Persuasion (Pearson).
Adopted as the go-to manual by business leaders and universities alike, Webs of Influence has been translated into seven languages and, alongside her new book, Business Unusual: Values, Uncertainty and the Psychology of Brand Resilience, serves as the cornerstone for Nathalie’s work with clients including Google, Accenture, Unilever and Harvard Business Review. A popular speaker, consultant and facilitator to Fortune 500 companies, Nathalie has lectured at some of the world’s most prestigious institutions (from Cambridge and UCL to Lund and Hult business schools), and her experience as a skilled communicator has seen her present at SXSW, host the Guardian Changing Media Summit and conduct main-stage interviews at Web Summit.
Learn More: Ethical AI Innovation: Insights from Nathalie Nahai - Wazoku
Simon: Welcome to another episode of the Total Innovation Podcast. Today we have an incredibly exciting guest who is at the forefront of digital psychology and ethical AI integration. Nathalie Nahai, a friend of mine, best-selling author, and host of the In Conversation podcast and the Flourishing Futures Salon, combines her expertise in human behavior with cutting-edge technology to help businesses unlock the full potential of AI. Nathalie's insights into how psychology can guide impactful and ethical AI solutions have transformed numerous enterprises, making her a sought-after speaker and consultant. Get ready to deep-dive into the world of Nahai and discover how your business can leverage these powerful tools to create meaningful, ethical impact for efficiency, productivity, and innovation, for growth, sustainability, and more. This is an episode you will not want to miss. Welcome, Nathalie.

Nathalie: Thanks for having me. Great to be in conversation with you.

Simon: In conversation, indeed. There are a million and one places we could go, and I always say this at the start of this podcast. But the purpose of the Total Innovation Podcast, and my commitment to my crowd, is that rather than going super broad and super theoretical, we'll try to get deeper into the hows of a specific thematic. As I said up front, given your background, the areas I know you're currently focusing on, and the guest lecturing you've just been telling me about, that's a deep dive into how we can leverage AI for impact, with ethical and psychological considerations. And something close to my heart is this bridge between human and AI interaction, which I think is partly what you're going to be teaching on. So why don't we start there? Set the scene for us: give us a bit of background into why this area, what you're looking at, and your initial thoughts on the emerging opportunities, threats and potential of AI in the context of human interaction.

Nathalie: I always approach these tech questions from a behavioral, psychological, sociocultural perspective, because I think that's where the wisest choices stem from: what do we want to use these tools for, in the service of what vision, towards what end? Many of the tech conversations that happen start with "what's possible?" and then chase that to see if we can do it. So you end up with the tail wagging the dog, rather than asking: okay, what affordances do these technologies have, and are they in service of, or in alignment with, the goals we want to achieve? And then you compound that with the fact that humans, as meaning-making mammals, tend to infer motivation, sentience and agency in anything that resembles a human-like interaction. This was happening all the way back in the 1960s at the MIT AI lab with Professor Joseph Weizenbaum, who created ELIZA, one of the earliest natural language processing programs. It was just basic pattern matching, shaped the way you might expect a conversation with a Rogerian psychotherapist to go: "I'm upset." "Tell me why you're upset." And so on. So even when things are really simple, when the stakes are really low, we have a tendency to relate to seemingly intelligent beings and infer sentience, conscience and motivation.
And I think we're at this very particular moment in time where you have this combination: the desire for something to help us lighten the load and the difficulties of everyday life, and provide convenience and ease; the hype being pumped out about what these tools can potentially accomplish; and our propensity to just go charging ahead because nobody wants to be left behind. And so you're left with the tail wagging the dog, and people not even stopping to think: what's going on, and what do we actually want as a civilization? That's going to be a pluralistic answer, right? It creates a really curious moment in time, in which I think we would actually be best served by conversations that are slowed down, where we can pause, think about what's actually at stake, and ask more meaningful questions about where we want to go as a species and how we might want to use, develop and guardrail these tools. So that's what's on my mind at the moment.

Simon: From an organizational perspective, and I fully agree with the premise of asking what we as a species and we as people are seeking to achieve, we're often battling that against what the organization is seeking to achieve, right? And ultimately, of course, the organization is comprised of people, and is typically there to serve people as well; there's a very human element to an organization. And yet in many of the constructs and ways that organizations operate, innovation being no different from many other aspects of business life, the human aspect is somewhat detached from it. How do you think that plays through? There's a good example of that: I was talking to the global innovation team of one of the large FMCG organizations recently. They were playing through various futures of how AI might enable certain things in their organization, and one scenario, not necessarily one they believe could come true, was that for vast parts of the business the CEO could just have a magic button that, when pressed, triggers a whole bunch of robotic process capabilities, AIs and other things that make fast decisions, arguably decisions that people often can't make as well or as quickly, and then magic happens, right? And clearly that could be a more dystopian magic than just magic, in a variety of ways. How do you think we handle for that? And, leading off the back of that, what are the things you see as the emerging opportunities alongside the emerging threats?

Nathalie: I'm going to try to lean as much as possible on the opportunity side; I do recognize there are threats as well. I was chatting with Adam Hawkins of LinkedIn at an event they were doing very recently, and he was talking about the ways in which we've previously hired on roles and responsibilities, and how, with AI tools able to support and augment people's capacities, it might make more sense when we're making decisions within organizations to think in terms of tasks and skills. Okay, what tasks do you perform, Simon, for instance, and which of those tasks are outsourceable to AI? What are the costs associated with them? What are the likelihoods of aberrations or hallucinations?
What are the costs attached to that? What are the skills you have that perhaps aren't replicable but could be put to good use, so that you know how to free up certain time in order to prioritize the things that might be high value for the business, like more of the strategy, innovation, creative thinking, relationship building? So it's really finding ways to break down the ways in which people are valued at work, for their capacities, their skills, their tasks, their presence, and then figuring out what can be outsourced.

There are two other things I think are really important when we're talking about practical uses. One is being able to separate the hype from the actual results, and the other is figuring out whether your assumption about a tool's efficacy is valid. And that one you can test, so I'll start with the latter. I have a friend working at a big company that decided to sprint-test various AI uses within their organization. They get everyone together and ask: what are all the different things, use cases, scenarios in which we could potentially use the available AI tools to optimize our workflow, efficiency, productivity, et cetera? They basically do a brain dump of what they could use these tools for. They pick, say, half of them, however many they have time for, and then they run sprints: this is the hypothesis, these are the tools, will it work, what's the output, is it better or worse than what a human could do? And they realized quite quickly that of the, say, two dozen ideas they came up with at the beginning, only a few were actually outsourceable to AI tools without much oversight or looping in from humans. So key thing number one: test your assumptions, run trials, and then decide how much intervention, how much humanity in the loop, you need to get those things working in the way you would like.

The second thing I think is really important for people to focus in on is not getting taken in by the hype that exists out there. And there's a lot of hype, because the companies pushing these products want us to believe in their omniscience, at the most extreme end of the scale. I've been playing around with various different tools; chief among them, I was very excited about Perplexity, which helps you to research and do citations. You know, as an author, I can't tell you how many years of my life I've spent doing research to synthesize into a book that's accessible. So I was thinking: great, Perplexity is going to do this for me, I don't have to do any of the grunt work. Anyway, I'm busy using it, testing it out, and then very recently a new paper comes out in the journal Ethics and Information Technology whose title, and I hope we're allowed to be X-rated here, is "ChatGPT is Bullshit". The authors say, and I quote: "Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit." And then you have people like Palantir CEO Alex Karp saying that it's not at all clear, not even to the scientists and programmers who build them, how or why the generative language and image models work. So you've got academics saying: hold on, guys, there's a problem.
And then you've got CEOs saying there's an interpretability issue. We've got to really rethink how we, as consumers and users of this technology, are actually checking the output, making sure that it's truthful and accurate, and that we're not just falling prey to the hype of "yes, this can do everything". And it's back to that behavioral piece: the extra workload and precarity, the friction and the convenience, all of these competing factors make it very, very tempting, for me as well as probably most other folks, to just outsource everything. And I think that's where the danger lies. If there's a single practical step for people to take away from this conversation: don't believe the hype. Test these things, check their limitations, and make cogent arguments for why specific things should be used, having road-tested them yourself.

Simon: Yeah, I mean, I think it's probably not widely known, but these big LLMs are employing armies of real people to go through and fact-check and train the right things into the machine, which obviously lends itself to a whole other set of questions around who those people are, what sorts of things they're training in, and what values and biases are being baked in. And on the CEO point: I was talking in India earlier this year, and one of the opportunities for AI is to train it on data that is intentionally bias-reduced. I don't think one can ever be entirely bias-free, but bias-reduced. So then you could basically take the best of a CEO's qualities, in theory, in terms of the outputs they produce and the decisions they make, optimize them to reduce bias, prejudice, any of these less desirable traits, and then essentially create something like Simon 2.0, a digital double that makes a better go of things than I might.

Nathalie: And then what happens to you? And what happens when your digital double gets into a room with other people's? Do we end up with a secondary market of digital doubles? It gets into uncanny valley very, very quickly.

Simon: It's probably something other than uncanny valley; it's more like the holograms are at it again. Star Trek territory.

Nathalie: Yeah, Simon 2.0 sounds like a terrifying idea.

Simon: I'm working through Ethan Mollick's book Co-Intelligence at the moment, and he talks about this as being like an alien mind, as a concept. And there's an element of that parallel I can get behind, with the human and the machine and your sort of Simon 1.0 and Simon 2.0: the bits the machine is good at are the bits that humans often struggle with, right? Computation, frames of reference, knowledge and everything else. And the bits the machine is bad at, humans are often good at. So there is a very natural, complementary piece to all of that. But it has to come with a very specific skill set, one that's quite useful in the modern world anyway, because the ability to spot bullshit is a key skill, right? And there's no X-rated rating on this podcast, you're fine. You're maybe the first guest that's sworn, but it's fine. We can start a trend.

Nathalie: Exactly. I think "bullshit" is good.
Simon: But what are your thoughts on that? Where does the line fall between the role of the human and the role of the machine going forward, and how do we get that balance right inside our organizations? And I ask that from two perspectives, because you can come at it from the angle of letting the reins off and following the hype, but what I'm actually seeing in a lot of large organizations is the opposite: a sort of fear-based grab for the guardrails. There is quite a lot of slowing down, I think, in some aspects of this, but I don't know if the areas of focus on the other side of that slowing down are quite right, or if the actual benefits are going to be felt, because typically these large organizations run fast, break something, go into full-blown panic, and then it takes too long to bring it back around again. So how do you think we should be thinking about this from an organizational perspective? That's where a lot of innovation is driven from, a lot of the things that hopefully create value for us as a species going forward, and we need lots of that value creation right now. There are lots of reasons to be hopeful in some aspects of this. So how do we get that balance right, do you think?

Nathalie: That's a good question. I think one of the things we need to weigh when we're making hopefully wiser decisions is the perhaps less visible, longer-term externalities, the hidden costs associated with getting things wrong, without putting the brakes on too early, because I don't think that works either. I think we have to be able to play with these different tools, but let that thinking advise how we play with them in order to get more understanding: yes, not being left behind, but also not making some really costly mistakes. There are increasingly widely documented issues around AI's problematic relationship with intellectual property, privacy and data protection, and hallucinations being used, for example in the US and probably elsewhere, in trials where people's lives literally hang in the balance. So if we're thinking about the importance of getting data right, and yes, harnessing the scalability and rapidity of these technologies to improve our processes and our reach and our capabilities, we also have to be really conscious about how they're employed. So that's one thing to talk about: the context of it. But then I think the other thing that's equally important for innovation, and this comes down to organizational culture, speaks to your point about the fear we can fall prey to on the other side of the spectrum, these doom-laden spectres, the stories about how AI is going to steal your job. It will collapse some jobs, but that's always been the case; it might be happening a bit more quickly in this instance, but then we have to skill up, and there'll be something else to do. It might even steal your love life; there are some interesting stories coming out of some parts of Asia about synthetic, erotic relationships, but that's for the X-rated version of the podcast.
But I think there are also other interesting elements around cultivating a stance of curiosity, continuous learning, the openness of the beginner's mind, orienting towards these tools in a playful, discerning way, with, I guess, a critical sense of what their limitations might be, so that we can actually engage with these technologies and start to make our own minds up. That's the thing I'm most excited about, and fearful of: it's this moment in time where, if we lean into our capacity for self-determination, for cultivating critical thinking and discernment, leaning into the values that support positive, flourishing organizational and social cultures, we can actually make some really good decisions about how the trajectory of AI goes from here on in. If we don't, then we're going to be dealing with a lot of shocks that come afterwards. For example, a lot of folks are already bringing their own AIs to work. And people are doing that anyway, because you don't want to be left behind, and who does? Everyone on LinkedIn, and that's a bit of a generalization, but as far as I know everyone on LinkedIn, is prompted to write a post with AI. It's everywhere; it's built into the prompts and nudges we encounter every day. So of course people are going to use it. But then what do you do about privacy issues when someone takes a contract and puts it through, I don't know, ChatGPT or Claude to test something, not knowing that that proprietary or NDA-covered information is then in breach of that NDA? There's all of this kind of stuff we need to be aware of. And I think part of that starts with having conversations within the organization to say: okay, what are the things you can have free rein on, where you can really go for it and play with these tools? And what are the things we actually need to draw a red line around for now, until innovation catches up? And there are plenty of examples. Sorry, go on.

Simon: No, go on.

Nathalie: The final point I was just going to make is that there's also this false dichotomy that's drawn between guardrails and innovation. People had to innovate to reduce death rates in cars when they came up with airbags and seatbelts. They didn't say: let's make a plane, and if 20 out of 100 planes crash, then we'll deal with the safety. No, they said: right, this plane has the possibility of crashing, let's make sure that's unlikely to happen before we get anyone in the air. We have to take the same approach with AI. And I think it could foster a much wiser, more interesting form of innovation than the ridiculous tech-bro "fuck it, let's just go for it and see what happens", which is just insane. And we don't have that much time to get things right. So come on.

Simon: Exactly. And I think in there you've also touched on a number of different things. A lot of the work being done right now is very much process and content generation, which does open up a can of worms around intellectual property and ethics and a number of other things.
I know that alongside your work as an author and speaker and podcast host, and the wonderful things alongside that, you are also an artist and a singer, so I'm sure there are lots of parts of that life that really make you think deep and hard about what this means for people like me in the future, what's happening with my content, and so on. They're very complex areas. The organization, which is obsessed with intellectual property and patents and protecting things in a million and one different ways, is going to wrestle with that in its own way; it obviously has legal might behind it. And maybe the answer is just to be a legal firm in the future, because you're probably going to have endless opportunities to make money. But it does feel like we're at a shift moment in terms of skills, right? And I mentioned earlier that one of your exciting new ventures at the moment is to be a guest lecturer on this topic of human-AI interaction. What is it you're going to be covering there? Can you share anything?

Nathalie: Yeah, for sure. One of the things that I do here, but also in the UK, is the Flourishing Futures Salon, which I founded and host and which is connected with my podcast. It essentially takes some of the questions around how we relate to technology, to one another and to the living world, you know, nature, and brings them into conversations between people, so we can ask better questions and hopefully furnish richer, more appropriate, long-term solutions to some of these wicked problems we now face. And it was through one of these events that one of the professors, actually the one who headed up the course, got in touch with me. He wants me to come in and create these spaces for discussion and debate, to come in with provocative questions from the world of applied technology, whether it's generative AI or the use of AI in other specific domains, to get the students thinking about real-world impact. Because within university settings, in this setting for example, you're going to have students who are crossing those red lines, because that's the place you want to be doing it. You don't want to wait until you've got VC funding, in my opinion, to then carry out experiments at scale which could backfire horrendously, and you end up with all these lawsuits on your hands. You want testing grounds within which you can experiment with some of these technologies. So that's my role: to come in and help people think more critically about what they're doing, their wider impacts, the ethical implications, and that eudaimonic question of what it means to lead a fulfilling, meaningful life in a world that is completely entangled with the evolution of the technology that creates us and that we create.

Simon: Which I guess is a big lean-in also to what it means to have a purposeful and meaningful job, and to create purposeful and meaningful impact within an organization. And I do feel there is an increasing shift of organizational identity in that direction for a number of businesses as they move forward.
That other piece you mentioned, about dealing with wicked questions and getting into the substance of some of these thematic problem areas in a more conversant, open and honest way, I think is super important. One of the things I'm quite excited about from an AI perspective is the idea of asking good questions. From my perspective, the innovation conundrum is not about solutions; it's about falling in love with problems, and being willing to really spend your time there. The solutions are kind of easy, and actually maybe AI can really help with some of those solutions: it's good at leaning into adjacent knowledge spheres and pulling in things you may not have been aware of that help to stimulate those solutions. But you've got to write a good prompt, and a good prompt is effectively like writing a good problem statement. So as someone who's been on the bandwagon of challenge-driven innovation, and therefore writing good problem statements and falling in love with problems, for years, I'm kind of excited by that emergent skill. I don't know how emergent it really is, because I'm one of those tech bros you keep talking about, I guess.

Nathalie: I don't think you are. You have a conscience.

Simon: All right, I'm a tech bro with a conscience. Maybe less of a bro as well these days; I'm more of a dad.

Nathalie: We need more tech dads in the room.

Simon: Yeah, exactly. Too old to be a tech bro. But what excites me is very much that coming into the human consciousness: the better the questions you ask, the better the prompt you give an AI, the better the information you give it, the better the outputs, right? And yes, you've got to sense-check it, and it'll only ever lead you so far, I think. But if that accelerates some of that skill set, and that conscious thought process focuses more on the thing at the front end of this rather than racing to a solution at the back end, then I think there's a lot to be hopeful for there, I hope.

Nathalie: Yeah. And I'm dooming this a little bit, but there is so much to be excited about. Let's take an interesting, simple, everyday example. Say you want to create a really interesting essay or, I don't know, a podcast series on a specific theme, and you know roughly what you want to talk about, but you need to break it down into individual episodes. AI is great at coming up with those: Claude, for instance, which I really like, ChatGPT and various others, Perplexity again. I mean, they're not without their problems, but if you take them within the context of their limitations, they can be an amazing sounding board. Even if it's just a question of "give me ten things" and I use three, just by virtue of the fact that there is something for me to relate to, engage with and respond to, almost in an adversarial way, it can create a really interesting, more dynamic conversation with yourself. There are all sorts of ways to make these things more interesting and to work with them. But again, it's back to what you were saying about the importance of elegant thinking and language and prompting: we ourselves can do better in response to these tools. So I think that's the invitation.
It's like, okay: if they have their limitations, which they do, and they have these extraordinary capabilities, which they do, what do they require of us? Or what do we require of ourselves in order to get the best out of them? I think that's where the interesting stuff is going to happen.

Simon: Yeah, I completely agree. I know we're starting to run low on time, so I'm going to start winding us in a little bit. I want to talk a little about designing a more user-centric AI. Or maybe we have that already, right? Do we think this sort of conversational, data-driven approach is the way this moves forward? It's somewhat accessible, it's somewhat useful, and as you've said, it has its limitations. I've struggled a bit with Perplexity; I'm glad you've put your head into it a little more, but I kind of get lost in it. It's like the alternative universe of cat videos on YouTube, in a slightly different guise. I say that slightly tongue in cheek, of course. But there is an importance in designing these things with user behavior and human psychology in mind, to make them accessible not just to the highly technically literate, right? And I think that comes with different skills as well. You and I have spent a reasonable amount of time in and around this technology at this point, so we understand it has limitations, we understand it can hallucinate, we understand what that even means as a sentence, and we understand, therefore, to be suspicious and appreciative of it in the same breath. But how do you think about that with your various hats on: your psychology background, your value-creation background, your ethical background, your user-centric behavior background? What does it look like to design this thing in a way that creates the right value and is used in the right way for most people, not a few people?

Nathalie: That's such a good question. I think one of the first things to say is that it's predominantly through language: we're not yet at a point where it's gestural, or there's Neuralink and you can just think responses or prompts directly into the machine.

Simon: That is not a world that I want to live in.

Nathalie: Anyway, because we're still in the realm of language, and it's becoming increasingly easy to just dictate, I think in terms of accessibility it's improving for folks who don't have hearing or sight. I would be very interested to see what a haptic AI could look like; that's perhaps cause for another conversation, but there are potentially very exciting innovations that could happen there also. And in terms of getting people to lean in, to feel comfortable enough to try, I would love to see an example of an AI that onboards a person and teaches them as they go, almost like the onboarding of an employee: okay, these are the things I can help you with, do you want to try X, Y and Z? For instance, if I'm thinking about my parents using Claude for the first time, Claude might say out loud: these are my capacities, these are some of the things I can help you with, these are some of my limitations. Let's have a go. Let's try exercise A.
I mean, you'd make it more like a game; you'd gamify it, obviously, thinking from a UX perspective. But you'd also deal with some of the expectations we fall prey to. One of the interesting interventions I've seen people suggest, especially when it comes to confabulations and hallucinations, is to change the language of the AI itself, so that it says something like: "I may not be 100 percent certain of this answer, so please check it. These are the things I found; these are the things I suggest." It's about building humility into the machine's responses, to make sure the end user is aware of its limitations as part of that day-to-day, moment-to-moment interaction. So I think there's a lot we can do. I'm optimistic that accessibility is going to improve, and hopefully reliability too, as we hone these things. These are really early technologies in terms of people being able to access them at a commercial level. So yeah, let's see what's possible. I think we're going to see some great things coming from some interesting places, probably not the big players; the exciting stuff often happens at the edges.

Simon: I was going to add to that. There's an understanding at the moment, because there are some brands emerging around these things that are getting a lot of money pumped behind them, driving the breadth of awareness and the breadth of capability that will underpin a lot of the great work that's going to be done going forward. But it's not only at the edges; I think it's also done in the specialisms, right? If you're going to start to standardize a lot of human life, which is the bit I struggle with the most, actually, because the bit that makes us magical and human is not very standardized, then we're going to have to figure out which bits we want to standardize: what are the most banal aspects of the things we have to do? That's quite a journey to go on, and no one likes spending much time thinking about those things either; and often when we do, they're quite hard to standardize, in a million or more ways. I've got another guest coming on soon who will probably explore more of that, someone who is constantly trying to drill into: it's great that you can do this thing over here, but most people are trying to solve this problem right here. If you could just make it easier for me to, I don't know, book a hair appointment, or book an entire holiday experience, or buy a car: it still takes too long, it's too complicated. Help me do that in a holistic sense, whatever that thing might be. There's a lot more utility in enhancing productivity around those frustrations of human life than in some of the other things we're necessarily focused on. I'm going to push you, then, for a final thought before we run out of time. We've touched on lots of different areas. What would your advice be for business leaders right now looking to make the most of these AI technologies whilst managing their business risk and upholding ethical practices?
Like, if you were the CEO of a multinational organization, where would your mindset be on this right now?

Nathalie: If I was the CEO of a multinational organization, I'd want to ensure the longer-term survival and thriving of my business, and so I would take longer-term views on what would be necessary to make that happen. I'd probably create bubbles of contained risk within the organization, where people can make different decisions based on the AI tools they're developing, or what have you, and test them out in different scenarios to see what the opportunities and risks associated with them actually are. I'd also want to skill up the people coming into the organization so they're better able to make smart choices that don't put the company at risk, but do identify new room for opportunities and growth. And personally, because I have to throw this in, I'd be looking at ways to integrate AI that support the flourishing of life, whether that's looking into AI companies that are more resilient in terms of reduced resource consumption, or that help to solve some of the more complex problems. We have to align everything we're doing to that end, including, and perhaps especially, big businesses. Aligning all of those levels is not easy, but absolutely worth the effort. So that's what I would probably close with.

Simon: I think that's a very salient point, and as a B Corp we're constantly looking at that as well: these are huge component parts of this, of how we make sure we do this in a sustainable, planet-first way. So I think I'll draw us to a close there, Nathalie. A huge thank you for sharing your insights into this fascinating, emerging intersection of digital psychology and ethical AI, the human and the machine, the human versus the machine. Let's see which direction that goes.

Nathalie: Thank you. It's been a real pleasure.

Simon: Likewise, as always. And so, listeners, thank you. Stay tuned and please hit subscribe, because our next episode is just around the corner, and it's literally going to be out of this world. We're thrilled to have Steve Rader from NASA joining us, just after you, Nathalie, so I'm sure he'll be quaking in his boots. Steve has been instrumental in building one of the world's largest open innovation and open talent programs, and he'll be sharing his journey and the secret behind NASA's success in harnessing the power of open innovation and open talent to accelerate impactful outcomes at scale. You won't want to miss it. So thank you, Nathalie, and look forward to Steve coming up next.