Build What’s Next: Digital Product Perspectives

The Human Side of AI: Design, Change, and Reimagination

Method

In this podcast episode, Method’s Dr. Vanina Delobelle and Reema Pinto discuss "The Human Side of AI: Design, Change, and Reimagination." They discuss the crucial difference between viewing AI as a 'solution' versus a 'tool,' and uncover its four transformative elements: efficiency, augmentation, invention, and reimagination.

Learn why organizations often struggle with successful AI adoption, examining the role of human emotions, cultural differences in approaching change, and the necessity of designing AI for genuine human interaction. Discover the three key approaches for organizations to prepare for AI: an ecosystem-first strategy, a data-driven mindset with measurable behavioral goals, and a deeply human approach that prioritizes decision-making, career growth, and the celebration of 'pragmatic pioneers.'

This is a must-listen for leaders, designers, strategists, and anyone interested in the intersection of technology, business, and humanity, offering invaluable insights into fostering sustainable AI adoption and creating a future where AI truly serves human needs.

Dr. Vanina Delobelle on LinkedIn: /in/vaninadelobelle/

Reema Pinto on LinkedIn: /in/reema-pinto-945394/

Method Website: method.com

Hitachi Website: https://www.hitachi.com/en/





SPEAKER_00:

You are listening to Method's Build What's Next: Digital Product Perspectives, presented by GlobalLogic. At Method, we aim to bridge the gap between technology and humanity for a more seamless digital future. Join us as we uncover insights, best practices, and cutting-edge technologies with top industry leaders that can help you and your organization craft better digital products and experiences.

SPEAKER_02:

Good morning, good afternoon, good evening, everybody. Welcome back to the podcast. We are here today to discuss AI with a different spin, but first let's introduce ourselves to our audience. I'm Dr. Vanina Delobelle. I'm part of the Hitachi AI COE, and my job consists of shaping how Hitachi builds and scales its global AI capabilities, and especially how Hitachi transforms internally with AI. I'm here with Reema Pinto.

SPEAKER_01:

Hi, my name is Reema Pinto. I am the global head of design and strategy at Method. In my role, I strive every day to bring a little more humanity to the business world. I've spent the last couple of years exploring how change impacts us at work and its connection to our historical reactions to change in life.

SPEAKER_02:

So today we're together to have a conversation, really. And I'm particularly interested in pushing the conversation about AI beyond the technology topic and considering AI from a human lens. When we are building AI solutions today, we always hear about LLMs, accuracy rates, architecture, but we never really talk about designing for humans. And this is really the next stage of it, I would say. So today, this is what I would like us to approach. So, Reema, let's start with this topic: can we define what design means for AI? What is your perspective on this?

SPEAKER_01:

That's such a good and important question to start with. I want to take a step back first, Vanina, and just talk about the difference between a solution and a tool. When we use the word solution, we have to think of it as a collection of strategies, processes, and, yes, tools that help solve a problem. And when we think about a tool, we have to think of it as a resource that's going to perform a specific task. Now, that task might have a singular purpose and/or it might have limited usage. Now, to answer that question around design and AI, the challenge I'm seeing today is that we largely think about AI as a solution. And in the enterprise context, and I'm sure you see this every day, there's this promise and perception of a solution completely tied to efficiency. That efficiency manifests in, hey, we're going to save costs in the thinking and the making. The efficiency narrative is very loud. It manifests in: how fast can we conduct research and insights? How fast can we design and prototype? How fast can we measure the success of any of this? In my opinion, AI is a tool. I like to think about it in four elements. It's a tool that is a utility that enables efficiency and also augmentation. But it is also a tool, and I want us to really spend some time talking about this, that is a catalyst for both invention and reimagination. We've covered efficiency, right? Let's talk a little bit more about augmentation. I think the best way for me to explain it is to give you two examples. In both our jobs today, we are very often called upon to write perspectives. Now, we have a choice. We could use an LLM to write that perspective, or we could use the LLM to augment that perspective. So I might write a point of view on something. I can send it into Gemini or ChatGPT and ask: is this perspective unique, is it differentiated? What is missing?
And the way I like to think about the LLM is as a neutral party, hopefully, that can tell me what's missing and what I need to work on further. And even if you and I are working on a collaboration together, I can send you those missing elements and say, hey, Vanina, can you think about this? So that's one very real-world example that I think designers and strategists can apply today in the augmentation space. For the second example: there has always been a history of speculative design, right? And I do believe that with the correct prompting, AI could show us combinations of seeds of inspiration that we have not thought of before. A designer can then think about those worlds autonomously and begin to imagine alternative futures and scenarios, and then go about designing possibilities that change assumptions and expand our own creative horizons. Ultimately, that would inform more innovative and resilient solutions in the present, right? Because speculative design, in my opinion, is only as good as what it allows us to prepare now for what's coming next. So that's augmentation, just two very quick examples. The next element we should look at is invention. Now, AI as a tool has to be designed for human interaction. And there are so many opportunities, and Vanina, you and I have talked about this a fair bit, for invention in those interactions. It is not about taking the human closer to the machine. There's a lot of discussion about humans having to interact with the machine, but this is about bringing the machine closer to human-to-human interactions, just like the one we're having right now. And there are three questions, and I'm sure there are more, I'm sure you will add to them, that I think we should think about in the context of invention. The first question is, who am I? The second question is, where am I? And the third question is, what is an interface? So let's start with who am I?
I don't think there's a one-size-fits-all. Even the context of each of us, you and me, is different, right? Your lived experience is different from my lived experience. Your emotional response to a situation will be different from mine. And your access to and understanding of technology is going to be different. So any of these tools have to be designed to account for the context of who the person is. The second question: where am I? And we all know this. We as humans are not just looking for output from this tool, right? We want the tool to mimic human interactions. And I think memory, our human memory, is such an important part of our interactions. I'm not just talking about functional memory, I'm talking about the emotions tied to memory, those feelings that we have. We remember all the context of those interactions, and we build on our interactions over time. And you might laugh at this, but when I look at enterprise AI today, you know the movie Finding Nemo? It's like having an interaction with Dory, where you're starting from scratch every single time. We can be with Dory today, but I don't think we want that; we don't want to interact with Dory all the time. The third question is the question of what an interface even is. We have to move past the chatbot, because that's such a limitation and a constraint we are putting on ourselves. There are new communication paradigms that are possible for a world beyond interfaces. And these paradigms are going to keep evolving. So we need to be fluid ourselves, as humans, and not cling to one thing. We should also be mindful that future generations, like my kids, who are young, are going to grow up with this being part of their lives, right? They're not going to think of this as something new. And I think it is such a privilege for us to be part of that invention. And I truly believe that design has a critical role to play.
Everything I've talked about right now falls under the UX principles of familiarity and adaptability, which have to be kept top of mind. Now let's talk about the last category, reimagination. Look at any ecosystem, right, Vanina? There's never one root problem. The way I always think about it is like death by a thousand cuts. There are always many, many problems that need to be solved. I'll just talk about two buckets, and I'm sure you will add to this: process and people. Let's do process first. In any ecosystem, if you were to underpin it with traditional automation or AI-driven solutions, we could reimagine how all processes are designed, executed, and continuously improved. And to be clear, this is not about replacing existing systems, right? That's why I'm calling it reimagination; it's about reimagining them to be more adaptive and more fluid. Then let's talk about people. How we communicate with each other is so key to who we are and so key in any organization. And this, I believe, is a critical problem to solve for collaboration in organizations. We can reimagine what communication and collaboration mean. We can create organizations with far less hierarchy that are more adaptive, AI-enabled networks of teams. And, you know, what I said at the start: death by a thousand cuts. I actually believe that if we do this right, we could breathe life by a thousand stitches into the system, where every stitch matters. And in the context of design, I'm going to connect it back: service design and systems thinking are critical to make this happen. So overall, I think we have to shift the narrative from AI as a solution to AI as a tool. And when we look at human history, think of the printing press, invented in the 15th century. The story we tell today about the printing press isn't about what became obsolete or, hey, this is the efficiency stuff that happened.
We always talk about how it accelerated the dissemination of knowledge, how books became more accessible, how it accelerated the Renaissance movement and higher literacy, how it powered the scientific revolution. So yes, Vanina, efficiency matters, but the capacity for augmentation, invention, and reimagination is far, far greater. And if we want adoption, technical capability of course matters, but our capacity to design for what humans need in this complex ecosystem we function in is critical for adoption. So I'm going to pose a question back to you, Vanina. You've spent over a decade working at this intersection of technology, business, and design. What is your perspective on this?

SPEAKER_02:

Well, this is very interesting. To me, it's about understanding that AI is a completely different paradigm. We no longer design for interfaces, we design for human interaction. It's fascinating, for example, to look at the OpenAI Model Spec, which they published, right? It literally expresses how AI should respond. And when you read it, you can see that they are trying to encode human values, morality, and ethics into the machine. So the guidelines for the AI follow human psychology. Examples they give in their rules for the AI are: assume best intentions, express uncertainty, avoid overstepping, be empathetic, be kind, be rationally optimistic, don't make unprompted personal comments, avoid being condescending or patronizing, use accents respectfully, and so on and so on. These are all elements that you would apply to a human. This is what you are teaching your kids, right? You say, hey, you need to be a good human being. So we also realize that we are using human behavioral theories to train the AI itself. There are theories like Skinner's theory, which is usually applied to humans and is about reinforcement learning, where behaviors are learned through consequences, right? If you did good, you get a reward. If you did bad, you get a penalty. Schema theory is another one: a cognitive psychological framework that explains how individuals organize, store, and retrieve information about the world. And another theory, and there are many more, but I'll just mention these three, is Watson's theory, which holds that your behavior as a human being is shaped, and you mentioned this earlier on, by your environment and your past experiences. These are all theories that we are applying today to the machine.
And finally, consider agent communication. It is as if someone says, this is a new employee that you have to integrate into your team. You are operating with an employee, but one whose communication style is different. In the case of AI, that style is characterized by its low-context nature, like Dutch or German culture, right? In low-context cultures, effective communication requires precision, simplicity, explicitness, efficiency, clarity. All ambiguity needs to be minimized. The goal is really to ensure that the message is transmitted and received exactly the way it was formed. And this is what we need here. We need to be extremely precise in how we ask the question to be able to receive the most accurate output. Once this is understood, it completely changes the approach we take. We use human codes, and with human codes, there is no limit to the spectrum, right? Because we know everybody's emotions are unique. So there is no limitation. We do not talk about pixels, we talk about behaviors the system should embrace. We do not talk about buttons, but about personalities. We do not talk about messages, but about communication, all types of communication: oral, touch, eye contact, body language, all of this. For humans, there are many, many elements that come into play. And this is why I believe that design is the real game changer, and not the technology itself. Look at OpenAI. They recently brought in a new leader, Fidji Simo, who comes from a business background, because the CEO of OpenAI recognized that if they wanted to scale and bring a real ROI to the technology, they needed to do something differently. So we are at that door right now.
If AI is becoming more like an employee, a colleague, or a partner, should design then evolve into something closer to organizational culture design, where we would design for how AI fits into teams, values, and shared rituals? That may be a question for the future, but for now, I believe that organizations are struggling to adopt AI because they only look at the technology. Technology can only do so much. It has limitations. And what is the value of a product if it's not used? You can have the best LLM in the world, but nobody uses it. I mean, that's ridiculous. I also see a fundamental issue when companies and organizations focus primarily on technology: technological advancement has come at such a pace lately. Innovation every week, LLM updates every month, new tools coming onto the radar every day. Nobody can keep track anymore. There are way too many. The human brain itself can only adjust so much. You can speed up the technology a lot, but you cannot speed up how the brain absorbs information and adjusts to its environment. It will quickly hit a limit, right? Imagine if in the 1600s someone had come up with a spaceship. People were still on horses. What would have been the value, right? Just think of Galileo, who was very, very much ahead of his time; he was condemned for heresy just because he stated that the Earth revolves around the sun. So imagine, I cannot even imagine, by the way, if the guy had said, well, now you can travel to space. Probably a bad, bad outcome for him. So, in a nutshell, I believe that organizations don't fail at AI because of AI. They fail because they forget it's humans who have to live with it. And that's really the heart of what we're talking about today. So, what about you, Reema?
Why do you think organizations struggle to adopt AI successfully?

SPEAKER_01:

I loved everything you said just now. I love the Galileo reference; I just had to put that out there. So that's a good question: why are organizations struggling today? There are so many lenses to this, Vanina, right? Is there a clear framework in place today to measure the impact of AI on revenue? Not really. To the point we were making earlier about reimagining an ecosystem, are organizations looking at AI holistically? Not really. It's like we're building different floors of a building without a clear blueprint in place. Have organizations created space for what I refer to as happy stumbles, you know, the capacity to experiment and, in that experimentation, learn and, yes, fail? Not really. We could spend the entire hour here just making a list of "not really," so in the interest of time, I'll focus on what I think is the most critical lens, and you're pointing to it already: the human lens, or let's say the employee lens. It goes back to what you were saying earlier about the connection between technology and humanity. I've always loved what Edward O. Wilson, the Harvard professor and renowned father of sociobiology, said: modern humanity is distinguished by Paleolithic emotions, medieval institutions, and godlike technology. Our emotions and technologies are evolving at completely different paces. That creates high levels of dissonance in the system. I also believe that dissonance creates high levels of fear in the system.

SPEAKER_02:

I guess we agree on that, right?

SPEAKER_01:

It's very much aligned, yeah. What I've found in my observation of organizations is that we tend to, what I refer to as, fear the fear. I'll talk about it through three lenses, and I'm sure you'll have more to add. The first is what I refer to as silence in the spiral. The second is from fear to motivation, and the third is emotional contagion. So let's talk a little bit about silence in the spiral. In any organization, when there's change at play, there is an automatic movement to focus on the individuals we perceive to be on a downward spiral towards apathy, fear, and resistance. All change-management efforts go towards that. My observation is that organizations are not paying attention to the other end of the spiral, that upward manic spiral towards acceleration, where those who are fastest to accelerate tend to hold the information and the intelligence, and they control the endgame. And does anybody have the capacity to control the endgame? I don't think so. What happens is that between this upward manic spiral and this downward spiral exists a lack of dialogue, which creates massive dissonance between the acceleration and the apathy. The second point I want to make here is from fear to motivation. I talked about this apathy-to-acceleration journey, but there are steps in between that employees have to go through: awareness, then action, and, if that acceleration happens well, you'll have a population that is functioning autonomously. But we seem to skip steps, and we don't tend to realize that to move a population even just from apathy to awareness, there's a direct correlation between fear and motivation. I don't think organizations are spending enough time today to understand individual motivation and create catalysts for that motivation. And then the last one: emotional contagion. This is just human biology; this is not me making something up.
Emotions move through a system very, very rapidly. My observation of organizational systems today is that there is a mix of panic and passion when anything new is introduced into a system. And I think organizations are being very, very slow to regulate that panic or passion with a clear sense of purpose. I'm very curious about your perspective, but I'll add this final point: if we bring a bit more humanity to it all, we will remember that this is not transformation; we as humans are constantly in transition. Fear is a very, very natural emotion. It's like a roller coaster ride, right? We are on a series of transitions. But on a roller coaster ride, you can hold fear and exhilaration and joy in the same moment. If we want that for our employee base, our populations need to feel that they matter, irrespective of where they are on that spiral I mentioned earlier. We need to understand their individual motivation. And yes, we need to give them a clear sense of purpose. And so my question back to you is this: you've spent the last decade, of course, thinking and working in this space. Do you have any real-world examples you can share on these challenges we're seeing?

SPEAKER_02:

Indeed. What do we do then? That's a big challenge. First, I think we need to stop chasing every shiny object and shiny tool and put a little bit of blinders on, focusing very specifically on the outcome we need. For most organizations, that outcome is clear: improve productivity in the context of talent shortages and market pressure to deliver faster. That's table stakes, right? Take the example of manufacturing. Many organizations didn't start with AI-driven reinvention in their factories. They started small, with computer vision models that help detect defects in real time, for example. It's not flashy, but it has an immediate impact in improving efficiency, saving cost, and freeing workers, giving them more time to focus on more skilled tasks, or value-added tasks, as we call them. Once teams get comfortable with that, they can begin experimenting with predictive maintenance, eventually rethinking the entire supply chain design. But that has its different steps, as you were saying, right? And this is the human part. You cannot ask people to jump straight into reinvention. Most employees need time to build confidence and trust in the tools. It's a bit like learning to climb stairs, right? You don't leap straight to the top floor. Or take a baby who is learning to walk: they don't walk and run right away, they start crawling. So you start with the first steps, efficiency and augmentation. Once that path is safe and familiar, then you can move to invention and reinvention, or reimagination, actually, to follow the framework you are using. Yes, there will always be individuals who want to leap ahead, but for the majority, the path to AI adoption is gradual. And when organizations respect that journey, adoption becomes much more sustainable for the vast majority.
I also believe the other issue is that even when you start small, you need a vision, you need a foundation. I am seeing too many dispersed projects. I call them the offhand projects, you know. They just throw spaghetti at the wall and see what sticks, and then they cry wolf when it's failing: oh, what is the ROI? How many projects have I seen that start and never finish? They don't even give them time to prove the ROI before terminating them prematurely. We need to learn fast and fail fast, but we still need to give things a little bit of time before we give up. Because otherwise we conclude, well, it's not working. Of course. You didn't really try it. You stopped halfway. So we need a minimum of strategy. Strategy is not doing something dirty and cheap and hoping to bring that dirty and cheap thing to production and have it solve the problem, right? If you want to create a mansion, you have to go step by step. You are not going to use hay as your foundation. That is not going to work. So you need to think ahead and build every step in a very intentional way. You start small, but you start strong. And you don't wobble towards the end goal, right? You remain steady. Anyway, this is so fascinating, right? So Reema, let's shift the topic a little bit. You and me, we are working in a global environment every day. And we are so fortunate to do this, right? With Hitachi being all across the globe, we deal with people from everywhere. So do you think that people approach change the same way across the globe?

SPEAKER_01:

Oh my gosh. Yeah, and I'll just echo that you and I are very fortunate that our roles place us in a global context, and what might seem very natural to us does take some understanding. I'm going to step away and give some different references that I hope will help our audience today. The first reference is Shige Oishi, who wrote an incredible book earlier this year called Life in Three Dimensions. He is a professor of psychology at the University of Chicago. He's been studying happiness for the last 30 years, and psychological richness for the last 10 of those. He makes the distinction between a happy life, a meaningful life, and a psychologically rich life. And I love the way he defines psychological richness. He says it's about challenging oneself, about experiencing the unusual and learning new things. This is directly related to change. The context in which you live and work will define how you want to be challenged, experience the unusual, and learn new things. It's going to vary. Now, the other complexity is that not only does your lived experience matter, but the stage of life you are in will influence what mix of happiness, meaning, and psychological richness you want. So it's not only what types of change you're open to, but how open you are to change in general. And in my role, managing a population that's spread between Mexico, the US, many parts of Europe, and India, I'm hyper-aware of this. Vanina, what I would say is that leaders who find themselves in global roles, and who are in a position to help populations navigate change at scale, cannot cling to what I refer to as the familiar and the similar. And I'll say that again: you cannot cling to what is familiar or similar to you. The reference points have to shift frequently. And you mentioned context earlier, and I thought that was such an important thing to bring in.
We also have to think about the fact that when you are communicating change, you have to understand cultures and the different levels of context they need. As an example, Japan is a high-context culture. It's not just about language; it's about nonverbal cues and shared context. The US, by contrast, is a low-context culture. You can be very direct and use very explicit language. What you say is what you mean. An excellent book for anyone wanting to understand cultural context and how to work with a global population is Erin Meyer's The Culture Map. I know you love that book. So my question back to you: similar to me, you've been working in a global context for a long time now. Do you believe this to be true?

SPEAKER_02:

Absolutely. And you know, Reema, how passionate I am about international cultures and differences. So I totally agree. No matter where you are in the world, change always provokes uncertainty. But each culture tells a different story to make sense of that uncertainty. If we put this back into the context of AI: in East Asia, for example, where collectivism, respect for social order, and a high value on science and engineering dominate, technology is woven naturally into daily life, right? In Japan, robots are often seen as helpful companions, not threats. There are Shinto beliefs that attribute spirits to objects, which makes it easier to find harmony between human and machine. I'm always fascinated to see this when I set foot in Japan. In Europe and in the US, however, decades of science fiction have trained the public imagination to associate AI with dystopia: The Terminator, the bad guy. The fear that machines will surpass and control us has seeped into the collective mindset, making skepticism the default. Now if we take the example of Africa, the Middle East, and South America, AI is often viewed more pragmatically, as a lever of transformation: improving agriculture, widening access to education, enabling financial inclusion. The narrative isn't about domination or companionship, but about opportunity for the future. In other words, AI acts like a cultural mirror. In each region, it reflects the hopes and anxieties most alive in that society. What's universal, though, is that change always provokes uncertainty. What differs is the story we tell ourselves about what that change means, right? Beyond culture, I also think that people's experience, their exposure to different cultures, and you mentioned that earlier on, their personality, and their past all influence how they react to change.
Culture is definitely a big factor because it molds your foundation. In the same way, if you were raised in a family with more progressive behaviors versus more conservative behaviors, you will adjust to change differently. It's like raising a kid, right? At the end of the day, we keep coming back to the fact that change, and how we make change valuable, has very little to do with technology and much more to do with human factors. This tells me that when we do AI design, we need to take cultures into account. We cannot design the same way for Japan as we would for Europe, for example, because the reinforcements we need to inject into the AI experience need to be different. So, Reema, how about you? How do you think organizations can get ready for AI and all the opportunities it may offer them?

SPEAKER_01:

So, you know, Vanina, I loved what you said: this shift needs to happen away from a technology-first approach, right? I will propose three approaches, and I'm sure you'll add to them. My three are an ecosystem-first approach, a data-driven approach, and a human approach. I'll do the ecosystem approach first. You talked about the need to shift away from being tech-first. I'd say shift from being tech-first to ecosystem-first. To understand any ecosystem, you have to understand the challenges that underpin it in its products, its people, and its processes. And you can assume that AI is one of the many tools we have access to to help solve the problems the business is facing. The next thing is that you need roadmaps. And this is not just about process roadmaps and product roadmaps, but what I refer to as an adoption roadmap. The human lens on this is critical. You've got to think about it like you're turning on the lights of a building: yeah, they might come on out of sequence, but you have a grand master plan in play for that. The second lens: let's talk about a data-driven approach. You talked earlier about ROI and the complexity that lies therein. I think it's really important to set measurable goals for every test, every experiment an organization is running. And it's not just about the time and cost saved or the revenue; to me, those are pretty obvious. I think it's really important to also measure the employee base. How are they thinking? How are they feeling? How are they acting at a behavioral level? The way to think about that metric is: do they feel a deeper connection to the organization's mission? Do they feel confident about the skills they are acquiring in this new world? Do they feel that the environment is set up to help them succeed in a changing world? The behavioral science elements are essential to put in place. And this is not about running a survey once a year.
And of course, I have to bring in the human approach, because that's the lens I always look at businesses with. The fear of becoming obsolete is so, so high. We have to reduce the fear in the system and increase the motivation. There are so many ways one can do that, and I'll just talk about three: decision-making, career growth, and pragmatic pioneers. Decision-making, let's do that first. One of our key fears as humans is the feeling that we are going to lose our capacity to make decisions. And when I think of how we make decisions, I think of it in three ways, and we use all three. We use our instinct, you know, that feeling in the gut of your belly. Our intelligence, the things we have learned and known through our lived experience. And the third one being information, the information that's out there in the world that helps us make decisions. Today, AI is there to provide that information lens. Our instinct and our intelligence are still very critical, and organizations need to show employees how those three can be combined. Second is career growth. Historically, organizations have always viewed career growth, and employees view it this way too, very much as a ladder, right? I take this step, I take the next step, I take the third step. It's a very linear progression. My hope is that we as employees, and even organizations, encourage the mindset of a polyhedron, and by that I mean an object with many, many sides. We need to change how we view ourselves. In a polyhedron, each side is a new dimension we're adding to ourselves; it's limitless in terms of how many you can add. If we take that mindset, and if organizations create that kind of perception and reality of what growth means, we will learn skills we never imagined were possible. And we'll shift this linear progression of growth to a much broader definition of it. This is very hard to do, but it's critical.
And to every designer, strategist, or anyone that's listening out there, I've always loved this quote from Vincent van Gogh. He wrote it in a letter to his brother Theo in 1888: "But the painter of the future is a colorist such as there hasn't been before." All our jobs are going to be, to some extent, reinvented and reimagined, and we get to be those colorists of the future. The last category, which is very dear to my heart: pragmatic pioneers. In transformations, we tend to celebrate the pioneers: the individuals first out of the gate, the loudest and the boldest. I strongly believe that for this AI journey to be successful, we must find and celebrate the pragmatic pioneers. These are the individuals who are not trying to control the end game. They are balancing the need to move at pace while taking people along on the journey. They truly understand that this is not a solo adventure.

SPEAKER_02:

It's much bigger than that.

SPEAKER_01:

Oh, much bigger, much bigger than that. And here are my final thoughts, because I do lead the design and strategy team at Method, so I must talk about creativity. In any moment of human history when we have made our biggest leaps, creativity has always been there. I love the poet Maggie Smith's ten principles of creativity: attention, wonder, vision, surprise, play, vulnerability, restlessness, connection, tenacity, and hope. In particular, she says, inside hope and tenacity are patience, devotion, and fortitude. And my hope is that every organization makes space for each of these as they navigate this new world. That's the way to solve this problem.

SPEAKER_02:

I think we also need the following. Communicate in different ways to appeal to the largest audience, right? People have different motivations or fears, and they first need to find what is in it for themselves to be able to carry the torch. People need to be presented with a vision, and this vision needs to be reiterated over and over and over. Consistency is what makes success, right? If you're changing direction every other day, you're losing people. It also erodes the trust that you are building, and it will lead to failure, obviously. Then you need to build the right tools for people to learn and also to advocate for change. We cannot just put tools in people's hands and say, hey, here, just go figure it out. That doesn't work, right? There need to be the right structure and the right operating processes in place so people know where to go and what to do with this change. Champions are another topic. Champions absolutely need to be identified and sprinkled throughout the organization to constantly relay the message. These are going to be the people who help accelerate the process. They don't need to be the same type of people; they can have very different backgrounds, experiences, and home organizations. But in every case, they have an interest and they are ready to take the journey with us. In AI particularly, you need to put the right people in the driving seat: not just the technology people, but people who represent the entire ecosystem needed for a change. People need to relate, that's also one of the elements. And you mentioned that you need to set goals and measure progress, right? What are the benefits they are actually getting from that technology?
And finally, and I think this is the one I learned the most over my career: you need to be patient and persistent. Change does not happen overnight, right? It is a very, very long process, and that's the reason why so many innovations fail, because we don't necessarily have this patience. It could be because we don't have the finances to sustain the longevity of the process, but very often it's the patience to see the fruits of what you put in there. So in conclusion, I believe what you and I can say is that AI has the power to transform; that's undeniable. AI absorbs our human traits: functionally, like learning and problem solving; socially, like dialogue and empathy simulation; and symbolically, like agency and creativity, right? But unlike humans, AI doesn't have consciousness, emotion, or meaning-making; it only reflects them back. In the longer term, the boundaries between human interaction and AI interaction may blur, we'll see that, which will force us to refine what authenticity and authentic connection mean. We need to make sure that we do not create parasocial bonds with the machine, which can sometimes happen. So here we are, Rima. This is the end of our discussion. Thank you so much for spending the time today to have this conversation with me. I think we both agree that it's a fascinating topic, and certainly an emerging one, to consider AI from this new perspective: not the traditional technological lens, which is the narrative we see most of the time, but the human perspective, which is undeniably the one to consider if we want AI to persist and bring the value it should bring to our society.

SPEAKER_00:

Thank you for joining us on Build What's Next Digital Product Perspectives. If you would like to know more about how Method can partner with you and your organization, you can find more information at method.com. Also, don't forget to follow us on social and be sure to check out our monthly tech talks. You can find those on our website. And finally, make sure to subscribe to the podcast so you don't miss out on any future episodes. We'll see you next time.