The Space In Between Podcast

A Unique Take on Who Should Run AI: With Liat Ben-Zur

Leigh Morgan Season 2 Episode 16

In a time of dazzling technological breakthroughs and deepening divides, how do we ensure artificial intelligence (AI) serves society in ways that elevate humanity? What can be done to ensure the right people are leading AI-based companies, and that leaders are well prepared to navigate technological, moral and ethical dilemmas now and in the future? In this episode of The Space In Between, I sat down with the brilliant and well-respected technologist Liat Ben-Zur to explore these questions with both candor and nuance. With more than 25 years of leadership experience at companies like Microsoft, Philips, and Qualcomm, Liat brings a rare blend of deep tech fluency, moral clarity, and lived experience in all things AI and leadership. Her take on who should run AI is important, powerful and prescient!

Liat shares four key messages that matter now more than ever:

  1. AI is not eliminating human leadership—it’s exposing it.
  2. Those on the margins often hold the wisdom we need most.
  3. Big tech’s choices matter—and so does ours.
  4. Leading in an AI era demands moral courage, radical empathy, and a refusal to stay silent.


Hello and welcome to The Space In Between podcast. I'm your host, Leigh Morgan. Again, this podcast is for listeners who are fed up with the hyperpolarized nature of the world today, and who crave spaces where current events can be discussed in constructive, enlightening and delightful ways. Let's get started.

Leigh Morgan-1:

We have a great show today, and for sure, please put your seatbelt on, because my guest, Liat Ben-Zur, is a true out-of-the-box thinker on all topics related to artificial intelligence, or AI. Specifically, she has some insightful and evocative assertions about how AI is impacting dynamics within organizational settings, and how our life experiences and identities influence how we navigate and deal with the complexity of influence in society. We will explore her novel ideas about all of this, and how, in a world that can too often feel uncertain and fragmented, we can ensure leaders and organizations use AI technologies in ways that improve lives and create more just and prosperous communities. Liat has lots to say about all of this and has an incredible biography. She is a very well-respected and inspiring tech executive and influencer with over 25 years of leadership experience at iconic companies like Microsoft, Philips and Qualcomm, where she led a wide range of transformational initiatives in very complex business settings. While at Microsoft, she revitalized a stagnant business line, which resulted in doubling revenue from, get this, 6 billion to 13 billion. So she knows a thing or two about bringing people together to achieve remarkable results. I've come to know her as a kind, fierce advocate for equity and for the potential of AI to be a tremendous force for good in the world. She's also a gifted speaker, and be sure to check out her LinkedIn posts, where she regularly drops thoughtful observations about tech and a wide range of pressing social issues. I'm also glad to call her a friend. Liat, welcome to The Space In Between podcast.

liat benzur-1:

Thank you, Leigh. It's a real pleasure to be here with you today.

Leigh Morgan-1:

Oh, I'm so glad. And we have such a juicy topic to jump into: your novel thinking about AI and leadership and different identities, and how we can leverage different backgrounds. But I wanna ask you first about your career, and also the energy that you bring to scaling, because you've done some big stuff in the corporate sector, and you've definitely played a big role in shaping the role of technology in companies and in society. And I wonder, where does that passion come from?

liat benzur-1:

You know, I haven't thought about that too much, but I think, growing up, I always kind of felt like I didn't quite belong. I've been driven by questions that I think others were not asking, and noticing small patterns that I felt hinted at something bigger underneath. I've always kind of tended to see signal in the noise really quickly. And then for me, technology kind of became this, I don't know, like a canvas to bring some order to the chaos. Like, I was able to turn these abstract problems into something real, something that you can build and scale and put in people's hands. And at the end of the day, you know, the goal is you build these things to make life easier or faster, or make people feel more connected. I think at some level maybe it's that, but there's probably a deeper answer here. If I trace it back even further, it kind of starts with my grandmother. She survived Auschwitz. And the stories she would tell me about the Holocaust and the camps where her whole family was killed, and the silence of bystanders, I think it taught me at a very early age that looking away makes you complicit. That speaking up isn't optional, whether that's for yourself or for others who can't speak up for themselves. You know, when I look back, I definitely see that that lesson is a thread throughout my life. You probably have some of these stories of your own, from your own hardships, but you know: kids at school who would hurl antisemitic slurs at me, customers who would look past me because I was a woman, leaders who told me I was too assertive to succeed, all the years I stayed closeted at work 'cause I was terrified that telling the truth about being gay would undo everything I'd already built. Each of these moments etched the same truth deeper inside of me, which is: the world isn't gonna change by itself. It changes when you refuse to accept things as they are. And that drives me, right? Whether it's how teams operate or how businesses grow or how systems leave certain people out, I feel this cellular need to challenge assumptions and question the status quo and rebuild systems to be better. Technology's pretty powerful, but I think it's what it can unlock across society that probably keeps me up at night and kind of gets me going.

Leigh Morgan-1:

Wow, that's a powerful lineage that you have, and I just wanna lift up your grandmother for the lessons that she passed on to you. Through her own survival, when so many did not survive, she was able to pass on this lesson of: don't be silent.

liat benzur-1:

Yeah, it definitely is not something I think I was conscious of for most of my career, but now you kind of get to that point where you're lucky enough to look back and reflect and connect dots, and I definitely see that that's kind of a common thread in everything I've done.

Leigh Morgan-1:

Yeah, that's really powerful, 'cause you really are such a courageous person, and I've seen you, you know, whether we're together having coffee or I'm seeing you on a main stage at a big conference, there's a consistency about your courage and ability to speak your truth that I think is really remarkable. So I think your ancestors are giving you high fives right now, my friend.

liat benzur-1:

And thank you, Leigh. That means a lot to me.

Leigh Morgan-1:

And you're a mom as well. How are you passing on this spirit of "speak your truth" in an expansive way? We all have our own ways of speaking our truth. How do you pass that on as a mom?

liat benzur-1:

You know, the funny thing is that, like, no matter what you do as a mom, you'll never feel like you're doing enough. You always worry you're not quite doing it. So I am very hesitant to say if I'm doing it or not. I mean, I definitely try. My wife and I try to share and instill all the same values that drive us, and I couldn't be more proud of my kids today. But isn't that the mission of all of us? We're just in this place to hopefully instill some of that goodness in others around us. And maybe it works, and maybe it doesn't work, on some days.

Leigh Morgan-1:

Well, we're all a work in progress, and I'm sure you all are doing

liat benzur-1:

Yeah.

Leigh Morgan-1:

a great job with the kids. So my first question for you: you know, you really are at the forefront of thinking about AI's role in business and society, and you speak a lot about this. You write a lot. You have a book in progress, and I hope we'll talk a little bit about that. How do you see the development and use of AI tools shaping power dynamics and leadership structures within organizations?

liat benzur-1:

I think first it's important to recognize that AI isn't just changing how we work. It's completely redefining who holds power. And I think traditional leadership structures were built on a kind of knowledge gatekeeping, right? Where the leader was the person who knew more, who'd been there longer, who had the deepest expertise, if not the deepest experience. And today, what we're seeing is that AI collapses that hierarchy almost overnight. All of a sudden, and I see this in a lot of the companies that I advise now, you'll see a junior employee with the right AI expertise, if you will, who understands how to prompt a machine, and they can access insights that used to take an entire team months to generate. And now knowledge is no longer scarce; it's abundant. That doesn't erase the value of traditional expertise or domain knowledge, but I do think it shifts the role of those types of leaders from gatekeeping information to contextualizing, to interpreting, to ethically guiding AI-generated insights. And so now the differentiator isn't who knows the answer, but who can ask the questions that no one else in the room thought to ask. Who can see the hidden patterns? Who can make judgment calls when the data's incomplete? Who has the courage to act when the AI output falls short? I often will say that AI doesn't eliminate human leadership, but it exposes it.

Leigh Morgan-1:

Say that again. I think that's so prescient.

liat benzur-1:

AI doesn't eliminate human leadership. It exposes it,

Leigh Morgan-1:

Wow.

liat benzur-1:

lays bare who's truly adaptable, who can navigate uncertainty, who has the moral courage to say, hey, just because we can do this doesn't mean we should. Just because the AI says this is the right answer doesn't mean we should immediately accept it as is. And so I'm actually writing a book, like you mentioned. The book is called The Bias Advantage: Why AI Needs the Leaders It Wasn't Trained to See. I really explore this concept of why the leaders who are best equipped for AI aren't always the ones at the top today. They're often the people who've kind of spent their lives on the margins, navigating systems that weren't built for them.

Leigh Morgan-1:

Well, say a little more about that, 'cause are you talking about the engineers who are writing algorithms, quote unquote "overseeing" these large language models? I'm not sure they can be overseeing. Or are you speaking about people who bring backgrounds where, because of their identity or how they were brought up, contextually they might have to look around corners, because they might be the only person in the room who looks like them or has their background? I mean, tell us a little bit more about what you mean around the people who are best equipped.

liat benzur-1:

Yeah, more the latter. So I would say, think more about marginalized groups, who are really well positioned to help organizations and society navigate the nuances of AI. People from marginalized backgrounds develop a different type of intelligence, a different set of muscles, mostly just to survive. Like, when you grow up outside the center of power, you learn to read between the lines. You learn to detect hidden risks and build influence without formal power. You learn to see what's missing from the data, what's overlooked in the model, what unintended harm a decision might cause, because you've been on the receiving side of that. You smell things in the air that maybe others aren't even looking for. What I talk about in the book are these attributes that these folks tend to have, that you wanna look for when you look across your organization. They build radical empathy. And I don't mean, like, performative niceness, which I tend to see a lot of in big tech organizations.

Leigh Morgan-1:

Seattle.

liat benzur-1:

It's not just Seattle. Believe me,

Leigh Morgan-1:

Okay.

liat benzur-1:

I've lived all over the world, and I can say the tech industry has that pretty well cornered.

Leigh Morgan-1:

Okay.

liat benzur-1:

But it's this ability to, like, sense human nuance and pain points that algorithms sometimes don't parse. An example might be noticing how a product unintentionally excludes certain users because their experiences were never considered in the inputs.

Leigh Morgan-1:

A very specific example in a business setting.

liat benzur-1:

Yeah. That's what I mean when I say radical empathy. Then there's an ethical radar. When you've kind of lived on the downside of bias, you don't just spot algorithmic harm, you try to stop it before it scales. You learn to influence without titles, right? When you've never had default authority, or when it was really hard for you and it took you a longer amount of time to earn default authority, you learned how to create impact through relationships, through credibility, through coalition. And that's precisely what AI does not know how to automate. You build resilience, right? When the standard path didn't open up for you, you kind of learned how to invent new ones. I think it's an adaptability that's born from the muscle memory of pivoting, of resetting, of always having to prove yourself again and again, in every single meeting, in every single setting. Listeners who are out there often are considered marginalized leaders, and marginalized leaders might be different things in different industries. In tech, for example: women, people of color, LGBTQ folks, people with disabilities. There are a lot of folks who are leaders in tech and are kind of on the margins. They recognize this; they've built these muscles,

Leigh Morgan-1:

Hmm.

liat benzur-1:

and I no longer think of these as nice-to-have traits. In an AI-powered world where decisions carry moral and societal and economic consequences, I see these skills as being the difference between deploying technology that works for everyone and scaling blind spots that can harm the very people we claim to serve.

Leigh Morgan-1:

I think this is so important, because this notion of moral courage and being able to make ethical choices, or rather navigate ethical dilemmas, which is often the case, you know, it's the stuff in between, if you will, these dilemmas where there might not be an obvious path. That's where these attributes you're describing come in, for people who identify as being on the margins, who have had to navigate dilemmas and uncertainty, where it wasn't this linear path of "just perform and I'll be rewarded on my merit" the way folks in some dominant cultures may have experienced. Is that what you're hinting at?

liat benzur-1:

That's absolutely a big piece of it, right? Anyone who's felt like they've had to get very creative in order to get from A to B, or A to Z, to be recognized, who has had to prove in every meeting that they're in that they deserve to be there. There's a muscle that you build. There's an attribute that you build in terms of how to influence, how to bring people along. These are skills that no one even teaches in business school. These are skills that no one really talks about in any HR training. But they are skills that every single marginalized leader I've ever talked to, that I interviewed for my book, immediately recognized, skills they built without even recognizing that they were building them along the way, 'cause they've been in these rooms where they have to sniff out: what are the power structures here? Who are the decision makers? Not because I'm trying to influence and sell, but because I'm just trying to survive. I need to make sure that I'm...

Leigh Morgan-1:

It's a powerful thing, when you have that sense of survival, which might be keeping the job or getting fired, right, or whatever that sense of survival means to a person. That's a powerful motivator to learn and adapt and be

liat benzur-1:

Sure.

Leigh Morgan-1:

flexible. It doesn't always play out that way, but someone in a place of agency will say, ah, I have to survive, so what can I do? And that's amazing. You know, your grandmother instilled in you that sense of moving from looking away to actually doing something in the world: don't remain complicit. And so you have your particular way of expressing that. That's a powerful thread that I'm seeing.

liat benzur-1:

Yeah, I think you nailed it.

Leigh Morgan-1:

And this notion of radical empathy, which you gave voice to: we're in a world where there's a lot of fear, there are higher levels of distrust. You and I were at a conference where we learned about the increasing levels of fear in the world. At the same time, we have these amazing technologies coming on board, and so radical empathy strikes me as a quality in leaders that is so, so important to allow us as a collective to move out of survival mode into the application of expansive thinking, where being empathetic is actually a practice to help us be innovative and creative.

liat benzur-1:

Yeah, absolutely. And it's on a long list of attributes that I think are critical for this next era of leadership. You sometimes see some of these ideas in business books, like radical empathy, like psychological safety, like resilience. But it all takes on a whole new meaning, I think, when we start to talk about algorithms that are gonna codify the biases that are in our society.

Leigh Morgan-1:

Okay, I wanna dive into that, and a first pit stop would be to get your sense on these big companies. There's a critical few that dominate the AI landscape. I mean, there are lots of small companies doing really interesting things with AI tools and technologies, and then we have the Microsofts, Facebooks, Amazons, Googles, et cetera, that have so much scale. And I might add Anthropic, whose tool Claude is being used by many people. They have huge influence over the development, the pace, the funding for AI tools. Can you talk about their influence today and what you see in the future, especially given how powerfully positive and powerfully negative these AI tools can be?

liat benzur-1:

You're spot on in recognizing that their influence is undeniable. I mean, these companies hold extraordinary power. They always have, but today even more so, not just over markets and over tech, but over how knowledge is being defined, whose experiences are being prioritized, what is getting automated, what is getting erased. And I think that their future really depends on whether they remember their responsibility to society, because that's the social contract I think we all have with these companies: the understanding that their decisions affect all of us, not just their stakeholders, not just their shareholders. And right now, I think a bit unfortunately, we are seeing the true colors of many of these big tech companies. After years of talking about diversity and equity, and running all kinds of initiatives across HR and measuring all kinds of things, we're seeing many of these big companies quietly abandoning those commitments as soon as the spotlight fades. And it just shows that for them, inclusion was a bit of a PR strategy. It was not a core value. That is dangerous. Again, because AI isn't just about building bigger and bigger models or faster chips or getting more data center access. It's really about embedding human wisdom into systems that are gonna shape decisions at scale. And wisdom requires what? It requires diversity of thought. It requires different lived experiences, moral compass, imagination. Without that, AI is gonna reflect and reinforce the narrow perspective of those who are already in power. It could miss entire markets. It could misread entire populations. It could amplify harms that they don't even see, 'cause they're not looking for them. And so I think the companies that will shape the future aren't just the ones with the best algorithms or the biggest data centers. I think they're gonna be the ones that understand human complexity and figure out how to build for it. And that is not something you code; that comes down to leadership.

Leigh Morgan-1:

It does come down to leadership. And my friend Everett Harper, who wrote a book called Move to the Edge, Declare the Center, talks about diversity of perspective as not being a bug in a system; it should be a design feature. It gets us better outcomes, and the outcomes are infused with more wisdom, because you have people from different backgrounds bringing different perspectives. That's kind of that space in between, if curated well, and you and I know it's not always curated well. It's really just that simple, and when we have that, then we have the opportunity for wisdom to flourish. And you know, I was thinking about our conversation a couple weeks ago, when Grok, Elon Musk's horrible AI tool, came out. And he basically said to his team, build this Grok, which is like ChatGPT or Claude, but make sure not to be, quote, politically correct or woke. Which means, when they scraped the web for data, which informed their large language models, they excluded nonprofits or authors that wrote about antisemitism. He wouldn't include knowledge about how antisemitism is on the rise and how it's bad. I mean, it was just this classic "I'm building bias in," and yep, not only that, you're getting hate speech coming out of the tool.

liat benzur-1:

I love the example of the book your friend wrote, because I think that is so spot on, and it's so aligned with a lot of the points I'm trying to make in my book. There are these hidden power structures that are embedded in AI systems, and if you aren't conscious of them and you aren't looking for them, they will surface. I think one of the biggest misconceptions that I hear out there is that AI systems are neutral. No, they're not. AI systems are not neutral. They encode the values and the blind spots of whoever builds them, to your Grok example.

Leigh Morgan-1:

Mm.

liat benzur-1:

And I'll give you two other very brief examples. I think these are just representative cases. It's not meant to be exhaustive, but they hopefully illustrate this broader issue quite clearly. One is in healthcare, right? Medical bias in care algorithms, which gets really close to home. There was an algorithm that was used by major US hospitals to allocate extra care for high-risk patients, but it systematically underestimated the needs of Black patients. Why? Because of the way that it was trained: it used past healthcare spending as a proxy for health needs. Well, it turns out that Black patients historically spend less on care, not because they're healthier, but because they have less access. So what happened as a result? These Black patients were significantly underrepresented among those that were flagged for extra care. I think the algorithm ended up identifying only about half as many Black patients as it should have for that extra care.

Leigh Morgan-1:

Wow.

liat benzur-1:

Another example, you know, was in knowledge sharing. So there were some recent reports from Reuters that highlight how Chinese AI models, like Alibaba's Qwen and several others, systematically align their responses with state-approved narratives, right? China's state-approved narratives. And meanwhile, with Western models, we embed our own cultural and ideological assumptions into ours. For me, what this shows is that AI isn't just this neutral technological tool. It's an instrument of power that shapes what billions of people see as truth.

Leigh Morgan-1:

Yeah.

liat benzur-1:

And what that reveals is that whether it's lifesaving care or information access, these AI systems embed hidden power structures. Again, when you've lived on the downside of this, you sniff for it, you smell for it, you recognize some of these things. And if we don't ask whose worldview is being scaled, we risk automating bias and entrenching inequity, invisibly and, even more dangerous, at speed.

Leigh Morgan-1:

Wow,

liat benzur-1:

It's just happening so fast.

Leigh Morgan-1:

it is happening so fast. So I have two questions for you. One is for those of us who use AI tools, the ones that are consumer facing,

liat benzur-1:

Yeah.

Leigh Morgan-1:

ChatGPT or Microsoft Copilot or Anthropic's Claude, and then I wanna talk about leadership. But first, for those of us who use these tools in our work, how can we have that lens around the worldview that is built into some of these large language models? Or if we don't know, what are some practices we can use to try to mitigate the potential downsides of these large language models, which do have bias?

liat benzur-1:

I mean, I think the most important thing for everybody, whether it's a user or a leader that's running some of these next-generation services and solutions, is first, you gotta assume that bias does exist, because it does, right? That's where we all gotta start. And I think, you know, you gotta ask the same questions you would when anyone tells you something. You gotta understand that it's important to triangulate information. It's important to double check, to verify. And then, if you're in a position of power where you're able to influence the development of some of this stuff: a lot of the companies that I work with today, Leigh, they're not necessarily building the LLMs, but they're trying to use AI in all kinds of different contexts. It might be for healthcare, it might be for banking and finance, it might be for B2B SaaS, it might be for a food and hospitality business. All of these different sectors are trying to use AI. So if you're a leader in those, what do you do? Again, if you start by assuming that bias exists, then you can build systems to surface it. And that means putting diverse lived experiences at the table from design to deployment, not just in the ethics reviews at the end, not just in your board meetings. You wanna think about how you're using these AI solutions. By the way, my job is to help companies accelerate AI adoption and to leverage AI to drive financial impact, right? Whether it's new revenue streams or more efficient P&Ls, whatever it is, I'm in the business of accelerating AI. So it might sound like I'm, you know, scaring folks off AI, but it's about using AI intelligently. And so you gotta use frameworks like power mapping to understand who benefits and who's harmed. You wanna think through second- and third-order unintended consequences before they can scale. To me, bias isn't a technical bug; it's a leadership blind spot, and if you can't see it, you will not fix it. So it starts there.

Leigh Morgan-1:

So this takes me back to your earlier comments about why those people or leaders, anyone who identifies as being on the margins, and again, we all have many identities, but for folks who are seen as at the margins,

liat benzur-1:

Yeah.

Leigh Morgan-1:

as you described earlier, there is a greater likelihood of developing this sixth sense, right? For unintended risk.

liat benzur-1:

Yeah.

Leigh Morgan-1:

For thinking about what the second- or third-order consequences are if I speak up in this way, in this room. Right. So I'm just trying to tie that thread about, again, why diverse perspectives and lived experiences actually can be very, very additive at a time of great potential and great peril, as these AI tools accelerate and amplify in use.

liat benzur-1:

You got it. And a lot of times, folks who haven't lived these experiences don't even recognize what it is that we're talking about, 'cause they don't know what it means to be in a room

Leigh Morgan-1:

Mm-hmm.

liat benzur-1:

and sense that there's a power inequity, and figure out how they have to balance what they say in order to make sure it gets heard. They haven't experienced what it means to kind of be on that downside of bias, where no matter what you say, it already has some wrapper around it that makes it less important than whatever someone else says. And that is a very hard lived experience to articulate and communicate unless you've had it.

Leigh Morgan-1:

Yes.

liat benzur-1:

It doesn't make anyone better, it doesn't make anyone less than. It is what it is. And it happens to be a very important tool in the tool bank for managing and leading with AI.

Leigh Morgan-1:

You know, the image that's coming to my mind is from Star Wars. I'm a big Star Wars fan, and I think about the Jedi Knights. One Jedi is Yoda, in a certain form. Another, you know, is Skywalker, the white male Jedi. And there's a whole host, you know, there's a young woman in there. All of them are trained around a moral and ethical framework. I think the potential is there, regardless of background, to have that sensitivity.

liat benzur-1:

I do believe that.

Leigh Morgan-1:

I think what you're also saying is that it's logical to assume that those of us who are predominantly in the majority in the spaces where we live and work might not have as much sensitivity to how folks with different backgrounds might have these skills. I think both can be true. It is quite a dilemma, isn't it?

liat benzur-1:

if you take a look at who's leading most of the AI innovation out there today, if you take a look at most of the AI conferences going on around the world, it is predominantly male.

Leigh Morgan-1:

Is it? Yeah.

liat benzur-1:

Very little, very little diversity in a lot of these conversations. And, you know, my goal in the book, and in a lot of the things that I'm sharing here, is to encourage folks who are involved in all of these different companies in different ways, at different levels, but who don't see themselves as the AI experts, to recognize how much expertise and capability they actually have within them that is needed today more than ever. And to encourage them to, you know, have the courage to go after it, because they are exactly the leaders that we need right now.

Leigh Morgan-1:

Yeah, that's so true. I mean, we need technologists, and we need non-techies in these rooms, just to have very diverse perspectives, and that's a lot of the work that you bring as a leader. Would you call yourself a technologist? I would say you're real close to that. So, someone who knows tech really well, but I also see you as

liat benzur-1:

Yeah,

Leigh Morgan-1:

kind of a broader, you know, executive leader with a lot of seasoning on your shoulders.

liat benzur-1:

Well, I definitely have seasoning, and the more salt and pepper I get in my hair, the more purple I keep adding to it. But no, I've always seen myself as a technologist. I mean, I've been on the bleeding edge of driving product and envisioning the future since the mid-nineties: driving the evolution from 2G to 3G, 3G to 4G, 4G to 5G, the early days of the Internet of Things at Qualcomm, connected health at Philips, and finally my corporate vice president role at Microsoft. So I've always been deep, deep, deep in the tech. I've only ever seen myself as a technologist. I think it's only recently, as I've started this portion of my career where I'm helping a lot of different companies across very different industries and sectors, that I'm connecting a lot of dots. I'm seeing how AI is disrupting so many industries, and I'm recognizing how important the leadership role, the cultural role, the organizational change role are. So I'm talking about that more. But, you know, most of my career I've been deep, deep, deep on the tech side.

Leigh Morgan-1:

And you're an engineer by training, is that right?

liat benzur-1:

Yeah.

Leigh Morgan-1:

Yeah. Well, as a non-engineer, you're a huge techie to me, because I'm not trained in that. And interestingly, you know this, I'm getting more involved in the tech sector with

liat benzur-1:

I love it.

Leigh Morgan-1:

work that involves

liat benzur-1:

Wait.

Leigh Morgan-1:

some other amazingly courageous, brilliant, brilliant women, to think about what responsible AI looks like. And related to that: many people see AI just as an existential threat, you know, hair on fire, and then you have folks on the other side saying, oh, it's just gonna usher in only goodness. What's your advice for how we can stay in a more nuanced, thoughtful place, where we can hold the complexity of AI tools and the ethical dilemmas that we need to manage?

liat benzur-1:

Yeah, I think the truth is that both extremes completely miss the point. AI isn't magic and it's not doom. It's leverage.

Leigh Morgan-1:

It's leverage. Wow.

liat benzur-1:

It's leverage. There are a lot of debates about existential risk that tend to center around speculative, longer-term scenarios, like the AI alignment problem, superintelligence. I think those are important conversations, but we also gotta carefully consider the immediate social and societal risks and practical implications of the stuff that's being launched today, yesterday, last week. And it's scaling super fast. And like any tool, its impact depends on how it's used and by whom. And so the real risk is when organizations treat these AI outputs as truth rather than as inputs for human judgment. That's when we can get into some, you know, predicaments. And so I often get asked, what are some of the warning signs that an organization is losing some of that collective intelligence to AI? I'll tell you, I tend to look for several clues.

Leigh Morgan-1:

Okay.

liat benzur-1:

One is when frontline employees stop raising red flags or questioning the AI outputs, because they just assume that the model knows best. That's a red flag. Another is when meetings stop debating ideas, or when all of a sudden, and you know, as a board member you can sense this too, all the ideas that come in start to sound the same, probably because they were all AI-generated. That's a red flag for me.

Leigh Morgan-1:

Yeah.

liat benzur-1:

Then there's when I see performance metrics go up, like, hey, our efficiency metrics and our time savings look great, but I see trust, creativity or psychological safety go down; or when I see very short-term performance metrics go up, but a longer-term metric like retention goes down. Immediate red flag for me. And this is probably somewhat similar, but when leaders measure productivity savings instead of the longer-term outcomes that really matter, it's always a red flag.

Leigh Morgan-1:

That gets us to short-term outcomes, right? Of getting to the next quarterly earnings call, versus thinking about: if we do this, what might happen? What are the second- and third-order consequences, which you gave voice to,

liat benzur-1:

Exactly,

Leigh Morgan-1:

in three years? Losing sight of the latter is a warning sign for you.

liat benzur-1:

That's a warning sign. I think also when decision making gets to be so automated that no one can really explain to you why a choice was made,

Leigh Morgan-1:

Mm-hmm.

liat benzur-1:

and you kind of get into this, like, "well, the AI recommended it." That's a red flag for me. And probably the most dangerous is when people with lived experience, maybe those who've seen some things that the data can't see, get

Leigh Morgan-1:

Yeah.

liat benzur-1:

ignored because their insight wasn't backed by the algorithm. Then I really get worried. So if someone in a meeting says, hey, I see that the AI is recommending this, but in my experience, that's not what happens, here's what has happened to me, and that gets ignored? Red flag.

Leigh Morgan-1:

Disregarding human perspective in all aspects, that seems to be a theme of what you're getting at. And it strikes me that the power of AI tools needs to be matched by the elevation of innately human qualities of empathy, discernment, questioning, which really are in the realm of humanness and, as you're saying, can't be delegated to AI models, which I wholeheartedly agree with. There's an irony there, right? We're getting dazzled by these technologies, but at the end of the day, it's humanness that needs to move us forward as a collective.

liat benzur-1:

Yeah, I mean, here's the funny thing, and this is where I think people can get lost in the debate. I can make any point you want me to make with AI. I can show you how AI can amplify and help surface hidden insights, amplify overlooked voices, and reveal systemic bias and issues that leadership might miss. I can show you examples of how it can catalyze and democratize information and make people feel more seen who have not been seen. So when used for good, I can show you examples of that. Yes, I can do that. But it won't always do that on its own, because I can also show you examples where AI strips out context, where it finds patterns but misses meaning. It's not trained on fairness or ethics or inclusion, and we just lean towards "hey, the data says so" instead of grappling with real complexity. I can show you both of those things, and both of those things can be true. In the end, AI isn't gonna build bridges for us. If we use it to illuminate rather than replace human judgment, I think it could strengthen bridges. But if we use it to replace some of that, and I know you talk a lot on your podcast about the divides between groups, I think it could further the divides between us. So the point is not to debate the goodness of AI versus the harm of AI. We all recognize it can do both. There are so many examples of both, and showing me 50 examples of one does not negate 50 examples of the other.

Leigh Morgan-1:

I love that you're saying that, because that means we're holding the complexity and not reducing it to simplistic good or bad.

liat benzur-1:

That's right. This is what the future of AI is about. It's about that capability of leadership to recognize that trust isn't created by tools. It's created by how bravely and how wisely we use those tools. These tools can be used for good. They can be used for bad. They can be used for mediocre. They could drive meaningful value, they could help you drive growth, or they can be used just for some stupid little experiments that don't ever move the business needle. They can do all of those things,

Leigh Morgan-1:

Yes,

liat benzur-1:

but by themselves, it is not enough. It does require shepherding them and leading them and asking questions and building the right culture around them, and I think that is the leadership we need to build for the future with these tools.

Leigh Morgan-1:

I love that. And you know, AI tools, they're just like lightsabers: we need to train Jedi.

Leigh Morgan-1:

Right? And some Jedi can go to the dark side, so we know that's also true. But done well, Jedi help create organizations and societies that can replicate and scale the kind of critical thinking, the nuance, where it's not just good or bad, it's what we do with what we have. And that is squarely in the realm of humanness. You're lifting that up in such a profound way. So when is your book coming out? Because I want it now, and I want all of us to read it. Do you have a general timeframe?

liat benzur-1:

Well, in case there is a publisher or an agent out there that wants to represent me, please reach out to me. That will be the baseline of when this book gets out. I'm at that stage.

Leigh Morgan-1:

Just wait. With my 10,000 listeners of this episode, you're gonna get many, many calls. I'm certain

liat benzur-1:

I'm looking forward to that.

Leigh Morgan-1:

of that. And so I have one last question to wrap up, and that is: if you had a magic wand, Liat's magic wand, what would be the one hope you have for listeners as they try to navigate the complexity and nuance of AI tools, and for those that lead teams, to make sure that AI tools can be used for their highest purpose? What's that one wish you have?

liat benzur-1:

You know, Leigh, my hope is pretty simple. Don't stay silent. Ask the questions no one else is asking. Challenge assumptions, especially when the AI output feels too easy, too smooth, too certain. Bridge divides by showing up fully human. That means curious and empathetic and flawed and principled and uniquely you, with your crazy lived experiences and perspectives. Because at the end of the day, AI is gonna replace a lot of the tasks we do and the jobs that exist out there, but it's not gonna replace what makes us human, and it will reflect the best or worst of us, depending on who's leading it. And so my hope is that we choose to lead with courage and imagination and care for those who've often been left out or can easily be left out, 'cause this really is going to codify, you know, the best and the worst aspects of society. By definition, that is what AI is trained to do.

Leigh Morgan-1:

Well, powerful words to end this wonderful conversation. Thank you, Liat, for being a wisdom keeper and a wisdom sharer on this really, really important topic, and for not being silent and using your voice in meaningful ways in a lot of different areas. So thank you for sharing today. It's been super fun, and I think we're gonna have to have you on again on The Space In Between podcast. Just saying.

liat benzur-1:

I'm not a big Lord of the Rings or Star Wars expert,

Leigh Morgan-1:

You're missing out, my friend.

liat benzur-1:

but I am an SNL expert, and so

Leigh Morgan-1:

Alright,

liat benzur-1:

while I don't know if I have what it takes to, like, hold a lightsaber correctly, I can teach you a few things in a van down by the river. So I'm glad I could help out.

Leigh Morgan-1:

Well, let's meet there soon. Alright everyone, thanks for joining today. Liat, you're awesome.

liat benzur-1:

Thank you. Thank you. Likewise.

Leigh Morgan-1:

Okay, bye

liat benzur-1:

Bye.

I hope you enjoyed this episode of The Space In Between podcast. If you did, please hit the like button and leave a review wherever you listen to the show. And check out the thespaceinbetween.com website, where you can also leave me a message.
