Digital Transformation & AI for Humans

S1:Ep87 AI Governance & Ethics 2030: Human-Centric Conscious AI Leadership Beyond Compliance

Emi Olausson Fourounjieva Season 1 Episode 87

Today’s guest is the brilliant Erica Shoemate from Washington, United States, a bold disruptor, dynamic storyteller, speaker and global thought leader whose career spans nearly two decades across national security - as a former Acting Chief/Senior Intelligence Manager for the US Department of Justice - Big Tech, and policy innovation. Erica is also an AI Public Policy Global Fellow.

From her years of leadership safeguarding national interests with the FBI and the U.S. Intelligence Community to shaping ethical AI and trust frameworks at leading technology giants like Twitter, Meta and Amazon, Erica has consistently stood at the front lines of transformation - where technology, leadership, and humanity converge.

As Founder and Principal Strategist of The EN Strategy Group, she helps organizations design human-focused strategies that align innovation with accountability through her signature LEAD framework.

Join us as we go beyond checklists and compliance to explore what it truly means to practice human-centric, conscious AI leadership - and how the next decade of governance will be defined not by control, but by courage, compassion, and collective responsibility.

Erica is a part of the Diamond Executive Council of the AI Game Changers Club - an elite tribe of visionary leaders redefining the rules and shaping the future of human–AI synergy.

🔑 Key topics

  • Why AI governance must evolve beyond checklists and move toward human-centered, conscious leadership
  • What AI governance will look like as intelligent systems become active participants in decision-making
  • The inner qualities needed for the next generation of leaders navigating accelerated intelligence
  • How to scale ethics without losing humanity
  • Where policy and reality diverge – and how to close the gaps fast
  • The moral courage required to lead responsibly in a high-speed, high-stakes environment
  • How to build governance systems that protect dignity, freedom, and societal well-being
  • How leaders can embody clarity, purpose, and wisdom as AI reshapes power dynamics

🔗 Connect with Erica on LinkedIn: https://www.linkedin.com/in/ericals/

🔗 Learn more: https://www.leadwithenstrategy.ai/about-us

Support the show


About the host, Emi Olausson Fourounjieva
With over 20 years in IT, digital transformation, business growth & leadership, Emi specializes in turning challenges into opportunities for business expansion and personal well-being.
Her contributions have shaped success stories for corporations and individuals alike, from driving digital growth, managing resources and leading teams in big companies to empowering leaders to unlock their inner power and succeed in this era of transformation.

AI GAME CHANGERS CLUB: http://aigamechangers.io/
Apply to become a member:
http://aigamechangers.club/

📚 Get your AI Leadership Compass: Unlocking Business Growth & Innovation: https://www.amazon.com/dp/B0DNBJ92RP

📆 Book a free Strategy Call with Emi

🔗 Connect with Emi on LinkedIn
🌏 Learn more: https://digitaltransformation4humans.com/
📧 Subscribe to the newsletter on LinkedIn: Transformation for Leaders

SPEAKER_00:

Hello and welcome to Digital Transformation and AI for Humans with your host, Emi. In this podcast, we'll delve into how technology intersects with leadership, innovation, and most importantly, the human spirit. Each episode features visionary leaders who understand that at the heart of success is the human touch, nurturing a winning mindset, fostering emotional intelligence, and building resilient teams. Today's guest is the brilliant Erica Shoemate from Washington, United States, a bold disruptor, dynamic storyteller, and global thought leader whose career spans nearly two decades across national security, as a former Acting Chief/Senior Intelligence Manager for the US Department of Justice, big tech, and policy innovation. Erica is also an AI Public Policy Global Fellow. From her years of leadership safeguarding national interests with the FBI and the US intelligence community to shaping ethical AI and trust frameworks at leading technology giants like Twitter and Amazon, Erica has consistently stood at the front lines of transformation, where technology, leadership, and humanity converge. As Founder and Principal Strategist of The EN Strategy Group, she helps organizations design human-focused strategies that align innovation with accountability through her signature LEAD framework, which stands for Learn, Elevate, Advocate, and Drive. In today's conversation, we go beyond checklists and compliance to explore what it truly means to practice human-centric, conscious AI leadership, and how the next decade of governance will be defined not by control, but by courage, compassion, and collective responsibility. I'm honored to have Erica as a part of the executive group of the AI Game Changers Club, an elite tribe of visionary leaders redefining the rules and shaping the future of human-AI synergy. Welcome, Erica. It's a great pleasure to have you here.

SPEAKER_01:

Thank you so much, Emi, for having me. I'm super excited to be here, and super excited to speak to your audience about my work, and also about the lens through which I think about digital transformation, specifically with that human-centered mindset and focus.

SPEAKER_00:

Amazing. I've been looking forward to today's conversation, and I'm sure it's going to be really valuable for all our listeners and viewers. Let's start the conversation and transform not just our technologies, but our ways of thinking and leading. If you are interested in connecting or collaborating, you'll find more information in the description. And don't forget to subscribe for more powerful episodes. And if you are a leader, business owner, or investor ready to adapt, thrive, and lead with clarity, purpose, and wisdom in the era of AI, I would love to invite you to learn more about AI Game Changers, a global elite club for visionary trailblazers and change makers shaping the future. You can apply at aigamechangers.club. I'm so happy to have you here, and I'm looking forward to everything we're going to uncover and unfold today. But to start with, I would love to hear more about you: your journey, your passions, and some of the stories from your professional path, because it's absolutely amazing and so impressive.

SPEAKER_01:

Thank you so very much. So, a little bit about me. My name is Erica Shoemate. One thing I always think about in how I show up in the world today is that I'm Erica Shoemate, right? I'm also a first-generation college graduate from inner-city Memphis, Tennessee, for those of you who live internationally. I grew up with very humble beginnings and had to really figure things out. To see me as this person, a first-generation college grad turned national security and intelligence leader, and now an AI governance strategist, and beyond even the governance strategist, I would say I'm the queen of the career pivot and also very big on giving back to the community. I'm able to do that as a maternal health strategist and advocate as well. And no matter what type of work I'm in, my North Star is always centered on building systems that keep people safe in a world run by all of this amazing, and sometimes not so amazing, emerging technology.

SPEAKER_00:

That's wonderful that the world has people like you, because technologies need to be balanced by humans who know what it means to be human.

SPEAKER_01:

I don't disagree with you. And going back a little bit more to who I am, the person your audience is getting to know: my career started in local government, and then most of it grew up in the FBI, working across the US intelligence community, where I led intelligence work, including work related to emergent threats and trends, critical infrastructure, counterterrorism, counterintelligence, transnational organized crime, national kidnappings, and crimes against children. I was then able to take everything I learned, not only from working in various parts of the FBI, including taking time to serve overseas in high-threat areas, and pivot into big tech, working across trust and safety, cybercrime, and advertising integrity, thinking about election safety, human rights, women's safety, and what those things actually mean in today's society, as well as regulatory readiness and compliance audit. I've worked with big, big companies, as I mentioned, Twitter and also Amazon, and I have done some work supporting Meta as well. Today I run The EN Strategy Group, a boutique consulting firm, where I help leaders and organizations build people-first, culturally informed AI and safety frameworks. And as an AI 2030 Global Policy Fellow, I'm able to focus on making sure that AI innovation protects human dignity, especially for underserved communities who are often left out of not only the design process, but also the executive decision-making processes as well.

SPEAKER_00:

I'm happy that you mentioned it, because that problem is something we need to highlight and address, and we'll come back to those topics over the course of this conversation. But now we're entering the 2030 horizon, and AI is no longer just a tool, but an active participant in power and decision making. What does human-centric governance truly look like when influence is shared between humans and intelligent machines?

SPEAKER_01:

My gosh, great, great question. So when I think about human-centric governance, for me, that means designing systems that honor people's lived realities, not just the data points. We can't think of people as data; we're very complex beings, right? And when we talk about AI sharing influence with humans, governance can't just be technical oversight. It also has to be moral stewardship. What my background in national security has taught me is that power without accountability truly erodes trust. It also erodes transparency, or leads to people feeling they have so much power they don't have to be transparent. And I believe the same applies here. So human-centric means bringing culture, that big word equity, and also empathy into the decision loop. That means making sure that when we're thinking about a mom and her interaction with a digital health app focused on maternal health, or a student facing an algorithmic grade, the system is seen as safe and has been tested over and over and over again, and that the AI extends human dignity rather than automating human bias. We hear "bias" so much that I believe people can feel a bit desensitized to what it means and how it shows up. And I think that is something we also have to counter as we're thinking about the interaction of humans with this technology.

SPEAKER_00:

I often refer to AI as a mirror of who we are, and that includes our biases. It also gives them a completely different magnitude, reflecting them back to us in all their beauty, or sometimes less beauty. And I like that you mentioned biases, because that's exactly what it is. Among all the other amazing things, it's both good and bad, like everything. And human dignity is something we should mention more often when we're talking about these amazing technologies.

SPEAKER_01:

I don't disagree with you. It really is so important that when we are using data to tell a story, a good story, let's say AI with positive social impact or AI for good, we remember: you have your data, but data layered with the humanization piece is really the best kind of story to tell, in my professional opinion. And I think that we as a society, and as for-profit corporations, can make good profit while also maintaining a level of that moral stewardship. Two things can be true, and I believe that both can exist in the same ecosystem.

SPEAKER_00:

I love this. And let's dive a little bit deeper into this. You've seen both government and corporate worlds from the inside. Why do most AI governance initiatives stop at compliance? And what must shift in leadership mindset to turn governance into a living expression of conscience, not bureaucracy?

SPEAKER_01:

Oh wow, great, great question. And as someone who, as you said, has lived in both worlds, I believe that compliance is the basics, right? It's the floor. But conscience is the ceiling. Conscience, in my human-centric mindset, should always be what we're aiming for. Compliance is really the company checking a box. But if we want to be better than just meeting the legal requirements, then we must take it a step further. I've seen both the government and corporate sides of this, where compliance gives that cover, right? It protects the company from a legal and regulatory perspective, but conscience is what gives us clarity. I believe that conscience is also the driver of innovation for good. Many leaders stop at compliance because it's measurable; it's so quick to say, well, I can point to this data. Sometimes we have to look at the qualitative piece, and that's where I see the human piece coming in: governance at its core is rooted in humanity. It requires asking not only who is this good for, but also who can this harm. I created this great system, and I believe this product innovation is great. But on the flip side, the alternative competing hypothesis, a habit from my former national security days, is: what does it look like when a bad threat actor attempts to exploit it for something it wasn't originally intended for? So the question is not just, are we covered. In trust and safety work, we learned that real accountability happens when leaders invite that discomfort and listen to the people most impacted by their systems. Bureaucracy really protects the org, as I said earlier, and conscience protects the people.

SPEAKER_00:

This is so true. I like your approach very much, and I would like to talk a little bit more about conscious leadership. The speed of AI evolution is overwhelming even the best leaders. What inner qualities, not technical skills, will define the next generation of conscious AI leaders, the visionaries who can balance wisdom with progress?

SPEAKER_01:

This is so, so, so good. Inner qualities, wow. This really speaks to me as someone who is very much an empathetic leader, someone who is always thinking about people and whether we are doing the right things. This next era of AI leadership really demands a high level of emotional intelligence, EQ, plus cultural humility and the courage to slow down when we know that speed sells. And I know that in our pre-interview, this was one of the things we talked about: this whole idea of move fast, be the first, or the second, or the third. At what expense to the conscience? At what expense to society? At what expense, when we take it down even further, to our children, to our youth? That's part of what we should be thinking about from the start. Leadership in this space isn't about who knows the most code, it's about who knows themselves. Because you can talk all day about how the sausage is made, but if you're not actually implementing the guardrails to protect the thing, how it's made, and what it's also meant not to do, then conscience is basically an afterthought. Conscious AI leaders will be empathetic strategists, not technical celebrities, by the way. They'll listen more than they speak, and they'll understand that protecting humanity sometimes means saying pause, which can be very hard before launch. It's actually the very thing I talk about when I'm working with startups on whatever their big vision is: hey, you cannot afford to launch an MVP product without building policy guardrails alongside that big idea. And when you are transparent with your users about how you're building, and you build guardrails alongside your great innovation, then if and when something goes awry, say a hack or whatever, because the people who want to do bad will try any and everything to finish their not-so-great mission, right, it's important to have those plans in place. When you bring your targeted community along for the ride and are transparent about how you're doing that, then if something casts a not-so-bright spotlight on your company, you're able to easily tell the story of what you're doing and what you did on the front end, right? In national security, restraint, as I'm speaking to, was often the smartest move. The same applies to AI. Real wisdom is this balance of curiosity with caution. And I believe that is how we best center humans: progressing, but also being cautious about the technology we bring to the forefront.

SPEAKER_00:

Brilliant. This is absolutely brilliant, and you mentioned that AI is so closely connected to the ethical aspects. So when AI systems touch millions of lives daily, how do we scale ethics without diluting humanity? Can moral clarity survive inside metrics, data, and shareholder expectations?

SPEAKER_01:

Yes, but let's start here. You can't scale ethics if you don't center empathy in design. If your bottom line is always the numbers, then we've already lost empathy. And I believe that empathy and profit can coexist, but let's remember first that ethics isn't a slide deck; it isn't the thing we just present to the board. It's really a practice, and so every policy, every product, every decision is a chance to operationalize care. Even in how a CEO or COO tells the story of the great product innovation their company has built, I think if you layer that with real-life humanity and human-centered examples, that is where you bring your board along for the ride. When you say, hey, we need to think about how this design will impact society, and you have already built your company on these things and stayed true to them, that's when you're able not only to operationalize with care, but also to bring onto your board the right people, people who hold that same regard for society, for the people who will use this product, or who may never use it and still feel an indirect impact of whatever the innovation is. When I worked in digital trust, we didn't start with the metrics. We started with people's stories. We worked very closely with civil society, meeting people where they were using our product, and hearing the good, the bad, the ugly, and the indifferent stories about how our product had been used or weaponized. All of that is really important, and we must really think about the stories in that piece around ethics. So if ethics becomes just another KPI or just another training, then it loses its power. But if ethics becomes our design principle, then that's something everyone can own, all the way up to the board. We can scale humanity along with innovation, as I mentioned before. And that's how moral clarity survives inside metrics and shareholder expectations.

SPEAKER_00:

When I'm listening to you, I'm so happy that you found me in this big world and that we're having this conversation today. I'm looking forward to many more conversations. What you are sharing needs to be heard, and we need to spread it, because it is so needed in today's world, and it is getting ever more relevant with every day that AI develops and impacts our lives. Erica, you've worked at the intersection of policy and technology. Where do you see the most dangerous gaps between how AI actually works and how it's being governed? And how can we close them fast enough?

SPEAKER_01:

Ooh, this is another good one, and there are so many layers to it, right? AI moves at the speed of iteration. Governance still moves at the speed of legislation, or, as we know, the lack thereof, depending on where you are in the world. So the biggest gap I see is time, real understanding, and the ability, or the willingness, to want to understand. Policymakers often don't grasp how AI actually behaves, and technologists rarely see the societal ripple effects. Those of us who've lived in both spaces must build real translation bridges, meaning we must be able to speak the language of the people, the language of the policymaker, and the language of corporate. You have to be a chameleon in many ways: speak your truth, speak what you know, but speak it in plain language so everyone has at least a basic feel for how this big technology works. So we need governance that's adaptive and inclusive. And the one thing I've always been, from the very beginning of my career, is anticipatory. We need to be thinking about the what-ifs, the "if this happened, then this could happen," instead of saying no, no, no, that could never happen. And particularly as an American, our ideal of "that could never be" is this kind of Western mindset as well. I believe that Western mindset, that groupthink, can actually get us into trouble and prevent us from thinking about some of the real issues and challenges around emerging technology like AI, and from showing this anticipatory thought process to policymakers and leaders. I do believe certain regions are doing a much better job of iterating a bit faster. Some may say it's overreach. In the EU, for example, a lot of policy has been developed, and much of it protects privacy and data, and protects children with online safety, which honestly is really, really important and should not be left solely at the mercy of the parent. And the last thing I'll say on the gaps between the reality of how AI is iterating and governance: we really have to bring more diverse voices into the room. When I say diverse, I'm not just talking about your years of experience or the different companies you've worked at. I'm going to say the thing that is literally forbidden right now here in my world as an American: diverse people from different ethnicities, different races, and really bringing those people along for the ride, people like me. I have all of this knowledge and expertise, but what I unfortunately see is that I'm not being asked or called upon to be a part of those conversations. I continue to see a recycled group of the same people who just move around in different circles and different companies, almost like the same script, different cast. In order for us to really bridge these gaps, we need to start thinking about those who don't have the biggest platforms or all the buddies in the system, because the communities most affected by AI are rarely the ones actually writing the rules, enforcing the rules, or holding the power to say we need to change the rules.

SPEAKER_00:

That's very true. And I think it's still quite typical in this world, unfortunately, across many types of situations and dimensions. No matter how you slice and dice it, it often boils down to echo chambers, and to segments defining rules for other segments that are excluded from the conversation. That's the name of the game. That's why, with AI Game Changers, we're lifting the conversation up to the level of human beings, of humanity and civilization as such, because technologies and humans need to find that balance and achieve a state of peace and co-creation instead of being in eternal competition, and sometimes, unfortunately, even war.

SPEAKER_01:

Yeah, I agree.

SPEAKER_00:

Erica, genuine governance often demands saying no when it's easier to say yes. What do moral courage and core human values look like at the leadership table? And how do we cultivate them in a culture obsessed with performance, speed, and scale?

SPEAKER_01:

Whoa, this one is loaded. It actually adds to what I mentioned earlier, my North Star of a very human-centered mindset in all the things I do or work in. Just last week I was talking about moral courage, and how we need more people to really be okay with being uncomfortable, because that's where courage is actually built. You know that something isn't right, you're seeing that something is not right, but it takes a whole lot to actually be the dissenter rather than the person who's either going to remain silent or just go along with the group, right? Moral courage is the ability to say "not yet," even when everyone else says "ship it." And I want to add a little layer to that: it's not about always saying no, because you see this tension between policy and profit, where policy is trying to protect the people in many ways. The courage is saying, hey, let's pause, going back to pausing, and think about what the impact is if we ship prematurely, versus waiting a bit, thinking about some of the implications, and being able to account for them. Courage in AI leadership looks like protecting people when the profit pressures you not to. It's being the one voice in the room that says, we need to rethink this, and then providing the why. I've been that person in policy conversations and national security conversations, and it is rarely comfortable. As much of an extrovert as I am, I'm also a person who is joyful, full of life, full of energy and excitement. So I want to be on this higher ground where all things are great and I live in my perfect-world utopia. But comfort does not create conscience. Comfort does not help the ones who need it most. Comfort doesn't save the future user of your product when it goes bad. Comfort does nothing good for that child, for the person who doesn't have all of the resources. So we have to redefine success beyond speed and scale. Moral courage is truly contagious. When it starts at the top, when our leaders model it, the culture really shifts. Everyone gets to be their best self, everyone gets to deliver at a capacity never fully imagined before, because people are mentally healthy. People don't feel that their jobs are threatened. People feel that they're actually making a difference and that they're able to feed their families on top of making a difference. A world like that is literally the utopia I really hope for. And I know there are so many other voices saying profit, profit, profit, so that what I'm saying can feel very far away. But I do believe that when we start to stand up and live in our courage, that is where we start to see a lot of shift in society as a whole.

SPEAKER_00:

You know, your description of that dream world just warmed my heart, because I still trust that that type of freedom, and our ability to tap into who we are as humans, into our inner power, in conjunction with running a successful, profitable business, can create a very bright future. That's what we should have as a North Star. But coming back to the transactional level of how things are today: how do we move AI governance from the surface level of boundaries and limitations, the transactional level, to a contextual level of meaning, values, and human depth?

SPEAKER_01:

Oh wow. I mean, all of these questions, and the thought behind them, are just wow, honestly. So governance, when I think about it moving from transactional to transformational, must evolve from rule keeping to meaning making. What do I mean? Transactional governance checks boxes, as we talked about earlier. Transformational governance asks even better questions, and it connects policy to purpose. That's how I live my life. In my work, I've seen how centering human context, especially the experiences of marginalized or underrepresented communities, turns governance into a living, breathing system of care. That's where innovation truly becomes humane. We don't just need rules for the sake of rules; we need relationships between our policymakers, our technologists, and the public, and we need to show this transformational governance through storytelling layered, as I mentioned earlier, with data, showing why bringing those two things together is the reason a certain type of policy should be passed. And on the for-profit side, corporations need to create policy governance around the very innovation that started the company, or that has added value to the company. If we can really think about what our ultimate goal is besides the dollar, then even our own minds, our own culture, begin to truly transform and blossom in a way I believe we have yet to fully realize: that that type of dream can exist.

SPEAKER_00:

Beautiful. Absolutely beautiful. Let's come back for a while to the questions of human dignity and freedom and try to put them together. How can design, policy, and leadership converge to ensure that human dignity is prioritized and human freedom preserved in the age of intelligent systems?

SPEAKER_01:

When I think about those three big things coming together: design, meaning the development; policy, meaning how we want it to work and what it should or should not be doing; and then the leadership work. When design, policy, and leadership work in harmony, they produce justice, not just functionality. Design shapes experience, policy shapes boundaries, and leadership shapes values and our moral conscience. If any one of those is missing, dignity gets diluted. I see this clearly in my maternal health work: when data, design, and policy ignore real lived experiences, lives are lost. Literally, it's a matter of life or death. The same logic applies to AI. Freedom in the age of intelligent systems means ensuring that people remain the decision makers of their own narratives. That is where we design for liberation and not surveillance. And this really speaks to converging for dignity: many companies want to develop this amazing technology yet think the technology can absolve people from the process, and doing so actually runs the risk of jeopardizing the design, the policy, and the leadership around it, and can dilute not only the dignity but also the product. Those who have the most power to create this technology also have the greatest responsibility, and it is the people's job to hold those same leaders and innovators to a higher level of accountability and responsibility.

SPEAKER_00:

I couldn't agree more. I like that you are diving so deep and sharing so much wisdom. I truly appreciate it. You mentioned that sometimes it's really important to slow down, apply your critical thinking to what the consequences are going to be, and evaluate the next steps before rushing in. But at the same time, in my practice, I often train people to review everything they are doing, not only to slow down in order to speed up, but also to get rid of certain pieces. Because today I often see, even in AI implementation use cases, that something which should simply be removed gets optimized instead, and AI is applied to a part that shouldn't even be considered, actually. Right? So it is also about thinking twice: what are you going to choose? And you need to choose wisely in order to create sustainable business growth and development. So, what is one thing we all need to unlearn to build wiser, more human-centered AI systems for the decade ahead?

SPEAKER_01:

Ooh, this is good. This is actually a lot of what keeps me up at night, and also the thing that keeps me frustrated all day, every day. Again, going back to what I mentioned about the same people getting recycled through various companies, the same names, the ones with the platforms, what gets held up as the righteous North Star. We must unlearn the myth that technology is neutral. The idea that technology is neutral is flawed from its inception, because humans created the technology, and we all come with our own biases, our own lived experiences. Those things are naturally implemented and injected into the thing we're making, because we generally come at it from our own lived experience. So AI mirrors the systems that made it, and if the inputs are biased, the outputs will be unjust. We have to unlearn the idea that data equals truth and remember that data often reflects inequity. I've worked across various sectors, as we talked about in the beginning, long enough to know that neutrality is a luxury of privilege. This idea that a system can be neutral is absolutely asinine, and people just need to throw it away, because true leadership requires intentionality. If you are building a system for all, then we need to be very intentional, ahead of launch, about the "all" that is part of the final product: the intentions, what we expect to see. And when we don't see those things, how are we iterating on that? How are we fixing and resolving these very issues, versus saying year one, year two, year three, year four, year five, it's biased, it's biased, it's biased, yet doing nothing different to change how we do the work? So true leadership, again requiring that intentionality, must also ask whose stories, whose data, and whose humanity are missing from this model. And we know that most of the models we have are very Eurocentric, very whitewashed. I can give you an example from not even two full months ago. I was asking AI to give me an image of diverse, professional women. Unfortunately, it took me about six or seven tries before I got an image of diverse women of various shades, from the lightest to the darkest. At first, I kept getting the same thing: mostly fair-skinned women rather than a mix of darker and fairer skin tones. So that's what I would definitely say to people.

SPEAKER_00:

That's a very interesting example, and I appreciate you sharing it. And I totally agree with everything you mentioned about the unlearning part, because I guess it's about time to lift this lid and start looking into what we are cooking here, because otherwise it might be too late, and this time it might even be irreversible. But Erica, what is one piece of advice you'd share with today's leaders who are ready to move beyond compliance and embody the next tier of conscious, visionary, human-centric AI leadership?

SPEAKER_01:

So, one piece of advice for leaders: lead with conscience, not convenience, and surround yourself with truth tellers, not an echo chamber. If you're serious about human-centric AI, as so many of the buzzwords and responsible AI frameworks speak to, let's really build diverse teams and create room for dissent, where dissent does not cause the person speaking truth to power to feel ostracized. The best leaders I've seen aren't just visionaries, they are listeners as well. They're listening for the thing that is different, something they've never considered, and they're taking it into account. They turn reflection into strategy. One of my guiding principles is very simple: technology should expand human possibility, not replace human purpose. Governance isn't a checkbox; that is compliance. Governance is truly an act of love and care for people, for humanity, for society, for the most vulnerable, and for the next generation. If we lead from that place, the future truly takes care of itself. We all are able to thrive in our now, and our children and the next generation are able to thrive in the later.

SPEAKER_00:

I'm so grateful. This is such amazing advice, truly one of the best from a human-centric perspective across all the episodes of this podcast. So thank you so much, Erica, for being here with us today, for sharing your experience, your wisdom, your vision, your empathy, your way of creating this better world, and your passion, your fire, everything you bring when you apply it to this hard and quite cold world of technologies and AI development. Thank you so much.

SPEAKER_01:

Thank you. Thank you so much for the space and the time to speak with you and your audience today.

SPEAKER_00:

Thank you for joining us on Digital Transformation and AI for Humans. I am Emi, and it was enriching to share this time with you. Remember, the core of any transformation lies in our human nature: how we think, feel, and connect with others. It is about enhancing our emotional intelligence, embracing a winning mindset, and leading with empathy and insight. Subscribe and stay tuned for more episodes where we uncover the latest trends in digital business and explore the human side of technology and leadership. If this conversation resonated with you and you are a visionary leader, business owner, or investor ready to shape what's next, consider joining the AI Game Changers Club. You will find more information in the description. Until next time, keep nurturing your mind, fostering your connections, and leading with heart.