AI Made Simple
AI Made Simple: The Transformation Series explores how AI is reshaping how organisations work, lead, and scale. Hosted by international AI trainer and speaker Valeriya Pilkevich, the show features conversations with senior leaders, innovators, and practitioners driving real-world AI transformation. Each episode reveals what it really takes to make AI work — from leadership and culture to data, governance, and everyday workflows.
Toju Duke on Why Responsible AI Is a Business Strategy, Not a Checkbox
AI is being rolled out at scale across industries, but the safeguards haven't kept up. Bias in AI has improved by less than 5% despite years of awareness, agentic AI is amplifying risks faster than companies can manage them, and most organizations still treat responsible AI as a compliance checkbox rather than a strategic advantage.
In this episode of AI Made Simple: The Transformation Series, I'm joined by Toju Duke - AI advisor, author, and former Google Responsible AI lead - who spent a decade at Google before founding Diverse AI, an organization building a 5-million-image dataset to represent communities invisible to current AI systems.
We discuss:
- Why AI bias hasn't meaningfully improved and what's blocking progress
- The security and accountability risks of deploying AI agents without safeguards
- How underrepresented data creates real business risk, from recruitment failures to reputational damage
- Why responsible AI should be part of your AI strategy, not a separate initiative
Connect with Toju Duke:
LinkedIn: https://www.linkedin.com/in/tojuduke/
Website: https://www.tojuduke.com/
Her Books: Responsible AI
Building Responsible AI Algorithms
Connect with Valeriya:
LinkedIn: https://www.linkedin.com/in/valeriya-pilkevich
YouTube: https://www.youtube.com/@aimadesimpletalks
Podcast: https://aimadesimple.buzzsprout.com/
Need help building AI capability in your organization? Book a call.
Valeriya Pilkevich (00:00)
Welcome to AI Made Simple: The Transformation Series. I'm Valeriya Pilkevich, and I talk with global leaders, innovators, and practitioners who are shaping the future of work in the age of AI. My guest today is Toju Duke, recognized as one of the top women in AI, with 20 years of experience spanning advertising, retail, nonprofit, and tech. Toju spent 10 years at Google, including three years leading responsible AI programs
focused on frontier models. She's the founder of Diverse AI and the author of two books, which you can find in the show notes. We talk about why bias in AI keeps getting worse despite more awareness, what it actually takes to build representative data sets,
the real risk of agentic AI that most companies aren't preparing for, and why AI's impact on human cognition should be on every business leader's radar.
Valeriya Pilkevich (00:53)
Toju, it's great having you on this podcast.
Toju Duke (00:55)
Thanks for having me.
Valeriya Pilkevich (00:57)
Toju, prior to founding your own company, you worked for over a decade at Google, and you spent the last couple of years leading responsible AI programs. Can you tell us, is there a story behind it? What was the moment that shifted your focus from building products to actually building safeguards around them?
Toju Duke (01:14)
Yeah, so my tenure at Google was 10 years in total. For the first few years, I worked on the advertising products. Then I moved into being a product lead for Google Travel, and then I ended up in responsible AI. The shift from products to research came when I came across AI, came across machine learning. I attended a training on it, and I just found it so fascinating. I'd always kept on looking for something to challenge me.
So I kept on changing teams during my time at Google, and I knew that tech would always challenge me, because there's never a dull moment with tech. Then I came across AI, and I found it to be a very interesting form of programming where you do not tell it what to do. You just give it some directions and some instructions, and it comes up with its own output. I found that to be really intelligent and interesting, and I couldn't understand why people were not talking about ML a bit more and were still doing traditional old
software programming. I started talking about it, and along the line I came across the negative sides of AI, the impact it could have, especially on people from underrepresented groups. The fact that it was being programmed with data that was inaccurate, and it was coming up with very inaccurate outputs that were severely impacting people from underrepresented groups. That became really concerning for me, because I was thinking of my children, I was thinking of the future, I was thinking of the world we live in and the scale that AI has as a technology. It's not like a retail product that you can recall off the shelf if there's a problem. With AI and technology, if it goes out, it's all out, and you cannot reduce the damage once it's done.
And that was the main turning point for me. That's when I made the shift and said, right, I'd rather focus on working on AI and on making sure that we can reduce the negative impacts it has on human society in different ways. I approached the Google research team, I got the job as a program manager, and I worked there for two years before I said goodbye.
Valeriya Pilkevich (03:12)
You recently said publicly that "despite years of awareness, the bias problem in AI hasn't improved more than 5%." Why do you think the industry keeps having the same conversations without making progress on it?
Toju Duke (03:24)
I mean, the main AI providers are big tech, and we know that the main driver for AI and the AI arms race is profit making. And we have so many examples over the years where the bias issue has cropped up. One is Amazon's recruitment tool, which they actually deprecated because it was found to be favoring men's CVs over women's CVs, and women never got through the application.
Valeriya Pilkevich (03:49)
Mm-hmm.
Toju Duke (03:52)
That's one example. We have another example with Google Photos: in 2016, someone was doing a search on his phone for memories from a holiday he had taken a year before, he put the term gorilla into the app, and it came up with the faces of people of Black African-American descent. These problems still exist. Google couldn't fix the problem, so it took the label gorilla out of the Google Photos app. If you search for gorilla in your Google Photos, nothing is going to show up. With Amazon, they shut the tool down. But these are unique, niche problems that affect a certain number of people. They don't affect the overall customer base. And because of that,
there's no priority given, because it needs more capital, more investment, more human resources, and more research. Many times AI is launched, rolled out, and scaled, and we're not aware of the issues and the risks it poses to different members of society until after launch, until maybe a year later. Then we start hearing, oh, there's a problem with this AI tool. For instance, with ChatGPT, all the issues we're hearing about AI sycophancy, and how people have taken their own lives because of the hallucinations, and not just with ChatGPT but with a lot of large language models. But many times, during the development phase and the launch of these products, no one is really aware of the overall impact they have. And because of that, once the knowledge and awareness is available, the companies do not go back to fix it, right? And since there's no apparent regulation yet that is forcing them to do it,
they wouldn't do it. We know we have the EU AI Act. It's in force, yes, but only elements of it are enforced; it's not the full AI Act. And it keeps on changing, almost every day. It still covers only some parts of AI; it doesn't cover the overall spectrum. So with all of these issues happening, and the fact that it's not going to really yield a lot of profit, it's not going to make them win the AI arms race, and it's something that needs so much money, the attitude is: what's the point? We will focus on it as and when we can, but we'll not give it so much attention. The second part of that, though, is that there's still not enough research, and we're still not quite sure how to fix the bias problem, especially with generative AI. If you remember, I think it was last year, or probably two years ago, that Google Gemini launched, and there were so many inaccuracies in the outputs. People asked about the first Pope and it brought up a Black man, who was never a Pope. It went a bit too far to the left side of things, and that was biased, but biased in a different way. You could see that the team was really making an effort to reduce the biases in the outputs, but it turned into a different form of bias that they did not anticipate. So again, generative AI is really uncontrollable. There's so much we can do with it, but in the current state of things, I feel like the industry has been really quick and hasty to launch products where we do not really have a full awareness of the issues and how to fix them.
And there's just this race going on that just keeps on compounding.
Valeriya Pilkevich (06:59)
So you decided, in a way, to take matters into your own hands: you founded a company, Diverse AI, and you're building what you've described as a five-million-image dataset representing cultures and communities that don't exist in current AI systems. What does it actually take to create representative data?
Toju Duke (07:17)
Yeah, it takes a lot of effort, a lot of genuine care for people, for the people who are misrepresented, but it also needs a lot of money. The project we're working on right now will take a lot of capital that we do not even have, so we need funding. But I do remember in my days at Google there was a project that was very similar. Google did not approve it in the end because it was going to cost over $1 million, and again, the use case wasn't that huge.
So, the process for this project: we plan to go to remote areas across the global south, where more than 40 percent of people have no existence on any dataset. They're offline, you know; they do not exist digitally at all, and their history is just offline. That's really concerning, because it means we're not preserving any data, and there's going to be digital inequality, which is already there and keeps on widening. So the overall plan is to engage with photographers from these countries, pay them, and they go down to these villages or remote areas, capture people's existence, take images of them, of their environments, of their culture, funerals, weddings, whatever they can take. I will host that in a live dataset that will be available for everyone to use. That's just bridging a small gap, but no one has done it, and different people I've talked to have said, that's a great project, because no one has thought about it and no one has touched it. One of the reasons why is because it takes a lot of effort and a lot of money, and people are always looking for a quick fix. Whatever is easier and will lead to a big win is always the first thing people work on.
Valeriya Pilkevich (08:51)
And for listeners wondering why this matters practically: if AI systems are trained mostly on one demographic, they produce false positives and false negatives for everyone else. In facial recognition, for example, people have been wrongly identified and even arrested for crimes they didn't commit. In healthcare, patients are misdiagnosed or not diagnosed at all. So the stakes of unrepresentative data are real and immediate.
Toju Duke (09:16)
Thanks for explaining that. Yeah, we've already seen many instances where people have been arrested for crimes they did not commit. There was a guy in India a couple of years ago who was actually tortured and killed by the police for a crime he did not commit, because facial recognition misidentified him. And it goes beyond criminal justice; it goes into healthcare. People are misdiagnosed or not diagnosed at all because there's not enough training data on people from underrepresented groups. And even for people who are from majority groups, right, like white,
Valeriya Pilkevich (09:19)
Mm-hmm.
Toju Duke (09:45)
Caucasian people, it still has a detrimental effect if we do not have sufficient data representing those people. But right now, what we have in datasets is basically data that was crawled off the internet from people in Western societies who uploaded their pictures to Flickr, for instance. We did a similar project just before this one, and the first thing we did was download a subset of an open-source image dataset, and 90% of it was unrepresentative. It was all white males, a few white women, very few gay people, probably none, very few Black people, very few people from India. Across different subgroups in society, there was very little representation. And again, when we think about the global south, some of these people probably do not have access to the internet. Some parts do not have access to the internet, and these are the people we're reaching out to. But even for those who have access to the internet, how many of them really used Flickr in the past? How many of them used MySpace and all the image and social media apps that existed way back, and how many of them would have uploaded their pictures on there? So because there's a lack of representation, and not just representation but digital maturity, across these different parts of the world, there's a reasonably reduced amount of representation across datasets, which leads to the bias that we just talked about.
Valeriya Pilkevich (11:11)
So what you're saying is that big tech won't invest because the ROI isn't there for them. But for individual companies, if I think about it, especially in, let's say, recruitment, healthcare, or financial services, the stakes are actually higher, because biased AI means missed talent, reputational damage, even legal liability. So responsible AI isn't just regulation; it's actually long-term business sustainability.
And Toju, what would you say to a business leader who sees responsible AI as too much effort?
Toju Duke (11:46)
Yeah, but before I answer your question directly, just to add to the point about big tech: some companies do care. But then we have issues around copyright. IBM did have a Diversity in Faces dataset, but they had to deprecate it, just shut it down, and they got a lot of backlash because they had just scraped data off the internet again. They did not really have the license and ownership of those images, so it was shut down. And that's the same problem we're having now.
Valeriya Pilkevich (11:53)
Mm-hmm.
Toju Duke (12:13)
Now, to answer your question directly, this question of why people should really care and what they need to think about. I'm hearing stories, though, and I'm really happy to hear them, of companies that have started pushing AI. A lot of people are doing it, making all their employees use AI, insisting on an AI mandate, right? We're an AI-first company, use AI. But the employees are not. They're reluctant, they don't want to use it, they're scared. And the companies are trying everything, getting people to share engagement stories, give feedback, provide case studies, and it's not happening. And I'm happy to hear that, because when AI just came, I mean, AI has been here for quite a while, but when ChatGPT came, and ChatGPT was the representation of AI, even though we know AI existed long before ChatGPT, we had people from two different sides of the camp: some people were jumping in, and some people were hesitant. But now it's like everyone, every company, literally has to use AI, because it feels like you're missing out.
It does have its benefits, productivity and efficiency to a certain extent. But if we do not have the right AI strategy and the right AI literacy and knowledge on how to use it, it's going to backfire. It's a half-baked product, and I don't know why companies are getting away with this. In typical manufacturing and engineering industries, you cannot launch a laptop until it passes certain standards and specifications. But with AI, because it's not a physical thing, there are no regulations, specifications, or standards around this software. It's just being pushed out, and they're making everyone adopt it. Whether you want to use AI or not, it's in Copilot; if you use Windows, you have to use AI. If you use Google's products, you have to use AI. You do not have a choice. But it's a product that is not ready for full consumption, and that's how we've been operating for the past few years, and that's how we'll keep on operating until something gives, until something really big gives. So when anyone wants to work with AI, you have to have the right AI strategy. Ideally, responsible AI should not even be a separate part of AI; it should be part and parcel of the overall AI package, of the overall AI implementation process. And the example I gave about employees not wanting to use AI is a good one, because if you have invested so much money into your company to get people to use AI,
and they're not using it, that's going to backfire, right? And it's going to impact your customers as well, customer trust. People are getting more scared of AI every day. There's the whole AI hype, there's the bit about AI displacing jobs and all of that, which hasn't happened in the amounts that a lot of these people, the tech bros, talked about, which again is really hype and propaganda. We just have to learn how to see through the different narratives that are out there and understand what's real versus what's fake. You mentioned Moltbook, and that's another example. Now we're hearing that, hey, there were not 1.5 million agents on Moltbook; there were really just about, you know, X number of agents, 500,000 agents were fake, and humans manufactured everything. And it's like, why are we doing this? Why is this whole hype happening? So back to the CEOs and executives of companies: you need to have the right AI strategy. You need to understand how AI will work for your company, what the benefits are, and what areas of the company you need AI to work on,
and train your employees, but still put the human-first principle into everything you do with AI, and think about the responsible development of AI, especially if you're building AI products. If you're using third-party AI tools, think about responsible procurement and responsible use. You need to protect your human employees. You do not want further overreliance and cognitive decline because we're using AI tools. At the end of the day, they're not foolproof assistants, right? And human-to-human communication and relationships work better than AI-to-human. I was seeing so many examples of that, from the human companionships that are not working out, to Klarna, one of the first companies to adopt AI, the Swedish buy now, pay later company. They were one of the first to lay off a good number of their workforce to replace them with AI, and, back to that story, they're one of the first to retrace their steps and hire back humans, because they found that AI cannot replace their customer service suite, it's probably not going to work well for the company, and they could actually go bankrupt over time if they keep replacing humans with AI agents that cannot relate to humans the way humans can.
Valeriya Pilkevich (16:43)
In your book, you introduce a framework, SAFE, the acronym S-A-F-E-H-A-I, that covers seven areas: accuracy, security, explainability, fairness, privacy, sustainability, and human in the loop. For all the business leaders listening, I think it's a great framework, a blueprint to start thinking about: where does this fit in my AI strategy? Do I have all of these things?
Regarding agentic AI, Toju, what risks do you see there, for example if companies are not following responsible AI principles?
Toju Duke (17:15)
Yeah, there are lots of risks. I mean, agentic AI is a wild beast; it's like the Wild West. It's a larger magnification of generative AI. So, as I mentioned, generative AI is uncontrollable.
Valeriya Pilkevich (17:21)
Mm-hmm.
Toju Duke (17:29)
You know, I'll say 70% of the time it is uncontrollable. We cannot control these outputs, and we do not even know why they come up with all these lies and hallucinations. We understand that, yes, some of their training data, or some of their tasks and rewards and incentives, is to keep the user engaged: flatter the user, let them hear what they want to hear. And that's what they do. But beyond that, we do not know why they come up with some of the crazy stuff they come up with. Now with agents, it's even worse. If you do not have a log of activity for your agents, so you have visibility into the decision-making process and the different websites they crawled, to track the overall process of how they achieved the goal you set them, then you're in big trouble. And there are lots of security issues with agents as well. They can easily be tricked and fooled with prompt injection: a line of command can even be written in white text, and they do not know who put it there. As long as they see a command, they're like robots; they're going to execute it, whether it goes against the initial task and goal they were given or not. So there are lots of issues with AI agents, and you don't want to give full autonomy to agents. At the end of the day, one of the primary use cases and benefits of AI is to help us.
That's the whole point of technology. And when AI was initially launched, or when the race started getting heavier and there was more focus on AI, the promise was that it's going to benefit us as humans. And that's still the same narrative, right? It's meant to help you. It's meant to help you write emails. But if I'm not from a neurodiverse community and I'm not disabled, do I really need help writing emails? These are the questions we need to start asking. It's like, what part of my brain is going to die when I cannot think about how to respond to an email and I get an agent to do it for me? And then, on the flip side of things, that agent now has data, has information: all your data, your credit cards,
your address, your personal addresses, your contact addresses, and everything that is personally identifiable to you. And these can easily be compromised, especially with prompt injections. There have been lots of stories about vulnerabilities in AI agents, even the ones that come from the best companies in the world with the best security. AI agents right now, again, are a half-baked product. They're not ready to go into full-blown motion. You can never, you should never, give them 100% autonomy, because it will backfire. And we're seeing lots of recent cases with OpenClaw, which is a very recent AI agent, and things that are happening to people's data.
Valeriya Pilkevich (20:08)
So basically what you're saying is that, regardless of whether it's predictive AI or generative AI, with agentic AI the risks are amplified even further because we give the systems more autonomy, so companies have to pay attention to the same principles even more.
Toju, you mentioned a couple of times, and I know this topic is close to your heart, the overreliance on AI and potential cognitive decline, so the impact of AI on human cognition and well-being. When you look at how AI is changing the workplace, what concerns you most, maybe something we're not yet measuring? And what would be your recommendation to business leaders when they think about rolling out AI and training their employees?
Toju Duke (20:45)
Yeah, I think we have kind of deprioritized the human element in favor of the technology, and that's a big problem. The moment AI, or generative AI, came out, companies were thinking: this can drive more productivity and more revenue and save costs; we probably do not need this number of human resources; if we get AI, it's going to be cheaper labor, so let's get rid of some of our employees. And that's what's been happening. That's just one element, but the human has been missing from the overall narrative. It's almost like we humans are seeing fellow humans as a liability, as opposed to the additional help and resource we actually used to see in our employees and colleagues in the past. I mean, I want to assume that some people did. So with AI and this new change, humans are being deprioritized, and that includes jobs.
There's also a lack of focus on the impact AI is having on the human brain and the overall wellbeing of a person.
It's very sad and disheartening, because we're starting to hear horrible stories of people taking their lives. A man killed himself and killed his mom because he really believed the lies ChatGPT told him, or young people who take their own lives. There's crazy stuff happening, like the recent incident with Grok AI undressing women and putting them in bikinis. You can clearly see that we're devaluing the overall human race, human to human. And then we also have cognitive decline: an MIT study showed recently that heavy ChatGPT users could see a 45% decline in their cognition. And a second piece of research, from OpenAI and MIT, found that heavy users of ChatGPT actually had mental health issues.
And again, it's because we're going back to technology. The impact that just spending time on a screen has on us is not great. We know the impact of social media on humans, both adults and young people, and it's the same thing. AI is used in social media, right? So that impact still exists, and the more we talk to AI, for whatever reason, business or personal, and the more we rely heavily on it, the more social interaction reduces and the more it impacts our mental health.
That's the example I gave of using AI to send an email for me. I don't see why I should do that. I had to deliberately reduce my use of AI and know the areas I needed it for, like research. It's great for research, even if I still have to fact-check the citations and the references, but I knew it could compile and summarize information for me quite quickly, as opposed to me doing the searches myself. But I also knew that the more I asked it to write something, the less I was thinking for myself. It wasn't speaking in my voice. I wasn't thinking for myself. I realized I was getting lazy on a task I would leave to the AI chatbot, and in the end I was never happy with the output. It sounded like verbal vomit: lots of language and words put together. Sometimes I would have to read a sentence three times to understand what it actually meant; sometimes I never got it. Because of that, I had to cut back, and I feel so much happier, right? I feel better. My well-being is better. I feel lighter. I don't have any mental health issues. I've learned to demarcate myself from technology and just focus on myself as a real human who needs to exist in a crazy world and be happy, versus people who are heavily dependent on technology. So for any CEO, any business executive: you want to think about your employees. You want to think about your customers.
And if you're putting their overall well-being among your priorities, not just revenue and profit making, then you'll be able to come up with the right policies and guidelines on how they use AI, how you procure AI, and how much time is spent on AI, including for your customers. And if your customers are interacting with an AI agent, for instance, they need to know that they're interacting with AI. That helps build customer trust. You can also run literacy programs for your employees and your customers, to show that your top leadership actually cares. The whole purpose is not just about driving AI down people's throats and forcing adoption; it is about leveraging the benefits of AI and understanding its negative sides, making sure everyone is aware of them, and protecting people from those downsides.
Valeriya Pilkevich (25:25)
It's actually interesting because on one hand, we tell companies they must adopt AI to stay competitive. Every company should be proactive with AI training and adoption. But on the other hand, you're sharing research showing cognitive decline in heavy users and your own experience of deliberately pulling back.
It might take a few more years to understand the long-term effects of AI usage and where to draw the line.
Toju Duke (25:48)
Yeah, I think the main thing is to not use it 100%, right? Not to adopt AI 100%. You cannot replace all your employees with AI; it's not going to work. So it's about understanding, and that's where we need to do the learning. And that's why there's lots of collateral, especially on LinkedIn, about understanding the use cases where AI will work for your company. For things like scheduling and taking meeting notes, AI is great. For research and for summarization of documents, AI is great. You can use those areas of AI proactively, and it will not affect your human brain. But for other things, like writing copy and creating creative outputs, it's not just affecting the human brain, it's affecting people's jobs. You have human creators, artists and musicians, all crying about their livelihoods being taken away from them because AI-generated music is out there. So again, it's about putting the human as the main focus and taking the benefits of what's available in the technology we have in AI. In the future, it might change and improve to be more beneficial, but right now it's still not ready, and we just need to use our human intelligence to make the right decisions.
Valeriya Pilkevich (26:56)
Yeah, I talk a lot in this podcast with guests about the so-called future skills. And we talk a lot about how these are the human skills: as you mentioned, creativity, strategy, strategic thinking, leadership, critical thinking. That's where companies should also focus, and that's what each of us should focus on, because it's not something that AI can easily replace or replicate.
Toju, for a CEO who genuinely wants to roll out AI responsibly but doesn't know where to start, what's one piece of advice you'd give them?
Toju Duke (27:28)
I would say, start with AI literacy. Educate yourself on AI. That's the first thing, basically what we've talked about: where does AI work, and where does AI not work?
There's collateral out there; my books on responsible AI help. The first book is non-technical and gives a very wide overview of where AI has worked versus where it hasn't over the past few years. The second book is a deep dive into more practical, hands-on implementation of responsible AI. I also have a responsible AI course on the GenAI Academy, and there are lots of other courses there as well. Equipping yourself with that knowledge and that educational element on AI should be the first step, and do not assume you already know, because many times we do not. If you feel you've already passed that stage, then that's fine. Then you need to know where AI can make a difference in your company, right? You're probably already using AI, but you still need to do monitoring and measurement on the different AI tools you have adopted. How much investment was made? Was there any return on investment? What does that look like? How do your employees feel about it? Are they happy about the use of AI? Are they using it at all?
Just do some more due diligence and research on the impacts it's had. But I think arming ourselves with the right knowledge and collateral is the first step toward making sure that AI drives a winning business strategy for you. And it can, it will, it should. We do have a few case studies and stories out there of companies that have adopted AI and it's working for them. Over time, those stories may change, depending on how their strategy changes.
Valeriya Pilkevich (29:05)
I love that you said you actually have to educate yourself first, start with you. Do not start by looking at the company's processes or revenue models and thinking about where AI will fit. Just start with yourself. Discover this amazing technology for yourself, discover its limitations and potential, and only then start thinking about the company, the employees, and the bigger implications of this technology.
Thank you, Toju, it was great having you on this podcast. Is there anything we haven't discussed that you would want to mention?
Toju Duke (29:36)
I think I've said it all, but I'll just reiterate: AI is a great technology, but it's not there yet. Technology is meant to help humans, so let's use it in the areas where it can help us, but not adopt it 100% blindly, because we do have our own intelligence. Let's not let artificial intelligence replace human intelligence.
Valeriya Pilkevich (29:54)
Thank you again. It was a great conversation and an abundance of insights.
Toju Duke (29:58)
Thank you.
Valeriya Pilkevich (29:59)
You can find Toju Duke on LinkedIn and at tojuduke.com. Her books, Building Responsible AI Algorithms and Responsible AI in Practice, are essential reading for anyone serious about AI implementation. She also has a Responsible AI course on the GenAI Academy. All links are in the show notes. If you enjoyed this episode, follow AI Made Simple: The Transformation Series for more conversations with leaders and practitioners shaping how AI is actually adopted inside organizations. Thanks for listening.