The Takeover with Tim and Cindy

The Hidden Risks of AI Adoption and How to Build Trust with James French

Tim and Cindy Dodd Episode 101

Yes, you’re using AI, but are you using it ethically and in a way that builds trust?

In this episode, you get a masterclass on ethical technology from James French, the founder of Ludulluu, a pioneering AI governance platform. 

James shares his fascinating journey from leading high-stakes finance deals in West Africa, to transforming a U.S. museum's board through stakeholder-led governance, to now building the trust layer missing in today’s AI revolution. 

He breaks down what AI governance really means, why it's essential for any business using AI, and how trust will become the most valuable currency in the future of tech.

Whether you're an AI builder, user, or investor, this episode gives you a whole new lens on risk, compliance, innovation—and how to do all three without compromise.

Keynotes & Chapter Markers:

[00:00] Introduction:  Why AI Governance Can’t Wait
[08:09] The Turning Point That Led to Ludulluu
[16:15] Practical AI Governance for Business Leaders
[25:55] Why Competing on Trust Will Define the AI Era

Connect with James:

Website: www.ludulluu.com

Email: james@ludulluu.com

LinkedIn: https://www.linkedin.com/in/jamesafrench

Download The Free Outbound Sales Playbook:

  • Master cold outreach, close more deals & drive revenue to your business using The Outbound Sales Playbook. Battle-tested on 1,000+ businesses and proven with over 100M data points, this free e-book will give you the tools you need for predictability and scalability in your marketing and sales efforts. Disclaimer: Only download if you want more customers!

Join The Takeover Community:

  • Sign up for The Takeover newsletter and get the latest marketing tips, sales strategies, and business insights delivered straight to your inbox. Join a community of entrepreneurs & high-performers dominating all areas of Sales & Marketing. Sign up for the newsletter.

About The Hosts:

  • Tim & Cindy Dodd are the Co-founders of PEMA.io, based out of Miami, FL. Connect with Tim and Cindy: Instagram

About PEMA.io:

  • PEMA.io is an Inc. 5000 Outbound Marketing Agency specializing in Enterprise Sales & Appointment Setting. With over 7 years in the industry and 1,000+ clients served, PEMA is the leading agency for cold outreach appointments & systems. Learn more about PEMA.io here: www.pema.io/discover

00:00
I think most of us have had the experience of using, say, a chatbot and getting a really strange answer, right? You suspect that the answer is incorrect. And there are a variety of ways that can happen. There's a term called hallucination, in which AI models, LLMs, create their own reality in their response and give kind of false information. But there are also errors. And there's a lot of work going into what they call prompt engineering, the way to accurately prompt the LLMs so that you can minimize that.

00:29
All of those issues can have a huge impact upon businesses.  So part of governance is the desire for accuracy. And the end user will want to be able to trust the answer that they're being given.  Welcome to The Takeover with Tim and Cindy, where we show you how to dominate every area of life and business.  Let's get winning.  If your business is using AI, but you're not thinking about governance, compliance and trust,  you might be sitting on a ticking time bomb.

00:57
Today, we're diving into why AI governance matters now more than ever with somebody who's been building the future of ethical AI. Welcome back to The Takeover. Today, we've got a powerful conversation lined up with James French, founder of Ludulluu. James has one of the most unique backgrounds I've ever seen, from growing up in West Africa to working in global finance and even leading governance reform at a U.S. museum. Now he's taking all of that experience and using it

01:26
to fix one of the biggest blind spots in AI: governance and trust. James French, welcome to the show. Thank you so much for having me, Cindy. I'm really excited. You've had a very unconventional path to AI, and just listening to your story and your background, from finance in Africa (you and I connected a lot on that) to the museum governance, how did those experiences shape where you are now,

01:54
and give us a little bit of the backstory leading to where you are now and the launch of Ludulluu. I look at my path as having a lot of different pieces of a puzzle that have brought me to where I am today. I probably couldn't have done it any other way: lessons learned from different careers, if you will, in my life. So I've had multiple careers, and I think that's kind of what the future looks like for young people today, having several different careers over one lifetime.

02:18
So I started off as a banker after I graduated from business school, but I chose to go back to West Africa to be a banker. I grew up in Côte d'Ivoire, the Ivory Coast. When I finished college and grad school and got my MBA, Citigroup asked me to come and work with them, I guess because I knew the region and I spoke French. That was a good combination that allowed me to really jump into global finance. I became a treasurer.

02:46
I ran the corporate finance desk in these new frontier markets and did a lot of transactions in those opaque markets; not opaque, but there wasn't a lot of information and data, and I was able to find ways to measure and price risk. After that, I went to the U.S. Treasury, and I was hired in the Office of Technical Assistance to

03:09
leverage a lot of the lessons that I had learned in capital markets in various countries. They had me go overseas to advise central bank governors and ministers of finance, helping countries with their capital markets and their reserve management. That was really a great job. From a public service standpoint, it allowed me to really understand how government officials see the world and analyze things

03:36
and think, and that was really very helpful. Next, I went to the IFC, the investment arm of the World Bank, and I joined their independent directors program. They had me go down to sit on the board of a bank that they were shareholders in, and it was in South Africa. It was the largest microfinance bank in the world at the time. And it was, you know, having some

04:02
issues, and they wanted independent oversight on the board, which I was only too happy to provide. So that, I guess, was my first real critical experience in governance. It opened up the whole governance world to me, because this was a bank that was in difficulty. It had to be turned around. It had to be sold. It had to be brought back to life. And those were all governance issues. And I really found that fascinating, but more fascinating, or perhaps in parallel

04:29
to that: this was during the time of the rise of fintech in Africa. I had a front-row seat into how technology was changing access to finance for everyday, average African business people and consumers. And Africa has the largest population of unbanked people, so fintech was a way of reaching those people, bringing them out of the

04:56
purely informal market into the formal market and giving them the ability to transact and send money just like every other normal business. And that was a lot of fun to see that rise. A lot of people don't know that Africa plays a really pioneering role in fintech. Just two examples. One is M-Pesa in Kenya, which is one of the very first payment companies, and that was developed in Kenya. And they innovated a really very sophisticated payment platform, which is

05:25
huge today, used all over Africa. Think of it as Venmo before Venmo, but it was developed in Africa. And then a lot of people know about the platform WeChat in China. So in China, to do practically any transaction or anything, really, you do it over WeChat. You have that facility, but it's almost a necessity in many cases. Well, that technology was developed in South Africa

05:51
by the largest company in Africa, called Naspers. Very few people know that. And they joined up with some Chinese partners and scaled it to China. And it's now the biggest product of Tencent, the Chinese company, of which the South African company is the largest single shareholder. So really, this was an exciting time to be in the African finance space. And I thought, let me get involved in fintech. This is really a great,

06:16
you know, growing sector. I moved back to the U.S., and when I came back, I was ready to launch my concept of a fintech platform that really benefited from my experiences as a treasurer for Citigroup, a valid concept for lowering the cost of these transactions. But I was pulled aside, if you will, by a museum that was in the area where I live. And it was a historic property of

06:46
really great importance in the country, related to the founding of the country. They wanted to invite me to a meeting of descendants of people who were enslaved at this museum and who wanted to get involved in the management of the foundation that ran the museum. And, long story short, I went to this meeting, I founded an organization that organized the descendant community, and

07:13
we became involved in the board of the museum and were able to bring our knowledge of history, our expertise, and many PhDs and lawyers and doctors, people with a lot to offer the museum. And we were able to really transform the museum and make it more inclusive, more resilient. And that really woke me up to the fact that there's a lot we could do in governance to improve the way not just this nonprofit museum is governed, but

07:41
you know, perhaps that same model could be adapted to other areas. So I have a problem, which is that I like really big challenges. I became chairman of that museum, and a year afterwards I stepped down and wanted to go back to kind of my tech roots, the tech fascination that I was following before. And so I had this new governance model that I had just implemented, which had become kind of a gold standard for museums,

08:06
but I was in search of a really big gatekeeper crisis to apply it to. And so I looked around the landscape, and the biggest, orneriest gatekeeper crisis that I could find was in AI. So that's how I got into AI, and I really dove into it and learned everything that I could, met some really, really smart people in that field. What year was that?

08:31
I started doing this formally about a year, a little over a year ago, so a year and a half. And the important thing, when I look back, is to see how I had to go through each one of those phases to be able to add value. It's almost like thesis, antithesis, and this is the synthesis. I had to go through banking and fintech and big transactions with governments and small microfinance transactions to really understand consumers,

09:01
what their needs are and the voice that they need to have. And had I not gone through each one of those experiences, I don't think I would have had the governance insights that allowed me to be successful in not only the museum, but in seeing the connections to the governance crisis in AI. You have such a broad understanding and perspective. And I'm curious,

09:25
what is the meaning of Ludulluu, and how did that get started? Well, that's a great question also. Let me just tell you what we're trying to do; then Ludulluu will make sense. So as I said, I was looking for a gatekeeper crisis. I came across AI and I thought, I really want to get involved in that. And what do I mean by that? I mean that AI is a transformative technology. It's really changing everything:

09:49
the way we do business, the way we do science, the way we do creative endeavors as well. Everything is really being affected by AI. The main difference (there are many, but the main difference) between AI and, say, the previous rise of the internet is that AI is not a tool, it's an agent. By that I mean AI takes information, processes data through algorithms, and then not only comes up with the most efficient path between the data points, it has the capacity to actually

10:19
implement things, whether it be in healthcare, education, finance, across the board. It's like you're following Google Maps to get to your destination. A question that I had was: if you have these big, huge, massive seas of data that LLMs thrive upon and use, and they're being interpreted by algorithms, who is involved in selecting the data and in designing the algorithms?

10:49
And the reason why I think that's a necessary question is because of the impact, in some ways that are irreversible. And I also have my previous experience, in which I have good data that stakeholder-led governance, or stakeholder-involved governance, can strengthen an institution, can strengthen a company. And in the case of AI, there's also a safety component. And this gets to your question, what is Ludulluu? There's a safety component to this because,

11:17
when you can involve people, and we'll get into why to do this and what I'm doing, but when you involve people who are most impacted by the technology in some of the decision making, then those people will be assured, and can be assured in this process, that you've taken their own safety into consideration. So they're able to kind of bump up the scope of governance to focus on that. So what does Ludulluu mean? I know it's a long, windy answer, but Ludulluu is the name of my startup,

11:50
L-U-D-U-L-L-U-U. It's actually an ancient Sumerian term. I wanted to kind of go backwards to go forward, so kind of back to the future. Ancient Sumerian is one of the oldest written languages, and it means protector of humankind. And so I thought that was a really good name. And I hope that as we develop this company (because of the way I've designed it, the way we've designed it, is to scale with the entire industry), that will

12:17
make a lot of sense to people. And in the way that you can say, did you Google something, we can say, has this LLM been Ludulluu'd, or what is the Ludulluu rating of this LLM? So good. It's so fascinating. I was not aware of this concept of AI governance until maybe a month or two ago. And so for our audience, who may also be new to this concept, explain it really simply: what is AI governance? Let me just back up a little bit. Governance is

12:47
a set of principles, or it could be principles and rules, that organizations bind themselves to. Okay. And if they bind themselves to it, you know, there's an overlap with ethics, business ethics. If they are bound to it by either industry practice or by regulation, it could be externally imposed. So governance is when a board of a company appoints the management

13:17
and the management reports to them, that's governance. They represent stakeholders. In many private companies, those stakeholders will be founders; they will be shareholders. But what I'm doing is expanding this notion of who the stakeholders are. And so the stakeholders are all of the different actors in the value chain, if you will, of AI. So you have the developers of AI: you can think of the large LLMs out there, the OpenAIs of the world, the Anthropics of the world.

13:46
You have the deployers of AI. That could be a corporation that uses AI agents. It could be a university. It could be a medium-sized corporation. It could be an airline. It could be healthcare. And they deploy AI that's either in-house or an agent that is derived from one of the large providers. But then you also have the clients, the end users.

14:11
And they are also important stakeholders who need to be taken into account in the governance of AI. And the simple reason is, well, there are two ways to look at it. One is they're impacted, so, you know, they have skin in the game. Remember, this is an agent, not a tool. So the ideal way of governing these companies would involve the end users having some say in how it's developed and deployed. But there's an even more

14:40
fundamental reason why it's important to involve end users. And that's because, as people become more aware of the power that AI has, they're scared, to put it simply. And there's a trust gap. I've read a statistic somewhere that something like 85% of Americans are unaware of AI in their lives, but of those who do learn about it, the vast majority do not trust, for one reason or another,

15:10
AI companies to self-govern. And so there's a trust gap there. And for AI companies to be successful and sustainable over the long term, they need broad adoption. So the missing element, the gap between broad adoption and satisfying the uncertainty that end users have, that missing element, that epsilon, if you will, is trust. And so I look at trust as being the main

15:39
byproduct and the main purpose of governance. That's so good. Yeah, I think about those words, right? Trust, ethics, compliance, I mean, even risk. The speed at which AI is getting adopted and the rate at which companies are utilizing it, it's becoming increasingly difficult, I think, for people to keep up. And AI governance is not a topic that we hear

16:04
spoken about very often. And so I love that we're bringing this conversation to our audience as well, because it's very necessary. And I think if there's no trust in business transactions, consumers to businesses, there isn't anything, right? And so for our audience, and I'm thinking about the entrepreneurs, marketers, salespeople, executives, if they are using AI on a day-to-day basis, what should they be thinking about? How can they incorporate

16:33
governance really practically to make sure that they're staying on top of it and actually following ethical AI practices?

16:58
I think most of us have had the experience of using, say, a chatbot (and I won't name which ones, but there are several out there) and getting a really strange answer, right? You suspect that the answer is incorrect. And there are a variety of ways that can happen. There's a term called hallucination, in which AI models, LLMs, kind of create their own reality in their response and give kind of false information. But there are also errors.

17:24
And there's a lot of work going into what they call prompt engineering, the way to accurately prompt the LLM so that you can minimize that. But there are those errors, and then we can talk about data rights, the provenance of the data that are in some of the LLM models: where does it come from? All of those issues can have a huge impact upon businesses. So part of governance is the desire for accuracy. And the end user will want to be able to trust the answer that they're being given

17:54
once I make a query. My contention is that that is not a purely technological issue, but that it is fundamentally a governance issue. In your lifetime, when you trust someone, just even between people, you don't trust someone because they say, trust me. It doesn't matter how many times they repeat it. You trust someone, in most cases, not all cases, when that person places trust in you.

18:21
And when that person, in a governance case, in a boardroom, when that person empowers another person, if you can participate in the decision-making process in some way or another, or if you can benefit from being trusted, then you build trust and you return that trust to the person. Well, it's the same thing for corporations. AI is really an order of magnitude more powerful, several even, than previous technologies. And currently it's unregulated

18:50
in the U.S. There's a new EU AI Act that will have a big impact across the world. And there's a vital discussion being had in Washington right now amongst regulators and industry representatives. The question is, where do the end users fit into this process so we can accelerate the bridge to trust? And remember, the AI producers need trust. It's the missing element for adoption. So it's really in their vital interest

19:19
to bridge that gap. And the way I look at it is that they can't do that solely by internal means, and they can't do it by technological means. I mean, they shouldn't do it themselves, because trust should be verified and accorded by an independent entity that allows users to be involved in some way or another. So independent governance, or independent trust, is authentic trust.

19:49
And authentic trust, as opposed to trust solely in brand power, authentic trust is rocket fuel for an AI company. It's something you can't buy. And when a company has that, when a company can demonstrate it and can say, you know, I can show you empirically that I am trustworthy, that's a truly powerful competitive advantage that a company can have. Yes, I think there's nothing more powerful than

20:17
that third-party validation, right? I could say, hey, trust me, and probably people will not, but when you have that third-party stamp of approval, I think that helps with that trust gap. So walk us through a real-world example. How is Ludulluu helping to bridge that gap and create that trust? I mean, you can take examples from mortgage finance, from healthcare, insurance. You brought that up. So,

20:44
there's something called shadow AI. I don't know if you've heard of that term. Shadow AI refers to when the employees of private companies use an AI chatbot in doing their jobs, but they don't tell anyone; that's why they call it shadow. So they don't tell anyone that they're using it, and the company may or may not be an official customer of an LLM. They use it, and if the answer is wrong, and this has happened in many cases, the answers

21:12
that are given are not accurate and the company makes a decision based upon that information, there's a problem. There can be liability, serious liability, for a company. Now, where there's risk, there's a market for that risk. And that's where the insurance companies come in. So banks, you know, I come out of the banking world, banks really exist in the market of transferring risk from one party to another. And you have willing sellers and willing buyers at a certain risk-reward ratio.

21:41
Well, it's the same for AI, except the problem is that this is a brand-new industry, and insurance companies, some of the largest in the world, are already offering policies for AI performance to the AI developers: against shadow AI, for example, or against some of the errors, or the data rights, the data pipelines, or hallucinations. If you're an AI company, you can go out and buy one of these policies from one of the major

22:10
companies. But again, what insurance companies do is identify, mitigate, and price risk. And that's based upon historic actuarial data in every field. The problem is that there's very little historical data in AI. So white paper after white paper has been written in the insurance industry about how to get our hands around all of this risk and assess it and price it

22:40
given the lack of data. And so that's one example, one use case, where Ludulluu can provide value: our model essentially brings in stakeholders, users, experts. And when I say users, I don't mean just, you know, somebody off the street. I mean experts who are kind of the representative users in different domains, different sectors, and they have some expertise and they have some knowledge of how these models are built. And

23:09
their ability to interact at the governance level in real time through our platform results in the generation of governance metrics. And those governance metrics are of use to insurance companies; think of them as kind of a proxy for historical data. So let's imagine you have an AI producer that is very much aware of their liabilities, their potential liabilities. They want to go out and buy some insurance to protect against that liability,

23:36
but they're kind of surprised at how expensive that insurance is. Well, one way that you could bring that cost down is if the insurance company has reduced exposure, and what we can do, through our risk metrics, is provide the information needed for the insurance companies to lower the cost of their premiums and attract more customers. And that's just one example of where there's kind of a win-win-win: a win for the developers, a win for the insurers, and also,

24:05
because of trust, a win for the end users. Oh, good. I have not heard of a company doing what you all are doing, so I think it's so needed in these spaces. And we hear a lot, there's a lot of noise in the AI space, right? And especially as marketing agencies, we're hearing it; there are so-called experts on just so many different levels. Very few people are talking about governance in the way that you are. James, what do you wish more people in tech and in business leadership

24:34
understood about the governance side of AI, maybe something that you haven't covered that you think our listeners need to be paying attention to? This goes back to my previous experience, both as a banker, as a public-sector consultant in the U.S. Treasury, and sitting on boards of banks. You have to involve multiple stakeholders in the decisions about governance. AI, I think, is

25:01
the case where that is almost the most important, because, again, everyone has access to it and everyone is impacted by it. What I see is that in recent months we've seen a huge shift in the industry. We've seen some of the very largest firms spend hundreds of billions of dollars, and there are multiple hundred-billion-dollar, or close, market valuations of some of these firms, many of which

25:31
are still private. And suddenly, out of nowhere, you have this kind of upstart from China, it's been in the news, come out and make a claim that they have an at least equally performing LLM model that they developed at a fraction of the cost. I think I saw a figure of about $5 million. And not only was it a fraction of the cost, what they did was put it all out as open source, and it's out there and anyone can use it. So it's there. That tells me very simply

26:00
that there's a new game in town. Competing on performance alone is not going to be enough for success over the long term for these companies. There's a term I heard recently called the velocity of innovation, and I think it really describes the AI industry very well, because the velocity is really, really fast, how fast this industry is changing. But performance is not enough,

26:29
because the barrier to entry has just crumbled. And so there will be a mushrooming effect, if you will, of new LLMs that will be built for, you know, a million or two or whatever the amount will be down the road. Medium-sized businesses, even some small businesses, could conceivably someday soon develop their own models, their own LLMs: go online, get the open-source code, develop it, train it

26:56
for a fraction of a fraction of a fraction of the cost. Cost and performance are no longer the only metrics, the only market definers. What is the thing that they will compete on to differentiate themselves? And I believe that we're seeing performance yield to competing for trust. Who can you trust? So governance will be in the driver's seat, actually. It's not an afterthought; it's really important. Trust will be the accelerant

27:26
of the adoption of this technology. And again, I'm repeating myself about this because I think it's really important: the way you obtain trust is by empowering others, by giving some degree of autonomy to others in an independent way. And one last point on this is that right now there's kind of a notion that you have to choose between, you know, the philosophy of innovation and safety.

27:54
And I think that's a false choice. I think you can do both, and I think you can do governance at the same speed as innovation. And that's what I'm trying to demonstrate. So good. Trust is that currency. And I know, for our listeners, there are several people that are already utilizing AI, right? I mean, if you are not currently utilizing AI or even thinking about how it's being used in your business, for sure, that is the first step.

28:20
But the second part, to those users that are so active in AI: I know in our company we're developing our own agents, we're actively utilizing this every day, and I know a lot of our listeners are as well. This is an opportunity for you all to think about: what does the trust side of this AI utilization look like? What are all the stakeholders that are involved? How can we actually incorporate governance into our practices to truly build that trust among

28:49
not just the users, but all the stakeholders that are involved. This is such a timely and much-needed conversation, James, so I'm so glad that you joined us to have it. Let's talk about winning, James. We always ask our guests on the show: what does winning mean to you? Winning, to me, means that we can bring together producers and users into a consensus.

29:14
And the consensus, I'm not talking about some kind of idealized version of the future, this is not a utopia, but the consensus will be built upon trade-offs, where we can build this industry of AI on plus-one thinking, not zero-sum thinking. And so the AI companies and the representative users can agree to a framework and can trade off aspects of that framework as it applies to their particular use case,

29:42
and understand that an element of collaboration will yield better outcomes, and more sustainable outcomes, for all, as opposed to win-lose. Though it's interesting you asked me what winning is, because I think winning means getting out of this win-lose mentality that we often have in business and understanding that, through collaborative effort, we can have

30:06
winners and winners on both sides of the equation. I mean, just your philosophy around impact, I can see how you include everybody in the conversation. I think you are probably the best person to be leading this AI governance charge and being at the forefront of it. Last question, for anybody listening who's intrigued by what you're building: what's the best way for them to connect with you and learn how Ludulluu might help their businesses navigate AI governance more effectively?

30:34
Well, we'd love to hear from everybody: from investors, from users, from developers, AI companies. We want to hear from insurance companies who understand the benefit of having risk metrics. So the best way to get in touch with me is to go to the website at www.ludulluu.com. That's L-U-D-U-L-L-U-U. Remember, it means protector of humankind. Or you can email me directly at james@ludulluu.com,

31:04
or you can look me up, James A. French, on LinkedIn. Excellent. And we'll leave all of that information in the show notes as well. James, thanks so much for joining The Takeover podcast. We'll talk soon. Bye for now. Thank you so much for tuning in, everyone. Again, go back and listen to this episode; there were so many golden nuggets that James shared, and I don't want you to miss even one. If you're loving what you're listening to, make sure that you are subscribed to The Takeover

31:31
on Apple, Google, Spotify, wherever you are listening.  New episodes come out weekly. Remember, domination is not a destination. It's a way of life.  Stay winning.