The Entropy Podcast

Exploring Ethical AI with Louise McCormack

Francis Gorman Season 1 Episode 4

In this episode of the Entropy Podcast, Francis Gorman speaks with Louise McCormack, a trustworthy AI consultant and PhD candidate, about the critical importance of trust in AI systems. They explore the ethical frameworks that govern AI, the implications of AI decision-making on society, and the need for transparency and accountability in AI technologies. The conversation delves into the potential dystopian future of AI, the impact of technology on human resilience and language, and the pressing issues of data privacy and consent in the digital age. Louise emphasizes the need for regulation and the societal responsibility to ensure that AI serves the public good rather than corporate interests.

Takeaways

  • Trustworthy AI is defined by ethical frameworks and principles.
  • The AI Act aims to reinforce existing rights and protections.
  • There is a significant gap in understanding AI's implications.
  • Over-reliance on technology can diminish human resilience.
  • AI is changing the structure of language and communication.
  • Data profiling by private companies poses risks to democracy.
  • The impact of AI on society is poorly researched.
  • Transparency in AI algorithms is crucial for accountability.
  • Digital detox may be necessary in an AI-driven world.
  • Consent for data use needs to be revisited and regulated.

Francis Gorman (00:00.706)
Hi everyone, I'm Francis Gorman. This is the Entropy Podcast. We're on episode four and I'm joined by Louise McCormack, Trustworthy AI Consultant. Louise is also currently doing a PhD in Trustworthy AI Evaluation. Louise, how are you doing?

Louise McCormack (00:26.242)
Very good, very good. Good time to be in AI. So it's a good day.

Francis Gorman (00:29.617)
Very much so, and it's a really intriguing area to discuss. So trustworthy AI in today's world where we seem to be very much in a gold rush and you don't always send the canary down the mine. What are you seeing in the space at the moment that we should be paying more attention to?

Louise McCormack (00:47.778)
I guess the first thing for most listeners is to think about what the word trustworthy means. So whenever I'm going to talk about it, what it means to me is basically an ethical framework that we use to evaluate AI systems. So when people talk about trust and AI, often it's around the public trust or trust within businesses, but trustworthy AI itself, the concept, is a set of seven principles. They're mentioned in the AI Act and they were created by the EU

high-level expert group back in 2019. So trustworthy AI is seven principles like transparency, fairness, and what I'm seeing right now, I suppose, is first just a lack of understanding of what that is, a lack of understanding of the AI Act. And yeah, a lot of confusion, and I think fear as well around AI in the public and also within organizations, and the fear of introducing risks into their business.

Francis Gorman (01:40.881)
Louise, I was watching your TEDx talk recently, and you start off speaking about what is almost a dystopian-type future and then tie it back to the realities that we're living today. Can you give me a bit of a synopsis of that content for our listeners, just to bring it to life, please?

Louise McCormack (01:58.146)
Just a bit of the intro from the TEDx talk.

Francis Gorman (02:00.369)
Yeah, please, a little bit. I just thought it was really intriguing the way you framed it and then brought it back to the realities of today. And I think it is something people would actually enjoy listening to.

Louise McCormack (02:10.22)
Yeah, it's a little while back, but I'll see if I remember it actually, just like that. Basically, I asked the audience to imagine a future a thousand years from now where an intelligent AI system was in charge. It monitored everybody so it could make their key life decisions like what house they could buy, what kind of house they could live in, what kind of job they could have, and basically all of their decisions, even around their health. Like if they got sick and went to the doctor, an AI would decide if they could receive treatment or not.

And if they broke the rules, the AI would decide what sentence they received. And so in this world, the big question would be how could we evaluate these systems to ensure that they are trustworthy?

But actually the reality is that we don't need to wait a thousand years, because all of those decisions are currently made right now by AI systems, or by businesses that deploy AI systems. It's a little bit sensationalist the way I had said, you know, that AI makes those decisions. But the reality is that they're just tools that are being deployed in businesses, and they're being deployed for the benefit of the business. These are things that

aren't really being properly evaluated at a kind of policy level. The technology to do that basically doesn't exist. And I guess what that talk goes on to talk about is this huge gap in the regulators' ability to enforce the laws that we have in place. We have laws in place around anti-discrimination, but the governments, the auditors, the regulators don't have

access to technology to properly evaluate against the current laws. And for the new laws that are coming in, like the AI Act, they certainly don't have the technology to evaluate AI systems, because that technology simply doesn't exist. We're developing the technology without the ability to evaluate it. So we're developing one but not the other.

Francis Gorman (04:12.849)
In terms of the technology, a lot of these systems, especially in the machine learning space, to differentiate from generative AI type solutions, are black box. Do the developers themselves even know what the decisioning mechanisms are after a period of time?

Louise McCormack (04:30.882)
The black box systems, and I suppose in that talk I do kind of introduce the concept of black box as, you know, this thing that, yeah, in some cases, they don't understand. The idea of a black box algorithm is that we don't fully understand how it made its decision. It's been given so much data that we can't go and pick apart where that came from. And that's another piece of technology that isn't being developed: the transparency aspect. So when we develop systems,

we can develop transparency and explainability for those algorithms, but that kind of falls by the wayside. The main goal when training the algorithms is accuracy. And that is simply because accuracy equals profitability for the companies. Transparency equals, you know, auditability and accountability, but the businesses don't care about that as much as they care about the accuracy of the model,

because that's their bottom line: profit.

Francis Gorman (05:31.409)
Let's talk a bit about profitability. So we're very much in, as we said at the start, a gold rush era. It seems to be that every day there is a new model that has a deeper level of richness and capability. Every SaaS platform you interact with has some level of AI now built into it. It's getting very hard to differentiate between where data is going, what it's being used for, the outcomes that are derived from that data.

And it's all very much driven with a bottom line attached. When we look at the regulation, and we look at the US versus Europe, you've got the EU AI Act that was formed up three months ago. I would have said it's going to be a very powerful tool. However, since the Trump administration has come in, we've seen a 500 billion investment in

America in the AI space, followed by a 200 billion announcement from the Paris Accord a couple of weeks ago on the European side, and almost an unknown quantity from China and others as to what's happening in those nation states. Do you think that self-regulation will become the preference for governments, or will they bed in and push

the EU regulation forward in the European space, at the risk of potentially falling behind the Americas?

Louise McCormack (07:00.11)
I have a lot of thoughts about the concept of Europe falling behind. If I look over at America and I look at their society, I don't see them ahead. I see the billionaire class as being ahead, but I don't see them as a more successful society. So this idea that we're not going to get the good stuff or the good AI, or we're going to be held back here in Europe, I wouldn't necessarily agree with that. I think what they're doing in America is what they're doing in America.

The AI Act, it just reinforces existing laws. It's there to protect fundamental rights. We found with the GDPR that, yeah, it came in and changed a lot of things, but it was just reinforcing rights that we already had. So it wasn't really changing anything in terms of giving us new protection. It was just laying out, for the specific new industry that exists around our data, the nuances of that.

With the AI Act, it's very similar. We have fundamental rights that are protected under European law, under Irish law. We have rights that we should already have. And so the AI Act, all it does is reinforce existing rights for this piece of technology. And that's simply all it does. It gives us an opportunity as innovators, as business people, as entrepreneurs, a framework to say, here, here's how you can do it properly, and here's how you can do it in a way that's going to be good.

for our society. So for me, that's very exciting. I don't particularly like the idea of just racing ahead and rolling out technology that transforms not just our society, but all the societies around us that we inflict it on. And to have the opportunity of somebody laying out for you, here's how you can do it correctly, and here's how you can measure it right, and here's how you can make sure what you're producing is good for people, and that people will want it. Consumer power is rising.

And that's a fact. So I see it as a huge opportunity. The idea that America is racing ahead, for me, that is not how I see America. What I see happening across there in America is not them racing ahead, not at all.

Francis Gorman (09:13.271)
A very good perspective, Louise, and I do agree. I'm very much of the view that we need regulation, we need transparency, we need to bed in appropriate ethics and controls. And it brings me back to a situation that I've been thinking about a lot lately, and that's human resilience. When I'm talking about human resilience, I'm actually worried that we're losing cognitive ability in certain areas, and I'll give you an example. I ended up in hospital last year for a bit of a procedure. My phone died at the time.

And as they do in hospitals, they brought me a pen and a paper to fill out my next of kin and all that good stuff you need to put into those forms. I was writing my wife's name down. I came to her number. And I've had a smartphone since 2007 or thereabouts, whenever the first iPhone came out. So the data I needed was the numeric digits of our number, but the information that I've always been presented with since we started going out all those years ago was her name. So I just click, click, click, call Jill, and you know,

the phone call happens. And I start to think, I know all of my friends' numbers because I used to have to type them into the phone, talk to their mothers first for a while to get vetted before they came on the line, you know, and all of those numbers are hard coded somewhere in the back of my mind. And as I look at what's happened over the last year and a half with the end-user, agent-driven AI workflows, I'm wondering, are we really considering all of the consequences

that may fall out the back end of this in terms of human resilience. And then, I think it was last week or the week before, I picked up the paper and I read a report from Halfords, the car tooling and services company, and it was on 17 to 27 year olds. And basically their top line was one in five have never changed a light bulb and believe climbing a ladder is too dangerous.

And they had some acronym around it, the "gets someone else to do it" generation or something along those lines. And it kind of brought home to me the human resilience factor. Are we in fact diminishing our resilience? Will companies pay a price for augmenting end users or giving them the crutch as more and more tasks become AI driven? Do we lose the ability to write a document, to create a presentation, to communicate at a level

Francis Gorman (11:39.035)
that is human and relatable.

Louise McCormack (11:41.1)
Yeah, over-reliance on technology is a real risk. Like, it's a real, tangible, quantifiable risk that exists. It's a similar conversation to when, you know, factories came in and skills were lost, you know, the art of, I don't know, cobbling. These are things that are very niche now. And I don't necessarily see all of the loss of these skills as a bad thing. There are a lot of tasks that are, you know,

not needed and take too much of our time, but certainly it's happening so fast that we don't have the time to really look at what we're losing and what we're giving up. In terms of like creating a document, one thing that I often think about is we're changing the entire structure of our language. There are researchers who believe that the way we structure our language affects the formation of our cognitive paths within our brain.

We're changing the language structure of huge parts of the world. So anybody that's using AI is going to speak differently. They're going to adopt the language that the AI has, and our entire way of speaking is going to shift in a really, really short space of time. Already, you know, they're writing documents for us and we're kind of adopting this new way of speaking. And that's a huge, huge thing to think about. Like, I tell the story about

the Irish language. So within the Irish language, when we talk about emotions, we talk about them as separate things to us. So, you know, the fear is on me, the happiness is on me, the sadness is on me. And what that means for us is that we're in a more shared, collaborative society, because our emotions are connecting us. They're these things that exist outside of us. When I tell you that an emotion is visiting me, you know, because that emotion visits you too, you're more connected to the other person. Whereas in the English language, we say,

you know, I am scared, and that's a very individual, inside, almost permanent feeling, as opposed to something in the Irish language. And that changes our society. It changes our entire society. You might kind of listen to me struggle to convey what a big deal that is, but you might be familiar with the phrase "the fear." Have you ever heard that? If you go out for a few drinks and you wake up the next day and you think, my goodness, what did I do last night?

Louise McCormack (14:04.334)
You know, and you're sitting there replaying the night that you had and thinking, why did I have those last drinks? And you text somebody and you say, I have the fear. Immediately you feel better about yourself, because the fear is an external thing to you. It's not your fault. It's not with you forever. It doesn't define who you are as a person. It's simply an emotional state of being that you have. Imagine if we talked to other people about all of our emotions like that: I have this thing,

you know, it's visiting me. That's something huge that we lost with the Irish language. And that's just one small thing on this one small island. But we're talking right now about changing the entire language structure for all people who use AI around the world. And nobody's even having the conversation of what that means for us culturally or, you know, as a society, what it means to structure our language in this formal way. And we don't have a choice about it. It's a huge cultural shift.

That's just one of the fall-offs. There are so many things, and that is actually one of the principles of trustworthy AI: the impact on society and the environment, including on culture. And those things aren't really happening right now. That conversation isn't happening. People are just doing what they're doing as quickly as they like, and all of those kinds of things are getting lost.

Francis Gorman (15:24.881)
That's a really interesting view. I never thought of the kind of emotional intelligence, or our ability to communicate, in that lens before, but it makes complete sense if we start to articulate things through a machine interface. You know, we'll all be going around saying "ream" and different contexts and constructs that AI uses quite a bit. In terms of that, that is...

Louise McCormack (15:44.43)
Okay.

Francis Gorman (15:54.609)
You've trolled me a bit now because now my brain is doing a... God, that is really a key consideration in terms of...

Louise McCormack (16:00.94)
Yeah, you change a language and you change a society, you change the culture. And what we're doing with AI is we're changing the language with generative AI. And I think it is important to think about AI in terms of its utility, like generative AI versus something like machine learning that's used to perform a classification task, which has been used in, you know, the financial services sector for 20 years already. That's basically just maths, and it's very explainable, and it's very, very different from

this newer type of AI, the stuff that's making the headlines, the large language models. It is important, I think, to think about AI as its specific function and ability, as its type of technology, rather than just this blanket everything. It's difficult to have the conversation when it means everything and nothing.

Francis Gorman (16:52.613)
Fascinating, Louise. In terms of the bigger picture, you touched on GDPR earlier on in the call.

Do you think we're marching into a real problem in terms of consent? And what I mean by consent is, if I look at a lot of the tools on the market at the moment, consent is one and done. It's a first-time-use consideration. So: Francis, do you consent to me accessing the data that's available to you? And I say yes. However, I may not actually have the permission or the right to give access to certain data that is available to me to

a third party entity. And if we never ask for consent again, and we're interacting with, say, customer information or company intellectual property or whatever it may be, are we going to have a real battle on our hands in terms of data protection, as well as control in the transparency and ethics of how AI is deployed, and

its natural boundaries in terms of what is appropriate for it to consume and where it's appropriate to be stored, etc.

Louise McCormack (18:04.238)
Yeah, people of course give consent. I don't think a lot of people care. And I think a large part of that is because they don't understand what companies can actually do with their data. If you think about what's happening right now, a conversation people might be having is, should I let my child on TikTok, or should I be on TikTok? I shouldn't talk about TikTok specifically, but those types of companies, any company that...

Any company that feeds you content, so any company where you're swiping, almost all of those companies will be using how you behave on the content they serve you to build a profile on you. And if they can build a profile on you, they can build a profile on, you know, a billion people, or within a specific country they can build a profile on all the people very easily. They could show them certain political content and

quickly understand how they feel about that content based on how they view it, whether they pause on it too long, not just whether they like it or comment on it, but even just based on their viewing of that content. If they can show that to a person, they can quickly establish their political leanings. Imagine a private company having the political leanings of the majority of citizens of a country that's not even their own country.

That's really terrifying for democracy, to think about the fact that these companies are also our newspapers. They are also our key news source. More than 50% of people get their news from passive recommender algorithms that have profiles on them.

So essentially these private capitalist organizations have the ability to decipher our political leanings and also to change them. And that is, you know, not even a fairy tale. That is literally what's happening. And that's really scary for the future. You know, the conversations we're having about this, that's a 10-year-old conversation. The idea that social media platforms are profiling and influencing political views, that's a 10-year-old conversation.

Louise McCormack (20:17.998)
What's happening now, we'll have the conversation about in another 10 years, when that all gets figured out. And that's what's really, really, really scary: we don't have the ability to just turn on a news channel. I don't know what content you're seeing. You don't know what I'm seeing. And so we all have different perspectives and we all believe we're right. And, you know, we're seeing this polarization happening globally, and even in Ireland, where people are obsessed with American politics and politicians in a way they weren't 20 years ago.

Twenty years ago, we liked America, but we weren't obsessed with the ins and outs of their politically polarized society. And now we're all obsessed with it, because we're being fed content around it, and portions of our society here in Ireland are being almost radicalized, which is really terrifying.

Francis Gorman (21:07.257)
And I think you can see that in the flesh now. If you go to a pub or a social setting or even, you know, a family dinner, it really depends on what's within the device in the user's hand in terms of what shapes the reality. And I've noticed it a lot more since COVID, per se, that people have some really strong views and you're kind of looking at them and going, where did that come from?

I'm not a social media user myself, but I see it. I see it on my wife's feed sometimes. I'm looking, like, you know, we had a new baby. She's looking at baby stuff, and everything in her feed is tailored around baby-related content. Do you want to get a baby sleep therapist, or whatever it is? And you're kind of going, that is completely singular to your reality and your point in time. So if somebody is clicking on,

I don't know, some far right material around Ukraine or whatever. I was talking to a friend of mine today and he's like, that Zelensky guy's shirt, Trump obviously found out he was doing something. And I'm like, what are you on about? You know? But whatever he's seen in his social media feeds, whatever content has been fed into his device, as you click onto that, you get more of it. You get more of the same. You become...

Louise McCormack (22:17.966)
It's not just about why you're talking about it, it's more like, why is that the thing you care about most to speak about? We have our own country here, we have our own things going on, and we're spending more time focusing on American politics than anything else. And that's shaping us and forming little pods of groups, of political groups. Before, it was the Irish Independent, the Irish Times and a couple of other newspapers,

and they presented the different views and people could debate those. But at least it was transparent. You can go and find a newspaper in a museum and you know what people were seeing. Nobody knows what anybody else has seen. Nobody knows what they're doing. Nobody knows who's been profiled, what the algorithm is being tailored to, which versions of algorithms people get. There is just no transparency around algorithms whatsoever. No control over algorithms. Now, Instagram did come out and give people the option to limit political content,

which is the start of what's going to happen. And this is going to have to change, because people want control over the algorithms. You can decide which newspaper to buy, you can decide which TV station to turn on, and you should be able to decide which version of the algorithm you're on. It's not reasonable to keep going with this lack of transparency.

Francis Gorman (23:39.217)
Do you think it's going to change in the near future or have we got a way to go?

Louise McCormack (23:43.734)
Yes, I think it's going to change. TikTok almost collapsed in the US, and had there been any decent alternative, people were looking for it. They went for Chinese platforms and stuff that didn't have the right stuff for them. But either TikTok and Instagram and so on make their platforms more transparent and more user friendly, or we're going to do a better job in Europe. We're going to build a platform like that in Europe that

allows people the respect and autonomy that they deserve when they're engaging with content online. And then people will come in and use the European version. And that will happen because it's not feasible to go on like this. It's completely bizarre. If there was an alternative to it, people would be on it. So there will be an alternative to it.

Francis Gorman (24:35.653)
Will we get to a point, Louise, where the technology almost becomes toxic? I was in a train station the other day, and I was sitting, and I put my book down and I looked around, and every single individual was looking into a device of some sort. Have we broken something in society?

And is that being accelerated? Do we need to almost advocate for a digital detox? You know, a day a week where you spend time in nature, you go for a walk, you talk to the people around you. You know, have we gone too far? You're doing a PhD in this at the moment, obviously, so you're looking at all of the different considerations around ethics and bias and trustworthiness, et cetera. Is there anything popping out there that's just setting off alarm bells in your head at the moment?

Louise McCormack (25:24.078)
The least researched principle is the impact on society and the environment. So we have a good bit of research into bias in algorithms, and a good bit of research into transparency and security and data protection. There's a lot of research for those principles. There's a little bit into human oversight, but the impact on society and the environment is so poorly researched that really all I have is my own perspective on it.

Yeah, and that is: look around us. Look at how our kids are. Look at the effects of the iPads. And it comes back to profiling and feeding. You're being profiled and fed. And as for the purpose of what you're being fed: under the GDPR, these companies are obliged to tell us what data they're collecting and what they're using it for. And so they tell us they're collecting our biometric data,

buying in third party data. They're telling us they're doing this so they can build a profile and personalize our content. Personalize it to do what? Nobody answers that question. What are you personalizing it for? Are you trying to make me buy stuff, or are you trying to make me stay on the platform? All of the above? Are you trying to influence my social values? It doesn't tell me what they're trying to personalize it to do. And that's really scary.

And a big part of it: are you trying to personalize it to make me more addicted to it? Do you want to keep me on the platform for a long time? These feeder algorithms can tell within a couple of minutes what your mood is relative to your baseline behavior. So you go on a platform, and within a couple of minutes they can tell your mood, and your algorithm gets shifted accordingly. Accordingly, but for what? So yeah, we're already so addicted to our phones. I'm addicted to my phone. Everybody's addicted. You have a book and you don't have social media.

Please, you're in a much better spot than I am. But yeah, we're all addicted to our phones, and it's way, way scarier than television. It's not just the case that television came and people said the television is dangerous and bad for society, and then the computers came and they said computers are dangerous and bad for society, and then the phones came and people said they're dangerous and bad for society. The phones actually are different, because the technology is different.

Louise McCormack (27:44.152)
The fact that they can personalise and profile, that's what makes it so different. We're not in the same world and the same sphere as each other anymore. That's what makes it different.

Francis Gorman (27:55.909)
What would you say to anyone who says, I don't care if they take my data, who am I? What difference does it make? I'm only Francis.

Louise McCormack (28:04.674)
Well, it doesn't matter if they take one person's data, realistically, but when they collect the data of a billion people, that's when it matters. That's when you're going to see your entire society change. That's where you're going to see the oversight and the insights into society being held by private organizations. People think, put business people in charge, or, you know, put this person or that person in charge. They'll do a good job for us. They'll do a good job for us based on what? You know, based on,

They're going to help the stock market. They're going to bring in more jobs. What kind of jobs? What kind of future are we trying to create here? We put certain people in charge of our data and having oversight of our entire society, and then they can control it. I don't necessarily think that those are the people that should be having that data and having that insight into our society.

The idea of private companies having the kind of data they have on every single person with a phone right now, and what they can do with that, that is honestly terrifying to me. I cannot believe that we live in a world where private companies collect and harvest this much data on all of us. And yeah, one person's data, fine. Maybe on an individual level, you're fine. But look around you at what's happening in society. That's all because your data,

and the data of the person next to you, is being collected. So yeah, one person's data, fine, but it's ruining the entire society because they have all of ours.

Francis Gorman (29:41.527)
We've seen this with the elections in the last number of years, with Cambridge Analytica, you know. That was very much a test of the social media weapon, we could call it, you know, how do we weaponize information to get an outcome that's paid for? It is really serious when you extrapolate it from an individual to a society, to wider regions. You know, it's a powerful tool, and I think

having these conversations, and even just speaking about trustworthiness, ethics, bias, et cetera, it's a conversation that needs to almost trump functionality, you know, the new shining light.

Louise McCormack (30:24.078)
I'm like, I don't understand why we're not all enraged. Like, people are saying, it doesn't affect me in my life. If the guy next to you in the bar has extremist views on niche politics and niche events, that's strange. That's something to be concerned about. You know, your kids, the views of young men now, have

totally changed. You send your kid to a good school that you get to pick. You make sure that they have friends who you like, and you put them in hobbies that you think are appropriate for how you want them to be raised. But if you give them a phone, their values and perspective and outlook on the world are being shaped by a private company who gets to decide what kind of kid they're going to be. And that is just a fact. You have no oversight over their algorithm. You have no control over their algorithm.

You see these videos. I said this to a friend of mine. Her son was very quiet. He was 15, I think, at the time. I said, if you want to know your kid, ask him to show you his algorithm on whatever platform he's on. And with reluctance, he showed her. And it was all videos of what seemed like harmless content: gold digger prank, woman gets humbled. It seems like harmless prank content, but the underlying belief behind it is that,

as a boy, his value is measured by women by how much money he has. And that's how women are. And that's how, you know, he needs to go and make some money or he's not going to be a worthy man. That's what's happening under the guise of these kinds of little platforms where it's just fun content, but actually what it's doing is shaping the entire outlook and values

that your kids are going to have. And that's just terrifying. Our entire culture has been shifted in the space of a generation. You have kids with entirely different values than you have yourself, ones you never intended to pass on to them. But they're going to have them regardless, because they're going to spend two or three hours a day in a virtual world where that's real to them, and that's the reality. And they're going to pick up on all of this. And that's shaping

Louise McCormack (32:43.118)
an entire generation of people, an entire new generation of people being raised by a virtual world that you have no control over. And it's time that we should get furious. It's time that we should be absolutely furious at companies doing this with no transparency and making profits from it all the while.

Francis Gorman (33:05.937)
I think it's something every parent should hear, in a blisters-and-all, black-and-white type of conversation. It's something that should be in the schools. We should be talking transparently about this. The minute you give your child a device, you lose control over their content, because most parents aren't technical enough to lock these devices down. And even if they are, the kids are smart enough now to bypass them. There's a way around them.

Louise McCormack (33:29.294)
If they're scrolling in a feed, if the platform feeds like this, they're in an algorithm. They're being profiled. They're being fed. They're being shaped. They're being influenced. They're being raised by that platform: their values, their culture, their outlook on the world. That's it. Do you want to have a good child, or a child who's going to be really, really bold? Well, that's not up to you now. If you give them that phone,

the algorithm's going to decide. And it's not necessarily that they get to shape their algorithm. That's not how it works at all. This idea of a rabbit hole, or that the kid chose the direction of their algorithm, that's not how it works at all. They can be in test groups where the platform says, let's give this half of them this version and this half that version. It can be completely random. What we want is transparency. What people should be fighting for is transparency. On the 2nd of February,

Article 5 of the AI Act came into force. Article 5 says that any AI technology that's being used to manipulate vulnerable groups and influence them in a way that has a fairly negative outcome on their lives should be illegal. For me, if there were a parent group, I would be looking at some of these platforms and I would be questioning whether or not Article 5 is something

that actually makes some of those platforms illegal as they currently are.

Francis Gorman (35:02.469)
Louise, really insightful, really valuable content there. Thank you very much for coming on the show. I know we're up on time, so we'll leave it there, but it was a real pleasure having you on.

Louise McCormack (35:14.008)
Thanks so much. It was great to be on.

