The Security Circle

EP 117 Pauline Norstrom, CEO of Anekanta: "Fact-Checking is Dead: Can We Still Trust Social Media?"

Yoyo Hamblen Season 1 Episode 117


Podcast Summary: Security Circle Podcast with Pauline Norstrom

Overview

In this episode of the Security Circle Podcast, host Yolanda welcomes back Pauline Norstrom, CEO of Anekanta Consulting, for a second appearance. The discussion centers on artificial intelligence (AI), misinformation, cybersecurity, and the broader ethical implications of technology.

Key Topics Discussed

  1. The Impact of Meta Removing Fact-Checking on Facebook
    • Meta’s decision to remove fact-checking raises concerns about misinformation spreading unchecked.
    • Facebook previously introduced fact-checking after the Cambridge Analytica scandal, which involved unauthorized access to 87 million users' data for political influence.
    • The potential return of large-scale misinformation campaigns is a key concern.
  2. The Role of AI in Social Media & Misinformation
    • AI is at the core of social media operations, influencing what users see.
    • The removal of moderation could allow algorithms to amplify harmful content.
    • AI's ability to manipulate user sentiment and engagement raises ethical issues.
  3. The Future of Social Media & Trust in Tech
    • Increasing numbers of people are leaving platforms like Facebook and X (Twitter) due to toxicity and lack of control.
    • Without trust and proper regulation, platforms may lose advertising revenue, making unchecked AI a potential commercial risk.
  4. The Risks of AI-Generated News & Fake Information
    • Example: Apple's AI-generated news summaries reported false stories, further eroding public trust in media.
    • AI models can "hallucinate" or generate false information based on incorrect or biased datasets.
    • Responsibility should lie with companies to ensure accurate AI outputs.
  5. AI Regulation and the Business Risk Factor
    • Lack of AI regulation in the UK is causing uncertainty for businesses.
    • The UK has voluntary AI guidelines but no strict legal framework.
    • Businesses are hesitant to adopt AI fully due to liability risks.
  6. AI in Business vs. Public Use
    • AI adoption in regulated industries (healthcare, finance, law) requires human oversight.
    • The Air Canada chatbot case highlights liability issues when AI provides misleading advice.
    • AI should be a tool for enhancement, not a replacement for human decision-making.
  7. The Online Safety Bill & Protecting Children
    • The UK’s Online Safety Bill is 10 years overdue and lacks enforceability.
    • AI-driven social media poses risks to children, exposing them to harmful content.
    • Ethical concerns arise around uncontrolled AI algorithms influencing young users.
  8. The Future of AI and Its Ethical Challenges
    • AI can be beneficial if used correctly, but over-reliance on it can be dangerous.
    • Businesses and governments must establish clear accountability for AI decisions.
    • Pauline argues that AI is just maths, and humans must critically assess its outputs.

Final Thoughts

Pauline Norstrom emphasizes that AI should be seen as a tool to enhance intelligence rather than replace human expertise. The conversation underscores the need for critical thinking, regulation, and ethical AI deployment to prevent harm while maximizing AI's benefits.

Security Circle ⭕️ is an IFPOD production for IFPO, the International Foundation for Protection Officers.

If you enjoy the Security Circle podcast, please like, share and comment, or even better, leave us a fab review. We can be found on all podcast platforms. Be sure to subscribe. The Security Circle is released every Thursday. We love Thursdays.

Yoyo:

Hi, this is Yolanda. Welcome, welcome to the Security Circle podcast. IFPO is the International Foundation for Protection Officers, and we are dedicated to providing meaningful education, information, certification, and superb podcasting for all levels of security personnel, and to making a positive difference where we can to our members' mental health and well-being. Our listeners are global. They are the decision makers of today and tomorrow, and I want to thank you, wherever you are around the world, whatever you're doing. Thank you for being a part of the Security Circle. If you love podcasts, we're on all podcast platforms, and don't forget to subscribe, or even better, just like, comment and share the LinkedIn post. Thank you for your company. Well, a return doesn't happen very often; only very special people come back to the Security Circle podcast a second time around. Pauline Norstrom, thank you so much for visiting us a second time. How are you doing?

Pauline:

I must say it's an absolute pleasure to be back, and I'm delighted. It's always a good sign when I'm asked to come back, to be honest. This industry, the security industry, is absolutely fascinating. It is a prolific user of AI, so I am really excited about this conversation.

Yoyo:

Oh, well, listen, it's an absolute pleasure. I got in touch with you on the 7th of January. There were a few alarming news stories, and one of them, let's just go straight in, was the story around Meta removing fact-checking from Facebook. I messaged you and I was like, what the hell's going on, and why would they do this? And you had a response straight away, didn't you?

Pauline:

Well, yes, and I think I may have said that they've lost control, that it's maybe an admission that it's whack-a-mole, that actually, you know, disinformation and misinformation are exceptionally difficult to control on a mass scale. The independent fact-checkers were ones that they commissioned, so this isn't an outside force doing this; Facebook brought them in, pretty much after Cambridge Analytica. So we can talk about that if we've got time, but you could argue that this is an admission of loss of control, and that opens up quite an interesting debate, which I hope we can touch on today.

Yoyo:

Cambridge Analytica happened; we know that Facebook was fined a lot of money for basically allowing users' data to be used by an external company without their permission. And it was used, wasn't it, for lots of surreptitious reasons, including, you know, persuading people to vote in a particular way, and judging and exploiting the sentiment of users. I think that would be a very good kind of summary. It's a long time since I read that case.

Pauline:

Yeah, and you know what? I also had to dust down my knowledge on this because it's kind of dropped out of the public dialogue, hasn't it? But when you go back and have a look, it was 87 million Facebook users, and the issue arose as a result of poor controls over data access, which enabled Cambridge Analytica to access all the friends of those who were completing surveys, and that propagated out exponentially. So that resulted in them having all that data, and they then profiled it. They segmented people off into different personality types and geolocations, and then targeted them with messages that were intentionally designed to manipulate their view about an election or a particular political event that may have been going on in their region. That was the crux of it. There were actually fines. The biggest fine came from the Federal Trade Commission, the FTC in America, which was five billion dollars, and the UK ICO fined Facebook, and Cambridge Analytica, half a million pounds, which was the maximum at the time. That was the root of it. It was poor control over the data, and then once that data had been harvested, it was used in a way that was intentionally designed to manipulate.

Yoyo:

In fact, it was around 2013 to 2016, wasn't it, the period in question. Of course, by that time, some of the more serious things we're going to talk about, like AI, weren't even accessible on the market. The reason I wanted to draw your attention to this is because this was a big slap for tech, to say: watch yourselves, keep yourselves in check, you cannot get away with this kind of blatant abuse of people's trust and confidence, right? I'm wondering, Pauline, if that's the last one we'll see.

Pauline:

No, I don't think it's the last one we'll see. I do think that the language, you mentioned AI just now, so accessible generative AI, is something that we're seeing. However, AI has been around for a very long time and in fact has been at the heart of Facebook and other social media platforms for 20 years plus. And of course we know this from security uses of AI; I know this because I was writing guidance 15 years ago for the use of computer vision, the sort of AI video analytics, and best practice to avoid the pitfalls and false activations. But the issue is really about data control, and Cambridge Analytica was really about that. And then the mission of Cambridge Analytica was added on top of that as a layer which compounded the problem. But in reality, if the data controls aren't in place, then I would suggest it is likely to happen again, if it's not happening continually.

Yoyo:

I think the common thread of our conversation today is going to be around ethics and efficacy. So where does Facebook sit? We have to consider that all of this is available to children, under 18s, under 16s, and even under 14s. And so they have access to this adult world, an adult world that even adults are really struggling to navigate. And Facebook has always been that safe go-to: that's where our friends are, that's where we can share our personal memories and our stories and holidays and photos and things like that. But it isn't as safe, is it? Especially if they're going to continue to remove the fact-checking functionality, which I always felt was quite important with social media, especially social media that children are using.

Pauline:

Yeah, I think this is where we're trying to figure out where the boundary between social responsibility and the actions of a commercial entity actually sits. I think we have to get some reality into this dialogue. First and foremost, Facebook is a business, and it was set up to provide services. Some of those services include data, and their mission is to make money, and making money is important for businesses because it creates employment and prosperity for the people who work for the company and for those who use the platform for their businesses. But in terms of what it has become, the idea of having a trusted space where adults and young people can meet online has been lost somewhat. The argument here, and I would say this, is that Facebook could be seen as just a supercharged dark web, actually, because what was behind a very difficult-to-access set of web pages, which aren't generally available unless you have the tools and the knowledge to access them, has now filtered through into Facebook. And due to manipulation, disinformation and misinformation, young people can be confused about who's trying to connect with them, someone posing as a young person and trying to infiltrate their network. It's not just that; it's the algorithm-based system, which serves up more content. There are a whole host of issues in there. So it's about protecting young people from predators, to put it simply, but also protecting young people from the algorithms doing their job, which would otherwise be promoting services which might be of interest, which is a perfectly valid use of an algorithm. But if they're used in a way that's not controlled, so the algorithm is out of control, shall we say, then that's when children, young people, vulnerable adults, and not-so-vulnerable adults are scammed and manipulated without their knowledge.

Yoyo:

I have so many people now, people that we know, Pauline, just saying they're coming off Twitter, X, now, fed up with it. I think it was Godfrey, Godfrey Hendricks, he put a notification on Twitter last night and just said, listen, I am not going to participate in this platform anymore. It wasn't like, you know how people always say, "no need to let us know you're leaving", that kind of thing, which is really mean. So he preempted that and said, look, I'm still here, but I won't be participating. I've had so many people say they don't use Facebook anymore, and it's such a shame. I had a chat with my pal the other day, who I was in school with, I'm surprised we can even remember each other, we're that old. We were just talking about how you can't even find your friends on Facebook. You scroll, scroll, scroll through shit, shit, shit, shit, shit, you know, just shit that you keep getting sent. And you find yourself saying, I just wanted to check in on what my mates were doing. I know that I've got a friend right now taking a cruise, she's just flown into Miami. I've got other friends celebrating birthdays, but it's few and far between, and that platform's heart and essence used to be connecting communities, connecting people, connecting friends. So with people feeling an apathy around this platform, and the disinterest, and you can't help but feel that X has gone to shit as well, to be honest with you, without nice people there it's just a pile of rubbish, you can't help but think: what future does social media have without the efficacy, without the values, without the control?

Pauline:

A lot of people are leaving it. But equally, if you look at how people convene, the typical town hall meetings, they have a convener, and the convener stimulates conversation and moderates that conversation. That's how people come together in groups, face to face, and companies are run that way. You have a leader, you have a group of leaders who set the behaviour and encourage people to contribute in a positive way. The argument is that the content moderators and the fact-checkers are those unofficial leaders sitting in there saying, hey, that's really not a very nice thing to say, it's going to stimulate negative feelings. If that's not there, then it will all come out; Pandora's box is probably a good way of looking at it. You could argue that all those evils and hate have to come out, and we hope there's hope left in the middle that we can all hang on to and hold on to, but unfortunately hate can be easily stirred up, and without any control over that, plus automation that propagates negative speech, it could all go really quite awfully wrong. It's probably early days to be able to make judgments, because there isn't enough empirical evidence to actually compare. Twitter, you know, X, sorry, there will be evidence on X, but for Facebook it's early days. Arguably, it could be commercial suicide for them, because organizations that really want to reach the audience in a way that is productive and positive may vote with their feet and remove their advertising revenue, because their ads are appearing next to awful content, and they don't want that. Reputationally, it's bad. So actually, it's not just that people are leaving; the source of funding may leave as well. It's a big risk to be taking at this time, but arguably that's what big tech does: let's do something dramatic and see what happens. They can always put it back. So we're in that strange period whereby, hey, it's a free-for-all, let's see what happens.

Yoyo:

Do you know, as you were talking, I was forming an analogy in my head, and that's one of my languages, actually; that's how I communicate and understand things. I was imagining this playground, because I've learned of late, as have you probably, that stimulating hate in humans helps to release endorphins and things; there are things that go on chemically in the brain, like we're hardwired to have this rush of stuff, which I've forgotten the name of. But if you think about this analogy of the playground being social media, there's no doubt about it: when bad stuff goes on in the playground, everyone stops what they're doing and looks. And that's the analogy I'm drawing with social media, in the sense that you can't help but notice it. You can't help but notice somebody's being cancelled, or what Kanye West's wife didn't wear on the catwalk at the Grammys, and all this kind of stuff; you get to see it, and I'm one of these people that never forgets what I see either. But if you decide to remove yourself from that playground, there is the fear of missing out. FOMO is a genuine thing. You feel like you're not included, you ostracize yourself, it can lead to loneliness. But there are lots more people now who are more in tune with their introversion, who are quite happy to sit outside and hear the noise going on in the playground and be like, I'm just quite happy out here listening to the birds. But for how long? That's what I was forming in my head while you were talking there, Pauline.

Pauline:

People will leave, people will leave, and they will experience the world again. There is an argument for that, because look at the problems that are manifesting in our young people, and also in adults, with regard to mental health, and maybe an inability to be mindful and just notice the world around them, which is very important. That's when we go out and get fresh air and look at the blue sky and walk in the mountains, or do your hobby, whatever it may be, something other than looking at a screen. So yes, there is an element of that, but it's all about balance really. Obsessive scrolling: just by using those words, it suggests there's something wrong with it, that it's unhealthy. Obsessive scrolling, what is that actually doing? Arguably it's like just watching TV incessantly, sitting on the seat; you take nothing away. It might be relaxing, actually, but in moderation.

Yoyo:

I've got like a five count, in the sense that if I'm scrolling and there are five things I go past and I'm like, what have you sent me, what? It's about having a consciousness about it. I love all the funny dog videos; there's a guy that does doggy voiceovers, he's hilarious. But then after that I get sent a bit of shit on my scroll. I've said shit a lot in this episode, and I've got a feeling I'm gonna say it a few more times in the next subject we're going on to. After the funny dog and funny cat memes and videos, and a few things around Greenland, America and Trump, which I find humorous, it then gets a bit shitty. And so I'm like, oh, okay, I'm out, switch off. I think we have to discipline ourselves: put the gate down, stop it coming in.

Pauline:

If you can imagine, think of the reason why sugar and sugar content in food has been regulated: because sugar is nice. It is addictive. It may give you the endorphins, not dissimilar to doom scrolling, as I call it.

Yoyo:

The other thing I put in the message to you on the 7th of January was that it was on the BBC News that Apple had suspended a new artificial intelligence feature that drew criticism and complaints for making repeated mistakes in its summaries of news headlines. One of the examples, you'll remember the story about Luigi Mangione, the man accused of killing the UnitedHealthcare CEO, Brian Thompson: this AI-generated news summary reported falsely that he'd shot himself. And there were a number of inaccurate stories. What was the other one? Look, I won't go through them, you can look it up, but it reported on two different occasions that the CEO shooting suspect had shot himself in an angry outburst outside the courts. The upshot of this is that not only has it misinformed the public, it's got the potential to further damage trust in the news media, and a lot of very responsible people in this AI space, of which I include you, are basically saying it's imperative now that we don't rush this kind of content out.

Pauline:

So don't rush the content out. I think that is reflective of how business looks at the use of AI in reality. If we've got time, I can come on to the gap between business use of AI and public and consumer use of AI; they are very different worlds, because business has to take responsibility for the decisions. The boards take the risk and live with the consequences. In a lot of ways it's quite good that this happened, because it's actually highlighting that AI doesn't know what it's saying. So you could ask the question, where did it get its data from? In order to create fake stories, it has to hook into something it can then infer from. So it could be argued, and I don't know for sure, that the reason why it produced the fake story is that there was some fake news out there reporting that, and it just picked up on it.

Yoyo:

But also the AI developers have always said that the tech has a tendency to hallucinate, to make things up, and the chatbots all carry disclaimers saying the information they provide should be double-checked. I'm like, no, why don't you take the responsibility to put out correct information?

Pauline:

Yeah, they can't. It's the nature of the technology. So we're into transformers and GPTs; you alluded to it. That is what AI is seen as now, but it's not really new, it's just accessible. In reality, it's just a bunch of neural networks stuck together, and some of the tech in the GPTs and the transformers dates back to the 80s. Which is fine, they're good algorithms that do the job, but they've been put together in a way that enables them to produce contextual responses that seem very real. And in reality, this is one of the challenges that businesses are having in terms of navigating this: how do you rely on the data that is produced by a public, widely accessible model, when actually its responses are determined by the quality of the prompt that is given to it? In terms of who is doing something about this, there are humans sitting behind pretty much all of the main models, checking and adjusting the weights of the model depending on the kind of responses they're producing. But the developer is not in control of the questions that the GPT is being asked, and that is the reason why they can't guarantee the outputs. And that's quite a big problem for business.
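
[Editor's note: to make Pauline's point concrete, here is a minimal, purely hypothetical sketch of the maths behind a single "next word" step in a language model. The vocabulary, scores and prompt are invented for illustration and are not drawn from any real model; real systems work over tens of thousands of tokens and billions of learned weights, but the principle of turning scores into probabilities is the same.]

```python
# Toy illustration of one "next word" step in a language model.
# The vocabulary and scores below are invented purely for illustration.
import math

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores the model might assign to candidate next words
# after a prompt such as "The company announced ..."
vocab = ["results", "layoffs", "nothing", "bankruptcy"]
logits = [2.3, 1.1, 0.2, -0.8]

probabilities = softmax(logits)
for word, p in sorted(zip(vocab, probabilities), key=lambda pair: -pair[1]):
    print(f"{word}: {p:.2f}")

# The model emits the most probable (or a sampled) word and moves on.
# Nothing in this calculation knows whether the resulting sentence is true.
```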

Yoyo:

I feel like another analogy coming out here. We've all seen the output of a very badly or poorly or non-trained puppy dog, and we know, when that puppy goes into adulthood, how bad it can be without that training, those guidelines for it to operate in. You see where I'm going here. So I'm thinking, in terms of what Apple did, if we use that as an example, I feel like they just unleashed an untrained puppy on the masses and then expected everybody else to go, oh, okay, it's acting a bit off, surely I should check this. And I'm thinking, crikey. Where is the burden of responsibility? Screw social responsibility; where is the burden of responsibility on the creator, the commercial enterprise, to make sure it's being used in the appropriate way?

Pauline:

Well, this is where, you know, I think the world of business and governance meets the developer of a GPT, and where is that line? Some of the providers, by embedding their tools into business tools, to a degree take some responsibility for that, but the use case is more tightly contained. Narrow use cases are easier to control; that cannot be a surprise to anyone, and in fact it does appear to be a common approach, speaking very generically. So this uncontrolled puppy: if you put a puppy in a pen, then it will play in the pen, won't it? It won't destroy everything around it. If you let it out into your house, it would probably have a go at eating your sofa, because no one told it, no, you don't do that. So the puppy needs to be contained and then trained, with the right policies in place. We talk about setting AI policy, and in fact our mission as a business is to get to the boards now, because the boards are saying, no, that's not coming into our business, and that's actually creating a bit of a blocker in terms of the adoption of well-trained puppies as opposed to completely uncontrolled puppies. In terms of who's responsible, there have been various attempts, and I would cite the California SB 1047 legislation, now vetoed, which was asking the developers of the GPTs to take responsibility and be accountable for harmful outputs. There was a lot of to-and-froing there, and it was Governor Newsom who eventually vetoed the bill. I think it was a bit of playing dare with big tech, in saying, hey, we can rein you in and we can make you accountable for everything. In reality, if that bill had gone through, then developers would have had to embed a kind of kill switch in their tech as well, to prevent propagation of attacks on infrastructure, for example on power or the internet; if all of that was switched off, we would have a problem, wouldn't we? I think this is being worked out, but in the meantime, until businesses can get some certainty, some legal certainty, around who is responsible, who is accountable, who is liable, and until the insurers are clear about whether they should, can or would insure some of this, there's going to be a lot of reticence in the business world, unless the AI is used in a very narrow context, which we see a lot of in security AI. So: very clearly defined contexts.

Yoyo:

I'd like to talk to you about the media's influence here on assistive technology. I learned that phrase recently, assistive technology, which you'll know about, and I thought, oh, I have to look that up: it's basically technology that assists humans. And you know when you scroll through stuff on social media and you think, oh, this is a great quote, I'll save it? This is one that I didn't, and I can never remember who said it, but he was a very influential guy in tech. He basically said that AI won't replace humans, but humans that are using AI will replace other humans that are not using AI. And I thought, actually, that's quite profound, so I will invite your response on that. But I've taken myself down such an interesting route that I've diverted from where I was going in the first place. So let me give you that to begin with, and then I'll try and remember what it was I was leading to.

Pauline:

Yeah, that sentiment is actually coming from a number of quarters, including the academic community. There is a sense that, well, AI is just maths, alright? Of course, when it's embodied in physical robots and so on, it's interacting with the world in a different way, but we mustn't forget this is just maths, and we are in control of it. And alluding to your point here: if you're smart, if you learn how to use it to your advantage, you will be a smarter human, but you have to know whether it's giving you good data or not. And that requires critical thinking skills; it requires expertise and knowledge in that field. So if you're really good at what you do, and you have years of experience that wasn't learned from ChatGPT, you'll be able to leverage these tools in a way other people can't, and I would agree that a divide will grow there. But what it will also do is allow people who are very smart, who may not have access due to social class, even, and we have a class system in the UK, to gain access to circles they wouldn't have got into otherwise. And I do believe that is a very, very positive element of the GPTs, of gen AI. Equally in education, whereby I believe we should see every child in the state school system receiving one-to-one tutoring to the quality that children in the private education system receive, so that they have access to the opportunity that everybody else does as a result of that intense coaching and tutoring. And I think you'll see a much more balanced society if that happens.

Yoyo:

Hell to the YES on that one. But let's look at the media influence. The reason I mentioned this is because this very morning the media announced, in what felt like two sentences, that AI is going to be used in the NHS for reviewing scans for breast cancer. Now, nobody can dispute that using the best available technology to enhance healthcare is a good move. But, and this isn't a dig at any particular media outlet, they all do the same, I'm like, hang on a minute, why are they not saying: look, whilst we appreciate that a lot of people have concerns about where AI fits in the National Health Service, it would undergo rigorous tests, and there would always be a human checking all of the AI's work until there's a symbiotic understanding that the AI is only ever wrong less than one per cent of the time? We're not getting this reassurance. We're just getting the scary lines where people are going, oh God, what if it's wrong? I don't like that. I am a critical thinker, but I also know, Pauline, that there are a lot of people around who aren't critical thinkers, and we have to think critically for them because they can't.

Pauline:

I think that's a really crucial point that you've raised there, and I have been talking about critical thinking quite a lot recently in terms of AI literacy, and how you actually train people to challenge the machine. Because in many businesses, and I will get to the point here, the software tool that is used is "the way we do things around here"; it's the way the business operates. So to then train people to critically think and challenge the business process is the antithesis of what businesses generally want people to do with their processes. There are some challenges in there. And yes, critical thinking is something that isn't taught in school in the UK; it's not on the national curriculum, and it should be. This starts young, and especially with the amount of disinformation and misinformation that is on social media, how do young people learn these skills? The schools need to take some responsibility for that. In terms of critical thinking for others, of course, keep challenging, keep asking. To be honest, before ChatGPT, one of my stock phrases to people in business, when they would give me some totally unqualified statement, was: is that a fact? And that would just stimulate, oh, I don't actually know. So is it your opinion, and what is it based on? Just say it's your opinion, because then we know whether we're dealing with facts or not. There's a lot of feeling and sentiment in business: connections that lead to looking for facts, but also looking for facts to confirm bias, and that's a whole different topic; we could do a whole podcast on that. But in terms of the fear, the fear-mongering is the way of putting it, I think there's good reason for it, this fear-mongering that occurs with regard to machines taking over and what percentage of accuracy we're looking at in healthcare screening. All this is doing is actually helping humans who have a massive workload. And what I'm alluding to here is that it is the human, the medical professional, that is responsible. In terms of tracing causation through to the decision that led to whatever harm was caused, it always traces back to a human, not the machine, and various attempts to shift this balance have failed in the court system. In the well-known case of Air Canada, although it's not a medical application, Air Canada put a chatbot on its website; somebody tried to access a special benefit given to those who are travelling on bereavement grounds, and the chatbot gave wrong advice, which resulted in additional cost for the traveller. Air Canada argued that the chatbot made the decision and it wasn't their responsibility, which is very interesting. We talked about it a lot in terms of, is this the tipping point in a legal case whereby a chatbot is held liable? In reality, the human is the cause and it is the human that is insured, not the AI. So certainly in regulated professions, you will not see AI taking over those jobs, and people should be reassured by that. Where you've got regulations, so accounting, the medical profession, the legal profession, the person, the human, is always responsible, and there is no sign of that changing.

Yoyo:

And that should be something that gets permeated throughout industry, Pauline. It goes back again to my point, doesn't it? Who is responsible? They can't turn around and say, oh, you know, AI does that, it hallucinates from time to time, or it will only regurgitate information it's been told. I'm like, no, you have to have some accountability for what it does and what it communicates. I feel like I need a goddammit at the end.

Pauline:

Yeah, and you know, this is a business conversation, because it isn't an isolated conversation; business leaders are having it continually, in terms of how we actually guarantee the outputs such that we can be confident that we've put some technology in our business that is not going to attract liability, because ultimately that washes through into the board, should be on the risk register, and could impact shareholders and stakeholders of that business for various reasons. So it is a consideration that has not been missed by boards, which is why we're focusing on boards, because we have recognized a gap in terms of trust in the technology and adoption. And as AI adoption in the UK is considered to be a route to growth, that's not going to happen unless boards make decisions to do it. So boards have to take responsibility, and they are choosing not to at this time.

Yoyo:

We also touched a little bit on the Online Safety Bill. Personally, I think the Online Safety Bill is 10 years too late, which is a shame, because it would be lovely to see how it delivered after being 10 years old rather than just a couple of years old. It doesn't really touch it; it hasn't got a lot of teeth. And we know that AI regulation is non-existent. Let me just say, when I found out on the 7th of January that AI regulation is non-existent, I just thought, oh God, we've got loads of puppies everywhere that haven't been trained, hundreds of them, and no one is saying, excuse me, can somebody just take responsibility for these out-of-control puppy dogs? What's going to happen in the end? Are they going to get a bullet in the head, metaphorically, because that's the only way to kill the problem off?

Pauline:

Yeah. Well, businesses are doing that, and it's not a criticism. They're making business decisions based on risk, and although they would like to adopt, and there are lots of proofs of concept happening, they're not going into production across businesses because of the difficulty in assessing risk. On AI regulation, it is perplexing to hear talk of the government saying we're going to reduce the barriers that regulation puts in place. There aren't any barriers, so let's be clear about that: there is no AI regulation in the UK, other than protection of privacy, and that's GDPR. And there is a data access bill going through at the moment which is attempting to change some of the rules with regard to the use of research data, repurposing data, and data scraping on the internet, so if you're interested in that, I would highly recommend that you go and track it. But AI regulation in the UK, you could argue, has been a wait-and-see process, which is valid, and in fact it's not without an attempt to put in place some voluntary measures. The UK government adopted some of the core principles from the OECD, the intergovernmental organization which advises governments with regard to policy. So it's really good that the UK adopted principles such as transparency, explainability and so on, but they have no teeth whatsoever, because they're entirely voluntary, and at this time businesses are making the decision that they don't trust them, because there is no legal certainty attached to those principles, and that's creating inertia. It's a case where lack of regulation creates inertia; it's not stopping anything. Then you look at the flip side, the EU approach to this, which is much miscommunicated and misunderstood. It's based largely on the embedding of AI in products, and looking at the effects of products and the use of AI in systems, public systems, and so on. This actually affects about 10 to 15 per cent of AIs, and there is very little in the way of barriers to providers producing and selling AIs in Europe that don't cause harm to fundamental rights. So that's the barrier in Europe. You could argue, who would produce an AI that would intentionally harm fundamental rights and then claim regulation was in the way of selling more of it? I'll just leave that with you, a bit of a rhetorical point, but the lack of regulation has been frustrating. In specialized areas, the security domain is one where the lack of regulation has been problematic for some of the more rights-impacting AIs, like facial recognition software, which is hugely useful and exceptionally beneficial in terms of solving crime, post-event as much as live. But in terms of some guardrails around that, the government didn't want to do it, so the industry did it and produced guidance and a standard, and that standard, BS 9347, has the OECD principles embodied in it. It covers training, it covers fairness, transparency, governance. So the industry has actually done that, but it shouldn't be that way. There should be some overarching rules in place, enshrined in statute; it doesn't have to be complicated. Having done that, it could put the principles on a statutory footing, so then the regulators can grip onto those and actually put some measures and enforcement in place. But in reality, in this country, enforcement is retrospective. It takes a very long time for any action to occur.
In the Serco case, which was the use of facial recognition software for a time and attendance system, it took about five years for the ICO to say, hey, what are you actually doing there? And they said, well, you can't actually get valid consent from your employees because of the power differential, the power gradient, with those who are contracted, so they hadn't been given enough choice. This is all in the public domain if you want to go and read about it, and the company had to say, look, we'll just not use it. But it took five years.

Yoyo:

It feels like the ICO don't have enough teeth sometimes, like the Environment Agency. You know, fish have actually got to be floating on the surface of a murky-looking, dodgy bit of water somewhere before they get involved and go, oh, maybe there's something in here that shouldn't be in here. I just think, oh, we're all just a little bit slow. Or is it just me? Am I a bit whiny today?

Pauline:

No, not at all. You're expressing what a lot of people are thinking. And yes, you are different, because you're expressing it; people are thinking it and not necessarily acting on it or getting it out into the public dialogue, which is why it's really good that we are having this conversation. But the UK is very retrospective. It mirrors the legal system, which is also retrospective: there may be a statute, but it's case law that sets the rules, and that is how we are structured here. Whereas in Europe they're much more proactive, but equally there were cases that fuelled the need to do something, like the Dutch government issue with automated racial profiling in the child welfare system. That fuelled the prohibitions which have just come into application, this week actually, including social profiling, which is prohibited entirely, and you could argue that roots directly back to the Dutch government issue. So they've acted pretty quickly. It could take a UK legal case 10 or 15 years to go through and actually create the precedent that the rest of the profession relies on.

Yoyo:

So, a bit of doom and gloom. In that case, even with good intention, AI can still cause harm because of its ability, again, to not be factual, and that can have a number of different consequences where we don't have regulation to support it. Even with the best intentions and the best commercial enterprise, no business wants to do something that's going to deliberately fail. But what about when we haven't even got those guardrails in place? What happens when someone does something with incredibly bad intention, something further down the line that we haven't even got our eyes on yet? We have no laws, no legislation, no guardrails to even mitigate or risk-assess something we don't even know can come our way. The unknowns are a bit scary, aren't they?

Pauline:

I think the answer is in the phrase, in the term, fear of the unknown, and to an extent there is good reason for that. But it's important to remember: AI is just maths. Remember that. You're not talking to a person, you're talking to a machine, and it's giving its best guess. In reality, when you talk to people, they often give you their best guess too, but they'll probably tell you "I think". The danger really, which we're not talking about enough, is over-reliance on and trust in the machine. We've already seen that in the Post Office scandal, where it wasn't even AI; the legal system allowed it, in that evidence from a computer system was not challenged in court, but that's changing. In terms of people trusting the outputs, people are a bit conditioned to do that. So what we're saying is: adopt it, use it. I'm a great advocate of gen AI; all of the models, if used in the right way, are incredibly powerful, but they add to intelligence that already exists. They are not going to make stupid people intelligent, and I say that in a very blunt way just to stress the point. You've got to have the intelligence first and use AI to enhance it, and that's where you get your supercharged, super-intelligent people who are streets ahead. So what I want to really convey here is: try it out, but be aware that you're talking to maths. Keep that in your mind and don't be fooled; this isn't a person you're talking to, and it's only giving you a probability.
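
[Editor's note: a small, purely hypothetical sketch of the "best guess" idea. The candidate answers and weights below are invented for illustration; the point is only that a confident-sounding answer is a weighted draw from a probability distribution, so the same question can come back differently on different runs.]

```python
# Purely illustrative: why the same prompt can yield different answers.
# The candidate answers and their weights are invented for this example.
import random

candidates = ["Yes, that is correct.", "No, that is not the case.", "It is unclear."]
weights = [0.55, 0.30, 0.15]  # the model's "best guess" distribution

for run in range(1, 4):
    answer = random.choices(candidates, weights=weights, k=1)[0]
    print(f"Run {run}: {answer}")

# Each answer reads as a confident statement, but it is only a weighted
# draw from a probability distribution, not a checked fact.
```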

Yoyo:

So I would say to Trekkies: think about AI as your Spock. Even in The Voyage Home, Spock wasn't himself, and his best guess was still better than most humans' guesses, put it that way. But look, to end on a funny note, I did send these to you: after it came out that Meta were going to remove their fact-checking, and we circle back to the beginning here, it enraged a lot of very sensible people, who then decided to post complete untruths about Mark Zuckerberg all over social media, for comedy. They posted that Mark Zuckerberg was a recipient of the world's first rat penis transplant. They also reported that Zuckerberg died because he fell out of a window while installing a Russian version of Windows on his Mac. There were thousands of these, just complete mistruths, proving the point that actually anyone can say anything and be unchallenged. And I thought, humans are great sometimes.

Pauline:

Yes, and I think that opened the door to it. But in reality, let's remember Facebook is not providing a public service. I think people get confused about this, and there is a study actually circulating which examines the psychology of why people who saw a fire did not report it to the emergency services and instead posted it on Facebook. Facebook does not provide emergency services, so I think we've got confused about what it's there for. But in reality, if there's no fact-checking, its value will drop, and the value of the data will drop, and that will reduce the confidence of the advertisers, because they cannot be sure that the datasets they're getting access to are targeting the people they want to target.

Yoyo:

Pauline, what can I say? Pauline Norstrom, CEO of Anekanta Consulting, your go-to peeps for anything AI-related. Pauline is awesome, and if you ever get the opportunity to see her live, she is breathtaking. Thank you so much for joining us again on the Security Circle. It's been a pleasure.

Pauline:

Always a pleasure. Lovely conversation, and I hope it's been useful to the audience. Please don't be scared of AI. Challenge it, but remember what we've talked about today: that it is just maths, it's a machine. If you use it as a tool to enhance your intelligence, you'll do okay, but if you rely on it on the basis of zero knowledge, then it will make you look rather stupid. It's been lovely, an absolute pleasure and a delight to come back, and at some point I'd love to be back again and see how things unfold with this story.

Yoyo:

You got it. You smashed it. Thanks, Pauline.