The Security Circle
An IFPOD production for IFPO: the very first security podcast, called Security Circle. IFPO is the International Foundation for Protection Officers, an international security membership body that supports front-line security professionals with learning and development, and mental health and wellbeing initiatives.
EP 034 Pauline Norstrom 'It's Only a Matter Of Time Before Deep Fake Technology is Used To Send Someone To Prison'
Please note that the AI landscape is moving very quickly, and information discussed in this podcast may have changed since its recording.
For further information about standards discussed in the podcast:
BS 9347, to be released late 2023.
You can find out more at:
British Standards Institution, UK National Standards Body: https://www.bsigroup.com/en-GB/about-bsi/uk-national-standards-body/
AI Standards Hub: https://www.aistandardshub.org/
British Security Industry Association: https://www.bsia.co.uk/ai
BIO
"Pauline Norstrom is the Founder of AI strategic advisory firm Anekanta Consulting where her team's focus is on research, and the ethical, trustworthy application of high-risk AI across a number of sectors from civil security to transportation and smart cities.
She has been building her business in the nascent high-risk AI market since early 2020, having gained over 20 years' experience in senior statutory board positions in a number of international businesses specialising in video surveillance and analytics for security and safety purposes, applied cross-sector. She is also a former overall Chair of the BSIA, which represents all aspects of the UK security industry's providers, from technology to people, and is now an Honorary Member appointed by the board.
She is a champion of the ethical and legal application of digital technology, having played a leading role in the BSIA's guide to automated facial recognition software. This will become a new British Standard for the ethical use of facial recognition technology in video surveillance systems, which operationalises AI trustworthiness from the developer through the supply chain to the user. Pauline also provided input into CoESS's position on high-risk AI and the EU AI Act, which was acknowledged by the European Council and Parliament. She is an advisor on AI ethics and trustworthiness to the Institute of Directors Expert Advisory Group for Science, Innovation and Technology, Digital Catapult and Archangel Imaging. Continuing a long relationship with the North American market, formerly as an officer of a USA company, Pauline now sits on the Security Industry Association's cybersecurity advisory board and data privacy board, providing input and guidance on AI, privacy and cyber good practice.
She presented evidence to the All-Party Parliamentary Group for AI on the subject of the safe use of facial recognition software for national security in 2021, and again in
Yoyo: Hi, this is Yolanda. Welcome, welcome to the Security Circle podcast. IFPO is the International Foundation for Protection Officers, and we're dedicated to providing meaningful education and certification for all levels of security personnel, and to making a positive difference to our members' mental health and wellbeing. With me today is a very special lady, Pauline Norstrom. We actually met at an ASIS Spring Seminar here in 2023, and I was very taken with her presentation. She is the CEO of Anekanta Consulting, an international strategic research and advisory company specialised in de-risking high-risk AI applications in security, critical infrastructure, retail and manufacturing. She also has specialisms in video analytics and surveillance. She's the former Chair of, and strategic advisor to, the British Security Industry Association on AI and automated facial recognition, and one of the lead authors of their ethical and legal guide, now a British Standard, BS ISO/IEC 22989:2022. Welcome to the Security Circle. Thank you for joining us today, Pauline.
Pauline: Thank you very much for having me. It's an absolute pleasure, and I must say it was an absolute pleasure delivering my presentation at the ASIS Spring Seminar as well. Very, very good questions from the audience, and I was very impressed with how engaged everyone was. I was quite worried that I was gonna lose people. Talking tech: when you're in the tech world, you talk tech constantly and don't always stop to allow people to sort of catch up with what you just said. So, a delightful response. Lovely to see you.
Yoyo: No, it's always good when people's eyes aren't glazing over when you talk tech stuff. I know what you mean there. But look, let's be very honest, you were talking about a very, very interesting subject, AI, whether it's ChatGPT or deepfake videos or anything to do with risks. It is very much a topical subject at the moment, hence why we've got you here. But look, that's not where it started for you, is it? Where did it start for you?
Pauline: Probably like lots of little girls, I wanted to be a vet, because I absolutely love animals. I prefer animals to people. So I was very set on that, and then I realised I'd have to get all As in science A-levels to do that, and I was starting to lose interest at that point. So I went from a vet to wanting to be a forest ranger, 'cause I thought the planet probably needed saving, and our forests, even at that time, had a few problems. And then it evolved again into wanting to become a barrister, actually. That came from having interacted with family friends and thinking, oh gosh, this is such a cool job, I really, really would like to do that. It also played to my sense of justice; I wanted to fix the world. There's a fixing sort of theme all the way through this: save animals, fix the planet and do justice. I didn't actually become a barrister at that time. I moved into entrepreneurship and technology and found myself in all sorts of weird and multiple places, including sitting on the team that developed the first e-commerce site in the nineties, and I nearly went into coding at that time, 'cause I was working with Oracle, who wrote the database from scratch, and I was fascinated by how all this came together. There was no Google, we barely had networks, and the idea of satellite television and shopping channels and e-commerce was really ahead of its time. So that kind of got me into tech, which then evolved into a role in a tech company in the security industry. And that was 20, nearly 25 years ago. And that evolved; I just got totally immersed in what this industry does, always on the technology side. But even if you are developing and selling technology into the space, you have to deeply understand what it's doing. What is the problem the technology is trying to solve? This evolved over many years. I made it to board roles and became the Chair of the BSIA, which is a very broad, wide-reaching, non-exec position.
It's pro bono, that kind of work. But what you learn from that sort of opportunity is corporate governance; you're interacting with board members in the industry on a continual basis. It also gave me a very broad understanding of not just the tech side but the people side of the industry as well, and all of the issues associated with training and value and risk. So this is sort of 20 years of experience in international companies. I've been an international board director, solving problems all over the world: India, Singapore, the US, the Middle East, you name it. And very interesting conversations, usually about tech; usually what it should do, and maybe what it wasn't doing. Because bleeding-edge tech generally behaves in a different way when it's used in the wild versus when it's actually in the lab. So, lots of very interesting experiences there. What I wanted to do following all my corporate roles was actually build a business that was totally my own, that was built on my own ideas and my own values, and that was moving towards managing the risk of using those technologies, in AI specifically, in the sectors that I'd already been selling into for many years from a security perspective. And that's how Anekanta Consulting was born.
Yoyo: I was still stuck in the nineties when you talked about the first e-commerce site. What did that look like? Because I remember I was an MSN user; I was chatting to people in chat rooms. I don't even know how to do that now. Obviously there's gonna be a lot of us here that will be Gen X, or older and younger, and we remember dial-up.
Pauline: You talk to...

Yoyo: ...millennials now about dial-up, and they look at you like you've just invented the phone. What was the first e-commerce site like? I mean, take us through it; it's an open door to the past, this, isn't it?
Pauline: Very slow! But yes, it was an absolutely fascinating project. I worked on it for about six months, and I have to say it was bleeding edge. Essentially, imagine an internet without Google. Google kind of made information accessible, and you could probably trust the links. Prior to that, you talk about MSN; there was AOL, which provided a kind of interface, like a portal, into different things that you could get to. And if you weren't doing that, you had to kind of know the funny academic language that was bandied about, which gave you access to the universities and the papers and so on. Otherwise it was the Wild West. So if anyone has dabbled in the dark net, it was pretty much like that, with no order and structure to it. So this first e-commerce site was a revelation, really. It brought together the means of looking at products that you'd seen on the telly. It went with a broadcast, a satellite shopping channel, and on the e-commerce site you could go in, find your products and actually make purchases. And that was so new and different; nobody was doing it at that time. This is pre-Amazon and so on. And it completely inspired me in terms of taking everything from this world of physical objects into a digital universe, so to speak. But despite being enticed by Oracle, I decided not to go down that route, 'cause there were no pre-coded objects, JavaScript was nascent, and the bottom line is I don't think I've got the attention span to be a particularly good coder. So it was probably best that I didn't do that. But nevertheless, being in that environment and producing this beautiful object that was like nothing else on the internet at the time was incredibly rewarding. And it still stands me in good stead, really, because it gave me the fundamental principles and an understanding of the internet, which not many people had at the time.
So it was an extremely rare skillset, and it enabled me, when I did join the security industry in a development company, to be the only person that knew how to do this. And that was streaming video to a webpage, which we take for granted now; it was an innovation at the time to be able to do that. And it just shows you how the internet, networks, the coding technology and compression technology for images and video have improved over this time. We do things at lightning-fast speed, and content is served up to us in a way that some of us don't particularly enjoy. I'd switch off all cookies, and I advise people to do that, so you retain some control over free will and your conversations and browsing aren't leading to you being marketed further things you talk about. But that's just my resistance to being tracked. If it's there, you should be aware of it and you should control it. But yes, it was a pretty interesting time, shall we say. And by the way, you couldn't go and research on the internet to find out how all this worked. You relied, honestly, on magazines; you couldn't go to the library and get a book on the internet. So imagine not having access to information.

Yoyo: I have a gap. Like, I remember MSN, I remember the slow loading of pages on the internet. I remember everybody was so patient, really; we didn't have any other options, so we didn't get frustrated. I remember sort of setting some time aside and going onto the internet, whereas I think the way that's changed is that we have access to the internet all the time, on our phones and iPads and computers, sometimes simultaneously, sometimes through TV as well. But I just remember this gap; like, I don't remember what happened pre-Facebook.
Pauline: Okay.
Yoyo: I didn't join Facebook to begin with. I think I sort of waited a year to see how everyone else was getting on with it, and then I bent to peer pressure; so many of my friends were doing it, so I went onto Facebook. And I joined eBay fairly early. So I don't really remember what happened pre that timeline.
Pauline: Isn't that bizarre? That is bizarre. Well, do you remember MySpace?
Yoyo: Yes. I mean, I didn't have a MySpace account. I could say that I never knew it was gonna go anywhere, which it didn't. But there are some people that don't even know what MySpace is. And Friends Reunited is another one.

Pauline: Yes. Well, all of these were kind of pioneers, and I guess they were ahead of their time, because actually the sort of underlying technologies that were necessary to make that work, like the transmission speeds and the compression required, weren't there; the phone networks were all copper, very slow. We talk about 30k; I think my first foray onto the internet was at 18k or less. So it was pretty horrific. These were innovative ideas that came to market too soon. And they do say sometimes it's not great to be first, because you're doing so much pioneering. And not only that, it wasn't possible for them to deliver a credible product, because all the technologies that deliver good internet speed just weren't there. So, yeah. Facebook, that was... 2004, was it?
Yoyo: Oh, I think... yeah, four or eight? Why do I think it's 2008? Must have been before then.

Pauline: Did the world stand still prior to that point?
Yoyo: I still have... I mean, I was in my twenties, and Facebook is how I connected and broadened my social network outside of my city or town. Yes. And there are still people I'm connected to now on Facebook that I've never met; they've only ever been a Facebook connection, but they've just been a really good one. There's two I can think of, just springing to mind, who live in different areas around the world, and we still share each other's posts now; that's coming on for quite some time. It's powerful stuff. But look, to your point about it's not great to be the first: they take all the risks, don't they? Definitely, yes. The people that go first...
Pauline: ...and they don't necessarily get rewarded for that, and without all the people going first, nothing moves. I think we're seeing it more overtly now. Let's take an example: Elon Musk is quite happy to show the success, but he also shows the journey of failures that lead to success, because you have to try the thing, realise it doesn't work, and then redevelop it, try it again. Oh, there are no customers yet, because they're still using the other stuff; you just have to keep going and keep going and keep going. And I think that if we didn't have that sort of mentality in technology, then nothing would ever move forward, because fundamentally you need the early adopters as well to try it out. And those communities were feeding back in to say, look, this works, this doesn't work, and the community was driven by its need to be first. So you have to have all of that. But fundamentally with Facebook, while we're just talking about Facebook, there were learnings from the predecessors at the end of the day. And sometimes the capital in these organisations... we talk about patient capital; it's not very patient, because if there's a kind of Gartner hype cycle type curve starting to appear, when realisation of the technology seems a long way off and there's too much hype, investors get cold feet. In fact, I've just seen an announcement today that Graphcore in the UK has lost one of its investors: Sequoia, it seems, has on that basis written off the value of its investment, because it's not being realised yet. Now that's an AI chip company with European, British roots. So that's not patient capital; that's not helpful. And this gets on to the reasons why things just stop: they run out of money. It's always a balance between trying and nearly working. The same happened with Britishvolt. Keep trying, keep trying.
Haven't got the customers yet, but the tech doesn't work yet, so we haven't got customers; it's this kind of cycle of pain for them. And the patience runs out, it runs out, which is really sad. In contrast, and this is probably why in the UK we need to learn from some of the big tech in the US: take DeepMind, for example. It was purchased by Google. And if you go, because I'm kind of forensic in understanding things, go and have a look at their journey: Google wrote off something like a hundred million dollars of debt, just wrote it off. So Sequoia have just written off the value of their investment in Graphcore, which is really sad, really, really sad to read today, 'cause they just need to keep going. And the keeping going is rooted in risk: how long are they prepared to take the risk, and possibly put more money in, because there were other learnings that were needed during that journey? So you can't see the future. You can predict so much based on past events, but when you're pioneering and innovating, you literally don't have a past event, 'cause it's new. So you're taking new data points and then trying to plot a trajectory from new data points. And that is not a straight route. So it would be really good to see a little bit more tolerance towards companies that keep trying. And we're seeing some recycling: Virgin Orbit, very sad that that went into Chapter 11, which is like a kind of administration which protects the company from its creditors. But why did that have to happen? This is the nature of it, this is the balance between innovation, trying, being the first; and then you're lighting up the market, getting the early adopters on board, and then it stops. So why does that happen? Yeah.
Yoyo: Pauline, you mentioned not a straight route, but AI, I mean, it's taken a few curvy corners, hasn't it? Somewhat, yes. Take us through... I mean, look, first of all, women in tech: quite rare. Let's cover that off. And the fact that you are in AI as well, that must make you an awful lot rarer. But I think a lot of people would be thinking, what attracted you to AI, and what's your AI journey been like?
Pauline: So, AI. Yeah, women in tech. Okay. So I'm from the era where we got served up Star Trek in black and white, and James T Kirk was an absolute heartthrob; I absolutely lived and died for that programme. And the other characters, and just the idea of being able to transport through space... I know, a huge, huge fan. Incredible. And the storylines. Still now: I was going around a shop, a farm shop, and there was this little sheepskin sort of pouffe thing, and my daughter was with me. I said, I'm gonna take a photo; it's a Tribble! It's a Tribble! She said, what the heck are you talking about? Look up The Trouble with Tribbles. But it's brilliant, isn't it? The funniest episode. That's absolutely fantastic. And of course we didn't have on-demand TV. I mean, I sound like I'm sort of from... I'm Gen X, but, technology... yes, I'm just kidding. Black and white TV, a video recorder. So, for people listening...

Yoyo: A videotape recorder.
Pauline: Yes, definitely no on-demand, no. So it was Star Trek, and that really inspired me, and it sparked my interest in sci-fi. As an avid sci-fi reader, I just consumed books in hours, and read and read and read. It was just the abstract nature of it which I found incredibly interesting. And in terms of AI, obviously HAL appears in my presentations: the 1968 movie, which was 2001: A Space Odyssey. What an incredible movie for its time. Yeah. And there was...

Yoyo: Battlestar Galactica, b-d-b, b-d-b-d. Absolutely, absolutely.
Pauline: So, women aren't meant to like tech, apparently, but that's not my experience. The bottom line is I also played with Pippa dolls and loved makeup, but I also liked tech and science and the idea of there being other worlds. And, sort of teleporting into my adult life, the AI journey really started many years ago. I'd say that most of my experience of AI, certainly in, I dunno if I'd call it my second or third career, but certainly in the security industry, was of very nascent technology that didn't work very well, that could possibly be operated in very sterile environments, as we describe them, without much going on otherwise. A lot of companies tried. The security industry had its own Gartner hype cycle; there was massive investment that went into AI in the early two-thousands, and the bottom line is the core technology wasn't available: the processing power required wasn't good enough, and there were lots of reasons why, when you put these AIs into an unknown environment, they didn't perform very well. And we had lots of conversations with the British Retail Consortium about analytics and buyer behaviour and people walking around stores and sort of grouping around different products. There was a lot of overselling, probably driven by a desperate need to sell stuff; the investors would've been pushing that. And it created a kind of lack of trust in a lot of this technology, to the extent that I remember sitting with the BRC and them saying, we tell our customers not to buy AI analytics because it generally doesn't work. And there was a bit of a winter, really, for the security industry, whereby companies were really trying, but there were no neural networks that were more intelligent in determining what was in video scenes.
Otherwise, AI was kind of just a bit of conditional logic, which works just on the basis of different inputs: if this, then do that. Which isn't really AI, but it's now scooped up under that banner. And it wasn't until later, 2010, 2015, that we started seeing technology that was more credible, that was more tolerant of different environments. So it's been a very long journey. Certainly in the industry, as industry groups, we were putting together guidance in the early two-thousands with regard to safely using AI for security purposes. But it actually wasn't getting adopted, because more often than not, if you had an AI technology in your system, it would most likely fail and damage the company's reputation as a result, which just wasn't worth it. There was a lot of expectation. Good technology was taken out of, say, perimeter environments, and AI was put in place with cameras without due consideration towards lighting: many of these systems didn't work in the dark. And you think, well, that is so obvious, but it wasn't obvious at the time, 'cause nobody really understood what the limitations were. And the only way to move forward was to actually get this technology out in the field, into these difficult environments; there were no training data sets that you could utilise. So, very mixed success over time. And what changed that was a whole set of factors, including processing power, access to the internet, the vast array of training data that is now available, and also a bit more patient capital, given what it costs to develop an AI. You could be looking at five to ten million pounds, absolute minimum, to develop an AI-based product, test it and actually get some real-world experience. And then later, neural networks, which I explained in my presentation: it's all maths at the end of the day. It's just numbers.
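[Editor's illustration] The contrast Pauline draws between "if this, then do that" conditional logic and a learned model that weighs several cues at once can be sketched in a few lines of Python. This is a minimal, invented example, not any real product's logic: the threshold, weights, bias and feature names are all assumptions for illustration.

```python
import math

def rule_based_alarm(pixel_changes: int) -> bool:
    """Early 'AI': a single hard-coded rule -- cross a threshold, raise an alarm.
    Brittle, because one fixed number can't adapt to lighting or scene changes."""
    return pixel_changes > 50  # invented threshold

def learned_score(features, weights, bias):
    """A one-neuron 'network': a weighted sum of several cues squashed to a
    0..1 confidence score with the logistic function."""
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return 1 / (1 + math.exp(-z))

# The rule treats every scene identically:
print(rule_based_alarm(60))  # True
print(rule_based_alarm(40))  # False

# A learned model can trade cues off against each other (e.g. strong motion
# and edge cues can compensate for low contrast in the dark):
weights = [0.8, 0.6, -0.4]                       # invented example weights
score = learned_score([0.9, 0.7, 0.2], weights, bias=-0.5)
print(round(score, 2))  # 0.64
```

Real neural networks stack many such weighted units and learn the weights from training data, which is why they tolerate messy environments better than a single fixed rule.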
But a neural network will chop up an image and have a little go at guessing bits of it, having been shown a representation of the thing it's meant to be identifying. And they're much, much better at doing this. There's more tolerance: through the network there are sort of cross-checks and balances, in terms of similarity matches and so on, and they're getting much better with poor-quality data as well, and with fewer images needed to determine what something actually is. And as a result of that, if something works really effectively and generates value, which could be a benefit in operational cost, or an insurance risk that couldn't be mitigated previously, then the technology starts being adopted, and it's accelerated. Since I started my business, I've seen a massive acceleration towards very, very credible AIs, and not just video-based: the joining up of different technologies has become more credible, and not just credible but working very effectively, to have a kind of sequence of AIs, a chain of events doing different things to achieve a result. And those are in action now, and we can see they're starting to touch on the edge of: are they really in control, or who's in control of these AIs? And that's kind of where my business is focused, looking at the risk and aligning with the regulations that are coming out around the world. There's good AI...
Yoyo: ...and there's bad AI. And let's look at the journey, especially in relation to films and TV. R2-D2 made AI desirable, and so did Data in The Next Generation; we learned, didn't we, through his journey, where he malfunctioned, where he wasn't knowledgeable enough and where he was not able to cope in certain social situations. We learned about the weakness there. But there's also been a lot of negative press, and I don't want to use the word press too generally, because it's also in TV, film and movies: the idea that AI isn't necessarily good for us, and has a lot of consequential negative outcomes. So, number one: why is there such an interest in pushing AI as a negative thing, and how can it harm our existence as a human race? Where is the incentive in pushing that? And second of all: where is AI really going to endanger our existence today as we know it? I don't mean existentially; I just mean where are we in danger and where do we need to be eyes-forward?
Pauline: Very good questions. Why is AI used in a negative way? Because it can be. Okay, that's not a very adequate answer, I accept that. But fundamentally, if there's no one saying no, the human nature for some people is: we'll just keep going until someone stops us. And that also includes negative connotations; you've also got geopolitical situations, power struggles and so on. We're not just talking about an arms race now, we're talking about the race to the most intelligent AI as being at the heart of the superpowers. Why negative? Well, because there's a need for some to dominate other human beings in a very bad way. And that isn't just the technology; it's a manifestation of bad humans. And to fix that, we need to really get into the root of society and have a more balanced society, certainly, and I'm gonna push this, with women on boards. We need more women around the table to actually balance the decision-making process away from one mindset. If that happens, then we'll start seeing some improvements. But unfortunately we're in a race; we're always in a race against evil use of everything. And in this particular case, unfortunately, AI can be used in an autonomous way, so you wouldn't necessarily know it was being used in an evil way, 'cause it's not visible, which is why I draw the analogy between AI and radiation: you don't necessarily know you've been exposed to it until something starts happening. And aside from the geopolitical, the AI arms race, in our everyday lives... you know, I don't see a humanoid AI being an existential threat to us. I see it invading our minds in a way that alters our ability to even recognise whether we have free will or not. That, I would suggest, is the biggest threat we face: not knowing whether we are reading the truth. And not just that, but every interaction. So these cool toys we had, the first e-commerce website, that was very cool.
It wasn't very intelligent, though; its intelligence was just delivering the thing up to you so you could look at it, click and make a purchase. Now, if you're not aware of the AI you're being exposed to, you could be subliminally influenced towards or away from political views, theories, support of one thing over another, buying things that may not be good for you. Chocolates, in my case. But the bottom line is, how do you actually know whether what you're exposed to was really your preference, or was served up to you because you clicked on a website, or you said something and your Alexa or Siri picked it up, and now you're being offered the things, and isn't that funny? Or you were just thinking of doing that, and look, isn't that convenient, I can now go and buy it. So that's where I see it being a threat to our everyday lives, if we're not aware of how it's being used. And I think there has to be a big catch-up on education, and not just within the tech world, 'cause we get this stuff; we're in the room with people who understand how technology's applied. But for those who don't interact with technology at a kind of forensic level, I think it's really important that people, the public, actually understand what's being used, why it's being used and how it might affect them. So that's my take on it. Yes, we should be worried about the geopolitical, but more so about subliminal influence that actually alters our personal choices. Are we gonna have ideas any more, 'cause someone else is... well, maybe...
Yoyo: Maybe, maybe for those people to whom ideas come with a struggle, life will be a lot easier. That's the only way, yeah, I can think of phrasing that.
Pauline: That's what I meant, yes. I think that probably is a good way of putting it. Yeah, but that's just false, isn't it? So ultimately, AI will become more stupid, because it relies on humans to become more intelligent. So, oh, that's...

Yoyo: ...a worry, isn't it, really? We don't need, yeah, we don't need the human race to become more stupid.
Between the 11th and 13th of September 2023, at the Kay Bailey Hutchison Convention Center, Dallas, Texas, will be the Global Security Exchange, commonly known as GSX. It's a yearly conference that ensures, as a security professional, you never fall behind. Once there, you can access CPE-eligible education on the most pressing issues that affect us as security professionals. You can build or strengthen professional networks and connections. You can develop strategies to remain resilient against evolving cyber and physical threats. You can discover new products, technologies and services to enhance your capabilities in the expansive exhibit hall, with thousands of exhibitors. Brought to you by ASIS International, the world's largest membership organisation for security management professionals. So basically, if you haven't booked and you think you're gonna go, then book a ticket. There are so many events, and you can join peers, security leaders and practitioners from every industry and every sector, based in the convention center in downtown Dallas. Personally, I can't wait to be there. I'm going this year for the first time, and probably this year is the first time I really feel like I should be there, you know? I'm going to be there to support IFPO. At GSX we have a booth, so if you want to come and see me, say hello, swing by. That's the whole point of it: connect with people in our industry. So hopefully I'll see you there. Check in with me if you are going; I'd love to make an arrangement to say hello.
Yoyo: I was in the office once, chatting to my boss about flip-flops, and within half an hour he had flip-flop commercials popping up on his social media feeds. And it was February, so it wasn't even the time of year for it. We were thinking, oh my God, is the phone listening? But look, my biggest concern, especially when you look at things like Nextdoor, the neighborhood app, is that you see a lot of faux pas on there. There are a lot of people out there using technology before they really fully understand it, and we've got to start pushing education on that, haven't we? Certainly through schools. But when it comes down to AI, let's talk about the national AI strategy. You've been very much part of that. Take me through the key things you are working on.

Pauline: So the UK has been forming its position on AI for a number of years now, starting with the published national AI strategy, which led to an AI regulation policy paper, which has just been superseded by a white paper setting out the government's plans with regard to regulation of AI in the UK. Now, the way that's structured at the moment, there is no plan to immediately regulate AI in the UK, and there's a lot to unpack, so I'll try to compress this into some blocks of information. The first is that the principles of safe AI have been laid out, and those mirror an organization called the OECD, which is like a global think tank and policy group, part of which has been working on AI for some time. These trustworthy AI principles include accountability, fairness and explainability, and those are high-level principles that businesses are expected to adopt when they use or develop AI. What underpins those principles are, let's say, requirements on the regulators to figure out how their regulation will actually work alongside those principles, to put guardrails in place. So there's a lot of work going on in the background, but put simply, the UK government is not ready to regulate AI.
Yoyo: And is that just because they can't physically get there yet, or because they're not ready in their understanding of the efficacy behind it?

Pauline: I think there are a number of reasons. It's not for want of trying, because there's lots of work that's gone on in the background; there's cooperation between the regulators as they try to figure it out. But I think it's a blend of those things. First and foremost, the need to gain experience from UK use cases seems to be at the heart of this. Also, the language of the paper suggests that hypothetical risks, meaning what we think could go wrong, are not considered to be valid present risks. So it's almost as if it needs to happen before we'll do something, which is kind of how UK law is constructed. So we're in this wild west, as I would describe it, where the government has laid out a framework, but none of it is statutory at the moment. My anticipation is that it will move into a statutory duty, but it's not there yet. There is also a central risk function in government, which is very encouraging, and I would frame all of this as AI regulation in waiting. That risk function is essentially how this will work: businesses develop and use AI, and those businesses sit under one of the regulators. It's for the businesses to ask the regulator, can we do this, can't we do this? And the regulator sets up what's known as a sandbox, like a play box, for trying new tech in the market: we will monitor it and figure out what needs to be done. Then, if it seems terrible, putting it simply, it is referred to the central risk function, and the government decides, shall we legislate, shall we monitor? And then it goes round in the loop. So it'll take longer and it'll be based on experience. But it does appear the government is lining up to be ready if something terrible happens.
It's just quite a difficult environment for UK developers and users to operate in, because there's no clarity. Putting it bluntly, I've sat on a lot of boards, as we all have, and the decisions around risk are made on the basis of: what are the regulatory concerns we have? What are the legal implications? Is it legal? Is there a risk that the regulator might jump on us? Those are the questions asked first; whether it's a good thing to do or not comes after. That goes back to what I was saying about why AI is used in a negative way: because it can be, because there's nothing stopping it, so there's nothing stopping poor use. If it's not illegal, and the regulator has no particular guardrail for it, how do you make a decision? So it is quite difficult. There are some standards at the heart of it, which are actually the same standards that will underpin the European legislation on AI, so there are some golden threads in there, and a company like mine can interpret those and then advise what to do. But generally, companies coming to the market with their ideas will not have a clear path to navigate right now. That's a great concern, because all sorts of things will be happening over the next couple of years which will have to be reined in later.

Yoyo: And that leads us nicely, like a segue, into what happened in the Netherlands.

Pauline: Yes. Now, I seem to be one of the only people and organizations talking about this here; it is talked about in Europe. The problem in the Netherlands was fundamentally down to the Dutch government procuring and using a type of AI technology which predicted whether people would commit fraud in the childcare benefits system. It made some very simple correlations: dual-national people plus mistakes on claim forms must mean fraud. And the AI was allowed to issue reclaim notices to thousands of people, and not for small amounts of money, a hundred thousand euros in some cases. And that continued, and continued. It was racial profiling, and nobody could challenge the system because they didn't know how it worked; this explainability problem was very deeply rooted. Also, the teams using the software were incentivized to do more because it had cost so much to develop, so there was a conflict of interest within the organization. There was no oversight, there was no ethics committee, there was no one saying to stop this, until some of the human rights groups got involved. I think it was Amnesty International that called out this problem and said, for five years you've been doing this, what say you? And the government said, yes, we have to acknowledge and accept that this was wrong, and they fessed up. And as a mark of respect to the people and the harm that had actually been done, the government resigned. The whole government resigned. This happened during the pandemic, so it was pretty quiet, because we were a bit busy on other stuff, and as a result they had a general election in the middle of the pandemic. But the learnings from this terrible event were actually rooted into the EU AI Act. So this is where something bad has already happened, it's been analyzed, and measures have been put in place to stop it from happening again.
Legislation has now addressed this, in that those problems are codified into law, and you won't be able to do that under the new EU rules. So we've already got evidence from a country that's quite close to the UK. I find it curious, and I am quite baffled, that UK legislation isn't rooted in real risk scenarios, actual risk.
Yoyo: I think you hit the nail on the head when you said it's like the wild west. Not only am I thinking, wow, what's going on now that we don't know about, that's going to come to light? How much can be hidden in the sort of monstrosity of people not knowing? Because if they don't know, then the whole act itself will never be discovered. There's that whole area. But then I sit in the risk mindset, which is normal for me, and I'm thinking, why didn't they sense-check themselves? You know what it's like: you can't just let a design team go off and design something. You have to have a design team, you have to have an ethics team, you have to have a risk team. It's like, crikey, I just don't understand how they missed it.
Pauline: I think we're seeing what I describe as a gap between the decision-making bodies in the organization, the governance bodies and boards, and the development teams. The development teams are getting a hard time, and it's not necessarily their fault: they don't have a brief; they've just been given some data and told, we can do something cool with it. In this case the board was the government, but we see this in other organizations too, where the boards are not connected to what's happening with AI in their organizations. It's one of my missions, and the reason I'm working with the IoD, to help move this narrative forward within that community of directors and to get this onto the governance and risk registers in companies, which has been very successful so far. They've just released their AI in the Boardroom business advice paper, which covers those points: have an ethics committee; the board must ask the questions, what does this do, who will it impact, and so on.

Yoyo: There were other errors as well, weren't there? Around the predictive policing AI that was used. Take us through that.

Pauline: So this has been a very contentious area in the US. Predictive policing software has been used by a number of forces, LA being one of them; that's probably the most well-known case study. And unfortunately, it was used to amplify bias. We've also seen that with facial recognition software, which, if used by untrained officers who haven't had bias and diversity training, will be used to target communities and confirm their biases. And that's actually happened. So it became a vicious circle, and eventually it was banned, until they figured out how to actually train police officers to use it in a way that doesn't amplify what they already thought. It can provide actual forensic evidence, but it's always on the edge of: is this helping to prevent crime, or is it actually harming the community?
Yoyo: Now, I remember when I first heard you talk about this, and there was an element of the police officers being told by the tech that, yep, that's a positive, that's an arrest. And they weren't even questioning the validity of it; it was just, the machine says yes, so I go and arrest. And there's this sense of, Christ, where's the intelligence here? Is it in the tech or is it in the human? And why didn't the human ever challenge the tech? Then I start to think, this has got to be the key to using tech and AI successfully. We'll come on to ChatGPT in a minute, because I can see I'm going that way. We've got to know what the tech is telling us to do, but we've got to be smart enough to know when it's wrong.
Pauline: Yes. The humans haven't quite caught up on that one yet, and that's an understatement, trying to put it kindly. My belief is that certain types of technology, for instance facial recognition software used in policing and in private security, should be licensed, so there's some kind of record of who's using it, and also confirmation that the people using it have been trained to interpret the results correctly. I think we're moving towards that, but that's been a result of the tension and dynamic in the market. You've got developers trying to sell stuff; you've got those who know this will solve their problems trying to use it, but not knowing how; and then the public and the impacted parties saying, excuse me, this is a complete violation of my rights. And they're right. So how do we bridge this gap? Initially it starts with a ban. Then: ah, now we have some data. It's very interesting that some data emerged in New York showing that, as a result of banning facial recognition software, they now had visibility of crimes solved with the software, and also the impact of not having it, so they could actually see the stats and the effect of its absence. That was in the post-event, forensic review phase of solving the crime, finding the person in the video footage. As a result, it's being reintroduced with guardrails, as we describe them, which are industry standards, guidance, and actual local ordinances, as they're called in the US, which are like council bylaws. So it's through the ban that the guardrails started to develop, along with concerns that massive investment has gone into facial recognition software and the investors would not be able to realize the benefit because it's banned all over the place.
So there's a vested interest in helping the market to do this well, but I think there's still a long way to go. Certainly in the UK, where we've been working together as an industry to try to solve these problems, we followed guidance in the EU regulation, which requires another human to verify. So the human who looks at the software output is the first human, and then a second human says yes or no before any action is taken, certainly in the live scenario. So the industry is starting to do this, but we're still lacking any regulation. GDPR is good regulation, but it's not specific in terms of the use case.

Yoyo: And it's like abdicating responsibility to AI. Why would we want to do that if we don't want it to take us over? Why would we so easily abdicate responsibility and control?

Pauline: Sometimes actually taking responsibility and being accountable for our decisions is really difficult, and people are moving away from doing that; in fact, social media is helping them. There was a study done on the psychology and behavior of people observing the fire at the arena in Liverpool, which started in the car park and took out the entire car park. People were seen on CCTV filming the fire and posting it on Facebook, and not ringing the emergency services. Facebook is not going to do anything. So that's abdication of responsibility and accountability, of actually doing something and standing up as an individual. Social media has brought the ability for any individual to talk to any individual worldwide, but it doesn't seem to have improved human accountability for actions. That's going to have to be codified, written into law, standards and guides, and then trained, to make sure. It's frightening.
Yoyo: Yeah. And that takes me straight back to an episode of Black Mirror about social media and its huge influence on how we think and act. When the tsunami happened in Fukushima, the CCTV footage taken from a place of safety was critical for scientists to understand, with real eyes on it for the first time, the devastation, where the water came from and how it acted. Getting that footage was phenomenal. But when you go back to Liverpool, you think: someone could be rolling around on the floor with their hair on fire, and people are videoing it instead of trying to help them. That's not the way we want to go as a human race. So, take us through some success stories with ChatGPT then.

Pauline: Success stories. Trying to think of some. Okay, it's quite good at writing bids, and it is good at structure. I try to draw an analogy: it's like an Instagram filter that does your makeup for you, or makes you look good. And, what are you talking about, Pauline? I don't need to use those. No, of course not; just don't use Instagram. So it's just polishing things up, reordering them, shuffling the deck. But you've provided the data, and that's the success story in itself: you provided trusted, fact-checked data that you are happy with, and it's done a really good job of rehashing that in a different style. That's great. It's good at writing bids because, again, it's fed information which is then diced and sliced in a very structured way. The same technology is also being used to create art, which is a great bone of contention with regard to intellectual property. This kind of technology is also very good at materials discovery and drug discovery, because fundamentally it's the same kind of technology, large language models, sitting at the heart of a company like BenevolentAI, which has been pioneering in accelerating the discovery or repurposing of different drugs for different diseases. It can read loads of material, interpret it really quickly and produce a report. But the success in there is that it's fed verified, academically researched, source-checked, trusted information, and in that context it is highly powerful and can be trusted, because the data it was fed was trusted. It's actually doing a job that humans can't do very well, which is read and interpret millions and millions of books and papers. Where it's not doing so well is when it's asked to produce facts from a pool, a sea, of unverified information.
So say you ask it: tell me about UK AI regulation. We tried it, to see how useful it could be in our business, and figured out quite quickly that if you fed it good data, you'd get good results. Throw it at a verified data set, fine. Ask it a more general question, and you'll get something that resembles facts. But it actually told me that the regulation had already been passed. And I said, oh really, when was that? And it said, ah, terribly sorry, made a mistake there, it's not actually been passed, and corrected itself. But if I hadn't known that was the case, I could have taken that output away and passed it off as fact somewhere. That's where ChatGPT has its limitations: the version that runs on the internet is almost always going to have errors in it, because the internet isn't fact-checked. Some sites are, so if you point it at verified sources, it will do a really good job. But when it tries to work things out, it will try to please you, try to give you an answer. It hasn't been programmed to say, I don't know. If you ask for links, they don't work, and if you keep asking for links and keep saying these links don't work, it'll stop offering them up, because it's learned that it has sent you dud information. So really, it still has its limitations. Its precursor was being asked to write stories and articles about how credible and useful AI is. It doesn't know what it's writing; it doesn't know whether it's correct or not. It's the human that has to verify that. So it can be used very successfully.
But I think we're seeing some really silly stuff coming out that should be treated with a pinch of salt, and certainly not relied on for any kind of serious research without specialists who know the field going in and checking everything, fundamentally.
Yoyo: No, agreed. And of course, we've discussed in previous podcasts that ChatGPT is like a rock: it can be used to harm and it can be used to build. And the same with a lot of tech, really. Deepfakes, I mean, look, it's phenomenal. My brother and I send each other these deepfake videos on Instagram, with Arnold Schwarzenegger standing in for Sharon Stone in the famous leg-crossing scene in Basic Instinct. It's hilarious. And then there are lots of Tom Cruise deepfakes as well, and you find yourself thinking, is that him? Is it him? They're even making TV programs out of deepfaking. So, I mean, look, where's this going to go? Because I've seen something on TV, a drama series, I think it was The Capture, where they used deepfake technology that was way more advanced and very dramatic for the TV program. The guy, an MP, thought he was speaking on the BBC, and he wasn't; it was a deepfake. And it's like, oh, is that where we're headed?
Pauline: Yes, oh dear. We have real reason to be concerned about this, and it's kind of crept up on us. Deepfake technology can be used very positively, and I'll give you an example: creating training data. That's one of the issues when training AIs like facial recognition, because it's very difficult to get lots and lots of data of people, and the same technology that creates a deepfake can create training data for facial recognition software with a very, very broad demographic. So that's a positive use. But that same software, as we saw in the series The Capture, can actually alter how things appear. Where that's most dangerous is if actual, identifiable people are used in the deepfakes to create misinformation or to frame somebody, and so on. They're so good you can hardly detect the difference. What they did in The Capture would be very, very hard to achieve at the moment, but not impossible, because it was done in real time, and we are not far off that in reality. This leads us to look at what checks and balances, what measures, we have in place to verify the authenticity of evidence that enters the criminal justice system. At the moment, video evidence is considered to be true, and its audit trail starts when it enters the criminal justice system, when it goes in the evidence bag, as you'll know, not before. So its authenticity is not verified, and it's down to the court and the process to challenge it. A good barrister, with expert forensic evidence, could reveal the deepfake, but it may not be so easy, because the trail right back to the authentic original video is broken; there's nothing to join it. So I think it's only a matter of time before we see a very serious criminal case, a conviction, on the basis of deepfake technology, because we're already seeing cases with digital evidence coming to court where there isn't very much else.
And the deepfake could be so convincing that it could put someone in jail in the end.
Yoyo: I mean, it's worrying, but I'd like to think, even though it's been about 15 years since I left the police now, that CCTV was considered very weighty when we went to the CPS with our investigations and our cases. In fact, and there'll be some people who really get this, we used to knock on the door, walk in, and they didn't even say hello; they used to say, have you got CCTV? And it was a little bit frustrating, to be honest, because you've put a lot of work into the investigation, and if there's no CCTV, you've already got a pre-judged bias that it's not going anywhere, and what we're going to talk about for 45 minutes is going to be a waste of both of our time. And that's worrying. But we always knew, even back then, that there needed to be some other type of corroboration. And then on the flip side, how many bandits are going to go to court and claim that the evidence has been deepfaked, to get out of going to prison?

Pauline: Yeah, and they'll be able to do it now, because it's possible. And if it's possible, it's not beyond reasonable doubt, and that's all they need. So yes, we really need to be concerned. On a more serious note, the deepfake problem is actually being recognized as an issue, and the potential problem of impersonating people has been recognized in the EU regulation. So deepfake technology has gone up a level of risk into high risk, which is good, but not in the UK.
Yoyo: And look, there are certain new laws now, aren't there? For example, the one that springs to mind is the new law about posting a sexual act with somebody online without their consent. And it's almost like the same law can be applied here: if you knowingly use someone else's identity to go public with something they clearly haven't said or done, without their permission, you are committing the same offense. It's a huge invasion of personal privacy, whether it's a training video, or somebody slagging off another business, or something just very inappropriate that has been deepfaked. Okay, and before we finish up, one last question. What is your favorite little piece of AI right now, and why?

Pauline: Oh, I think, for all the reasons that we've discussed, I do actually like ChatGPT. Who wouldn't like it? What's not to like?

Yoyo: I think we've got to encourage everyone to use it.
Pauline: I think that if we encourage people to use it, they'll see the limitations. They'll also see the humor in it, because I did actually ask it: you've just made all of that up, haven't you? Actually, yes, sorry, we don't check the facts. So it's an amusing tool, and for that reason I would put it at number one at the moment.
Yoyo: Have you ever asked it to rewrite something in the style of a child? Because it does that too.

Pauline: Does it? I thought I wrote that way already, but I don't know; I'm very hypercritical of my own writing. I would try that, though. I mean, the mind goes wild: you could have the voice of a farm animal or something. But this is where I move towards it being like an Instagram filter. It's just silly, and well, if it's making us laugh and giving us some enjoyment, that's good. We should have more laughter, because there are so many serious things going on in the world. We've got to find some humor at the end of the day, haven't we?
Yoyo: Yeah, 100%. Good way to end it. Pauline Norstrom, thank you so much. We will provide all of your details for Anekanta Consulting, including your LinkedIn profile, with the podcast. Thank you so much for joining us today.
Pauline: Thank you. It's been a pleasure.