The Loop

Navigating the risk and regulations of generative AI

February 08, 2024 RSM UK Season 4 Episode 2

In the second episode of our generative AI series, our panel of expert guests are discussing the risks associated with this ever-evolving technology, alongside the generative AI real economy campaign, and how regulations could impact the way businesses use generative AI. Join host Ben Bilsland, media and technology partner, along with guests Sheila Pancholi, technology risk assurance partner at RSM UK, and Mark Webber, US managing partner at Fieldfisher, for a captivating discussion.

Read our full real economy report on generative AI via the link https://bit.ly/3O6KVM2

And follow us on social:
LinkedIn - https://bit.ly/3Ab7abT
Twitter - http://bit.ly/1qILii3
Instagram - http://bit.ly/2W60CWm

Transcript

Ben: Hello and welcome to The Loop. In this episode of our generative AI series, we're going to look at the risks associated with using this technology. We're going to think about how regulation could shape the way businesses use generative artificial intelligence, and how the technology itself evolves. Joining me today are Sheila Pancholi, technology risk assurance partner at RSM UK, and Fieldfisher US managing partner Mark Webber. So Sheila and Mark, welcome, and thank you for joining me. We're sat here in early January 2024, and at the risk of sounding a bit down and grumpy, we're going to talk about the risks around generative artificial intelligence. We're mindful there are lots and lots of upsides and opportunities, but actually, as we navigate this technology, we're certainly finding lots and lots of areas to think about. So, what generative artificial intelligence risks are you seeing in your work and with your clients?

Mark: Yeah, thank you, Ben. I think it's very much contextual. We've got clients who are building their own gen AI. We've got a lot more that are incorporating gen AI into their products. And then there's a whole user base that are just using and experiencing gen AI that's being brought to them by third parties. So you've got to look through the lens and work out where you are. Let's put aside those who are building, because there are probably only 20 to 25 businesses building right now. Those that are incorporating it, and those that are buying it in to use themselves, are really faced with supply chain issues. And it's actually quite a traditional issue to think about: what am I buying here? Who am I buying it from? How does it perform? What assurances do I get from this technology? So there's a whole set of approaches you need to think about when relying on something that's brought to you by somebody else. And we saw that with the OpenAI fun and games in November and December: suddenly a number of businesses were relying on a new and emerging business which has its own governance issues. We're already talking to businesses about maybe scaling across other operators and other providers, so they've got some kind of continuity protection. So yes, I think you have to think about it through a relatively traditional lens. But then there's "I've purchased it, I'm giving it to my teams, my employees", and then going on to think about, okay, maybe I'm making this available to others. At that stage, you really need to think about what's going in, what we call the inputs, and what's coming out, the outputs, and the reliance on that. I suspect we might go a little deeper on that, but that's the early perspective I'd give you, Ben.

Ben: It's funny, isn't it? Because when we think about supply chain, we typically think about cargo ships in the Suez Canal, or what's going on globally with manufacturing, but now we're talking about a completely different thing using the same terminology. And it makes lots of sense. I mean, Sheila, generative AI has been around for more than a year, but in January 2024 it's fair to say we're probably a year into it being in public awareness. Have you seen any new risks or areas being discussed more and more in your work with your clients?
Sheila: Interesting one, Ben, because as you say, it has been around for a long time, particularly in some markets and industries, longer than probably most of us would even consider. I think, just picking up on something Mark said, it's something new, but we need to think about some traditional controls and processes around that shiny new thing we all now want to rush towards. So the kinds of things that come to mind for me are: have we got the right and necessary controls implemented to ensure that, from a business perspective, the investments being made deliver the right business results, whilst also thinking about privacy and confidentiality? That's a big area in itself. Then there's the lack of adoption of some well-known principles such as security by design, privacy by design, and ethics by design, which is becoming a much more interesting area as we start to adopt artificial intelligence; concepts which, if they're not managed effectively, could lead to quite a lot of exposure and a huge risk around the data being used for the training of models. And finally, Ben, security technology really needs to keep pace with the development of generative AI.

Ben: There's just so much to unpick, isn't there? I've not come across the term ethics by design. Would you mind unpicking that? Because that sounds really interesting.

Sheila: Yeah, this is thinking about: what are we actually using gen AI for? Have we considered whether what we're doing with the model or the technology is actually ethical? Particularly when it comes to the type of data that we're going to be using with these models. Is it okay for us to be considering modelling using certain types of sensitive, personally identifiable data? But also, where is that actually going to end up? Is it going to be out in the public domain, and is it okay for it to be out in the public domain? Particularly because there are a lot of questions over what actually happens with these models: will they actually produce results that are even accurate? And that's a subject area in itself, the whole accuracy of what gen AI is actually delivering.

Ben: To repeat the point from earlier, it's not a new technology in a sense; it didn't just appear in January 2024 from nowhere. But it's very much on the radar. In our work with clients, we surveyed businesses and found that, by September, an overwhelming proportion of business leaders and founders felt they understood what generative artificial intelligence was, and we felt that if you'd asked that question a year before, they would have said, "we don't know what you're talking about". So it's in the public awareness. But what is the experience of early adopters like? What are people stumbling over and finding? I'll put that to both of you, Sheila and Mark.
Mark: We're seeing a number of things. There are those that are consciously going into the adoption of gen AI, going out with a procurement strategy, implementing it, putting in place tentative assessments and policy, doing some training, and running with it. But many, many other adopters are finding they have a whole vendor suite where the vendors are incorporating AI, including gen AI, and bringing the AI to the business almost through the back door: "I've been running this software for the last three years, and suddenly it's got gen AI." You've got a system there where it wasn't planned; it was a bit of a surprise. And sometimes you get a choice as an admin to turn it on or provision it. We're then finding, with those kinds of businesses, an insufficiency of current policy, and definitely an insufficiency of education, in really helping to guide the business and the employees within it to make the most of AI. Some have the knee-jerk reaction of just turning it off and banning it, but gradually it creeps back in. Sheila touched on impact, and that word "impact": we're finding policy guides, but you also have to think about what tools you're putting in the hands of the business, and sometimes your customers. So there's an emerging practice in the world of governance, which may even be mandated by law in time, of a process of impact assessment, something that has essentially come out of things like privacy. I'm a GDPR privacy lawyer: if you do something which has a high risk to personal data, you have to conduct a data protection impact assessment. If you're going to launch AI within your business, good governance and good practice is to conduct an AI impact assessment, which is partly looking at some of the privacy and confidentiality issues Sheila was talking about, but also wider impacts on the business: what it means to myself and the business I'm working in, but also what it means to others, thinking a little more existentially about what the impact on others may be. And that's where we start getting to ethics and other kinds of things. Once you've thought about that risk, you can then implement controls. Some of those may be your traditional controls, a security policy, an acceptable use policy, but there may be additional controls you want to put in place, having considered the impact through that kind of impact assessment.

Sheila: It's almost playing catch-up. Traditionally, if you were going to implement new technology, you would consider the risks and the governance requirements up front, and it would be security by design and privacy by design. But it's almost as if early adopters are playing catch-up with those things. I'm not suggesting it's always an afterthought, but it's certainly coming secondary to actually implementing. And those risks are quite broad as well. You mentioned data privacy; yes, there are data risks, but there are also risks around the generative AI models themselves and risks around people. There's infrastructure risk. So the risks are far broader and need a lot of consideration, even if that's coming after adopting gen AI.

Ben: I just wonder, how on earth do you stay on top of it all? It feels like there's so much to do and it's so fast-moving. What guiding principles are we seeing where businesses are successful in running as fast as the issues are arising?
Sheila: It's a different view across different markets and different industries. Some industry sectors are a little more developed in this area; others are definitely looking for quite a lot of guidance and support, and are not really sure where to start, because as you say, it's broad and it's complex. But it comes back to those basic principles: policies and procedures, governance, risk assessment, continuous evaluation and monitoring, setting up processes, particularly monitoring around infrastructure. And actually a big piece is around awareness: encouraging responsible use of AI and discouraging malicious or unethical applications of generative AI. Really, I think it's about being able to embed those basic principles.

Ben: I guess the same thoughts go to you, Mark, but it'd be useful to understand what you're seeing. You're based in California, working with Californian businesses by and large. Is there anything on your side of the Atlantic that might be different to what Sheila and I are seeing here in the UK?

Mark: I think there are a few differences, in that we've had AI for longer, and a number of businesses have been using AI internally or developing it for a while. The one thing I think we're realising, and businesses are realising, is that the complexity Sheila alluded to means you need to bring together different parts of the business to make AI work. Historically, the privacy lawyers have thought about privacy, security's had a budget to go and do security and cyber things, and HR functions have just run along and done their own thing. But a lot of good governance and best practice is starting to bring those different silos together and make them work together, and that's one of the big challenges that's definitely recognised in some of the guidance we're seeing. We are getting some pretty good guidance; I think we'll go on to talk about US regulation, but there's best practice out there from Canadian regulators, through to NIST, the American standards organisation, and even out to Singapore, where there's been a lot of guidance coming out with best practices. So we're almost overwhelmed with what's good; it's about cherry-picking some of that. And we're actually finding some fairly sophisticated programmes coming together: organisational committees, sometimes reporting up to boards, really thinking about good AI and good implementation of AI; a steering group comprising different interested parties, partly looking over the shoulder of the business as it runs fast, but also partly guiding and helping with sensible implementation, borrowing various principles along the way. I think that's working well, and there is a level of real sophistication out here, led by people who have had the initiative to bring something together and been empowered. But that empowerment is often coming from the top, with the right kind of sponsorship. For boards and directors that are listening, thinking about how you support and give time to individuals to go on that kind of journey is particularly important.

Ben: So you mentioned security, actually, Mark, and I was going to ask you and Sheila about this. I've got a couple of questions about cyber security. I might turn to Sheila just to ask an opening question: in terms of cyber, are we expecting generative AI to create a boom or a change? Are we seeing criminals use it in their work against businesses?
Sheila: I think we are expecting an increase in cyber security threats as a result of gen AI. And again, this goes back to some early adoption, but also potentially a lack of consideration of the security and controls that need to be in place prior to adoption, Ben. And given the sheer complexity of it all, there's probably quite a lot of education that needs to be done in terms of what good looks like. So absolutely, organisations are going to have to think about building the adoption of AI into their security principles, and certainly to consider it from a cyber security perspective. Mark, you made reference to the NIST framework, a cyber security framework from the US standards body, which is actually very globally accepted, and I know there's been a recent revision to take into account some of the requirements around gen AI. But also, from a broader regulatory perspective, the regulatory landscape is already very, very complex, and when you start to try and overlay what else will be needed from an AI perspective, it could become increasingly complex. Just to give you an example, applying generative AI to something simple and everyday like consumer lending is actually going to require compliance with a whole array of standards and rules that are already in place, which is going to add another layer of complexity. And we've got to be thinking about that in addition to, okay, how do we safeguard everything from increasing cyber risks? Because there will be increasing cyber risks.

Ben: Mark, have you got any thoughts around security and cyber risks?

Mark: To be honest, we've not seen many that have resulted from gen AI. We certainly see more sophisticated phishing attacks. But what I am seeing, which is interesting, is a lot of use of AI to protect against attacks. In a number of the businesses we're working with right now, there's an ability to process much wider data sets, look at real-time intrusions, look at compromises, and fight some of the security risks with AI, which is very powerful. But you need to understand how that works, and also some of the complexity and risk around it. If you're going to look at every transaction and every exchange and every server connection and every email within a business, that's fantastic and powerful, but it can also amount to employee monitoring, and it can amount to other kinds of compromise. When you bring new technology in to solve a problem, you open up new doors. And particularly, again, there's another supply chain issue, which often comes with bringing a third party in to protect and defend your systems.

Sheila: Yeah, I would agree with that, Mark. We've certainly seen organisations ramp up various detection countermeasures, building endpoint detection and response, so EDR types of platforms, using generative AI to identify and detect anomalies with few to no false positives. So that's using gen AI in a very positive way to protect against cyber threats as well.
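[Editor's note: the detection Sheila describes is, at heart, an outlier-scoring problem over activity data. The sketch below is a minimal illustration only: it uses a classical isolation forest rather than the generative models she mentions, and the per-session features (requests per minute, bytes out, failed logins) are hypothetical.

    # Sketch of the anomaly-detection idea behind EDR-style tooling.
    # A classical IsolationForest stands in for the generative models
    # mentioned above; the feature set is hypothetical.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)
    # Simulated "normal" per-session telemetry:
    # [requests/min, bytes out, failed logins]
    normal = rng.normal(loc=[60, 2000, 0.1], scale=[10, 500, 0.3], size=(500, 3))
    # One exfiltration-like burst: high request rate, heavy outbound traffic
    suspicious = np.array([[600.0, 90000.0, 12.0]])

    # A low contamination value is one crude lever for keeping false
    # positives down, the property Sheila highlights.
    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
    print(model.predict(suspicious))  # -1 means flagged as anomalous
    print(model.predict(normal[:3]))  # mostly 1, i.e. treated as normal

Real platforms score far richer signals, but the shape of the problem, scoring new activity against a learned baseline, is the same.]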
Ben: Mark, we want to talk about regulation, which is a big topic, possibly the biggest topic, and some would argue the hottest topic. Are you able to start us off with a helicopter view of where we are? What does the regulatory landscape look like around gen AI as we sit here in January 2024?

Mark: Around gen AI specifically, there is very little; in fact, no gen-AI-specific law anywhere, except maybe in the state of New York, where we've got something around the hiring of individuals. But beyond that specific law, that's not to say there isn't a whole lot of law which applies to the use of generative AI. We've mentioned privacy, and we've mentioned security laws; there are also a whole lot of rules around confidentiality and intellectual property use. So we're actually just finding and pulling together an awful lot of law, and applying it to new circumstances. There are conceptual issues around product liability and protections and different rights that could arise from legislation. And there's no doubt there will be tweaks: within the world of intellectual property, there are some great philosophical debates about whether gen AI is capable of creating a patent, or whether its output is really registrable copyright when it's being created by a machine. Those are the kinds of things that we'll see play out and be tweaked, particularly in the US, and that will have influence elsewhere. But as we stand, there's no gen AI law. There's been a bit of a fight to be the first to come up with some gen AI law, or at least some AI law, and there's been the much-trailed EU AI Act. Brussels has been looking to regulate for the last four years. Of course, as an English lawyer I know that's now going to fall outside of the remit of the UK, and we can talk about what the UK is doing, but Brussels has been very interested in almost being first to market with law, to influence the world. Just before Christmas they reached a, quote, "political agreement", and have some kind of EU law on the books, but there's quite a lot of thrashing out and finalisation to come; we still haven't seen the final draft. We'll see some of it start to bite late this year, and a lot of it biting two years after it is implemented. But generally speaking, unless you're a gen AI developer, that's not going to bite. Those incorporating it may need to take assurances from that gen AI developer to explain how they're using gen AI right down the stream, but it doesn't really apply to the use of AI; all of that law probably applies to around 10% of the AI that's taking place globally. We can talk a little about how that works if you like. At the same time, and not to be outdone, the UK has focused in on AI risk, but really taken the approach that we probably don't need one new holistic law; we just need to tweak our regulations and help our regulators implement the law that we've got. I think it's a probably more AI-friendly, simpler approach, though it hasn't answered all the questions we've got about what regulation might look like. But I think it's going to be easier to operate in the UK because of that. And then in the US, we've got elections coming, and you'll be aware that it's very hard for Congress to pass any law, even to fund itself, right now. So there's very little chance of centralised AI law or legislation, although that's probably needed. Instead, we've seen President Biden come out with executive orders around improving safety, improving security, and a number of transparency issues and standards, really looking at what best practice will look like, leaning on organisations to be responsible and to comply with some principles, sometimes on a voluntary basis. I actually think, because it aligns so much with best practice, and with some of the things NIST has done, that may have more of an impact on us globally, because this is where a number of the developers are, and this is where the AI is coming from. The EU may have missed a trick in this instance.
Ben: Is there a risk with the EU that they've almost tried to create the regulation before the product?

Mark: It's definitely the case that the law was proposed and conceived four years ago, before gen AI was really a thing, and then they responded in almost the last hours of negotiation to thrash out some kind of compromise. So while you've got over 100 pages regulating AI, it's almost a two-page addendum that deals with the gen AI issue. So it really is an afterthought. But for all the hype, it probably applies to 10% of the AI that's out there, and most of the time it's actually prohibiting things which it sees as unacceptable: things that violate fundamental rights, manipulation of sentiment, real-time policing using facial recognition, building facial recognition databases. To be honest, these are all worth addressing, but they're the sort of thing that really goes to overextended government use of technology; it's stopping things that really shouldn't be happening in the first place. Then there are other tiers as you come down, looking at high-risk and low- and minimal-risk AI. High-risk AI will be permitted, but with a number of compliance steps. But if I look back over the last couple of years of AI projects I've done, it's a handful of things that are impacted by this; nothing that really impacts the use of AI by people who aren't building it or training it themselves. And of course, it's two years off, and who knows where we'll have gone in the next two years, seeing what's happened in the last 12 months. They'll be trumpeting it and getting excited about it, but I don't think it is the significant thing they'll say it is. Where we draw best practice, and where we learn from some of that and do things on a voluntary basis, will be far more interesting, in my view. And actually, globally, a number of different countries will go through various elections. When we talk about regulation and a government perspective, there's a huge amount of potential regime change globally, which piles uncertainty into this area.

Ben: And Sheila, what's your take on regulation? It's a big topic. How do you think about it? How do you approach it?

Sheila: It is a big topic. Picking up on Mark's comments, it shouldn't be about AI bringing in lots of whole new regulation; it really should be about developing and considering changes to existing regulation to take into account what we're now doing with AI, or what we might be doing in the future. I think we've seen a little bit of that coming through. Federal healthcare regulators, if you think about the Food and Drug Administration, the FDA, have been advancing existing guidelines for the use of AI. And states are also considering regulating data and AI, and that might be rolled out wider to other businesses. Coming from our role advising organisations on what good governance and risk management look like, from my perspective it would be good if there was regulation which drove some change in the use of gen AI, to the point where gen AI becomes trusted, i.e. you can use it safely, securely and resiliently, with strong risk management processes, driven by regulation, to ensure that the adoption of gen AI is done very effectively: use that's fair, not harmful and without bias; that's valid; and where the output is reliable, accountable and transparent. So again, it's very much about driving good governance.
Ben: Mark, you talked about the UK a little bit, and to me as a non-lawyer it sounded almost like a decentralised kind of approach, if that makes sense. Isn't there a risk around that? When you have a technology that's moving so fast, and has well-publicised, let's call them doom-case scenarios, doesn't that approach create too much risk in exchange for the benefit of potential innovation?

Mark: There would be a risk if the use of a technology went completely unchecked, I agree. But I would argue that if you go back to the laws that are there, we're not without those checks and balances. I'm a European and UK privacy lawyer, predominantly in Silicon Valley, and the GDPR, and the UK's GDPR, bumps into almost every project we have in the world of AI, so we're not without that underlying check and balance. It's not like it's a free-for-all. What appeals to me about a decentralised approach is that you can look at things specifically, without a broad-brush, horizontal approach. If we're interested in output from a chatbot, I can see broad AI rules saying you must disclose that you're using a chatbot; I want to know that I'm dealing with an AI and not a human. That's a kind of principle, and it's also already good practice for some. But I'd rather have a set of specific rules from an organisation that's specifically thinking about chatbot regulation than from somebody who's worried about healthcare in one area, policing in another, and military applications elsewhere. The problem with a broad brush is you spend a lot of time talking about maintaining technical documentation, running conformity assessments and preserving the right to audit, but not actually getting into the detail of what may happen. What every country needs is some extra thought around liability and what can happen if AI goes wrong. That's partly a consumer protection issue, protecting consumers from harm, but it's also building on some of these uncertainties. If somebody has stolen my entire database to train their AI and I'm not recompensed, what happens? Can I prevent that? Can I put a simple flag on my website, as a content producer, which says "don't scrape my website", to make sure my copyright and intellectual property are protected? Those kinds of additional steps, I believe, are best thought about by individual regulators that are specialists in a field and are more likely to be calm about it, rather than doing something at a large, unrealistic level, I would suggest. So I think it is actually an area where the UK is pitching it right. And of course, the UK also sees this partly as a Brexit dividend, because it can be different to what the EU is doing, and maybe more permissive around the use of AI. Whether that works or not is open for some debate, but I suspect the UK is going to be far more aligned with what the US is doing in this instance.
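[Editor's note: the "simple flag" Mark asks about does exist in practice, though honouring it is voluntary on the crawler's side. A sketch of a robots.txt file, using the crawler names that OpenAI and Google have published for their AI-training bots:

    # robots.txt - ask AI-training crawlers not to scrape this site
    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

This signals a preference rather than enforcing a right; whether ignoring it carries legal consequences is exactly the kind of liability question Mark raises.]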
Ben: So how does a business ensure they protect themselves whilst also experimenting with these tools? I don't know, Sheila, if you've got any thoughts around that: guiding principles, great ideas, or even things just to avoid at all costs?

Sheila: Yeah. I think, if we start from the cyber perspective, then automating, updating and upgrading cyber security is obviously a key principle. And there's quite a lot that sits under that: things like continuously assessing access privileges, and thinking about continuously evaluating the actual models, whose vulnerabilities could be open to adversarial types of attack in different domains, using what we're seeing coming through in emerging evaluation toolkits as well. And, more from a control and governance perspective, almost going back to the beginning of our conversation, it's applying the basic principles again: policies and procedures, good governance, risk assessments, continuous evaluation and monitoring, processes for monitoring your infrastructure, and really building that environment of awareness, encouraging responsible use and discouraging malicious or unethical applications of gen AI. There's a lot in there, Ben, but they are basic principles that should apply across any technology landscape.

Ben: All this conversation leads me to the question: if you were going to sit with a board, or with a founder, a CTO, someone in the C-suite, what are the key things to think about around risk and regulation, Sheila? We've talked about a lot, but what sort of things would you put top of the boardroom agenda if you had time with one of our clients or prospects?

Sheila: I would suggest, Ben, really spending some time considering what your application of gen AI is going to be and how you're going to manage it. At the moment, gen AI is being adopted and the risks are being considered afterwards, so I'd suggest thinking about it the other way around: having a clear strategy for the adoption of gen AI and thinking through the governance principles as part of that. And going back to a comment Mark made earlier on about doing those assessments: we're good at doing privacy and risk assessments when we're adopting or thinking about other types of change and transformation in a business, so just consider this as another one of those, but a very important one.

Ben: Mark, same question to you. What are the top tips? That makes it sound too simple, but what sorts of things would you be putting top of the agenda?

Mark: Well, there are a lot of top tips, but going back to Sheila's point, this is contextual. So understand, and just take a moment when you're going fast: what am I doing with this generative AI? At a board level, ask what's going in, and should it be going in? And if it is going in, is the provider learning from my data and training on my data? Might it be compromised? Are they just capturing my data via the back door, and am I protecting against that? So think about that from a proprietary perspective, but then also about what's coming out and how that's being used. There are some things that come out which are fine: I work in a law firm, and we have marketing content and blogs that we're happy with. But our actual client advice: has it got human review? Are we careful with that? Are we trusting what comes out? Then it really comes down to: do I have enough business control and policy around what's going in and what's coming out? But realise that control and policy is one thing; you've actually got to educate those that are doing it. So awareness, because the best protection for your business is making sure that individual in Cheltenham isn't putting it in in the first place. You can have all the policy you want, but are they aware? Because in the same way, it's "yeah, it's Friday, I'm moving fast, I'll just cut a corner". But is that kind of corner going to hurt the business? So again, awareness and support: rather than prohibiting and preventing, it's about helping and guiding people within an organisation to understand what good looks like. And there's an awful lot of trickle-down of awareness and education.
Ben: Mark, Sheila, thank you so much. Really insightful. For anyone listening who'd like to know more about generative artificial intelligence, please do take a look at our website; we'll put a link in the show notes. And please keep an eye out for further episodes of The Loop, where we'll continue to investigate the impact of gen AI on businesses. Thank you very much for listening.