The Loop

The ethical implications of generative AI

March 25, 2024 | RSM UK | Season 4 Episode 3

In the latest episode of our generative AI series, we explore the ethical considerations surrounding the use of generative AI tools. We uncover the challenges posed to human workers, the complexities of consent and ownership of the data that fuels these tools, and the biases and errors in the results they produce. Our expert panel, featuring host Ben Bilsland and RSM consultants Dr Priya Khambhaita and Matthew Clark, reveals the deep ethical consequences for businesses and society.
For more insights on generative AI, explore our Real Economy report

And follow us on social:
LinkedIn - https://bit.ly/3Ab7abT
Twitter - http://bit.ly/1qILii3
Instagram - http://bit.ly/2W60CWm

Transcript

Hello and welcome to The Loop. We've spent the last few episodes talking about the huge impact of generative AI across the economy and across businesses, and about what business leaders need to be thinking about around this really exciting topic. Today we're going to talk about ethics. Normally this topic is left to philosophers to debate, but the ethical debate around gen AI has cascaded, and will continue to cascade, through boardrooms everywhere. So it needs lots of careful consideration, and I think a lot of people are saying, okay, where do I start? Joining me today to demystify the debate are two members of our RSM consulting practice: Dr Priya Khambhaita from our strategy, economics and policy team, and Matthew Clark, who's a member of our data team. Welcome both to The Loop.

Thanks for having us.

Let's talk about ethics. With gen AI becoming so widely used so quickly, Matthew, why don't you start: what are the major ethical concerns around the take-up of these tools?

I think with generative AI there's a lot of concern about whether it's actually going to replace people. What are we going to do in the workplace if gen AI takes over and we suddenly find there are loads of people out of work, because they're not needed to do some of the more basic or entry-level tasks that we had previously? There's obviously a lot of debate in the press at the moment about how generative AI was trained, where that content came from originally, and whether people consented to having their life's work used to train artificial intelligence. I think those are two big areas of concern for people at the moment.

What resonates with you, Priya, or do you have more to add?

So within the social policy team that I sit in, we run evaluation and research studies, and for us ethics is about ensuring that we conduct those research and evaluation studies in an ethical manner. That means the structure of the study, for example deciding which specific groups, stakeholders and subgroups we should be including; the specifics of the research materials themselves, so the survey design and which topics are to be included within that, or a focus group discussion guide; all the way through to informed consent and what happens with that data. I think ethics touches on the entire type of work that we do and our field of study. What is interesting, though, is that if we're able to obtain informed consent from research participants in a prompt way, in an efficient way, with the help of AI, then we're able to analyse data, records or data sets efficiently and disseminate the findings from that quickly for the benefit of the public. An example of that is Covid-19: we were able to do quite a lot of studies around Covid-19, the take-up of vaccinations and what the public thought about them, and collecting data on symptoms and those sorts of things. And we were able to do that quite quickly with the right tools, and to break down data sets by variables and characteristics to inform policy more quickly. So there are a lot of opportunities there, if it can be done in an ethical way.

I'm especially interested, Priya, that you're using generative AI already in what I might call the design of your surveys and your work. Is that correct?

Yeah, that's correct. We can actually use generative AI in a few different ways.
We can get a starter for what the study design might be in the different work strands and the type of research methods we might use. So, would a survey be appropriate? What type of survey, and what would that survey be focussed on? Or should we be including some focus groups and in-depth interviews, or some observation? That kind of thing. At the moment the team are using AI for a few different reasons. We are using it to help us find key data sources for literature reviews and to underpin the work. We're using it to decide on the best way to reach certain participants for our studies, and that might include approaches to reaching them: what kind of gatekeepers should we be going through, how can we reach them, and what's the best way to communicate with them? And another way we're using it is in actually analysing some of the data. We can automate that for the qualitative data we collect from interviews and focus groups: AI can do the first round of analysis and pull out the key themes that are emerging from those sorts of discussions.

You talked about a load of things there, and Matthew talked about this whole ethical point around whether these tools replace the workforce. So what's your experience of using the early-stage gen AI tools versus using perhaps a more junior team member or a more experienced team member? What have your findings been?

Yeah, this is really interesting, and I was talking to the social policy team I work with about this. There are some things to consider. For example, I said we do literature reviews and systematic reviews of the evidence that exists out there. One thing we have to be careful about is that sometimes these generative AI software packages and tools actually make up some of the sources, or are incorrect about the dates and the information attached to them. You can also use this type of software to chug through many different papers and lots of sources all at once, to do a first round of analysis and pull together a synthesis of what those papers are saying. We've got to be careful when that happens, because sometimes what it's pulling out is not everything we want to be pulling out, especially around inequality, hard-to-reach groups and subgroups that are not being included. The thing around replacing people is an important point, I think. When we're analysing qualitative data, even if we're using AI tools to do it, we've got to make sure we don't lose what we would have if we were doing it manually as members of the research and evaluation team: putting together the narrative of what it all means against the evaluation questions we're trying to address. So it's ensuring that those touch points you would normally have as you work through different transcripts or interviews are still there, that you're still talking to each other about what it all might mean, doing that analysis piece as you go along, making sense of it and doing justice to the nuance in the data, and then, as you work towards reporting, putting together your recommendations and findings.

So you're still finding you need a human as well as the AI in what you're doing?
Absolutely, yes. And sometimes it takes things very literally when people, members of the public, are talking, when actually the meaning you attach to a discussion or a point being made in an interview or a focus group isn't something to be taken literally. It's something that needs to be thought about a bit more carefully, and that isn't there yet in any AI tools that are available to us.

Okay. What do you think, Matthew?

I think that human in the loop is still really important. Generative AI isn't at the stage of fully replacing people at the moment; it's not got those capabilities. And having that human in the loop, or human on the loop, as part of that sort of decision intelligence methodology is, I think, really important. You do need someone to review the output; you do need to make sure it's accurate. We're certainly not at that stage with anything we've done as a business: we can trial things using generative AI, but it still needs that expert review before it's ready to go anywhere near a client or to actually be used on its own.

So we have tools we're using more and more, and we have humans using these tools. To what extent do we need to disclose that new partnership?

I think it depends on the situation it's being used in, and ultimately whether you as a business are happy with the output that's going to your clients. If you are a business that has replaced all of your human workforce with AI, or a large portion of it, that's something that's likely to get out there, and people may or may not have a problem with that. I don't think we're too far away from a situation where people will be saying "we don't use generative AI", where people are proud of the fact, or advertise the fact, that they aren't using generative AI, because it probably will cost them more to do that. So I think it's something that's going to evolve over time, but ultimately, if you've reviewed the output and you're happy with how that output looks, I don't think it's necessarily something you need to be telling absolutely everyone that you deal with.

Interesting. Priya, what's your perspective on disclosing the extent to which you're using the tools in the work you do, or indeed out in everyday life, say when you're engaging with a chatbot to book a holiday? What's your perspective on this sort of thing?

Oh, I don't really use it to book holidays; my husband is far more advanced in using these types of tools for that type of thing. But in terms of transparency around using it, I think that is important. A lot of our clients are departments that sit across His Majesty's Government, or third sector charities and those types of organisations, and they are actually building up their own approaches and perspectives around the use of AI for the type of work that we do in research and evaluation. My general view is that it's important to be transparent and to say how you've used the tool, but also to ensure and reassure ourselves and them that we have followed the appropriate quality assurance processes: that we haven't taken anything at face value, that we've stress-tested it, analysed it and interpreted it as it should be and with the depth it should have, and used it as a tool rather than a replacement for a key member of the team, for example.
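As a rough sketch of the workflow described above (an AI first pass over qualitative material, followed by a human review step), the short Python outline below shows one way such a pipeline could be shaped. The call_llm helper is a hypothetical stand-in for whichever generative AI service a team actually uses; this is an illustration of the pattern, not a description of RSM's tooling.

# Sketch: AI-assisted first pass over qualitative data, with a human
# review step kept in the loop. call_llm is a hypothetical stand-in for
# whichever generative AI service a research team actually uses.

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to a generative AI model and return its reply."""
    raise NotImplementedError("Wire this up to your chosen model or API.")

def first_pass_themes(transcripts: list) -> list:
    """Ask the model for candidate themes in each interview transcript."""
    themes = []
    for text in transcripts:
        prompt = (
            "Identify the key themes raised by the participant in this interview "
            "transcript. List each theme on its own line.\n\n" + text
        )
        themes.extend(line.strip() for line in call_llm(prompt).splitlines() if line.strip())
    return themes

def human_review(candidate_themes: list) -> list:
    """The researchers confirm, merge or reject candidate themes before anything is reported."""
    reviewed = []
    for theme in candidate_themes:
        if input("Keep theme '" + theme + "'? [y/n] ").lower().startswith("y"):
            reviewed.append(theme)
    return reviewed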
So Matthew, talking about bias, what are the implications of biased or inaccurate data inputs in a gen AI algorithm?

I think one of the things we need to remember with a lot of the large language models we have at the moment is that they were largely trained using data that came off the internet. It's one of the reasons they could happen now: there was that huge amount of data for them to be trained on. And what we saw with some of the earlier versions of these models is that they had real issues at times; they could be really racist, for example. And that was partly because they'd been trained off areas of the internet where there had been a lot of disturbing content. That's something we need to be aware of: what we train these models on is going to really influence the output you get from them later. Now, of course, the next step is that someone goes in and says, well, obviously this is wrong, we don't want this. So they go in and edit the dataset to change what that model looks like. Of course, you're then introducing another layer of bias, because someone is dropping in to change the data you're training on. It's all got to be trained on something somewhere, and you've really got to think right back to that initial data source, because that's really going to influence the output you get from your model at the end.

Isn't there a risk, if we talk about editing the data like that, that it's akin to having a Lego set and saying, let's remove elements of the Lego set? Doesn't it damage the completeness of the product, if we're approaching it that way?

Absolutely, but then also, was the internet itself necessarily a perfect source of data to start with? There are certain people who are likely to be much more active on the internet, and certain sources of data you're going to find much more easily, like news websites or forums and things like that. That isn't necessarily the whole of humankind; it's the part of humankind that's on the internet. And that isn't necessarily the full data set to start with. So absolutely, perhaps you are starting with Lego and ignoring another building tool you could be using instead, even just to start with.

I think what Matthew is saying is quite interesting, and it resonates with me: it can reflect something it shouldn't necessarily reflect. When we are asking AI tools to help us design an ethical approach to a study, or indeed the individual research tools and materials we use, we don't want it to just reflect us, because that might play out in terms of the privilege we carry with us through class or any of those other sorts of characteristics. So I think it's really important to be mindful of that. There's the quality assurance process, and often in our studies we would have a lived experience panel, to ensure that we bring them along on the journey and that what we're producing is actually fit for purpose and relevant, something that could benefit them rather than something we do to them. Which is kind of the crux of ethics for us in the type of work that we do.

Have you had any weird and wonderful examples where using the tool has thrown out very unusual biases, or things that are inherently wrong, where you as the human reviewer have been able to capture it and look at it?
I think the normal thing we see is that the groups we would like to help through the work we do, in terms of the recommendations we make for the clients we're working for, are those who are the most disadvantaged in society, and many of them have issues around trust. And trust is important around ethics and research and evaluation as well, because many of them don't participate in research and evaluation: first of all, they're quite difficult and hard to reach in the first place, and there might be other barriers, for example language barriers, that need to be overcome before you can speak to them and gather data from them. Then what often happens, when we're looking at these outputs, is that some of the key groups you should be including among the key stakeholders, and the means of engaging them in a meaningful way, often get missed. So I think that's one thing that routinely happens.

Yeah. We've been trialling a meeting summarisation tool internally, and this is a very small-scale example, but with the way that LLMs work, they are effectively predicting what the next most likely word is over the course of a sentence. And we had a colleague of ours, who is female, who just said she was going on leave for two weeks. In the meeting summarisation notes, it said she was going on maternity leave. That's not what she said at all, but the AI had assumed it was likely that that was what she must be doing, being off for a few weeks. So there's an inherent bias within the data set or the algorithm there; something has just predicted it. And obviously this goes back to our human in the loop: you need someone to go back and review that, to make sure it's accurate for anyone who's actually following those meeting notes afterwards. But again, the data set itself has led the model to just assume something.
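Matthew's meeting-notes anecdote comes down to the point he makes about how LLMs work: the model predicts the next most likely word given what it has seen in training. The toy, count-based next-word predictor below makes that mechanism visible on a deliberately tiny invented corpus. It is nothing like a production LLM (those use neural networks trained on vastly more data), but the principle is the same: if the training text skews one way, so do the predictions.

from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows each word in a tiny,
# invented "training corpus". Because the corpus pairs "on" with "maternity"
# more often than with "annual", the model predicts "maternity" regardless
# of what the speaker actually said.
corpus = (
    "she is going on maternity leave . "
    "she is going on maternity leave . "
    "she is going on annual leave . "
)

follows = defaultdict(Counter)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("on"))  # -> 'maternity', purely because the corpus skews that way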
So we have the potential for quite deep impacts on society through ethical bias within gen AI. What responsibility sits in the boardroom? Where do leaders start when thinking about their role in using gen AI ethically, and the things they have to think about to prevent, perhaps, a widening of existing inequalities, or an impact on cultural norms? What's the responsibility of your standard business leader, as we sit here today?

I think business leaders are slightly struggling; they actually want more help with this from governments. You saw leaders of some of the gen AI companies out in the States, before Congress, talking about the fact that they would actually welcome some regulation in this space, because they're not quite sure what the boundaries should be themselves at times. And I don't think we've had a societal discussion. We've not really talked as a society about how we feel about the decisions gen AI will be making and the places it's going to be used, and governments are struggling to catch up. We saw Rishi Sunak's summit last year, we've seen Biden putting out some regulation, but the technology keeps moving at a faster and faster pace. And I think businesses generally are crying out for some of that support, because they're not quite sure of the limits they should be going to.

Are we saying business leaders do nothing until the regulators move? And not just business leaders, but people in positions of responsibility, headteachers, trustees?

I think Matthew's right in that it's quite difficult to keep up, and so it's difficult to know, from their perspective, where they should be drawing the line, where they should be giving that role of agency, and what's open to interpretation. I think it's difficult. From our perspective, and you mentioned social norms before, Ben, I'll come back to this point around trust. For example, if you're doing a lot of research around health inequalities, a lot of members of the public, particularly from certain sections of society, do not trust health services. They do not trust government, they do not trust service providers. They will not give their data out and they will not participate in these types of studies, and there are a number of reasons for that. What we've got to ensure is that the use of generative AI doesn't widen that gap even further, so that those sections of society don't participate in these sorts of feedback mechanisms constructively and meaningfully, and are then routinely out of the picture in terms of how we can help them, how we can be there for them, and how these key institutions and key decision makers can avoid ignoring them.

Matthew, I'd be really keen to get your thoughts on this. We're talking about data a lot, and that leads into a whole area around ethics, which is ownership. Just to touch on an example, we've seen the writers' and actors' strike recently, which shows some of the difficulty around ownership and IP rights when gen AI exists. So how do we move forward with all the discourse around ownership, when there's so much uncertainty about how the technology works and the data it sits on in the current environment?

Yeah, absolutely. It's not even just uncertainty; it's a complete lack of understanding of how the technology works, when even the companies themselves are sometimes coming out saying, we don't quite know how it came to this conclusion. These models have got so big and so complex that they're struggling themselves to do that. With the data underpinning it, the cat's always slightly out of the bag: the data has already been used to train these models. Absolutely, we can try and roll them back, but that was shown in Italy, where they tried to stop the use of ChatGPT at one point and very soon after had to carry on allowing it, because the technology and the tide of opinion had already moved past that point. I think, absolutely, there is a real issue here with rights over the data these models are trained from, and with the fact that really they're not generating anything new: they are just taking what they've learned and regurgitating it as "this is the next most likely pixel, the next most likely word". It's not really new content; it's a regurgitation of whatever already exists. So it's been built from someone else's work initially, and those people aren't being credited for it in the way they perhaps deserve to be.

Is everything on the internet free data we can all use? Just by being in the public domain, is it okay for us to use it to train models and do what we do?
This is interesting, Ben, actually, because now that there's a lot of data available online, a lot of researchers and evaluators will be looking online for free data on topic areas it might be difficult to survey people on, particularly sensitive topic areas, for example, or indeed, as I said, hard-to-reach groups. But the broader research and ethics community often has ethics councils or research ethics committees that will sit and evaluate whether your approach is actually ethical, and different ethics boards will give you different answers to the question: can we actually use that data for this purpose? And if we are going to use it, what caveats should we apply, and where can we give acknowledgements to our participants? It raises those sorts of questions, I think. But it has enabled us to do research and understand a bit more about certain phenomena that we would not have been able to otherwise.

I think as well, if you think back to the late 90s and early 2000s, when people were quite regularly, basically, stealing music off each other, everyone was ripping CDs and passing them around, and that created a whole new model for how music was distributed. Now you have places like Spotify and Apple Music, because it completely upended the way people consumed that media content. I think ultimately there's going to have to be another way for these businesses to absorb this data. But as you say, the data is already out there; people are going to be, or probably already are, unscrupulously training on this data because it's available. So we're going to need to try and come up with a different model for the way it's distributed in the first place, to make sure people get the credit and the monetary rewards they probably deserve for having put it out there in the first place.

The music sharing model is a really interesting example, though, because what we're talking about is still the same debate: if something is basically out in the public domain, is it free to use? And what does that mean if you're a business leader? When you're conducting your activities and you start using large language models and gen AI tools which are trained on this, what are the consequences of not thinking about it?

It's going to be interesting to see if the regulation does start to catch up and people do have to start rolling back. If you've really committed to something and regulation comes along and forces you to roll back on it, that's potentially an issue. Or if it turns out you're going to have to start paying extra for it, that's something people are going to have to look into. In a purely ethical sense, how bad do people feel about this? As a society, we've not turned round, jumped on this and immediately said, this is wrong. People have enjoyed the fact that they can have a recipe written out of what's in their fridge, or turn something into poetry, even though that's still been trained off work that someone else has done. But it seems to be a bit like how we used to quite enjoy exchanging music with each other: we've decided that this is something we maybe don't value enough, and that we're quite happy doing.
Unless something can change that kind of cultural feeling, this might be another area where we're starting to devalue the work of writers, or whoever else it is, because we've decided we like the end product more than we like what was there before.

What's your take, Priya, on this area of ownership? What's your starting point, I suppose, when you're thinking about it in the work you do?

Yeah, ownership is important, I think, and it comes back to who we are doing the work for in the first place and who the beneficiaries are, as well as the outputs from a particular research or evaluation study. If we were to ask AI who the key audiences are for the outputs that are going to come out of a project, and it doesn't include the types of groups, or the channels for communicating and feeding back the recommendations and results from a project to the wider sections of society it should be including, then that shared ownership of these types of projects or programmes of work isn't there. And that's where some of these issues around survey fatigue, mistrust, and not being willing or wanting to take part in research and evaluation activities come from: that lack of feeling of ownership and lack of power. So I think ownership is quite important, and there are different ways of looking at it. For us, it would be around the audiences for the outputs, and the types of outputs we produce, which shouldn't just be clunky reports that only somebody who's highly educated and from a very well-off background could actually sit and understand.

Do you mind explaining what you mean by that point, sorry? How AI can be used for that sounds like quite an interesting area to explore.

It does, yeah. So I think increasingly there's a lot of pressure, and rightly so, on researchers and evaluators to think about how you are disseminating research findings and how you are continuing the conversation around some of the key policy debates and the key issues facing society. And it's not fair, really, to just collect data from people and then hide it away, rather than including them through the journey of that research and evaluation study, with multiple points of actually being able to feed into that study and into that work, and then deciding at the end: what happens with this now, what can we do to improve on it, what can we do to build on it, and how is it actually going to benefit anyone? That's where things like the types of outputs and the types of communication around some of these findings and recommendations are important. It could be that you just need to create a leaflet that's very easy to understand, in a different language, to ensure that those who helped you do your study are actually able to engage with the outputs and feel an ownership of them.

I think that ownership-of-the-output point is really interesting, because at the moment we haven't really defined who has the copyright on the output from gen AI materials. You've got the content it was originally trained on: somebody had the copyright to that and, as we've just been discussing, that's possibly often been ignored. You've got the person who crafted the prompts, who did the prompt engineering to get the output in the first place. You've got whoever it was that created the model, maybe the company that owns the model, or maybe it's a blended model that you've created as a business.
There are a lot of different steps there, so who actually owns the output from that? It's something that actually varies between countries: copyright laws, even just between the UK and the US, define differently who would own that copyright, or whether the copyright can exist at all. Again, that's certainly somewhere regulation is trying to catch up, and I don't know if we've really discussed, as a society, who we think should own the copyright with all those different inputs along the way.

It's a really good point, and it leads me to think about creativity, because we're talking about tools that can create outputs that at a minimum look creative; whether it is creativity or not is a debate we could kick around. What are we risking here in using these tools, when we think about creativity, autonomy, and even a phrase I came across last week, which is human dignity? What's at risk?

I think we're going into the art debate here, aren't we? It's very much in the eye of the beholder, and ultimately I think it's something we're all going to end up deciding over time: whether we respect that output the same. I think there is an element there where we respect the output of people putting the effort in to do something, and we may not respect the output of gen AI in the same way. We may be able to look at it and utilise it; whether we're going to look at it and think it touches or moves us in the same way, I don't know. And as you say, a lot of that's going to come down to how we know it was produced and whether that affects us or not.

I think what Matthew just said raises a really important question: creative to whom? Coming back to what I said earlier about research methods and different ways of collecting data, it also links to what I said about outputs and sharing findings. Co-production is an important piece for us: ensuring that we are creative not just from our relatively privileged position, where we're able to access different types of art and media, but for those who are not so able to access these types of media and forms of communication. That's important if we are going to ensure we are creative in a way that's ethical.

So actually, you're saying there's a route through the technology to improve access and fix inequalities that exist in the system, by using and deploying these technologies. Is that what we're talking about here?

Well, I think it can certainly help, but it's important that we include the right people in that.

Matthew, how can businesses ensure their use of AI is compliant with individual privacy rights, especially when handling personal data?

I think you need to be really careful to compartmentalise your internal data before it's used. You can't just have a model that runs on absolutely everything internally; you're still going to need the same access rights that you currently have. The model shouldn't be able to access something that you can't already see, and that's going to involve making sure your data is well aligned before you start with anything there. I think as well, depending on what data you have, it's about making sure it's not going out and being used to help train the model. If you put something into many of the online models, your data can absolutely then be used, and could be being used, to help train things. Just as we talked about, once data's out there, it can be used. If you do have access to a model that you've built up internally, and it's based just on your internal data, at least you know it's not then going out and being used to help train that model, or being used in the further development of that model in the future. I think this is an area that's only going to grow in importance. If you've used any of the gen AI tools at the moment to help you write an email, for example, you'll find that it's not written in your style afterwards, and that can be a bit jarring for someone who receives it and is used to the normal way you write an email, and then receives something that's been written by gen AI. Even if it might read like a human, it's not going to read how you would have written it. In future, you're going to want to end up in a situation where models are being trained on your own data, and you're going to very much need to make sure you've got the consent of the people who are giving you their history of email writing or other content to help train those models. That's something that's certainly going to be an area of concern and development.
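One way to read Matthew's point about compartmentalising internal data is that whatever layer feeds documents to a model should enforce the access rights a user already has, before anything reaches the model. The sketch below is a minimal, hypothetical illustration of that filtering step; the Document class, group names and example data are all invented for the example and are not a description of any particular product.

from dataclasses import dataclass

# Minimal sketch: filter internal documents by the requesting user's existing
# access rights before anything is passed to a generative AI model, so the
# model can never surface content the user could not already see.

@dataclass
class Document:
    doc_id: str
    allowed_groups: set
    text: str

def visible_documents(user_groups: set, documents: list) -> list:
    """Keep only the documents the user is already entitled to read."""
    return [d for d in documents if d.allowed_groups & user_groups]

def build_prompt(question: str, documents: list) -> str:
    """Assemble a prompt from the question plus only the permitted context."""
    context = "\n\n".join(d.text for d in documents)
    return "Answer using only the context below.\n\n" + context + "\n\nQuestion: " + question

# Invented example data: the payroll document is restricted to the "hr" group.
docs = [
    Document("payroll-2024", {"hr"}, "Confidential payroll summary ..."),
    Document("handbook", {"all-staff", "hr"}, "Leave policy: ..."),
]
prompt = build_prompt("What is the leave policy?", visible_documents({"all-staff"}, docs))
# The prompt now contains the handbook but not the payroll document, and can be
# sent to whichever internally hosted model the business has chosen.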
I think informed consent is an interesting one. As best practice, we have to obtain informed consent from our participants, and we do our best to do that and give whatever assurances we are able to around anonymity or pseudonymisation, for example. But now, where you can access different versions of data sets even though they are anonymised, there is something around how clear and open we can be with our participants about the end users of their data. We won't actually know who the end users of their data are. We can tell you, on a given day, that we think it will be this, but thereafter, what will happen to that data and what purposes could it be used for? We don't actually know, so it's difficult to give assurances around that and obtain informed consent.

Is there a risk that people don't know what they're signing over?

I think there is a risk of that, and that also feeds into those issues around research fatigue, survey fatigue and trust.

Yeah, there was an example not so long ago where I think it was OpenAI who were in London with a piece of technology that was scanning people's eyeballs, to create a kind of data set of global eyeballs for, well, I'm not even sure what their purpose was. And in exchange for a scan of your eyeball, you received a few Bitcoin tokens. I'm very happy to say that for me, that feels like an unequal value exchange. I'm not willing to give over something that's highly unique to me to an organisation without knowing what they're going to do with it. But that's me, with a relatively informed understanding of the world. What do we need to do to make sure people understand as much as possible what they're signing over?

Yeah, I think this principle about unequal exchange and power relations is really important, and it's been an ongoing debate for many years in the field of social science research and policy research, for example. An example was given to me: I have done a lot of work around substance use and barriers to treatment, where those who have been using substances are asked very personal questions, and they're saying, hang on a minute, you're asking me about my medical history, you're asking me about my relationship history, why don't you tell me that?
This isn't a situation or research encounter that I was in myself; it's just one that a peer was sharing with me. And you think, actually, that's fair. We're asking these people to share a hell of a lot without giving anything back. I think it's important for us to think about that, linking into the ownership we were talking about earlier, and lived experience panels and co-production. I think that's such an important point to consider.

Does gen AI just turbocharge a debate that's existed for years? What do you think, Matthew?

I'm not sure there's been enough of a debate. There was a whole scandal with Facebook, and people hadn't realised how their information was being used. People don't read the terms and conditions they sign up to. You get an end user licence agreement, you scroll to the end and you click agree, because it's so long and so complex that you never get to understand it. And we're doing that with countless products every day. Again, I don't think there's been enough discussion as a society about whether we're happy with that. It seems we generally sleepwalk forward, saying we're all kind of okay with it, until something blows up and we realise how something was being used, and then we have an issue with it. Again, that ties back to some of the writers' strike and people complaining about how their journalism has been used to train models.

We could talk about it all day, and indeed we have. Thank you so much for joining me today, Priya and Matthew. If you're listening and you'd like to find out more about generative artificial intelligence and RSM's own research into the topic, please do have a look at the RSM website. Thank you for joining us today, and look out for our final episode in this mini-series, where we'll be discussing the future of AI and how it could change the world over the next 20, 30, 100 years. Thank you for listening.