The Loop

How will generative AI affect business in the middle market?

January 17, 2024 RSM UK Season 4 Episode 1

In the first episode of our generative AI series, our panel of experts discusses the impact and potential of this emerging technology. They delve into the rise of Gen AI, the crucial role that data plays, and the insights gained by RSM UK. Join host Ben Bilsland, media and technology partner, along with guests Sarah Belsham, data analytics and insights partner, and Chris Knowles, chief digital officer from RSM UK, for an engaging conversation.

For more insights follow the link https://bit.ly/3O6KVM2


And follow us on social:
LinkedIn - https://bit.ly/3Ab7abT
Twitter - http://bit.ly/1qILii3
Instagram - http://bit.ly/2W60CWm

Transcript

Hello and welcome to The Loop. With ChatGPT recently celebrating its first birthday, we're going to be discussing generative artificial intelligence. Joining me are RSM chief digital officer Chris Knowles and data analytics and insights partner Sarah Belsham. So, Sarah, Chris, welcome. Thank you. Great to be here. Thank you.

So nobody had heard of generative artificial intelligence until ChatGPT. Sarah, why has Gen AI taken on so much momentum compared to all the other AI tools that have been out there for a number of years?

Yeah, it's an interesting point, Ben, because obviously AI isn't a new technology, not by any means. It's been around for, you know, 50-plus years. But I think the traditional AI that people may be aware of requires quite a lot of compute and processing power, large volumes of data, and actually quite a lot of very specialist skills around statistical analysis of data. So it's not for the masses by any stretch. Whereas ChatGPT launched and offered a very different flavour of AI: something much more accessible to people, something more akin to what people are used to in their day-to-day lives, more intuitive and not necessarily requiring the level of technical skills that traditional AI requires.

Chris, do you get excited about Gen AI a year on from ChatGPT?

I am excited, and I think, as Sarah said, it's building on a technology that's been around for a while, but it reached a tipping point, didn't it, where the usability and the quality of the natural language output became such that suddenly it was something that OpenAI, in conjunction with Microsoft, as we know, was confident enough to put out to the world, and suddenly the world embraced it. So yes, I am excited about it, and I'm sure we'll come on to some of the reasons why one needs to approach it with caution.
But I think you need to put it in the same category as the internet and World Wide Web, the cloud and smartphones, in terms of a once-in-a-generation technology that's going to have huge repercussions for the business world.

Yeah. I mean, I was thinking about it this morning: why has ChatGPT become so well known compared to the other tools? And I'm just wondering what you might think about that, Sarah. Why has that one really captured the market, captured people's imagination? Why do people think ChatGPT is the only product in some cases?

I think some of that is to do with the marketing and the hype about it. There's been some really clever messaging that's made everybody realise that it is a tool that they can literally log on to themselves: create an account and you're up and running. So I think it's the snowball effect of the marketing, and then someone has a go, does something great, tells their friends, and they all want to get involved. They want to see what they can generate; not necessarily things that are cutting-edge commercial outputs, but actually quite a lot of fun. And it gets people understanding the art of the possible.

We've seen all the exciting things that these tools can do, like images and poems. I was watching a television show over the weekend, and a couple were using ChatGPT, or another tool (other tools are available), to identify where they should move to out of London. I thought that was quite interesting. But Chris, why is this so exciting from a business perspective?

I think Sarah's partly touched on that. It's ease of use: it's technology for the masses. It's something that went viral; that's how it went from zero to hundreds of millions of users globally within a matter of months.
And I think why it's so exciting is because so much of professional work these days involves written or visual outputs. For any business that's got a substantial amount of written or visual output, it's already potentially transformational. I say potentially because your people do need to be trained up to a certain degree in how to use it and some of the controls around it. But certainly for the more routine note-taking or research-based scenarios, it's already a game changer; we're seeing it very extensively in RSM for that. For other industries, such as advertising or the legal profession, it's already been a game changer; we should put that on the table. But as the technology advances, as it surely will, a lot more use cases will come into play.

Where are we on that game-changing journey? If we're going to use words like hype cycle and use cases, dare I say it, the idea that we use the technology just to take notes and summarise doesn't seem as exciting as some of the things we've been promised. So where are we on that journey, from your perspective?

Yeah, a couple of thoughts on that, Ben. First of all, the most productivity-enhancing use cases aren't always the most exciting. There is a lot of routine stuff, data entry and manual copy creation and so forth, that's done across a lot of professions, and if AI can do that, then you're freeing up a huge amount of intellectual time to devote to, let's say, more value-adding things like winning new business or coaching staff. So even though it started with some unsexy use cases, those can actually be massively productivity-enhancing. But also, it's not like software that you buy from a vendor that goes through a gradual upgrade in what it can do every few years.
The capability of generative AI is increasing exponentially every month with the release of new language models and so forth. So it's not a static thing; even since ChatGPT was released, it's already far more capable than it was then.

Yeah. What do you think, Sarah?

Well, I think Chris is absolutely right, and you need to start somewhere that people can get their heads round. So the things we're seeing now are already embedding this kind of new way of working into things that people do in their daily lives, which helps them to get their heads round the fact that some of their work is now being done by a machine: how do I make that part of what I'm doing and feel comfortable with it? Because it's only going to evolve. It's only going to become a more integral part of daily lives, both professional and personal. But people do need to be educated. So I think what we're seeing now is those foundational steps, where people get used to using this kind of technology, and then we can build on that with the more advanced use cases and evolve that as the technology evolves.

But do you think people are bored yet? Just to give some background: I was in California recently, and when we talked about it, some people seemed fatigued by the topic. So I was curious whether you think that's just my experience of the conversations I had last month, or whether it actually differs. Where are people at?

I don't think people are bored. There is a lot of hype, and yes, I've seen a few people say, oh, you know, talking about AI again; but actually they want to learn. They want to understand what it means for them, and they want to understand how to get started. So yes, again, the publicly available tools, the ChatGPTs and the Claudes, are out there.
It's a great way to get started, but you can see that organisations in particular are starting to think: how can I take that to the next level? How can I actually make that a bit more personal to me? So I don't think they're bored, but I think there's a lot of curiosity, a lot of questions.

Yeah, I think partly because it's one of those technology domains that spans both the personal and the business domain, doesn't it? Most people, or many people, are using it on their smartphones: they're using it for recipes or gym workout schedules or whatever it happens to be. And I think that builds both knowledge and experience of how to use it. I'm not sure excitement or enthusiasm is the right phrase, but it just becomes part of how you work. It becomes part of what you do and how you produce things: now, rather than having to start from scratch, you can use generative AI. And I think a lot of people are getting quite familiar with that, actually. So for all those reasons, I think this is going to become second nature to most professional workers within a very short space of time.

It's probably a good time to talk about what we're doing here at RSM, because we are using it. So, Chris, are you happy to share our journey, our experiences and our approach to what we've done around this area, and perhaps touch on some of the key things we've learned?

Absolutely, Ben. I think the first thing to say is that the accounting profession, of which we're part, hasn't exactly been known in the past for being at the leading edge of technology adoption. But in this case, at RSM, we decided that we had to be: we had to be exploring this, engaging our people with this, understanding the risks. And so back in March or April, I think it was, Sarah, wasn't it? Yeah. We put together a controlled pilot. This was pretty soon after ChatGPT made such a splash, and we invited volunteers across the firm.
We had, I think, about 100 people participate in that six-week trial. We tried to strike a balance between bottling, distilling and capturing how they found it, what worked and what didn't, and just letting them get on with it, with some guardrails, let's say, around it. The main one being: don't produce client deliverables using ChatGPT. So do research, do the background prep work, let's say. And what was really interesting, actually, was the number of surprising use cases that dropped out of that. For instance, Excel macro creation. That's one that, when you think about it, is quite obvious, because it's quite a structured thing that generative AI is really good at, and in an accounting firm we've got lots of Excel spreadsheets and so forth. So we went through that process for about six weeks and used it to build confidence to roll out an enterprise-wide secure version of ChatGPT, which of course is Bing Chat Enterprise, which has come off the back of Microsoft's investment in OpenAI and which ensures that the prompts do not go back into the public domain. And from there we've rolled forward into Microsoft Copilot and trialling that. So we've taken what I'd describe as a rapid, semi-structured approach, trying to get democratic experimentation going on, but with some controls and guardrails.

There's loads in there. Just to share, I was part of the pilot, and I remember it was the unexpected things that came out. For example, a team member using it to take their very technical report writing and make it into something closer to plain English. And that made me reflect and think: sometimes what you don't know is the exciting thing that comes out of these trials. What do you think, Sarah? What have you seen at RSM that is useful to share, maybe things that haven't quite worked as well?

Yeah.
I was going to start with one that I think is a great use case for us here. To Chris's point, we produce a lot of reports, a lot of documentation for our clients; not necessarily getting ChatGPT or Bing Chat Enterprise to create the report, but actually to validate what we've written. So if we've created an executive summary at the beginning of our report, we just ask the Gen AI to create an executive summary too and cross-check it against what we've put, to make sure we're absolutely on the mark, because we know that a lot of our clients will read the executive summary and maybe not go much further. So that needs to be really, really watertight. That's a great use case.

Things we haven't had such great success with: I wouldn't say it's not great success, but I think it's the trial-and-error nature of Gen AI. Sometimes people are putting in prompts and not quite getting the response they're looking for, and so they need to tweak those prompts and craft them in a different way. And I suppose there's that element of perseverance: we don't want people to give up because the first response isn't quite the one they were looking for. So there's a bit of education that we need to do there to make sure people are getting the most out of the tool.

What does that look like? Because the people are an important part of this, the prompting and how they feel. So how do we prepare people for Gen AI?

We've got a number of tracks that we're following in this area, being led by our learning and development team.
So we've got a digital training manager, and she's looking at a combination of our own communication, making sure we've got regular comms going out so that we keep sweeping more and more people up in the wave, because not everybody consumes information when it first comes out to them. So a continual stream of communication, and working on some basic training material to help people with the most direct prompting they can use. Because we know that Gen AI is really precise; it doesn't understand nuances. You need to tell it exactly what you want to get the answer you're looking for. So we're looking at training to help people really refine their use of the English language, actually, which I think is quite fascinating.

Just one other thing I'll throw in that I think has been a bit of a limitation so far. The conversation so far has largely focused on natural language outputs, but of course Gen AI is terrific in some cases at producing images and so forth. It's not quite there yet, though, with regard to the quality of the imagery you can include in proposals or reports, partly because all large firms, and RSM is no exception, protect their brand. Brand compliance of the images is something that Microsoft and the other people working on Gen AI solutions are still getting their heads round. I think they're going to realise that for us to have confidence in AI-generated visual outputs, they've got to be brand compliant, and there isn't currently a way to do that, I don't think, that really streamlines the process of, let's say, proposal creation. And I think there are ethical questions around that too.
We have tried it in some of our informal staff meetings, for instance: go off and prepare a presentation using generative AI on whatever topic. And they come back and all of the people that have been depicted are white. So there are ethical considerations around the adoption of the visual outputs of AI that businesses need to be more comfortable about than they are at the moment.

It's a funny point you make about the tools, because, to share an example, I asked it to make a presentation on artificial intelligence. It produced the text, and I said, oh, can you make it more succinct? And it seemed to be able to handle that. Then I asked, can you make it more visual? And all it did was place an enormous photo of a semiconductor chip across things, and I was like, well, that's not really what I need, and yet I see the potential of the technology. And I guess my question is: when you experience something like that in the workforce, what do businesses, what do we need to do to keep people engaged with it, so they don't disengage? Probably quite an important point.

That's an interesting question. Maybe it comes down to not having a continual stream of pressure to try it, try it, try it, because the technology will evolve in steps. A specific example of this: some tools are good for language and others are good for images, and until you get tools that can integrate both, I think we're going to struggle to create the types of artefacts that businesses need, because very few documents these days are just text. They've often got images incorporated, or maybe video content. And I think the Holy Grail is a generative AI that you can prompt with:
"Create me a document that does whatever you need it to do", where the outputs are multimedia. We're not seeing that yet, and we're seeing that in some of the trials with Microsoft Copilot, for instance, which is terrific in many use cases, particularly virtual meeting summarisation: a potential game changer there. But when it comes to on-brand PowerPoint presentations, it's not there yet, because of the lack of interaction with corporate brand templates, but also because I think its strength is really in the natural language outputs rather than the visual outputs, rather than both.

What do you think, Sarah? I always talk about people and AI and tools, because I do feel an important part is the interaction, how we work with these tools. But I guess the same question to you: if it's not quite working, the magic isn't there, how do we keep people engaged with it so they continue to experiment, iterate and find new ideas?

I think we do need to keep encouraging people to keep experimenting, but we need to start to be maybe a little bit more focussed. We're using publicly available tools at the moment, which is great, but we actually don't really know exactly what the underlying data is in those tools. We know it's come from the World Wide Web, but how much of that content has been moderated? What's been selected to be part of the model? We don't know, and therefore we can't be sure what the outputs are going to be. So I think what we need to be doing is encouraging people to think about slightly narrower applications of generative AI that are really specific to what they do on a daily basis, and then identifying the data that would help them achieve what they're trying to achieve. So just really be a bit more focussed and a bit more specific, and I think people will feel more engaged because it's more relevant to them.

Mhm. I think that's a really good point.
And I think all businesses are going to find that their staff will gradually become accustomed to using the everyday generative AI, the publicly available ones: ChatGPT, DALL-E and all the rest of it. It's how you then use the language models that are out there to create specific use cases in your business, using your corporate data, to Sarah's point about the training data, which is essential. You've got more control over that, but it's also how you can differentiate, because it's your data. Your data is such a huge asset, and the way you can leverage the value in that data is by connecting up to those language models that a handful of organisations around the world found the resources to develop, and there is still only a handful, let's face it. But any business can connect to those; they need the software developer and data science skills to do so, to apply those language models to their corporate data sets, and that's where it gets really exciting.

Should we talk about data? Data is not a new word; it's been around a long time. So what's the relationship between data and AI, and why is it so important to understand that relationship?

I think, to the point we were just making, all of these language models, and any traditional AI models, are based on data. So the outputs of the AI you're going to get are only as good as the data that's been input to the models, and I'm sure the majority of people are aware of that. In fact, data has been essential for driving many things in organisations: analytics, automation, now AI. They all require a solid data foundation. I think the difference with AI is that because it's so powerful, it can also be very powerfully wrong if the data is wrong. So the need for really good-quality data is exacerbated, if you like, by the AI.

So, the rise of Gen AI:
Has that changed the way we look at data, or the way businesses should look at data? Or is it the same challenge, just through a different lens? What's your perspective on that?

I personally think it's the same challenge through a different lens, and I think organisations should, if they haven't already, be thinking about having a good, solid data strategy, so that they understand the data they have; they understand where it resides, how it flows through their organisation, who owns it; they have a realistic view of the quality of their data; and they put the right processes and controls in place to actually govern their data and think about how they want to use it. Ten years ago it was all about big data: let's just get all our data together, because there's loads of value in data, which of course there is. But it wasn't very specific in terms of what you actually want to do with your data. So a data strategy with some really concrete goals, what am I going to do, what am I trying to achieve with my data, whether it's an analytics use case or an AI use case, is really, really fundamental, because you can go off in lots of directions and maybe not quite achieve the value you could achieve without the strategy and the vision in place.

It just makes me wonder, Chris: how do you stay focussed? How do you avoid what might look like a scattergun approach to data? What's your perspective on how a board or business stays focussed when tackling data?

I think that is the risk, Ben, of a scattergun approach to this. And I think that's exacerbated by the fact that when we talk about data, we're not just talking about structured data here. We're talking about unstructured data: content, reports, videos, libraries of images. All of those are part of an organisation's data, they've got value in them, and generative AI can learn from and be trained on them.
If you're a creative agency, then it isn't so much the words you've produced in the past that are your IP; it's the visuals, it's the content, it's the videos. And generative AI can increasingly learn from that, as long as it's got the right metadata attached to it, I suppose. But you're absolutely right: you've got to avoid a scattergun approach, which means, fundamentally, I think, understanding how the technology works. You've got to have people in your organisation who really understand how to get the best out of a generative AI model: how to feed data into it, how to experiment with the outputs, how to avoid hallucinations, how to manage the security and the ethics and the privacy considerations around it all. So it does, as most technology adoption plans do, come down to having the right skills in place to get the right focus.

It just feels like a lot to think about, to me, the more I think about it. But taking a step back, what are the key areas leadership teams need to start with at this point? What do they need to think about to get going? You've touched on it actually with the approach, right, but where's the absolute starting point, with a blank piece of paper?

I think the mistake would be for most organisations to say: right, we need an AI strategy; we're going to create the perfect AI strategy. Actually, you do need to experiment. You need to do the kinds of things we've been talking about, controlled trials, to better understand the skills, the safeguards and the use cases for your business. That could then form the kernel of an AI strategy for the future, because the AI that we use now is the worst it will ever be; it will very quickly get better and better and better. So an AI strategy for today, if you were to spend your time trying to create the perfect document that is your AI strategy, would be out of date in a couple of years, if not sooner.
So thinking about how AI, and generative AI particularly, is going to affect your business model is a bit of a fool's game unless you're in the generative AI business, or maybe the software business. For the rest of us: start by trying it out, get your staff comfortable with prompting, and start to bring in the kind of data science skills that you'll need to then differentiate with AI. That's what I think most businesses should be doing, and that's what I'm advising our clients to do, unless you've already got very, very deep data science skills.

Can we go back to the controlled trial, and what that might look like for a pilot? I'd be interested to hear thoughts from both of you: what does a good controlled trial look like?

I think there's a bit of a balance here, because, as Chris said, you need people to experiment. You don't want to be too prescriptive about what you want them to do, but you do need to be able to measure what they're doing. You do need some feedback. You've got to create forums whereby people can share their experiences, provide feedback, evaluate the functionality, share what's good and what's not good, and work together and build on that. If you just set up a trial and say, see you in a year, you're not going to get any value from it.

And you do need someone to coordinate that, don't you? We've got a full-time digital community coordinator, for instance, and even he is snowed under trying to coordinate all these different professional workers, capture what works and what doesn't, and get them to create their tips and tricks and so forth. I think also you've got to involve risk and compliance from the start in a trial, don't you? Mhm. You've got to have some guardrails, which is the word everyone's talking about, and which simply means some dos and don'ts. Don't use personal data.
Don't use it to create client- or customer-facing outputs for now, until we better understand some of the risks around it. But do use it for anything internal that's associated with research or meeting prep, or things like coding and Excel macros and the like, which we've mentioned, which are much less risky, because code is not something that's covered by intellectual property as such; it either works or it doesn't. So I think having those guardrails that people need to understand is a key part of a trial.

There's a cultural point too, isn't there? The nature of a lot of people in the accounting profession is that they see this tool and they want to break it. They'll ask it a question like: tell me how this tax legislation works, tell me how this IFRS is applied. And when it doesn't work, they sort of sit back and say, oh, we're safe, it doesn't work. But actually, part of the culture has to be to take all the narrative and all the conversation around that testing and make it constructive, which takes time. People need to feel safe. There needs to be some joy in it, doesn't there? I don't know, that's how I feel, but I'd be curious how you feel, Sarah, about the culture. How do you get people playing with it in a way that is constructive?

Yeah, and it plays back to the point about the digital community: having people who feel part of an environment where they feel safe to do that experimentation, where they've got support if they need it, because sometimes just a little pointer in the right direction can make a really big difference to whatever it is somebody's trying to do. So having that collaboration helps build that community feeling. But I think on a broader level, around the culture, all of this goes hand in hand with basic levels of data literacy, which are part of a good data strategy. It can't just be about the technology, the tools, the data.
It needs to be about how people interact with those things, how they feel comfortable and gain confidence in the inputs and the outputs, how they feel empowered to challenge, to question, when they think something might not be right. And all of that comes back to basic data literacy and just asking questions: where did that data come from? How did it come up with that answer? Am I sure it's the right answer? How do I cross-check? That's just a big education piece, and it doesn't happen overnight. You have to keep going with that.

I think also people are used to the world of software either working or not working, but with generative AI there are shades of grey: it works very well most of the time, but sometimes it hallucinates, and occasionally it just gets things a bit muddled up. But human beings are fallible too, and I think the reason it reached an inflection point, after years of development of language models and so forth, was because it suddenly became as good in many scenarios as a human being, because we're all fallible; we can put errors into our written outputs and so forth. So it's not black or white, it works or it doesn't; largely, it's good enough for many, many use cases. So we shouldn't be waiting for perfection. We should be using it now, but with the right review around it. It's an assistant, isn't it? It's assistive technology, and as long as you review what it produces, we should be using it now.

So a year ago, broadly, no one had heard of Gen AI. I know some people had, but general recognition was very, very low. I think now it's fair to say most people have heard of it and played with it. So two questions: where will we be in five years' time, and in ten? Who wants to tackle five years and who wants to tackle ten?

That's so hard, isn't it?
Because it's evolving at such a pace that I think it's pretty much impossible to predict specifically what it'll be capable of. In our own profession, there are a lot of assumptions around the types of compliance work and the types of advisory work that a human being is essential for. Now, I still think human beings will be essential for most of that work in the future, but in much smaller ways. The data from the real world that goes into all of our thought processes when we as human beings produce written or visual outputs can increasingly be incorporated into the models in the future, and generative AI will be able to do a lot of the things that human beings currently assume they are, you know, the masters of, for now. So if I'm going to make a prediction, it's that a lot of the current assumptions around what a human being is needed for will be gone. But in terms of specifics, if only I knew, Ben!

You mentioned a lot will change. But is it too early to reflect deeply on the impact of this technology, because so much is unknown?

I don't think it is. I don't think it is. There are a lot of examples of AI in our daily lives that a lot of people don't even realise are AI-enabled, the Siris and the Amazon Alexas and all those kinds of things, which, again, are getting better and better all the time. So I think it's about generative AI being much more woven into the fabric of our daily lives: in cars, in bus stations and train stations, in airports, as well as in working environments. I think it's going to be a case of it augmenting most of what we do in our daily lives, personally and professionally. That's what we need to start thinking about now. And obviously the Bletchley Park summit recently was part of that: beginning to think about the privacy and regulatory aspects of it.
It's definitely not too soon to be thinking about that. We've got to be thinking about an AI-augmented future of humanity, candidly. But to bring it back to the business world: all businesses are going to be AI-enhanced, or most businesses already are, even if they don't realise it, and they'll become more explicitly so in the next few years.

Yeah, and to build on that, the first phase, I'm not going to put years on it, but the first phase is definitely about embedding a different way of working and a different way of thinking across personal lives and professional lives. I think what we'll see, let's say within the five-to-ten-year window, is a lot more proprietary generative AI and AI models, where organisations invest in building something that is very, very personal to them and their workforce. But that can't happen quickly. It's not going to happen overnight; it will take a lot of investment, and it's quite complicated. So I would counsel that organisations don't rush into it. Start thinking about what those use cases might be and what the underlying data needed would be. Get your workforce ready and used to working with this kind of technology, and then invest in the right use case. It might just be one; make sure it's the one that's going to bring the most value.

Thank you so much, Chris and Sarah. If you would like to learn more about generative AI and some of our insights, please take a look at our website; we'll put a link in the show notes. And please look out for further episodes of The Loop, where we'll continue to investigate the impact of this technology on businesses. Thank you very much for listening.
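An illustrative aside on the point Chris makes in the episode about connecting language models to your corporate data: one common pattern for this is retrieval-augmented generation, where relevant internal documents are retrieved first and placed into the prompt, so the model's answer is grounded in your own data rather than only its public training data. The sketch below is a deliberately minimal, hypothetical illustration: the document names and the simple word-overlap scorer are stand-ins, not any particular product's API, and production systems would typically use embedding-based similarity and a vector store instead.

```python
# Minimal sketch of retrieval-augmented generation over "corporate" documents.
# Assumptions: the corpus is a small dict of name -> text, and relevance is
# approximated by counting shared words (real systems use embeddings).

def score(query: str, document: str) -> int:
    """Count how many distinct query words also appear in the document."""
    query_words = set(query.lower().split())
    doc_words = set(document.lower().split())
    return len(query_words & doc_words)

def retrieve(query: str, corpus: dict[str, str], k: int = 1) -> list[str]:
    """Return the names of the k best-matching documents for the query."""
    ranked = sorted(corpus, key=lambda name: score(query, corpus[name]),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: dict[str, str]) -> str:
    """Assemble a prompt that grounds a model in the retrieved documents."""
    context = "\n".join(corpus[name] for name in retrieve(query, corpus))
    return (
        "Using only the context below, answer the question.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

if __name__ == "__main__":
    # Hypothetical internal policy snippets standing in for corporate data.
    corpus = {
        "travel_policy": "Staff may book standard class rail travel for client visits.",
        "it_policy": "Do not paste client data into public AI tools.",
    }
    print(build_prompt("Can I paste client data into public AI tools?", corpus))
```

The assembled prompt would then be sent to whichever model the organisation has approved; the grounding step is what makes the answer specific to the firm's own data, which is the differentiation the episode describes.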