Fortune Favours The Brave
A regular podcast for business leaders exploring how businesses can harness risks and use them to their advantage. In each episode Howden Insurance Brokers will discuss a topical challenge or issue and what business leaders can do to overcome it.
https://www.howdengroup.com/uk-en
The impact of AI on accountants
Artificial intelligence is changing the way we work from every direction. It’s creating new opportunities for efficiency and insight, but as our dependence on it increases, so does the potential for risk.
In this podcast, Howden Claims Director Neil Williams is joined by George Smith and Graham Reid from RPC to explore:
- How can AI potentially help accountants?
- What do the regulators think of firms using AI?
- What are the risks of using AI?
- What can be done to limit or mitigate those risks?
Welcome to Howden's podcast, Fortune Favours the Brave. We all take risks in our everyday life, and business is no different. In this podcast, we're speaking to the experts about a topical challenge or issue and what business leaders can do to overcome it.
SPEAKER_02:Hello and welcome to another episode of the Howden podcast series Fortune Favours the Brave. My name's Neil Williams, and I'm a claims manager here at Howden. Today the topic is going to be AI adoption in the accountancy profession. And I'm delighted to be joined by Graham Reid and George Smith of RPC, a leading law firm. Welcome to both of you. As per tradition on these podcasts, can I ask you both to tell us about a risk you faced recently? George, if I could start with you.
SPEAKER_03:Yes, hi Neil. A risk I faced recently was finding myself on a very cold and frosty January morning taking my nine-year-old son to our local junior parkrun for the first time. Something I've been trying to get him to do for a long, long time. And as we stood there on the start line with five layers of clothes on, shivering away, I thought, this is a real risk. I might have put him off running, and potentially off exercise generally, for life.
SPEAKER_02:That's brilliant. Thanks very much for sharing that with us. Love that story. And over to you, Graham.
SPEAKER_01:Well, the risk I faced recently was running after a complete stranger, yelling at the top of my lungs, that's my dad's car! It occurred actually just outside Howden's offices, and it was my dad's car from the 60s, a bit of an unusual one. I recognized the number plate, and I scared the living daylights out of this person by saying, Oh my goodness me, I used to play in the back seat of this car. Is it yours? I thought he was going to punch me, but fortunately we had a nice conversation instead.
SPEAKER_02:Oh, that's an excellent story. I love that one. That's brilliant. Thanks both very much for sharing those stories. So to kick off, let's get back to basics. If I start with you, Graham, what do we actually mean by AI? How far have we got with it, and do we actually trust it?
SPEAKER_01:AI is a word we're seeing constantly in the press these days. For this particular context, the use of AI by accountants, I think we're really limited to generative AI, and in particular large language models. And if you want the whole area summarized in a few words, we're talking predictive text machines. It's a black box. It's very difficult to say what goes on inside it, but it somehow manages to produce perfect text, and it's been trained to do so on pretty much everything that's ever been written on the internet, up until recently.
SPEAKER_02:And, you know, are you much of a skeptic or are you enthusiastic about AI yourself?
SPEAKER_01:It's a subject that attracts a very wide variety of views. If you're a company that's invested, you're going to be a true believer. Somebody who's looking on at the spectacle of how the stock markets are rising on the back of people selling video cards and the like, they may be wondering, is it actually going to crash? Maybe I should be more skeptical about it. And then there's always the AI curious, the ones who are dabbling with a little bit of search engine enhanced by AI. The main thing I think is you have to be informed because it is coming, it is there, it is already part of our working lives. And having a critical but balanced perspective on AI usage is key. So maybe we'll move from belief to something approaching understanding one of these days.
SPEAKER_02:And just as a matter of interest, George, are you a skeptic or a true believer?
SPEAKER_03:I'm somewhere in the middle, I think. I do use AI quite extensively, but I like to think I use it with a healthy degree of skepticism. So I never trust anything it tells me, but it can be a useful cross-check or starting point for generating ideas.
SPEAKER_02:And so just turning to you again, George, how do you think AI can potentially help accountants? What are the areas you see it being talked about at the moment?
SPEAKER_03:Yeah, I think there are a number of different layers to this question. All professionals can make use of AI, of course, and that goes for accountants just as much as any other profession. So basic tasks: research, drafting content, summarizing, checking compliance, collating and searching information. All these things, of course, are as relevant to accountants as they are to all professionals. Above that, you have another layer, which is specific tasks that AI can achieve for accountants. You might, for example, be able to achieve greater and more sophisticated automation of some of the more routine elements of an accountant's role. So I'm thinking about things like bookkeeping, data entry, payroll management, potentially tax preparation and filing work. Then above that, you have the slightly more sophisticated layer: are there accountant-specific roles and jobs and tasks that can be improved, made more efficient, made more sophisticated through the use of AI? So, for example, tax advisory work, obviously an incredibly complicated and sophisticated area, but a number of firms are looking at whether they can use AI tools in order to improve their processes there. And we'll perhaps come on to talk about this later, but there is now some regulatory guidance in that area. The other potential use case, I think, that a lot of accountants are looking at at the moment is around audit. Audit, in many ways, is the perfect environment to let an AI system loose, because you have a huge volume of information, and what you're doing is interrogating that information, you're testing it, you're trying to form conclusions off the back of it. And this is what AI is really good at. So a lot of firms are looking at implementing AI tools to assist with the audit process, both in terms of actually undertaking parts of the audit itself, so interrogating data, producing conclusions, but also in terms of the audit planning phase.
You know, you don't necessarily use less resource to undertake the audit, but you use AI to make the process more efficient. So you do a first pass with an AI tool, it identifies the areas that are potentially higher risk or lower risk, the areas that need more human observation and interrogation. And then you can apply your human resource to the audit more efficiently.
SPEAKER_02:Excellent. Are most firms getting engaged with AI, or is there a disparity between the larger ones, the big four, and the smaller firms, in your experience?
SPEAKER_03:There definitely is a disparity. I mean, all of the big four are putting huge amounts of resource into AI systems, including in relation to audit, and that's been well reported. Mid-tier firms, I think, are starting to go down that route. A number of them are looking at options, but I think there is definitely disparity across the industry as a whole.
SPEAKER_02:Yeah, that certainly chimes with my own experience speaking to my client base: the smaller to mid-tier firms are starting to explore these issues, but it's the larger ones that are really getting into the nuts and bolts of it at the moment. And, you know, watch this space to see what happens next.
SPEAKER_03:I think that's right, because obviously when you're at the forefront of these kinds of trends, you need a lot of resource in order to develop systems from scratch, without necessarily having the precedents there to create shortcuts. So I think it is the larger firms at the moment, but we will see others following. I guess the other thing worth mentioning in relation to the potential use of AI by accountants is agentic AI, which is a yet more sophisticated type of AI system, whereby effectively you go beyond what Graham was talking about with the predictive-text, prompt-and-response, chatbot style of AI, and into a system that's able to act with a degree of autonomy. So, for example, you might have a system that can produce a draft tax return, it can identify that there are certain gaps in the information, it can draft an email to the client asking for that information, it can receive the email back and plug the information in, and then correspond with other agentic systems. And this is the potential end point of all of this. Firms might have a whole series of different systems, all of which can talk to each other, and potentially to human operators and clients as well, in order to act almost like quasi-employees of the business.
SPEAKER_02:And Graham, if I can just throw a curveball at you, in terms of the legal profession, have you seen much adoption there?
SPEAKER_01:The lawyers are definitely getting excited about it. I think a key use area right now is enhanced search. Lawyers deal with lots of cases, they're wordsmiths, and enhanced search is a great thing. The problem is you have to check the outputs, because there's a known problem of hallucinated cases. And this remains an issue, and people keep on getting hauled up in front of the regulators and the courts for having invented authorities to support their arguments.
SPEAKER_02:Brilliant. Thanks for that input there. And just moving on now, turning to you again, George, what do the regulators think of firms using AI? We've seen various regulators produce guidance notes recently, notably RICS, on behalf of surveyors, produced a fairly detailed note. What are we seeing from the accountants' regulators?
SPEAKER_03:That's right. If I had to describe the attitude of regulators in relation to the accountancy industry, I'd say that they are very, very interested in the adoption of AI, and probably cautiously optimistic that it's a positive thing for the industry. I mean, anyone who's subscribed to the regular updates from the professional bodies like the ICAEW will be receiving almost daily updates and commentary and thought pieces on AI and how it's transforming the industry. There's clearly a lot of interest there. More formally, last year we had the FRC issue some guidance on the use of AI in audit. Now, my take on that is that it was quite high level. A lot of it was hopefully common sense, but it talked about things like use cases, properly documenting AI use, the importance of having appropriate processes in place, and making sure you maintain human oversight. But I think the fact that the FRC came out and published that guidance does demonstrate that AI in audit is being adopted in the industry, and clearly they felt there was a need for guidance in that area. More recently, we've had the Professional Conduct in Relation to Taxation bodies, which is a group of seven bodies, including the ICAEW and the ACCA, come out with some guidance on the ethical use of AI tools in relation to tax advice.
SPEAKER_02:Okay.
SPEAKER_03:This goes a bit further, and it's a little bit more towards what you mentioned RICS have produced in the surveying context. So they go into a bit more detail, talking about things like engagement terms with clients and how to reflect AI in those, making sure you're being transparent with clients about AI usage, maintaining risk registers, and making sure that you apply professional judgment to AI outputs. They also talk a little bit about the data protection angle, which of course is a really important thing to consider if you're making use of client data in relation to AI systems. So I think we do have guidance in the industry at the moment, which is still fairly high level, but there's a gradual, iterative process of that becoming more detailed and more sophisticated over time.
SPEAKER_02:Absolutely. To me, it feels like everybody's gradually feeling their way through the process, learning as they go, and providing more guidance as time goes on. Brilliant. And just again for comparison purposes, Graham, what are you seeing in the legal profession in terms of regulatory activity?
SPEAKER_01:I think the main regulator of the lawyers, the Solicitors Regulation Authority, would be delighted if lawyers could use AI to deliver cheaper and better quality legal services to people. The problem is that no one's quite managed to discover how that potential money-spinner is really going to work, or can be delivered safely. So right now, they're curious, interested, possibly even cheering firms on from the sidelines, but they're also keeping a very beady eye on whether clients are going to be harmed.
SPEAKER_02:Absolutely. Thanks for that input there. Now, just getting to the heart of the matter, what are the risks of using AI, both in general terms and accountancy-specific risks? Let's start with you there, Graham, if that's all right.
SPEAKER_01:Certainly. So I think AI risks fall into four categories at a very general level. One of them is fluency: the text they produce is perfect. The second risk is errors, and they produce extraordinary and odd errors. The last two are the biggest problems, I think. The third is how we react to the first two problems, our tendency to give AI human characteristics, things like that. And then the last one might be the biggest risk of all in the medium to longer term, and it is the impact that AI can have on an organization, on how institutional knowledge gets passed on, how juniors become trained. But perhaps I can tell you a story first to illustrate the point, if that's all right. So I first came across generative AI a couple of years ago on a car trip with the family. We were trying to keep the kids occupied, and the eldest one had managed to get ChatGPT on her personal gadget, and we decided to ask ChatGPT to do something fun for us. I have no idea where the idea came from, but one of the kids said, Please, ChatGPT, write me a poem about Pokemon, in the style of Wordsworth. Somebody was doing their English GCSEs, I suspect, at that point. And after a couple of seconds' thought, it produced immortal words, and if you don't mind, I'm going to quote them. In nature's simplest forms, we find a wonder and a joy combined, and such as Pokemon, a game which in my heart has kindled flame.
SPEAKER_02:Absolutely love that.
SPEAKER_01:Thank you so much for that. Completely silly. But still, it's true to Wordsworth. This is what I mean by fluency. It is almost magical that it can produce something about a Japanese game of some sort, I don't really understand the details, in the style of Wordsworth. So, fast forward two years to the latest version of ChatGPT. I fired it up not very long ago and asked it this question: answering as a legal expert, please give me summaries of the five most important cases involving the professional conduct of accountants. It gave me a very satisfactory greatest-hits list of interesting Court of Appeal cases, House of Lords, that sort of thing. Number two on the hit list was something I hadn't seen before. I looked at it. It described how the key points were fairness in disciplinary proceedings involving the ICAEW. I looked further. It was totally hallucinated. The citation, the name of the case, 1993, 1 Weekly Law Reports, all of this and its content was made up. So that's the second risk. You move from fluency of words to completely hallucinated material. And yet you look at it and it is expressed in a way that makes sense. It's very difficult to spot. So with those two illustrations, that leads to the last two risks. One of them is that we need to adapt to how it makes mistakes, this black box. If you get work shown to you by a junior and you're being asked to supervise it, it's a draft of some sort, you have an idea of what the individual is like, how good they are. You look at the quality of the writing, you get a feel for the extent to which you need to have your grammar radar ticking away, so to speak: oh, the commas are in the wrong place, if that's your particular thing, or they've made a mistake. You use your knowledge of the human to affect how you look at the text, how you supervise. But if it's an AI, there's no rhyme or reason to it.
It's not that they make a mistake because they are unsophisticated or inexperienced; they just make random mistakes, full stop. So you have to train yourself to look at the output of an AI in a completely different way. So that's the third risk: unlearning, and then relearning, how to supervise and manage the outputs.
SPEAKER_02:I think that's a really interesting point, isn't it? The AI says something with such certainty, whereas as humans we've got that air of fallibility, and that kind of plays into the wider conversation, doesn't it?
SPEAKER_01:Quite. And then the last one is, I think, maybe the biggest one of all. It is keeping track of the effect that AI usage is going to have on your organization. So on the negative side, some people have written that it encourages what is a glorious expression, cognitive offloading. You're getting the black box to do the thinking. Yes, but you can also get the black box to do the thinking a great deal more cheaply and earn more money that way. But there are possibly complicated adverse effects of AI usage to do with how the organization works. Who are you going to blame if it goes wrong? The human in the loop. That human is going to say, but it was the box of tricks. So it can undermine feelings of individual responsibility and collective engagement. It can prevent the structured transfer of knowledge between generations within an organization. Now, this all sounds maybe slightly pretentious, generations within an organization, but if you actually think about how you've learned in the past, how you went from being a wet-behind-the-ears apprentice of some sort in the organization, you learned the business. There is this process of transfer. You don't think about it because it just happens to you, and hopefully in a good way. How is that going to be affected by AI usage in the future? That is a big, difficult question, in my view.
SPEAKER_02:No, I completely echo those views. Actually, I think, for any organization, it's going to have to be a fundamental part of the process to think about how we educate and train our new employees going forwards.
SPEAKER_03:I think that's a really good point, and it illustrates the dichotomy that you have with AI: it can be such an incredibly useful tool, but also such a big risk. Because if it's operated by someone who has the requisite knowledge, you can spot the errors with it. But if it's operated by someone who just takes what they're told by the AI as correct, it's a big problem. So in the accountancy context, I saw last year that one firm was talking about having produced an AI model for tax advice which gave the correct answer 90% of the time. Now, if that model is being operated by someone who is easily able to spot the 10% of cases where it's wrong and manually correct that, that's an incredibly useful tool. If not, then that's a negligence claim waiting to happen.
SPEAKER_02:Absolutely it is, yeah. That's really interesting. And then just moving on and developing some of those themes, what else can be done to mitigate risks in relation to AI, and how are the courts going to deal with the issues as they arise?
SPEAKER_01:I think the starting point here, and something that's often forgotten by professional services firms generally, is the legal standard applying to the services they offer. Law firms and accountancy firms promise to sell their services with reasonable skill and care. And despite all the promises on the websites that we understand your business and we're going to be amazing and great and act in your best interests, actually the core sales proposition is reasonableness. And there is a big body of case law going to what counts as reasonableness, and it's a funny thing to say: it's utterly human-centered. It's all about professionals making judgment calls, responsible bodies of professionals having a look at some new practice that might be novel and untried, but nonetheless safe, and it's all through the lens of human professional experience. Great. That's what you would have expected. The question is, how is that going to work when it's an AI? Well, you can't ask the AI questions: why did you do this on this particular matter? Why did you make that decision? What this is going to require is that clients, firms, and the courts will have to move away from thinking about what happened inside somebody's head to a much more numerical, statistics-based approach to life. I think that's going to be quite a tricky transition. To start with, people are going to find it difficult to show what happened, why things went wrong, because the AI black box will not be able to replicate the decision it made six months ago. It's changed since then. The second thing is, I don't think people are very good at dealing with percentages. To take George's example, if 90% of the time you're accurate, how does that translate into reasonable performance? If you're the one out of ten people who got a bad tax return and now a penalty, do you really think that was a reasonable thing?
That type of dialogue is going to be a very painful one, I think, for firms, at least to start with, and until we get to the point where people become accustomed to AI usage. That leads on to the question of mitigating some of these risks. So, for now, and until there are court decisions about AI usage integrated within professional services work output, it seems to me the most important thing is to educate the client. And not in a patronizing way, but by explaining what you're going to do and why it is generating benefit for the client: it might mean a wholly human-based service provision of some sort, or it can be 75% AI. You make explicit the cost savings somewhere in the contractual engagement materials that your firm uses. By doing so, you've got two advantages. Number one, you've got a client who's bought into the idea and is informed, and is therefore a little bit less likely to complain if it goes wrong. Not a panacea, but a little less likely. And the second thing is you might have set up the materials for some form of legal argument that you've limited your liability in a relevant way, that this was a reasonable thing, that it hits the reasonableness standard because you explained it at the start. So explanations as to AI usage, it seems to me, are the most useful thing to be doing right now.
SPEAKER_02:Graham, can I ask you, if you were a head of risk, what would your checklist look like in relation to AI?
SPEAKER_01:Well, I think I'd start with a blank piece of paper and a pen. Don't ask the generative AI to generate this checklist. Top of that list is the acknowledgement that people within your organization are already using AI in ways that you don't know about, and maybe not good ones. This needs to be acknowledged. It's just part of the atmosphere right now. People use it automatically. So, next on the list of safeguards, of course, you need policies, controls, and processes around the firm's usage. Guardrails, guidance, prohibitions, don't feed client confidential data in, that sort of thing. Those are the easy bits. If you don't have those, you need them. Where it starts to get more complicated is the next one. This is the idea of normalizing the AI usage that your firm has for clients, explaining how you're going to use it, making clear the price-benefit proposition to them. That's going to be complicated, and you don't want the words you use in that contractual interface with your clients to come back and haunt you. So that's complicated but necessary. But if you really want to knock it out of the virtual park here, and at the risk of sounding too much of a mystic, you need to persuade the organization to value the journey as well as the destination. It sounds good, but what I really mean is this point about how AI usage can interfere with the way training and knowledge flow around the organization. It may not happen today, it may not be next year, but you need to think about how people are going to be trained and supervised, and not allow AI usage to interfere with that. It's the journey of learning that you still want to keep in place, even though you're going to be making, hopefully, more money using AI to sell services to clients.
SPEAKER_02:I couldn't agree more about the journey. Thank you for that point. And just taking a step back again, just looking at risk more broadly, are you seeing much activity in terms of claimants using AI to bring claims?
SPEAKER_01:Oh, this is a subject that lights a fire under many people who have the happy task within firms of dealing with complaints. I was talking to a regulator of the lawyers last week, who described how complaints they receive about lawyers are up 40%. I am confident that it's exactly the same increase across all professions. AI is partly doing a good thing here. If you type into a search engine, how do I make a complaint about my accountancy firm, it won't just point you in the direction of a website; it will now give you an AI-generated script of things you need to do. So there is a positive benefit. People are able to access possibly, hopefully, good advice on how to make a complaint. But of course, the downside is that AI is generating the complaint. This seemingly fluent language, 10 pages of issues being raised; you start unpicking it and it doesn't make sense. People are arguing things that are utterly wrong, but it's not the disjointed text of somebody who doesn't understand what they're talking about because they're not a professional accountant. It's the AI speaking. And this produces a massive increase in the amount of work it takes to respond to what might be a legitimate complaint, and might be nothing at all, just generated text.
SPEAKER_02:Yeah, absolutely. It's a major concern I'm seeing with my client base as well. A lot of the conversations I'm having at the moment are about the risks associated with claimants using AI. I don't know whether your experience is similar, George.
SPEAKER_03:Yeah, absolutely. And I think there's a related point to that as well, which is that clients are not only using AI in the context of bringing complaints, they're using AI to answer ad hoc questions more generally, which they perhaps would once have referred to their accountants. So you have a situation where, let's say, an SME has an accounting issue. In the old days, they might have picked up the phone to their accountant. Now they can just drop it into a chatbot and see what response they get back. As Graham's talked about, those responses are often very compelling. And so what we're seeing is clients making their own decisions on the basis of AI advice as to how to handle certain things within their business. Now, what that means for accountants is that at a certain point, when they do get visibility of what's been done, they're faced with a situation that's potentially wrong and potentially difficult to unpick. And they may face a liability if they don't spot that and take appropriate steps to correct the position. So there's a further source of potential liability there, through the clients themselves using AI. I'll also add that there was some press coverage last week about a tax expert being spoofed on YouTube. So there were some apparently reasonably convincing videos of this well-known tax expert giving tax advice on certain topics, all of which I understand was completely wrong. So these sorts of things just exacerbate the problem of people thinking, I don't need to go to my professional. The answers are out there, I can find them easily myself, and I can take steps to protect my own position. Now, obviously, again, accountants down the line need to be alive to that and potentially jump on these errors and fix them for the clients. It creates a lot of extra pressure as well, doesn't it?
SPEAKER_02:Absolutely, yes. And I think the regulators are going to have to become much more involved in helping to manage the process, in terms of the risk of claimants bringing endless claims against firms using AI as well. But there'll be more on that, no doubt, in the future.
SPEAKER_03:I think that's right. And as we've said, a lot of the regulatory guidance at the moment is high level in nature. I suspect we will get to the point over the next couple of years where we start to see more technical guidance from regulators.
SPEAKER_02:Brilliant. Thanks very much for that. And as a final topic, what are the insurance implications of firms using AI? If we can start off with you, George?
SPEAKER_03:I think there are a few. One of the things perhaps we haven't talked about is the fact that, specifically for the accountancy industry, the use of AI is creating systemic risks which didn't necessarily always exist. Compared to other industries that I deal with, FCA-regulated professionals for example, the claims that we tend to see against accountants aren't generally systemic in nature. They tend to be the results of individual professional errors by people working in a difficult, complicated technical environment. Now, what you potentially have if you start to introduce AI, for example, to automate some of the more routine elements of the job is a potential reduction in human error, yes, because you're using the AI, but also potentially an increase in systemic risk. So if you're using an AI system to undertake the same job for tens, hundreds, thousands of clients, and there's a problem with the AI system itself, or a problem with the way the output's interpreted, or the way it interacts with another system, you might well find yourself in circumstances where actually you've repeated the same mistake a lot of times before it comes to light. Now, that has certain implications for the insurance position, of course. I would divide the insurance implications into three parts: disclosure, exclusions, and aggregation. And I think it's really important that firms who are starting to use AI, or are perhaps a little bit further along on that AI journey, really think about this. And the main advice, I think, is to be open and discuss all of this with your broker and/or insurers to make sure that you're being completely upfront. In terms of disclosure, we're seeing insurers increasingly ask more questions around AI. I suspect that's something that you see with your clients.
SPEAKER_02:Absolutely, all the time. I mean, the risk of systemic failure is a major concern in the insurance industry, absolutely.
SPEAKER_03:So I think firms have to be really careful to make sure that they are as forthcoming as possible about their current AI use, about their future plans. And I think this is something that applies not only when they're looking to put a policy in place, but throughout the policy year. So if they suddenly make a decision to adopt an AI system halfway through the year, talk to your broker about it and make sure that the insurers are fully informed so there aren't problems down the line. I think that's a fundamental point.
SPEAKER_02:Keep that conversation going about the AI risks your business faces. And, you know, just developing that theme a little bit further, it'll be interesting to see how the insurance industry reacts in future as we see more AI-related claims come through the system. I mean, obviously there are risks that insurers will start putting exclusions in place or including more onerous aggregation language, which is always a difficult subject, isn't it, Graham?
SPEAKER_01:Yes, indeed. In the world of lawyers, a full-scale attack has recently been launched on the aggregation language that the SRA mandates for law firms. It concerns whether dishonesty can be used as a linking theme to connect claims. And, depending on how that goes, you could see a ripple effect across other professional firms with similar types of aggregation language.
SPEAKER_03:In the accountancy industry, as I mentioned, aggregation is a really critical point as well. There's a bit of a split in the industry based upon the size of the firm. For firms which are using, for example, the ICAEW minimum terms, they generally don't have issues with aggregation, because there isn't aggregation wording built into those terms. But larger firms, firms with a gross fee income of above 50 million, will have their own bespoke insurance arrangements in place. So for those firms, it's really important to understand how aggregation works in terms of treating multiple claims as one for the purpose of the policy, and really think about where the balance of risk lies, both in terms of payment of policy excesses, for example, and in terms of applicable limits of indemnity, and whether they are properly protected against some of these potential systemic risks coming to fruition.
SPEAKER_02:George, you mentioned exclusions. What do clients need to think about in relation to exclusions exactly?
SPEAKER_03:Yeah, so again, you have a split here in terms of firms who are using ICAEW minimum wording and larger firms who have their own bespoke policies. To my reading, the ICAEW minimum terms don't contain any specific exclusions which would be relevant to the use of AI. But when you're looking at those bespoke policies for larger firms, you really need to consider the specific terms that have been agreed really carefully. So things like cyber, computer systems, the use of data, the use of third-party products could all be relevant to the coverage for AI issues under the policy. It's also, I guess, worth thinking about your whole suite of insurance cover. So not only professional indemnity policies, but also any other policies that could be relevant: cyber, D&O, that kind of thing.
SPEAKER_02:Brilliant. Thanks very much for that. I think we've just about run out of time now, but it's been a brilliant discussion covering off some of the key themes. I think we should make a plan to get together in another year and discuss some of these themes again and see what's changed in relation to the various topics we've discussed. But in the meantime, thank you both very much for your time. Much appreciated as always. And I hope you've enjoyed the podcast. Enjoy the rest of your day. Thank you.
SPEAKER_00:Thank you for listening to this episode of Fortune Favours the Brave from Howden. To hear more episodes and subscribe to our channel, search Fortune Favours the Brave on your favourite podcast app.