Governance Bites

Governance Bites #136: Artificial intelligence and governance, with Tony Dench

Mark Banicevich, Tony Dench Season 14 Episode 6




In this episode of Governance Bites, Mark Banicevich sits down with Tony Dench to explore the transformative impact of artificial intelligence [AI] on boards and governance. They discuss why AI is no longer optional, the opportunities it creates, the risks boards must manage, and how to approach ethical and ESG [environmental, social and governance] considerations. Tony shares practical advice for directors feeling out of their depth and reveals how AI is reshaping decision-making, board reporting, and stakeholder engagement. A must-watch for leaders navigating the AI-driven business landscape.
Tony Dench is an experienced board member and chair with a passion for applying governance principles in a practical and pragmatic manner. He has more than 30 years of combined international experience in leadership roles across financial services, accountancy, and law. As a leader, Tony has a proven track record of delivering impressive results through a hands-on, collaborative style that builds trusting relationships based on integrity and empathy.
A strong advocate for purpose-driven strategy, he successfully implemented this approach as CEO of SHARE, where profits more than trebled under his leadership. Tony currently holds several governance roles, including Independent Director for Utilities Disputes and SBS Insurance, and serves on the Finance, Audit & Risk Committee for the New Zealand Law Society.
#AI, #ArtificialIntelligence, #Governance, #BoardOfDirectors, #CorporateGovernance, #BusinessStrategy, #EthicalAI, #ESG, #Leadership, #DigitalTransformation, #RiskManagement, #Innovation, #FutureOfWork, #AIinBusiness, #ExecutiveLeadership, #AIethics, #AIadoption, #BusinessGrowth, #BoardLeadership, #GovernanceInsights

Hi, I'm Tony Dench. I'm a professional director. I've been doing this now for three years or a little bit longer. I've got five formal board positions, a couple of committees, and half a dozen advisory board positions. And I'm going to have a conversation here with Mark this afternoon about AI, artificial intelligence. Wow, that's going to hit everybody at the moment.

Hi, welcome to Governance Bites. My name's Mark Banicevich, and as you just heard, I have the pleasure of spending time with my friend Tony Dench. Tony, thank you very much for your time. The topic, artificial intelligence, is very topical and has been really for a couple of years, since ChatGPT [Chat Generative Pre-trained Transformer] was released, right. And it's exploded since then in so many different ways. In a previous conversation, we talked about thinking about opportunities as well as risks. AI is certainly an opportunity for many businesses, but it also has its own risks associated with it. So, first question for you: from a governance perspective, why should boards be paying attention to artificial intelligence right now?

It's really important. AI is changing the world in ways that we can't even imagine. We simply don't know. I don't think AI knows how it's going to change the world. So the fact is that in every business, AI is happening. It's happening across the staff. It's happening within suppliers. It's there and it's going on. So boards need to understand that this is not, "How could we think about incorporating AI in the future?" AI is there right now and happening. So boards need to be aware of it, and they need to be aware of the opportunities as well as some of the guardrails that need to be in place to make sure there's not something inadvertently happening. And the challenge, of course, is that some boards aren't aware that AI is already in the business, and the extent to which it's in the business.
One of the things that amazes me about it is just how quickly it is changing. I can't think of any technology that has been adopted as quickly, making such big change at such a drastic rate, right. This is probably as drastic a change as we had with the industrial revolution, only happening in so much less time. Think of the development of the internet; it took quite a while for that to catch on. Yes. Internet 2.0, when you could start submitting information into websites as well as viewing information, again, quite slow change. But this is happening so very quickly. Do you see it primarily as a risk to manage or an opportunity to seize?

I think it's absolutely an opportunity. But it's an opportunity that comes with risks, and one of those risks is that AI is such a broad term. At the governance table, whenever you say, "Is the business using AI?", you're not actually sure what that question means. Is it generative AI, or is it machine learning looking for patterns? Is it dealing with structured data within the environment of the organisation? Or is it unstructured data that's out in the wild on the internet, which potentially leads to privacy concerns and challenges? So it is absolutely an opportunity. It comes with risks, and the biggest risk is the lack of understanding of, what is it, and how is it being used?

Yes, let's dig into risks a little bit more. What are the key risks that boards should understand?

It's the governance of AI. Regulators are struggling right across the world with how to regulate for AI. I think the notion of privacy is also starting to change and evolve within the consumer space. And that's fascinating, because people are becoming more trusting of putting some of their personal information into this environment. And yet boards will worry, "Well, if we lose that personal information of our clients, reputationally, what will the damage be? From a regulation perspective, what will the damage be?" And from a lawmaker's perspective, where's the line with ethics within AI, and where should the regulations sit with that? Sadly, the pace of the adoption of AI is leaving the lawmaking, and to some degree the ethics, in its wake; it's a retrospective case rather than trying to get ahead of it.

That's often the case with law, right. It usually takes law a while to catch up with innovation. But as we talked about before, this innovation is happening so quickly that that lag is becoming very obvious. And it's everything from image and video generation using real images of people, and the deepfakes that are happening from that, through to, as you say, putting confidential and private information into something that's being used to teach the tool for future use. Which I find quite interesting, this whole concept of, although of course I would never put personal or confidential information into a generative tool, unless someone in IT [Information Technology] told me we had very tight security around doing so. Using information to teach a bot, to me, is not the same as putting it on a website that anyone can see. So there are elements around that too, right. How do directors think about the ethical dimensions of AI use?

Yeah, that's a challenging question, because it's evolving so quickly. I choose to think about AI, and it's interesting you said, "if an IT person has told me this is okay". Often, because it's technology, AI is thought about in an IT sense. And I choose to think about it differently. I choose to think about AI in the sense of, if this were a person, so an HR [Human Resources] lens rather than an IT lens. If this were a person, if I was looking to address a particular issue in the business, a marketing issue or an efficiency issue, would I go and recruit someone to come along to do that role?
And if the answer to that is, well, yes, I would, I'd recruit somebody creative in the marketing department to generate and create some content, or I'd recruit someone to generate some efficiencies and design process, then if I can look at AI from that perspective, from an HR perspective rather than an IT perspective, I think it helps to solve the ethical questions around, how would that person behave? Would I share information with that person? And if the answer to that is, "Well, yes," then it's probably going to be okay to share that information with the bot who would be looking after that, because the person would be within the environment of the company, and therefore, in that instance, the AI would be within the environment of the company. So I find that a useful lens. If I can think about the use case as a person, then translate it to AI, that often helps me to solve those things. And if you think about it in an IT sense, sometimes you come up with a completely reversed decision, which is why I choose to think about AI in an HR context.

Yes, okay, that makes sense. The other ethical issue that is often discussed is around energy consumption and energy use. We're looking very much at an ESG [Environmental, Social, and Governance] environment, with businesses that are looking to be more environmentally sustainable, and yet the use of AI comes with large energy consumption. Is that energy generated from coal, for example? And there's also the use of water to cool the [infrastructure]. So there's a lot tied up in this, isn't there?

There's a huge amount, and that's where the law will have to catch up. It becomes a resource, and it becomes incumbent on individuals and entities to use that resource in a responsible way. So, think of the funny pictures and everything else that get generated, that are great fun and great banter to use AI for in a generative sense.
Is that a good use of the resource? Is that a sensible use of the resource? I think the understanding and the use will develop.

Yeah, the interesting thing about those issues is, one, if you were a business that said, "We don't like the ethical implications of AI, therefore we are not using it, we're banning it altogether," I think an entity that's not using AI in any form now is going to get left behind quite quickly. So that's a very difficult decision to make, and it may end up ultimately in the failure of the business. In terms of playing around with AI, you get good at using these generative tools by playing around. One of the things that I'd be very keen to do if I were leading a business now is I would want every one of my staff to have training in prompt engineering around using AI tools, to make them more effective at using the tools and seeing the opportunities, because, as you say, it's a massive opportunity in so many ways within businesses.

Very difficult to sustain a position where a business would say, "Well, we don't use AI." Very, very difficult to do that, because it is everywhere and it is being used by staff in every business.

Yes. So I think it would be a difficult claim to say we don't use AI at all. Very much. Do you think AI will change the way that boards themselves work? You know, decision making or board reporting, or even the conversations in the boardroom, right?

Well, it will, and it'll change it in various different ways, and it already is. So there are AI facilities within some of the board platforms; the likes of Diligent and BoardPro will have an AI platform within them. Minute taking, you know, that's just one of the most obvious use cases. So the minute taking and context around that. But it can also be useful for research. So researching a strategic discussion, saying, "Can you give me some research papers on this particular topic?"
And the AI search will be much more effective than a director trying to do that research alone. Some of the legal questions, too. So getting very quickly to a factual answer, or an opinion of sorts, on a contract or around legislation. It can help on all of those things, which allows the facts to come to the board table more quickly, for the humans around the board table to then use those facts to make decisions. Will it get to a point where it starts to influence and has a vote at the board table? You know, will we have an AI director? For me, that's still sci-fi stuff. I think that's still quite a long way away. But facilitating the facts around the table for the directors to make judgment calls, I think that's already happening, and it will just increase.

One thing that I think would be very interesting, and very possible at the moment, would be to have a laptop with an LLM [Large Language Model], whether it be ChatGPT or another LLM, with voice interaction, and just ask the computer questions and have it come back verbally with the answers rather than everyone reading it. Have you experienced that, or is that something that's on the radar?

I haven't experienced it, but it could absolutely be the case. I guess one of the challenges that we have is that the typical demographic around the board table is a generation that, perhaps, is like me. And you go 30 years down a generation, and the use, the acceptance, and just the day-to-day adoption of the technology in that generation is just much more comfortable. So I think what will happen is that the current generation of directors will get more and more comfortable and use it more and more, but there's a generational gap in the use of the technology and the capabilities in the technology.

Yes, okay. What questions should directors be asking about how their organisation is managing AI?

Well, I think the first question is the risk management piece.
So accepting that AI is happening in the business, and then making sure that there are suitable risk management guardrails in place for its appropriate use within the organisation. I think the day has gone where you can pretend it's not happening, or where you can only try to govern what will happen in the future. So the guardrail piece is the first piece. And then the second bit is, well, how do we adopt it? How do we use it? Where are the opportunities? And that's the conversation to have at the board table now. How can we do more with it? What are our competitors doing with it? And how can we make more use of it?

Yes. How should boards engage with stakeholders who may have some fears around AI, such as customers and suppliers?

Disclosure is important, but I think disclosure has to be within the context of this evolving appreciation of what AI is and what happens to the data. If we think of the financial services world, the data that's gathered there, be it wealth, be it health, is enormous, along with the personal identification data that's gathered there, too. That's the data that consumers will be sensitive to, so that's the bit that needs to be governed really quite well. But I think the consumer's appreciation of how that data is handled is evolving, as well, with the technology. So staying abreast of that, and making sure the policies and the approach in the business stay abreast of that too, is absolutely critical. And it moves, it changes really, really quickly.

It does, it does. What have you seen boards get wrong in their early engagement with it?

Pretending it's not happening. I think that's probably it, and there's less of that now, because I think there's a growing acceptance that it is happening. But I've had a number of conversations that have started off with, "Well, how will we begin to use AI in our business?" And of course, that's the wrong question.
Because AI is already happening in the business. So the notion that it's not there already is probably the biggest mistake.

Yeah, that's fair. Do you think it'll fundamentally reshape governance, or do you think it'll just add another layer?

I think ultimately it will begin to reshape governance, but I think that's a little way away. Two or three months. I think it'll enhance governance. It'll allow the governance process to be faster, because the research will be faster and create more data. And with more data, hopefully cleaner and better decisions. But I don't think it will get to the stage of making decisions and exercising judgment for a little while yet. I think it'll come, but I don't think we're there yet.

In the early stages, an enabler for boards, then, to help you collect information more quickly, get more relevant information very quickly, and digest it. The context, so not just a really fast library, but a context-driven library.

Yes, I think that's it. And we've all had examples where you ask it a question, it comes up with this stuff, and you think, "Oh, well, I hadn't thought of that." The context piece of it is just remarkable.

What advice would you give to directors who feel out of their depth with the conversation about AI, the generation you were talking about before, who say, "Oh, I don't really know this stuff"?

Yeah, I think there are two points. The first: get comfortable, start using it, get a subscription, use the paid version. It's very different and probably safer. So use a paid version and start to just see the use cases and use it. And secondly, engage with someone who is comfortable with it, or a younger generation who absolutely are, and try to see just how readily and easily it is being adopted there. I think that's the key to it. That is absolutely the key to it. I think there's a danger that reluctance to engage with it leaves directors too far behind.
Yeah, Tony, that's been cool. Excellent. Great. Yeah, good conversation. Thank you, Mark. I look forward to catching up with you soon, and to seeing you next episode. Fantastic. Thank you.

Thank you for watching this episode of Governance Bites. We have more episodes on YouTube and your favourite podcast channel, where I interview directors and experts on various topics relating to boards of directors and governance. We'd love to see you back, and please like, subscribe and share the videos and podcasts.