#DigitalFrontiers

Boardrooms, AI, and Better Decisions

Emma Season 1 Episode 1


What if your board meetings had a relentless, unbiased memory that asked the questions everyone else missed? In this conversation with Richard Nicholas (AI lawyer at Browne Jacobson), Non-Executive Director and Educator Shirley Chowdry reveals a practical, five-pillar framework for governing AI that transforms the boardroom from minute-taking to strategic thinking. 

Shirley shows how AI can become a live thinking partner that raises decision quality whilst maintaining control over risk, ethics, and accountability. From deploying grounded models that evaluate meetings and challenge groupthink to rethinking core business models, she demonstrates where the real edge lies—not just in efficiency, but in revisiting your value proposition itself.

Drawing on extensive governance experience, Shirley offers concrete guidance on the essentials most organisations overlook: AI's environmental footprint, workforce reskilling as a board-level priority, data lineage and decision audit trails, and practical steps for managing data sovereignty and model concentration risk. Her two-track approach equips both management and directors with AI tools that sharpen debate, ensure auditability, and keep humans firmly in the loop—delivering boards that think bigger, act faster, and earn trust in the age of AI.

Richard Nicholas:

Hi, this is Richard Nicholas, and today I'm joined by Shirley Chowdry, who's a non-executive director specialising in AI. I wonder, Shirley, if in your own words you could describe what you do for company boards.

Shirley Chowdry:

Thanks, Richard. Great to be here. So as you said, I'm a non-exec director. I sit on two boards and two advisory boards in Australia at the moment. I also teach directors' duties to aspiring and current directors through the Australian Institute of Company Directors, and I go around speaking on AI and leadership and a number of other topics, generally advising boards on how to govern themselves relating to AI.

Richard Nicholas:

Fantastic. And how did you get into that? What was the sort of driving force that led you to what you're doing at the moment?

Shirley Chowdry:

Oh look, I think I probably fell into it, to be honest. I'm a reformed lawyer. I was a banking and finance lawyer in New York and Asia and Australia for a few decades, then decided I wanted to run a P&L, so I became a CEO and then fell into a portfolio career. I discovered that a portfolio career can be whatever I want it to be, so it's now formed of things that really spark joy and things that I want to do. I love being a non-exec director and I love teaching. And the AI came out of a deep interest. I saw boards, particularly in Australia, and across the press in America as well, rushing headfirst into trying to embed AI into their businesses and organisations, but really skipping a few steps, in my view, on AI governance. So I went back to focus on that, and it developed into a whole framework and pilots on the boards that I'm on, and some discussions with companies like OpenAI and others, and hopefully it's developing into something that boards can really use.

Richard Nicholas:

Okay, fantastic. And I know that the sort of advice you give to boards isn't the typical advice about board minutes and those sorts of things and how to use AI for that. So what is it that you do for company boards?

Shirley Chowdry:

So I've developed an AI governance framework, and I'm happy to race through it. I know we don't have a huge amount of time, but I'm happy to spend time later if that's helpful. The framework has essentially five pillars for board governance. If you think of most organisations, and the GCs who are listening to this will be part of these organisations, everybody's racing to embed AI, so they might be thinking of efficiency drives or automation. The smarter ones are even thinking about bespoke customer experiences, how call centres can respond to a particular customer's needs using AI. The really smart ones, and I haven't heard of too many of these, are actually going back to brass tacks and saying, how does AI affect our whole business model? We've been making this widget or providing this service. What can AI now do that we don't have to do, and how does that allow us to adjust the service we provide or the widget we produce in a way that is better for customers, better for revenue, better for our people? I'm not seeing a huge amount of that. That's taking the embedding of AI to a more strategic level, if you like. So that's bucket one. I think most companies are there, or starting to think about how to get there. Pillar two for me, or bucket two, is how boards use AI in their own board processes. You touched on minutes a second ago, and you know my view on minutes. I don't think minutes are the best use of AI for directors, but there are lots of other areas of board process where it's really good, and on one of my boards we've been running some pilots and looked at AI for meeting evaluation, director evaluation, bias detection, challenging groupthink. You can embed and ground the AI models, or the agentic models, depending on how you're doing it, in theory.
We used the Six Thinking Hats, for example, and then used that to do some red teaming and blue teaming, and you can put Daniel Kahneman's theories on fast and slow thinking in; you can actually ground the AI models and constrain them in a way that the output on all of those things I just mentioned is very clever. And I'm not suggesting for a minute that AI should ever be used by itself. I think we need the human intelligence overlay, but it can really hasten, and make more efficient, the work that we do on boards, and it can also really produce a better discussion. That's where ultimately I think directors need to get to: better quality decision making. So that's pillar two. Pillar three takes what I think a lot of directors are already doing individually, using AI as a partner in our own thinking: help me draft this, help me think about this, what have I missed? We're not doing that in the boardroom. So we've run a few pilots on my boards, one during a strategy day, using AI in the background to do gap analysis on what the board directors have come up with, and another as a collaborative thinking partner. The example I always think of is expansion to China. A board is discussing whether they should expand to China. They have a paper in front of them, three recommendations that management has made. They discuss it, they have a really good discussion, and at the end the chair turns to the AI model, which has been listening to the discussion and tracking it, and says: having regard to the historical data we've already uploaded, so you might have uploaded 15 years of board papers and strategy and that sort of thing, what have we missed? And the AI model could turn around and say, well, actually, 10 years ago the previous board discussed expansion to China. They decided not to go ahead for this reason, and you haven't discussed it.
Or it might say, actually, a rival has announced that they're not expanding to China for this reason, and you need to spend more time on that. Our memories as humans are fallible. We don't remember things correctly, we forget, we don't connect the dots to papers we've read over the last few years. AI can help us do that. So that's the third bucket. The fourth bucket is a larger one, and I think a really important one: the ESG imperatives. Boards and management teams are racing to embed AI, but we're not necessarily thinking about the other side, and that is the environmental and social footprint we're creating with that fast move to embedding AI. On the environmental side, we've got water and electricity and compute use. We know that when you run an AI search, you are using multiples of the water a Google search uses, almost a hundred times as much. But what are companies doing to offset that and think about that? The reason we're using so much water is that it takes so much water to cool the data centres, but I don't think boards are actually making that connection. We're thinking about all of this quite piecemeal, and so this framework allows boards to put some connectivity, if you like, between all these issues. On the social side, and I think boards are starting to think about this, we're thinking about workforce displacement. We know jobs are going to be lost, but we also know jobs are going to be gained. We just don't know what all of those are yet. My view is that organisations have a duty, an obligation, to make sure our people are upskilled and reskilled. And those new skills may not be used in our own organisations. They may go to other organisations.
But if we don't do that, we're going to leave part of the workforce behind, and it'll be people in lower socioeconomic brackets, it'll be women, there's an inclusion issue, it'll be people who don't have access to computers or aren't computer literate. We'll create that divide between us and them, and it'll become even bigger. There's also research to show that on the social side, we're doing a lot of training on AI, but we're training boards, management teams, the upper echelons of leadership. Those layers still tend to be male dominated, and they still tend to be whiter than other layers of leadership in the organisation. So we're exacerbating that inclusion issue, we're compounding a D&I issue. So that's kind of a fly-through of the ESG issues. There's more, but the other one in there that I'll just mention, on the governance side, is that we're racing to put AI in our companies, we're racing to have AI help us make decisions, but I don't think companies are always looking at how auditable those decisions are. Do you know how the AI is making the decisions? Do you know the information it's basing them on? Do you know the bias that might be inherent in the model? Decisions that we make with AI have to be auditable. Just like any other decision in our organisation, we have to be able to go and look at the decision, ask how it was made, and learn from that so we can take it forward. The fifth pillar is data sovereignty, and this comes from thinking back a few years, to COVID. England was just like Australia: all of a sudden globalisation was a real issue for us. We were connected so deeply to the rest of the world, and, you know, grocery store shelves were empty; all of a sudden we felt the brunt of not being able to work with the rest of the world. And I think our increasing reliance on AI is a similar risk.
So imagine tomorrow, and this is not a political statement at all, but imagine tomorrow America, for example, decides to cut off access to its large language models, or fibres get cut, something happens, and we've already raced to put AI in our businesses. I think boards need to ask how sustainable that is. And I'm not for a minute suggesting that we shouldn't do it. We absolutely should. AI is here to stay. But we do need to think about where our inference models are stored. Do we have enough data centres onshore? How will we deal with those kinds of situations? That's not just a sovereign issue that governments need to think about; I think it's an organisational issue. Corporates need to ask themselves that question. And we should have learned after COVID that globalisation is not something we can necessarily rely on. So those are the five buckets. What I hope the framework does for boards, and I developed it out of necessity for myself, really, because I sit on these boards, is provide a bit of an end-to-end map for boards to think about: okay, I'm putting AI in my business, but what implications does that have for us as an organisation?

Richard Nicholas:

I love that. No, I think that's fantastic. And it's a fantastic framework, because it's not just the governance side or the legal side, which might be the sorts of things I look at, but also how you actually use AI in the boardroom. And I love the idea of the AI being present at board meetings, being able to listen in, interject, and pick out things that haven't been thought about.

Shirley Chowdry:

Richard, sorry, just on that, forgive me for interrupting, and then we'll have a conversation. But on that, one of the biggest challenges that is always presented to me is discovery. What happens if we end up in court and the board's conversation is discoverable, all of a sudden our conversation's laid bare? I have a slightly different take on that. I think this will help great directors; it will not help bad directors. You know, if you're doing your job and you're abiding by all your directors' duties, your fiduciary duties, and you end up in court, you can be subpoenaed, your notes can be subpoenaed, the minutes can be subpoenaed. If you're doing your job, AI could actually help in that process, I think. I don't think we need to be as scared of AI in the boardroom as we are being. And you might remember, kind of 10 or 15 years ago, we started to move our information to the cloud, and regulators were really scared of that and said, oh, you know, banks can't go there, we can't put information in the cloud. I feel like this is a similar thing. I've had off-the-record conversations with regulators, and they've said they think that in five years, or some such time frame, if board directors aren't using AI like this, we could conceivably not be meeting our directors' duties. So I think there is a level of: we have to get comfortable with this.

Richard Nicholas:

No, I can see that. And like you say, if it prevents groupthink, it allows boards to be challenged in ways that perhaps they might not have been if they've got people of a similar type around them. And there's also the point you make about having to train from the bottom up, and about the inequality that is being created by only training the very top level, the executives, on AI. That's not something I've heard before, actually, but I can completely see why that's the case.

Shirley Chowdry:

I think too, you know, we talked about using AI as a collaborative thinking partner in the boardroom. Having run pilots on that, I do think it's a bit of an evolution to get there. So on one of the boards where we tested it, we've now gone back and decided to build two other AI models to lead into it. The first is for management to have their own closed AI system to produce board papers, to take executive papers and help put that strategic lens on them and make them into better board papers. The second is a preparation AI tool that directors can use to get better before they enter the boardroom: a model where the papers are already preloaded, so directors can interrogate them. They can read them, obviously they have to read them, but then interrogate the papers through that model. The questions they ask then go to train the management model, so the management teams get better. The aim with all of this, for me, is to constantly improve the level of decision making that boards and management teams are doing. Because if we can use AI to make us better, I think that's the best use of it.

Richard Nicholas:

Completely. And that must be right. I think I read somewhere that it was Jeff Bezos at Amazon who insisted that the very first thing people do in board meetings is actually sit and read the papers in front of them, because what he found was that if you don't insist on that, people will wing it.

Shirley Chowdry:

Well, I mean, you're not meeting your fiduciary duties if you're not reading your board papers. So what I'm suggesting is not a way for board members to reduce the amount of time they're spending on their papers. They still need to read them. This is a way for them to enhance their understanding and get better before they enter the boardroom, and then get better as a collective, because boards live and die as a collective when they're in the boardroom. So this is to make us better at our jobs. If we're better at our jobs and we make better decisions, we will enhance the productivity of our companies, and productivity globally is an issue. For Australia, productivity is a real problem. So if we can do our bit in the boardroom to improve productivity, I think the companies that do will fly.

Richard Nicholas:

Completely. That must be right about doing things better. One group of people who may be listening to this podcast is general counsel, and I know from speaking to lots of in-house lawyers that they're both seeing the opportunity of AI but are also really quite concerned about the social side, the job losses and that sort of thing you're talking about. What do you think this means for general counsel and in-house lawyers who are seeing AI being rolled out in their businesses?

Shirley Chowdry:

Well, I was in-house for ten years, so I absolutely know the challenges that in-house counsel face. The best ones, to me, have always been the ones that are not totally legalistic, that don't just look at everything through a legal risk and compliance lens, but are much more commercial in the way they do things. So the first thing is: AI is here to stay. I talk to people all the time; I even talked to a director in Sydney recently who said, oh, it's very theoretical, isn't it? And I walked right away from that conversation, because I think that's ludicrous. It's here to stay. The problem for GCs is not to say no, we're not doing that. The problem is to say, how do we do that? To me, that's always differentiated a good GC from one that's not so good: somebody who has a commercial view on helping the business. That's what our business clients are looking for. They're looking for somebody who is commercial. And recently a chair said to me, if you are not using AI to keep your customers, someone else will be using AI to take them.

Richard Nicholas:

I love that.

Shirley Chowdry:

And, you know, I think that should be driving us as counsel. So I think gone are the days where lawyers just need to say, okay, we're embedding AI, what's our governance policy? What are we doing? What aren't we doing? What are we telling our people about job losses? We've moved way beyond that. I think GCs need to think about all those pillars I was talking about. You need to think about the footprint outside your organisation. You need to think about the sustainability of embedding AI and what it means for your business model. You need to think about the strategic risk and the operational risk. GCs need to broaden their perspective on AI beyond, you know, the privacy concerns and the operational issues. What we want our GCs and legal teams to be is so integral to the business that nobody argues you're a cost centre; everybody says you are so vital to what we do. And I think that's what good legal teams do. AI is a really good example of how legal teams need to think broadly about the impacts. If the legal team isn't the one to say, hey, we've got a D&I issue, we've got an exclusion issue, we've got a data sovereignty issue, we've got an ESG issue, I can't think of anybody else in the business who's going to raise their hand on those issues. So I think there's a really great opportunity for legal counsel here to think broadly, to provide broad value to the business, and to prove yet again how valuable legal teams are. Obviously I'm biased because I'm a reformed lawyer, but businesses that have great legal teams, legal teams that are outstanding, you never ever hear them say they don't want them.

Richard Nicholas:

Fantastic. Yes, I see that. And clearly it's a transition you've made yourself, from in-house lawyer to non-exec director focusing on AI. I'm sure there will be people interested in how you did that and the sorts of steps you took to become integrated in the business in that way.

Shirley Chowdry:

Look, I always think of the law like this: you have a hard line in the sand below which everything is illegal, and a line above that, above which everything is legal. But the really great lawyers operate in the grey zone between those two lines, where things aren't clearly illegal or legal, and it's a judgment call as to how you do things, whether it aligns with organisational values, what the risk is on a decision you might make. I've always operated in that grey zone. And I loved being a lawyer until the day I wanted more and wanted to do something a bit different. This is probably not a very nice thing to say, but I was surrounded by business people, and they'd make decisions, and I remember thinking to myself, I could have made a better decision; that's not the decision we should make. So I wanted to take on a commercial role. I don't think the world is particularly fabulous at recognising that lawyers can transition out of law. I think lawyers are really good at recognising it, because if you're a lawyer, you write well, you communicate well, you're articulate, you usually have really good business judgment because you're constantly giving advice to people making decisions, and you can take large quantities of information and distil it really simply and quickly. You can project manage: I was a transactional lawyer for a long time, so you're project managing constantly. Lawyers have all these skills which are really, really valuable in business, as CEOs, as COOs, much more broadly than just in law. So I went and made that jump. And I loved being a CEO, but I wouldn't have been as good a CEO if I hadn't been a lawyer first, if I hadn't worked on annual reports, worked on capital raisings, done M&A, all of that.
So I made the jump to CEO, and then I got offered my first board role, on the Australian Associated Press. Well, I'd been on the board of the YMCA, I'd been deputy chair, but I got offered my first board role after I left that CEO role and kind of fell into it, I think, and then I started searching for board roles and I got them. The main lesson I took from that process was this: when you are a lawyer, people tell you, oh, well, you can have a portfolio, just join boards. And I discovered that I didn't just want to be on boards. I wanted a portfolio that was more expansive, so I started teaching and doing other things. And I think that's the main lesson: you can create a portfolio out of anything you want. You can reduce your full-time role, go and get a role that's just two days and have that be part of your portfolio so you've got some stability; you can teach, you can write, you can sit on one board, you can sit on an advisory board, you can speak. I've created a portfolio for myself that has elements of all of those things, and I love the variety. But now I'm always saying to lawyers, if you want to make that transition, do it slowly. I had a board role while I was still a lawyer and while I was a CEO, and I think that was a really good option. It was a very large not-for-profit, but it doesn't have to be a not-for-profit, it could be anything. Start to think about that transition ten years before you want to do it.

Richard Nicholas:

No, that's good advice. And you've obviously had various roles in different organisations, from GC to CEO to non-exec director. What do you see, then, as the future for AI within businesses, from the sorts of perspectives you've held?

Shirley Chowdry:

I think AI is here to stay. Those people who are saying this is a fly-by-night thing, or we don't want any part of it, I think they'll be left behind. AI will be integrated into all parts of our businesses. I mean, it's reforming law. It's changing how we educate young lawyers, how we educate at universities, how we educate at schools. It is absolutely here to stay. I think we are at the point where the best results from AI are going to come from the overlay, the collective, of human intelligence and AI intelligence. I suspect one day we'll be at a point where AI is much stronger, much better, can do much more by itself. We're not there yet. I don't think this is something to be scared of. I think it's something to be excited by in an organisation, with one caveat, and that is: eyes wide open. There are real risks with AI, and we have to be alert to them. We have to manage them. We have to create frameworks and structures in our organisations so that people know we're aware of them and we're managing them. Organisations that blindly embed AI without doing that will face lots and lots of problems, whether from shareholders or customers or from people inside the organisation. So I do think we need to be eyes wide open. But this is exciting for organisations, it's exciting for customers, it's exciting for shareholders. I see real upside with this, but eyes wide open.

Richard Nicholas:

And that seems to be the theme, really: an exciting opportunity, but keeping eyes wide open. I know there will be businesses, individuals and organisations listening to this who will want to get in touch with you after everything you've said and all the experience you've shared, which I'm massively grateful for.

Shirley Chowdry:

No, thank you.

Richard Nicholas:

How can people get in touch with you, if they'd like to?

Shirley Chowdry:

Oh look, LinkedIn is probably the easiest, but I also have a website at ShirleyChowdry.com. I'm active on LinkedIn, and I'm reasonably good at responding to people's messages, so that's probably the best way.

Richard Nicholas:

Fantastic. Well, once again, thank you very much. There's a huge amount I've learned there, in terms of the framework you suggested, the way boards are actually using AI, and the future for AI. I think it is an optimistic future, but like you say, eyes wide open.

Shirley Chowdry:

Yeah, thanks Richard. And one of the most important things I would say to the people listening to this podcast is that there is real opportunity for GCs and legal teams in this move. If you can think expansively about your role in the organisation, especially if you're thinking about what comes next, I think this is a great opportunity.

Richard Nicholas:

Certainly. Shirley, thank you very much. I really appreciate it.