CCAB Ethical Leadership Podcast
Ethical leadership isn't a destination; it's an ongoing conversation. The CCAB Ethical Leadership Podcast brings together leading voices from across the accounting and finance profession to explore the complex ethical challenges facing today's business leaders.
Hosted by Tom Parker, each episode draws on the expertise of senior practitioners, academics, policymakers, and specialists to examine the real-world decisions that test our professional principles, from the rise of artificial intelligence and the risks of data misuse, to the human dimensions of organisational culture and the responsibilities that come with leadership.
Whether you're a practising accountant navigating the pressures of a rapidly changing profession, or a business leader trying to build a culture your people can trust, the CCAB Ethical Leadership Podcast offers the insight, perspective, and practical guidance to help you lead with integrity.
A podcast from the Consultative Committee of Accountancy Bodies.
How will AI reshape culture?
In the second episode of the CCAB Ethical Leadership Podcast’s AI special, experts from across the accountancy profession discuss how AI could influence workplace culture, leadership, and professional development.
AI has the potential to be both a positive and negative force on culture, and could impact multiple aspects of how we work together. Tom Parker brings in experts from across the accountancy profession to weigh up the pros and cons, and discuss how we can get the best from AI, while avoiding the worst.
Links:
- Ethics resources
- ICAEW Ethics and Company Culture hub
- CIPFA Conduct and Ethics standards
- BR Insights publications
Guests:
James Barbour, Director of Policy at ICAS
David Lyford-Tilley, Head of Standards and Technical, CIPFA
Laura Hough, Director of Trust and Ethics at ICAEW
Wieke Scholten from BR Insights
Host: Tom Parker
Producer: Natalie Chisholm
Episode recorded: 4 December 2025
A Hack Creative and First Touch production for CCAB
Tom Parker:Welcome to the CCAB Ethical Leadership Podcast. I'm Tom Parker, and in the second episode of a two-part special on AI, we'll discuss the impact of AI on organizational culture. AI has the potential to be both a positive and negative force on culture and could impact multiple aspects of how we work together. We're going to weigh up the pros and cons and discuss how we can get the best from AI while avoiding the worst. Joining me in the studio today for this episode are James Barbour, Director of Policy at ICAS.
James Barbour:Great to be here. Thanks very much, Tom.
Tom Parker:David Lyford-Tilley, Head of Standards and Technical at CIPFA.
David Lyford-Tilley:Thanks for having me.
Tom Parker:Laura Hough, Director of Trust and Ethics at ICAEW.
Laura Hough:Great to be here. Thanks for having me.
Tom Parker:And social and organizational psychologist Wieke Scholten from BR Insights.
Wieke Scholten:Thank you, Tom. Hello everybody.
Tom Parker:Before we get into the AI part of this puzzle, I'd actually like to start with you, Wieke. How does organizational culture shape the behavior and outcomes of an organization for better or worse?
Wieke Scholten:I think that answer starts with talking about what organizational culture is. For the purposes of this conversation, I would define it as the sum of how we do things here in an organization, and why. So this is about behavioral patterns and drivers: how do we do things in an organization? There are always aspects of that which help you deliver good outcomes, and there are always aspects of how we do things here that may, often unintentionally, lead to something that nobody wants. I think that's a helpful starting point for talking about culture that makes it more practical, because how we do things here is about how you make decisions, how you communicate, how you respond to things that go wrong, and not so much about values and intent and where we want to be.
Tom Parker:Well, David, how transformative is AI, potentially, when it comes to organizational culture?
David Lyford-Tilley:Potentially quite a lot, I think, because it changes the shape of the discussion around what people do. It's very much a technology that interacts a lot with trust. I suppose I'm an accountant, so I see pretty much everything in terms of trust. But it's about both sides: does the organization trust the individuals that are working there to use it, and to use it appropriately? Do the people that work at the organization trust that their bosses aren't going to kick them all out and replace them? Do they have an open conversation about AI and its use and its risk? Or is it more of an open secret, or even worse a closed secret, where people think, oh no, we are very good on AI, we don't use it here because that's our policy, whereas in reality it's being used, and perhaps not used very well.
Tom Parker:Well, that looks at some of the negatives. What I actually want to do is start with some of the positives. So, Laura, a lot of what AI is seen to offer is efficiency. What can the greater efficiency that AI provides do for organizations, and how does that improve culture?
Laura Hough:So I was at a conference yesterday and a lady mentioned something called relational offsetting, which is using AI tools to do the kind of heavy lifting of an activity that doesn't require human input, allowing the human to focus very much on the human interaction. The example she gave was a conversation between a probation officer and the person on probation, and how the officer could really focus in that discussion while the AI took the minutes, rather than tapping away on a phone whilst trying to have this in-depth conversation with somebody. So I can see that that could create efficiencies without taking the human out of the situation. And also, in my kind of job, there's so much information I have to read today, had to read yesterday, and have had to read the whole time I've had the role. Having a tool that could pull together the pieces of information that are really relevant for a particular subject matter would be really helpful for me and make me more efficient in my job.
Tom Parker:And those efficiencies work across lots of different parts of, say, an organization in accountancy and finance. There must be lots of different ways that that can give you some data and spare you some time, I assume, as well.
Laura Hough:Yeah, I think it absolutely would save time, but I don't see it as necessarily reducing people's roles, but allowing them to focus more on the things where they're adding the value, rather than a task where the AI could add the value and actually be better at it than a human. I always see that there are some things humans are very good at and some things AI tools are very good at, and if we get that balance and that mix right, we can make the best use of it and make our organizations as efficient as possible.
David Lyford-Tilley:I think it's also about, and this is what I was thinking of in terms of the culture, having had that conversation about what are the things that AI is good at, and what are the things it's not, that the humans are better at. Because if you don't have a shared organizational understanding of that, I think it's very easy for people to go and try stuff out that maybe isn't the best way to go about it, or not to have thought about some of the implications of it. Whereas if you have that open, trusting, two-way conversation about how you want to use AI in your organization, you can both increase trust in the organization and make sure you're using it in the best way.
James Barbour:Yeah. And if we can free up the mechanical tasks, that should allow us to actually interact more with humans within the organization, and that's where, as Laura was highlighting, the real value can be added. And we've even got the third component of this: we've got the AI, so we could have humans interacting with the AI. One of the best uses of AI I have is just trying to break blank page syndrome. It doesn't need to be perfect, but it gives you a starting point, and it's far easier to criticize something once it's there than actually to start it. So I think there's a lot of value to be added. Yes, it can have an impact on culture, but overall we need to make sure it doesn't overreach. The culture has to be set at the top of the organization by humans, not by a machine. They then have to make sure that's replicated through the organization, to the extent that AI is seen as helping us in what we do but can never override that human factor.
Tom Parker:Yeah, it's a great point there about leadership. So Wieke, how important is leadership when it comes to shaping that organizational culture, especially around AI?
Wieke Scholten:Yeah, so I think there are different ways to look at that. To your point, James, there's tone from the top: what the executive team, or the senior leadership teams, send out in terms of cultural direction. That is of course important, because it helps people to understand this is where we want to go. However, what we also know from behavioral science is that where culture really shapes itself is at that shop floor, coalface level, in the daily operational reality. So it's the direct line management that you receive, what people see in what we do together, and how a direct line manager, for example, contextualizes why we use this AI tool in this case, and why not. Having direct line management initiate that discussion is very important. It's almost like, I don't know if you know the movie Silence of the Lambs, with Hannibal Lecter, and he has this quote: you learn to love what you see every day.
David Lyford-Tilley:Hopefully only a little bit like Hannibal Lecter.
Laura Hough:And I think there's a point as well, isn't there, about how AI will be used in the way that other things are already used in the organization. If you've got an open culture where people discuss things, challenge things, say, oh, I've got this new innovative idea, shall we use it? then that's what will happen. But if the culture is keep your head down, do your work, and go home as quickly as you can, then that's what will happen with the AI tools when they're used.
David Lyford-Tilley:It can be a force multiplier. It exaggerates, perhaps, what's already there. There are some parts of it, I think, which maybe cause changes, but a lot of it is just about doing the things we've always done more quickly and perhaps more efficiently. So if you've got some cultural issues, it can blow those up; if you've got some cultural strengths, I think it can reinforce those as well.
Tom Parker:Wieke, I want to build on James's point that AI in fact frees up more human-to-human interaction. What's the benefit of having more human-to-human interaction when it comes to, say, creativity, or the water cooler chats, or anything like that? How important is that for organizational culture?
Wieke Scholten:Very, because we're social creatures. If culture shows itself in our behavior at work, then the way we behave is very much shaped by our direct social environment as well. So those water cooler conversations are essential in shaping culture in that daily operational reality. And an interesting point to think about, perhaps: here at the table we talk about an open culture, or an open communication culture. If you really have a culture where people speak out and speak up and share what they think, that's great, and most people actually want to do that. We know that people want to do the right thing, want to belong where they work, and want to be open about their internal dialogue. However, there are always situations where they don't. So instead of asking, do we have an open culture? maybe ask, where is it actually most challenging to express that you are not sure about what you're currently doing with AI, for example?
James Barbour:That's absolutely key. For me, if people speak up early and feel comfortable doing it, that saves disasters at the end of the line. Let's solve the problems at an early stage, for the benefit of the organization, the people, the investors, whoever it may be. If we don't, these things tend to fester. It leads to bad organizational culture, and it might mean a lot of good people, good talent, leave, and you don't want that. So we need proper speak-up, listen-up systems in place, and those who are tasked with listening up have to listen. It doesn't mean that at the end of the day someone is disciplined, but they have to investigate and look into what has been put before them, so that people trust the system. And if you get that trust and that culture installed within an organization, I think it can be really, really beneficial.
Tom Parker:Yeah, I want to look a little bit at upskilling here as well, because there have been quotes bandied around to the effect that AI isn't coming to take your job, but it will come for those who haven't used AI and don't understand it. I wonder what this means for upskilling and professional development within organizations. Laura, how can AI potentially help that upskilling, but also, how important is upskilling to using AI and it becoming part of your day to day?
Laura Hough:So I think AI could be a very powerful tool in terms of training people: tailoring training to them, and providing them access to the knowledge that other people already have, for example recordings of training sessions or of other people's meetings. They can easily access those now, which didn't used to be so easy. I think we'll need to find a way to train the future workforce to critically assess what AI produces for them. Maybe the best way in the future will be to get, I don't know, an AI tool to prepare the financial statements for you. But what's your role then as an accountant in critically assessing those, finding out where the weaknesses are, validating them? It's just a sort of different lens, I think.
David Lyford-Tilley:And that's something people say a lot of the time, almost as a cliche: human in the loop, you do have to have somebody involved in that. But that's got to be somebody meaningfully in the loop. There has to be somebody who's actually trained and understands what the AI is doing and what it's good and bad at, who's empowered to actually challenge it and override it if they think it's made a bad decision, and who's had the organizational support to do that. It can't be a human in the loop in the sense of, the AI says to press the red button, so the person presses the red button. There has to be some real two-way empowerment, and that, I think, is a case for upskilling; that's exactly the kind of area that would be really valuable if you're going to have that human involvement in those processes. The challenge, I think, for many of us here speaking on behalf of professional accounting organizations, is this: our current workforce probably has the knowledge and the expertise to challenge AIs doing those things, because they gained that experience in a pre-AI world. Ten years from now, how do we get people who have only ever worked in an AI world to that level of skill, if nobody's doing the things that we cut our teeth on now, because AI does them? That's going to be a challenge for us in the future, I think.
Tom Parker:Yeah. How do we solve that? I mean, if we're looking at the path of education, we still teach kids in schools in a traditional, examination-based way of retaining knowledge and doing maths in your head and things like that, when we've got a technology that does it all for us. Where is our empathy being taught throughout educational formats, and then into the next generation of workers? Maybe we can look at that a bit.
James Barbour:We need to look at the aviation industry. The pilot doesn't fly the plane; the autopilot does, but the pilot can still fly the plane, land it and take off. So they use visual simulators and so forth to train, and we need to look at more of that. How can we artificially create some of the environments that will allow that sort of training to be embedded? A lot of consideration will need to be given to that, and the whole case study type scenario can come alive, because when you're thinking about ethics and what can go wrong, it's the disaster story that leaves those really vivid impressions, and that's what you want for people. You always want people thinking before they act: how would I appear before my peers, before a parliamentary committee, or on Newsnight to defend my actions? If you have that at the front of mind, it's usually a good way of getting them to do the right thing.
Tom Parker:We touched a little bit on this in the first episode: trust, and the accuracy of AI as well. If you've got a human in the loop, and the experience of that human is going to inform a decision based on the data the AI has given you, how do we make sure that we have the right, clean, correct data fuelling that system, and then the right human to accurately tell whether that information is correct? I'm thinking outside of the accountancy industry, about medicine. You have Pixel AI, which looks at melanoma, and unfortunately the data it's been built on over years and years has been fundamentally Caucasian images; we don't have the pictures to train the AI models on. So you need an experienced doctor to say, well, yes, even though that has a greater accuracy of 97% compared to the human's 96%, it's actually still missing a large part of the population for whom it could tell whether it was melanoma or not. How do we equate that to accountancy and finance? How do we weigh the human experience against the efficiency and credibility of AI's accuracy? Open to the floor.
David Lyford-Tilley:So I think with AI trust, so much of this is about getting the right data, to get the right input, to get to what we want. We have to make sure that the organizations we're working with have the data that we need, in a format that can be used, but it's also about what the AI systems that we're using have been trained on. Large language models, and the way that they work now, are very probabilistic, which means they're really good at things like generating speech, where it's a lot more open to pattern interpretation, but aren't so good at things with a definitive answer, mathematical things where there's a specific right answer, because they will give a probabilistic answer. If you ask one to generate a random number from one to ten, it gives a seven more often than the other options, because humans give sevens more often than the other options. It's just replicating the pattern that it's seen, and while the systems are getting better at that, they're not there yet, and we still have some work to do on getting to the right point. So understand things like: what data does this system have access to? In what way was it trained? What is it missing? What errors and omissions, like you were saying about not having pictures of non-white skin to work on for medical information: what's the equivalent data that's missing that it doesn't know about?
James Barbour:Yeah, if we don't have the right data, how are we going to get the right output? It's the old garbage in, garbage out. So it's absolutely essential that you know what data you have: is it the right data for what you're trying to achieve? And that, to an extent, will fall upon those in the accountancy function within an organization. Because if you look at sustainability and emissions, there's data coming through there now, but is the data accurate? That will evolve, and AI tools will be there to assist as well. But it really comes back to this: accountants can take on the role of governing the data, because data guardian is an essential role, I think, as we move forward. And it won't just be in sustainability; more and more will come onto the role of CFOs and accountants, but the underlying skills they have been trained in will serve them very, very well. And that comes back to trust, and the fundamental ethics that are instilled in all accountants.
Laura Hough:I think there's a question as well, isn't there, about what we're willing to accept as our risk tolerance. I was thinking of driverless cars earlier: even if they have fewer accidents than human drivers do, we are not prepared to accept any accidents as a tolerable level of risk in that situation. And that'll be different for every situation, every type of data, every activity. So I think we need to get people involved very early on in designing these things, from a range of different backgrounds, not just programmers, but people who do jobs like ours, bringing different perspectives to the design of the tools before they become products.
David Lyford-Tilley:That understanding of risk, I think, is really important, because this also goes to that same organizational, cultural discussion about the use of AI: understanding what is an acceptable use case, and what is a really high-risk case where AI should not be involved, or should be involved only with very thorough review and very close oversight, because it's really, really important and we can't afford for it to be even a little bit wrong.
Tom Parker:We're perfectly segueing from the positives into the negatives, so thanks everyone for that. So, Wieke, I want to ask about some of these behavioral risks when it comes to the adoption of AI.
Wieke Scholten:Yeah, behavioral risk is a term for talking about the poor outcomes that can be driven by aspects of the way we do things here. When we introduce something new, say, to certain teams that have not worked with AI and now start doing that, it's indeed also important to look at the risks from a behavioral angle. Where are our vulnerabilities when we come to decisions? Are we, for example, under high commercial pressure that creates a certain incentive to take a shortcut in some way? And not intentionally: as we said earlier, unethical acts at work are often not the result of lots of malicious intent, but of factors in people's daily working environment that encourage something that is seen as unethical. When we introduce something new like AI, we tend to focus on all the positives, but we simply have to balance that with: okay, and where might that not work? So if we do have open communication, where do we not speak out, and why? We always say we weigh the risks, but where do we not, and why? Commercial pressure could be one factor, but there's a range of factors, I think.
David Lyford-Tilley:Yeah. It also, I think, goes to what the incentives in the culture are. We were talking earlier about trust: do I trust that the management level is going to leave my job intact? Because I've seen this happen, where people say, I've written this computer program, or I've set up this AI thing that automates a big part of my job; how do I stop my boss finding out? Because if they know I've done that, are they going to fire me? That, I think, shows a poor culture. They should be saying, I'm really pleased to tell them this, because my boss is going to be super happy with me, promote me, and move me on to other things. So the difference in how people react to that situation, and what they expect to happen if they say, I've automated a big chunk of my job, is a good example of what the culture is saying.
Tom Parker:Because I wonder whether it then moves a little bit more towards that, the longer we are in this process of interaction between artificial and human. Maybe a year or so ago, if you said ChatGPT had written part of your work, then bosses might look at it and say, do we trust that technology? And also, I'm paying you for eight or ten hours a day, whatever it might be, and you've done it in three and then gone and walked the dog and gone to the bar or whatever. That does look bad. But if it frees up more time for you to be better at your job, then I think that's where we are now in this conversation, right?
James Barbour:Yeah, but going back to the risk, there is a risk of automating bias if people just accept what's coming out of the AI without properly checking. We've seen hallucinations appear in reports; there are well-publicized stories there. So we have to be careful, and it goes back a bit, for me, to the culture points we were making earlier, to speaking up. If someone within an organization has a sense that the AI is not working appropriately, they should feel safe to go and report that, regardless of how much has been spent on developing that particular tool, because if they don't, there could be severe repercussions further down the line. It goes back to that organizational culture being really, really important. Yes, we want to get the benefits, but we need to make sure appropriate safeguards are in place.
Tom Parker:Yeah, so looking at those safeguards, around data security and around biases: with such a fast-evolving technology, it's changing all the time, and regulations come after things have happened. How do we feel about creating this secure ideology within an organization, that the tools we're using are essentially our friend, they're going to help us, but we do still have to have some checks and balances on the information that's coming out?
James Barbour:Yeah, I mean, again, it entirely comes back to that balance point. It's all about balance. We can only really explore the opportunities, as far as I'm concerned, if we have the appropriate guardrails in place and people feel safe. For example, if they're using enterprise systems, the data's not going to be inadvertently leaked and cause major issues there. So the use of AI should be encouraged, within certain parameters. And as was mentioned earlier about the more risky cases, those really have to be subject to greater controls and so forth. So I'm not saying go down the route of the EU AI Act; in a sense they're already rowing back on that. To me, a lot of this goes back to personal responsibility and accountability. People also need to play their part in this, because that really is important, as well as having the governance and the controls. Professional accountants are taught: this output is yours, not the AI's. If you're going to sign off on this, ultimately you have to take responsibility. And I think that's really important.
Wieke Scholten:But I also think that's hard, because that is socially driven. We know from work that has been conducted in accountancy firms that, even where every individual accountant has that professional integrity and pride, when we work together in daily working life there can still be social factors that, let's say, distract us from that internal moral compass. And what we do know is that the group moral compass overrules the individual moral compass, even though we think that's not the case. It would be great if it didn't, because then it would be a lot easier: we could just tell ourselves this is how we want to behave, and then we'd do that. But we know that at work it doesn't work that way. So that means, I think, you need continuous management of those behavioral risks: always be curious about where the desired culture, in terms of, for example, feeling safe, breaks down. Where are there situations in our organizations where people do not feel safe? Because those situations will always be there. Be curious about that, instead of just saying, this is what we want to be.
Tom Parker:The way I think about this is that we've gone through industrial revolutions before and the workforce has changed, from farms to factories and things like that. Everyone expects negatives, and there are negatives that come with it, but essentially you retrain and the organization moves on. Going back again to how quickly AI has come into it, in fact how quickly technology has changed: I'm thinking pre-pandemic to post-pandemic. We're working in a hybrid way; before, it was everyone in the office, all interacting together, lots of team interactions that took up time. Where are we now? Can I ask everyone: compared to where we were in the last couple of years, where are we going to be in the next two years? Because everything is moving so quickly.
David Lyford-Tilley:People sometimes phrase this as being in a post-singularity world: the pace at which technology changes at some point passes the ability of human beings to keep up and understand what's happening. I don't think we're at that point yet, but at some point it becomes the case that you can't necessarily always know what the latest technology is and what it's capable of, and it does change so quickly. I think we've seen this huge shift, as you say, like the remote and hybrid change over the pandemic: a sort of natural experiment caused by everybody having to try it because there was no choice, and in many cases finding that it actually worked well for people. Many organizations have not sprung all the way back to where they were beforehand; they've reached some sort of new balance. But we're still figuring out the cultural implications of that. Managing and training new people, and team building, are practices based on a very long history of working together in the same location most of the time, and we're still figuring out how to make them work in a world where that happens only some of the time, and for some people doesn't really happen much at all. My own team in my organization is spread across the UK; we've got one in Northern Ireland and one in Oslo. They're spread about a bit, so we have to think about these questions. I think that's pretty typical now, and it has some real benefits, but it does require some extra thought as well.
Laura Hough:I think I worry most about people who are new to the workforce: the first job out of university, or the first job after a training contract, or even during that training experience as an accountant. Those three years are very formative. Doing all of that work on your own in your bedroom at home is not really the same as interacting with colleagues, making friends, and learning from older colleagues. So I think we have to give that proper thought and consideration.
Tom Parker:James, Wieke, anything to add?
James Barbour:I think we really need to work out what is best for the organization. By that I mean there'll be certain people for whom it's better working from home most of the time, and certain people who are better in the office; it depends on the role. I think organizations need to accept that rather than force one particular way of working. Now, there might be certain businesses that have to work in one particular way, but I do think it really is about trying to get the best for the organization, sitting down and saying, this is how we can best come together. It will be different for every organization; I don't think there's one answer.
Tom Parker:So then, perhaps one final point from everyone: if you had some advice to give to the leadership of an organization on how best to implement AI while maintaining a good company culture, an organizational culture, what would it be?
Wieke Scholten:I think it would be a combination of setting the standard, in terms of "this is how we want to use AI, and this is where we don't, for these reasons", so that contextualization and setting of direction, and pairing that with a continuous curiosity about the daily situations where that is hard to do: where it's challenging, where we fail to do it in the right way, where things pop up that we said we would never do. It's that type of curiosity about those daily situations, and learning from them.
David Lyford-Tilley:I'm always a big advocate that people should have some sort of AI use policy: something accessible and non-technical, something that's useful for everybody, because otherwise you have the default "do whatever you like" policy that you start out with. That has to keep evolving, but you can also set some bright lines. You might say, for example, "we don't think that using AI-generated art instead of commissioning artists is an ethical thing to do, so we are not going to do that." That's the sort of line you could set for your organization, and explaining why you've made that decision helps to both set the line and build the culture.
Laura Hough:I think it's important to take everyone on the journey of using those tools with you, right from the bottom of the organization. Explain to them why you're bringing in certain things and not others, because if you just try to set something from the top, it's very hard to filter it all the way through to everybody. I believe the use of AI tools will follow the values and culture of the organization that are already there, so in a sense you almost need to sort those out first and then build the AI in afterwards.
James Barbour:For me, it's: build the strong governance, controls, and ethics foundations, and then let's explore. It's not just about efficiency savings; what is there that we could potentially do that we've never done before? So explore real business opportunities, and to do that I think it's good to get different age groups together within an organization and really explore and challenge: why have we never done this before? If you do that, you can open up a potential new set of areas that could be developed for the better of the business. It gets people together again, which, culture-wise, is very positive. So I think it is: create a safe environment, and then let's explore the universe. For me, that's where we're going.
Tom Parker:Well, one last very quick question, because I'm fascinated by this. If we use AI more and more and it makes us more efficient, and it makes us better at being humans, are we going to get down to a three-day working week? A two-day working week? Or are we still going to be doing this industrial nine-to-five, five days a week? What do you think?
Laura Hough:There's a question about productivity there, isn't there, rather than just efficiency. Perhaps by being more efficient on these transactional tasks, for want of a better phrase, we may become more productive. So maybe not fewer working hours, but more productive working time.
Wieke Scholten:Yeah, perhaps we will also have more time to think about quality, because that's the thing with efficiency and quality, right? There's a trade-off. And I can't imagine that it will be a three-day week; that's my conclusion.
David Lyford-Tilley:I suppose, full disclosure, I work for an organization that has a four-day working week already. But I do think there's a point that work expands to fill whatever space we give it. So there's room for us to do more if we're more productive or, as you say, more efficient. So much of what we do tends to be deadline-driven, reacting to the immediate, so perhaps it gives us some time and some space to think about the longer term and accomplish more as well.
James Barbour:And I think you always have to consider that we spend a lot of time at work, and I don't always see work as a negative. It is a social environment; you get the opportunity to think as well. I've seen a lot of people who retire and can't cope; they couldn't get used to having all this time. So yes, we really need to focus on wellbeing, and if we can reduce hours, great. But I think we also have to make the workplace as good as possible, so it's an environment people go to every day not dreading it, but thinking, this is an environment I want to be in, and we can do a lot of things to make society better.
Tom Parker:Well, another fascinating conversation. We are at the end of the episode, though, so I'd like to thank all of our guests. Thank you to James Barbour.
James Barbour:Thank you very much, Thomas. It's been a pleasure.
Tom Parker:Thank you to David Lyford-Tilley.
David Lyford-Tilley:Thanks for having me.
Tom Parker:Thank you to Laura Hough.
Laura Hough:Thank you so much,
Tom Parker:and thank you to Ika Schul.
Laura Hough:Thank you.
Tom Parker:Well, that's all for this special on AI. If you've found these podcasts useful, let us know by getting in touch or by following this podcast on your favorite app; we may well make more of these if you ask for them. There are links to a number of useful resources in the show notes that will help you determine how to approach AI from a cultural perspective, so be sure to check those out. Thanks for listening. Bye for now.