Evolving the Enterprise

Human-Centric Transformation: David Holton on Balancing AI, Customer Experience, and Culture in Banking

SnapLogic Season 4 Episode 11


In this episode of Evolving the Enterprise, host Dayle Hall speaks with David Holton, Chief Transformation Officer at Cambridge & Counties Bank. With over 15 years of experience in financial services, David has led transformative initiatives focused on data-driven growth, AI enablement, and customer-centric modernization.

David discusses what it really means to put customers first in an increasingly digital world—and why human connection, cultural readiness, and trust remain the cornerstones of sustainable transformation. From shared-risk vendor partnerships to the moments that truly matter in customer journeys, David offers a practical, people-first view of how to use AI responsibly and meaningfully.


Dayle Hall: Hi, and welcome to our latest podcast. This is our podcast where we dive deep into the strategies and technologies driving business transformation across multiple industries, multiple technologies, and how we can bring it all together to help the enterprise.

I'm your host, Dayle Hall, SnapLogic CMO. Today, we have an accomplished guest. David Holton is joining us today. I'm going to call him a transformational leader. He's going to tell me whether that's true or not. But he's a transformational leader with deep experience around modernizing financial institutions, shaping customer-centric journeys, everything to help a customer really get to know the company they're working with and have better experiences.

He's the Chief Transformation Officer at Cambridge & Counties Bank. He's focused on making the organization data driven, AI enabled, while also very carefully still making sure you protect the relationships with customers and providing the kind of support that really drives customer value. He's drawing on 15+ years in financial services. He's got a unique perspective on how to align technology, people, and governance to make sure you can move the organization forward.

David, welcome to the podcast.

David Holton: Thank you. What an intro, Dayle. I appreciate that. Let's see if we can live up to it.

Dayle Hall: Exactly. We start big, and then it doesn't matter what happens from here. Everyone's hooked. Well, look, David, I appreciate- I know we've met before. You're a customer of ours. We've seen you on stage at our INTEGRATE event in London, which was excellent. For our podcast listeners, just give us a couple of minutes. Give me your background. What happened during your career to get to the point where you became Chief Transformation Officer at Cambridge & Counties?

David Holton: So I fell into this type of work, I would say. I started off as many graduate trainees do in banks, looking at a range of different things. Most of my career to date had been in frontline sales, working directly with customers, etc., assessing their needs, working out what the bank could do to really support customers.

I then looked around at various execs that I liked, that I really respected, and looked at their careers, and they'd been all over. So I decided to take a bit of breadth in my career rather than a traditional linear approach. Went into risk, went into marketing, went into ops, found myself running the workplace pensions business, which was probably outside of banking, for a period of time with one of my employers. I then went into consulting, which you'll be well aware of, Dayle, in that regard. I decided I liked doing the doing more than talking about it.

Dayle Hall: I like that. We like doers.

David Holton: Yeah, get back into the world of industry. But probably in the last 10 years, I've focused mainly on a lot of the transformational change. When I was on the frontline, the view was that a lot of the people we had on the tech side really understood technology clearly but struggled a little bit converting that into genuine understanding of the customer.

And I think there's a role to play at the intersection of technology and customer and where those two can meet and how you can really understand how technology can be an enabler for doing things rather than just a product. I found that quite exciting, and that's what I've tried to do in the roles I've had since.

Dayle Hall: Look, I think what I like- it's an interesting background because I think you've been probably closer to the customer leading into this role than some of the other guests I've had on here, who've talked about how they've come from an engineering or even a data research kind of background, which obviously gives them a different kind of perspective. But do you feel that in this type of role within your organization, because you've been close to multiple types of customers, you've done different roles, you've really got that business perspective, has that really helped you understand how the technology can drive the business?

David Holton: Yeah, I’d certainly like to think so. I’ve always had an interest in tech, and I think that helps. Otherwise, it would've been probably an impossible move across. I always think, and I’ve said this a little bit earlier on, technology is the enabler to be able to go and do something, a business outcome you're looking to drive, a customer need you're looking to satisfy.

And it's fascinating because when you've spent time out with customers, walking around their factories, looking at the buildings they're looking to buy, seeing their trading businesses, you get an understanding of what their business is trying to achieve. And the only reason they need banks is to be able to go and do more of what makes them successful. We are here to serve them.

And I think you need to understand the customer's business deeply if you're going to genuinely be able to serve them and serve them over time. Otherwise, all you are is a commodity. The organizations I've worked for don't want to be just seen as a commodity. They want to be seen as a trusted advisor supporting customers over the years. I don't think you can do that unless you actually understand them.

Dayle Hall: Yeah. I think that's a good perspective to keep reminding people. Again, look, when we do these podcasts, I'm always thinking of what are these nuggets of information, of insight, that someone can take with them if they listen to this? But I think a lot of us talk about, oh yeah, we're doing it for the customer. We really want to understand the customer.

I think a lot of us do actually get bogged down in our own processes: are we trying to get efficient? Are we trying to implement a new technology? I think what I heard in what you just said is, if you keep focused on what your customers are trying to achieve, keep focused on that as a goal, that should help drive your strategy, even though you still have your own internal things that you have to get done. That should be the North Star for everyone, right, depending on the role.

David Holton: Absolutely right. It's a bit of a cliche about everybody thinks customer-centric. Every organization will say that, but the reality quite often is different. A lot of places focus on, I've got a set of products, I need to be able to sell these products. How do I get the best return on that?

My view, and certainly the view of our organization here, is that is driven by a better, closer understanding of what your customers actually need. I've yet to see that customer-centricity not pay off. I think quite often, some firms find it really difficult to monetize because it's far easier to say, if I sell X products, I make Y revenue. Actually, what you don't see is, if I get to know that customer, I get to learn their business, then I get Y-cubed revenue from that.

I can't pinpoint that on day one. But I can tell you now, it comes, and it comes over time, because some of the softer side of the customer experience piece comes out, and it's very difficult to write a business case associated with it. Often, there's not always a clear line back to the technology. And when you see it, it's really powerful.

Dayle Hall: Yeah. I'm going to keep that in mind as we go through some of the questions that I've got for you. So let's start with keeping that customer-centric approach, which, it sounds like, is at least how you run your organization, which I think is excellent. If we think about what a technology partnership looks like somewhere like Cambridge & Counties Bank, how have you seen the traditional IT implementation model, or the model between technology partnerships, go from just, we scope something out, we execute it? Is that still how things are done, or with the capabilities we now have around automation and AI, has that changed?

David Holton: I think it's still how most third-party tech providers will approach something. Let us assess what we've got, and then I'll charge you for Y hours, days, months, etc. Where I'd like to see it going is more of a partnership approach, I think. And especially with a technology like AI, which is emerging, which you could argue is relatively unknown, certainly in terms of the pace of change that you've got with it, I think it's much more essential for partners to come and say, let us share in the risk, let us share in the outcomes. Rather than sell you a package of a hundred days, I will generate this outcome for you.

And it'll do two things. It will align the strategic provider and the tech solution with the customer outcomes, which our organization will be pinpointing, but it also shares the risk. Because otherwise, in this new world now, there's more information asymmetry than I think you've had with previous technologies, where a firm like ours might have very little understanding of AI, namely what it can do and the scope of where we might be able to take it, whereas an organization like yours will be far more attuned to the capabilities that you're seeing elsewhere and be able to bring that. So we need that potential shared, and we need you to partner with us on things like the design and discovery piece of work, rather than let us come in and scope something and deliver a project.

Because actually, I think what we'll see going forward is less individual project work and more, let's go on a journey together. And I know that sounds really cliched. I don't know where this is going to end up in the next 6 to 12 months, and I would love a partner that says, neither do I, but we'll go on this journey together. And actually, through working together really closely, we'll deliver better outcomes, because the only thing I can guarantee you now is whatever I say I want to build and would like to see in six months, it's going to be out of date by that point. So I want to be able to pivot. I want to be able to adapt as we go through the build and the design to get the optimal solution landing at that time, not what I thought the optimal solution was going to be six months ago.

Dayle Hall: I love the concept of basing it on outcomes rather than, call it, technology milestones or things that you want to achieve there, and then agreeing to it and partnering. When you approach customers and vendors with that kind of mentality, is it widely accepted? Do people really get it yet? Or are they like, yeah, we're trying to do it that way? Do they always come back to the core deliverables that they have to get done and so on? Do they really come back to the outcomes and say, did we actually achieve that? How hard is it to actually have that discussion?

David Holton: It's very hard. And I think, to be fair, this is still quite an embryonic concept. We are only just starting out on that journey now and trying to influence providers to be a bit more outcome led. I think some have an allergic reaction to it very quickly, i.e., no.

The way we run our business, effectively, is I need to sell you a period of time with my developers or that capability. I think those firms are going to struggle in the future because I think they're going to find the need to adapt their working practices and how they monetize the skills they've got for the future. I think the ones that have been a bit more forward thinking and are willing to, let's face it, invest a little bit of risk in helping design the future, I think they'll do really well.

Dayle Hall: Would it be fair to say then, if you were looking at selecting who to work with, would you make a conscious choice if someone was going back to, okay, yeah, but I still need to sell you this, and I still have to get you to sign this PO, and it's this many hours and this many people that are going to be involved? If you had a partner that was coming in and being more outcome driven, you felt like they were going to actually- maybe some of the other specifics on hours or features or whatever is not fully defined, would you gravitate more towards that if you were making that partner selection?

Because I think a little bit of the inspiration in some of these processes has kind of gone now; everyone's so definitive. You have to do it this way. And like you said, it's got to be this many hours and this much output. But it's almost because AI is changing, because there's so much possible, that some of that, I think, is lost. Would you make a conscious choice to work with a partner that was more inspirational that way?

David Holton: Yeah, I think it depends on the project and the outcomes you're trying to deliver. If you've got something that is fairly set in stone, I need to build X widget, then you can quite easily do that on a time and cost basis like that. I'd probably liken it to when Agile came out and everyone was gravitating back to waterfall because that's what people knew in terms of how to deliver some of this stuff. But I think the real benefit's going to come when firms are able to think in that shared risk model, because that effectively puts my objectives and what I'm trying to achieve for my customers truly aligned with yours, whereas in the more traditional way of selling services for a set amount of time, there's no risk on your part. You get the reward of the guarantee of the payment in that regard, and we effectively take all of the risk of, well, will this do what we needed it to do?

Let's face it, in that shared model, there's got to be something in it for the provider of the services. I would see the economics of what we do shifting away from fixed fee into shared outcomes as well. And I think that could be quite exciting from a- if I was sitting on the tech provider side, I'd be saying, well, okay, I'll share the risk, but I'd also like to share the reward in that regard. Again, for me, that would just make even greater alignment.

Dayle Hall: Now, companies like yourself are obviously taking risks trying to improve customer experience and so on with AI. Vendors like SnapLogic are also trying to figure out how to provide those kinds of services. How would you advise a vendor or someone that you're going to work with to show up? You said it yourself, things with AI are still being defined. No one is a true expert, even though many people claim to be. If you look at 10 websites that say they do AI, everyone's saying that they know everything about AI already, but the reality is we don't. So for someone like you in this type of role, how should a vendor show up? Is it thought leadership? Obviously, shared risk and shared outcomes I like, but what are the kinds of things that make you feel more comfortable working with a vendor?

David Holton: What I said on stage at your conference was you guys are only reading one page ahead of us in the manual. I got a bit of a reaction from the crowd on that one. But it's right, with technology that's emerging like this, nobody's that much further forward than others. I think you asked the question about how should people show up. I think that honesty and integrity of we don't have all the answers, so I want to see that humility. I want to see that honesty in there because that integrity is something I would put value on.

But the flip side of that is I also want confidence you're going to be able to do what we need to do, and it's worth paying you for it rather than just trying to learn it ourselves. The flip side of that is also, but here's what we've done and here's what we've seen where we've overcome some of those unknowns. If you think about what we've done in the last 12 months, we've gone from zero up to where we are now. We're really confident that in the next 12 months, we're going to go even further because it's operating at pace. And I think starting with that acknowledgement that this is evolving technology, and therefore, everybody's learning, will actually give some ease and comfort to purchasers of that who don't expect you to know all the answers.

And I think all of this comes back to, it lends itself really well then to much more of a partnership model where we can go on that journey together. You guys are slightly further ahead than us. That information asymmetry I talked about earlier on can get closed as we learn more. That knowledge transfer as well is absolutely critical for firms because whilst there's always a need for externals, there's always a need for organizations who specialize in certain activities and bringing them in at certain key points, we also need to learn more about how this stuff works because I genuinely believe there's a huge potential here, and it's incredibly exciting.

There's also a huge risk and with a lot of unknowns, especially in such a regulated industry like financial services in the UK. We need to make sure that whatever we're doing in this space, we're doing safely. It's safe for our customers, it's safe for our organization, etc. And therefore, the amount of effort we put into understanding what the right guardrails are, how we do that in a way that is going to deliver the outcomes and the improvements we want to see, but in a really safe and secure way, is absolutely essential. And I think, again, that's where having the insight from the likes of SnapLogic, for example, can be really helpful, because what you've seen work well elsewhere will be really valuable for us as we start to build out how that looks and for our companies.

Dayle Hall: Yeah, no, I like that. If we look at specifically then your organization and what they're trying to achieve, if the partnership we have or the partnership you have with other vendors, if it's working, how does that translate? What's the clearest way that you could say the customer experience is getting better, the broker outcomes, or you're seeing improvements that help your colleagues deliver this better relationship-led service? If the partnership's working, how does that translate then into your specific organization, into financial services?

David Holton: We love data. There'll be a ton of metrics that we would look at for efficiency gains and other things like that. But in a way, that's by the by. The thing I'm really excited about and really interested in with this is we are an organization, I think relatively uniquely in this space, who pride ourselves on that human interaction, that kind of direct-to-customer relationship. If you bank with Cambridge & Counties, you can call up any of our colleagues. And a lot of what I've seen in AI, and a lot of the pitches that I've personally had from firms looking to sell various services, are, we replace that with AI and you can almost have a thousand colleagues instead of the couple of hundred you've got there. I'm not interested in that. That's not what we want to do. That's not what our customers want to do.

I would use this technology to open up more opportunity for our colleagues to spend time with their customers, because I think that's a better outcome for our customers. I think that's a better job for our colleagues. And again, that kind of symbiotic relationship there really works. And therefore, I see AI effectively taking on what you would call the necessary waste. At the moment, we have to do a lot of manual data management throughout the process to assess a customer's need, assess the lending criteria, the credit decision, the anti-money laundering, etc. There's a lot in the process of getting from a customer having a need to us being able to lend them money or take a deposit.

Much of that I think AI could do, therefore free up colleagues to be able to do more of the face-to-face interaction. And I think it's less for me about, oh, we can do that process with significantly less people, and it's more about what are we using our people for. There's value-add activity, and there's wasteful activity, even though it's necessary.

Dayle Hall: It's interesting you say that because I think- I started this podcast, I don't know, three years ago. This was before we had the LLMs, a lot of the Gen AI capabilities. As soon as that started to come out, I did a podcast with some different thought leaders, and it was all about taking people's jobs. It was all about that.

I feel as though there's starting to be another shift. How you described it I thought was excellent, which is how you can augment what your team is doing. What are you using humans for rather than replacing the humans, which I think there's more of that. And there are some very large vendors out there that I won't name and CEOs that have said, we've got rid of like 3,000 people, and they've had to backtrack. They've literally had to rehire people back because you can't do everything. To me, that was a sensational headline. It clearly hasn't worked as a business strategy. So I think people are seeing some of that shift back, at least the narrative around AI versus human.

In terms of your bank, and you talked about it, your thinking around AI first versus human first, whatever you want to call it. But as an organization, is that a corporate-level understanding? Is that something that you've instituted in your team, or is the whole organization now looking at, yes, we can use it, but it's not about getting rid of more people? How does that translate to your entire company, the whole of the bank, rather than just your own organization?

David Holton: It's definitely not just a me thing. I think it's in the DNA of our organization. Ever since the bank was established, it's always viewed itself and taken some pride in the human interaction bit. Quite a lot of our competitors went on a very big digital transformation journey far earlier than we started looking at technology and have replaced large portions of the journey effectively with tech. We stayed away from that for a long time because we thought our unique selling point was effectively a human would do everything.

I take a slightly different view. I don't think a human needs to do everything. I think a human should do the bits of value, the bits that the customer sees, the bit of the interaction with the customer where the customer gets value from having a person rather than a machine do that. And I think we know which those value points are. And therefore, those are the things that we're going to protect.

But I wouldn't say I've had to go out and say, this is the way I think we should do things and we'll be human first. I think the organization has that in its DNA. And what I've had to do is say, don't worry, when we come and bring in either business process automation, digital transformation, AI, whichever bucket of technology we're looking at, I've had to convince the organization that we will do this and empower colleagues to do more with humans, not less. And that's a really compelling message. I think that is really important to us. It's important our colleagues know. It's important our customers know.

Dayle Hall: Is there resistance either way, either to stay more human first or to lead with AI? And as an organization, how do you bring people along on the journey? Because I know there are some people that are like, oh, let's go all in, and there are some people that are probably more reticent. I think that's natural. But what is your organization like, and how have you brought people along that journey?

David Holton: I think people change. In my experience, in all the organizations I've worked with and worked for, people change is always the hardest part. It's never the tech. It's always the people. And quite often it's because people genuinely care. Especially where I am now, the culture is so positive. We believe in what we do. And therefore, any change, even really positive change, could impact on that. And therefore, there's always a natural reticence of, but we really like what we do and our customers really like what we do, so why on earth would you change it? And my job effectively is to say, because we could do better. And I'm always looking at how we could better serve our customers.

Let's face it, not just because we want to be better than we are today, but markets change, customer needs change, things evolve, AI especially. It's so exciting because it almost democratizes the availability of technology and capability that previously you had to spend tens, if not hundreds, of millions of pounds to generate. What we can see with agentic AI now, with relatively modest investment, can get you to where some banks and some organizations have been spending huge sums of money for years on end to get to.

Dayle Hall: It's crazy.

David Holton: Absolutely. So I think the need to maintain knowledge of what's happening out there and assess that and then bring in the very best in your organization, I think, is only going to get more important.

Dayle Hall: I think what I like about what you said is you're at least looking across: what's the right point to inject AI? What's the right point to keep the human? And there's this concept that I think you've mentioned before, which is the moments that matter. How do you as an organization, or you within your team, determine which of those interactions that you potentially have throughout the whole customer journey, the experience, should be automated versus where a human needs to stay, potentially, as the front and center during that process? How do you make that decision? Do you talk to customers? Is it just something that you feel is right? Do you do it differently because other organizations do it? How do you make that assessment?

David Holton: In a way it's easy. Ask people what they value about the various interactions. Like most organizations, we do insight. We're heavily broker led, so we're heavily intermediated. So we deal with brokers, we deal direct with customers. They've all got different needs. They all value different things. I'll quote my boss because he said this on stage one time and it got a chuckle, but it's really stayed with me. No customer has ever thanked us for transposing their data. And I thought that was such a wonderful quote because he's absolutely right.

If you look at the activity behind the scenes of many organizations, it is data input into system A, data extraction, data input into system B, data extraction, and so on and so forth. That gives no value to the end customer. It is processes that we have created over time as we've built in new technology stacks, etc., and learned of different needs and different regulations that have come in and that you've had to accommodate rather than design from scratch. And I think what AI gives us is a level playing field now to be able to take the human out of some of those things and, again, reinvest that human time in the bits of the value chain that are really important.

For example, when a customer wants a new loan, they want to talk to us about, would you likely do this? Is this the kind of thing that you would be interested in? There's this little quirk. Could you get over it? Before I waste my time filling out the application form, going through the process, to be told no in X weeks or months, depending on the process. I don't want to digitize that because that's a really positive customer interaction. My teams will tell us that, the customers will tell us that.

So if you listen, it goes back to the first bit we were talking about of actually working on the side of the customer and working in the business. When you know this, you can quite easily, I think, see the areas where a human really gives value. And you can also see an area where you go, bloody hell, the tech would absolutely make a real big difference there. And I think the key is designing something that works, going from tech to human to tech to human, for example, and isn't clunky because of that.

Dayle Hall: My experience recently- and I'm from the north of England, too, so we grew up in the same kind of area. We have very distinct things about the way we grow up. But even in the US when I moved here, what banks and financial institutions started to do was become less human led; branches started disappearing. And then you couldn't get anyone on the phone. They moved to call centers. But now they're moving to AI call centers.

I'm trying to get to the question: do you believe that some institutions, not saying your own, but in your industry, have gone too far too fast, forgotten that it's- you just described it perfectly. If you ask the customers what they want, you respond to that, which I think is the best way to then look at technology to help that process. Do you feel some people have gone too far too quickly within your industry?

David Holton: Undoubtedly. The bigger banks have always had a very challenging time on cost-income ratios, let's face it, the economics, return on capital, and all the great things that we've all got to consider but that they're under a hell of a lot of pressure on. I think that can quite often drive a singular focus on the economics rather than the customer. I've been in these organizations, I've been close to them. You will get every single colleague talking about the fact that we are customer-centric, we're customer first, our people are our USP, etc., and genuinely believing that culturally.

But I think the activities generally get driven by a need to hit a certain economic return, whereas other organizations, like ours, say that's important too. I tell you what, if my boss was in here with me now, he would be saying our return on capital is incredibly important, [crosstalk] point of view is important. But we don't do it at the cost of our customers, because we genuinely believe, and I genuinely believe, and I think the data over time shows this, when you focus on the customer and you put them first, the other stuff comes. You've just got to go on a little bit more of a journey of belief to get there.

Dayle Hall: If I was in the UK living right now, mate, I would sign up for your bank just because of the way you described that. I'm going to tell my parents. My parents are still there.

Okay, let's move on to talk a little bit about risk guidelines around using AI within your kind of organization, because it's very regulated, with a lot of security and privacy challenges, obviously. So it's a big issue for every organization, healthcare, financial services; obviously it's more front and center there. How has AI moved from, call it, hypothetical to more practical, where everyone wants it and people feel like they can use it every day? Has the risk appetite in your organization changed? How do you think about policies or governance before you put something in place? How do you put some of these AI processes in place, given what you have to focus on as a financial services company, but also move fast to try and take advantage of the technology?

David Holton: It's a great question, and one I don't think I'm going to be able to answer to the fullest extent, because the honest answer is we're still looking at that now. I see it as probably the biggest risk I'm facing: it isn't, will the technology do what I need it to do? It's how on earth do we manage it going forward. Now, I think there's a couple of ways we've looked at it. When we built agentic AI, we've looked at splitting up the- I don't want to get too detailed and technical here, but we've looked at splitting up the tasks into a number of different agents. So the agents themselves are far easier to understand, to prompt, to manage. They've got very specific tasks.

Where I've seen it not work as well, but also be quite unruly from a management point of view is when you have one agent trying to do a dozen different things. I think that's where the agentic AI can get a little bit confused. Do I do that? Is that the priority? Is this one? I can't find an answer to that, so I'm just going to go to the LLM and just try and find something that might work and then play that back as the answer. When you cut it down into smaller chunks, I think you take away a lot of that bigger risk. So that's that.

Don't forget you've already got a lot of checks and balances in place anyway in terms of your oversight and your QA, so you need to use those. But you do need to adapt them, and you need to look at that. Now we've got SnapLogic, as you probably know, helping guide us through: here's what the right guardrails look like, here's what you need to be thinking about as you start to put some of these things live. And you mentioned earlier on that a lot of the firms that went very early and got rid of a lot of people are bringing a lot of people back. I think they're bringing them back because they're realizing, I might need fewer people now on the admin side, but I need a lot more on the QA, because I need to be looking at the outcomes and the outputs of these agents, and I need to be assessing it. Is it still doing what I thought it was going to do?

I think there's a lot of concern in the market about LLMs. Would you get them effectively making up things if they didn't know the answer? What does that look like? Well, how do you test that? How do you know that the programming and the prompts that you've given are still valid after six months and it hasn't learned bad behavior from some humans that have been interacting with it? I think at the moment, the thing I'm probably most nervous about is I think there are more unanswered questions than there are answered. But again, you go on the journey, you do smaller steps, you start putting some of this live, you see the benefit, you redo it, you QA it, you move to the next thing, you start incrementally adding to that, I think we'll all learn as we go through.
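In concrete terms, the pattern David describes, one narrow agent per task behind a simple coordinator rather than a single agent doing a dozen things, might look like this minimal sketch. The agent names, prompts, and the stubbed `call_llm` function are illustrative assumptions, not the bank's actual implementation:

```python
from dataclasses import dataclass

def call_llm(prompt: str, payload: str) -> str:
    """Stand-in for a real LLM call; returns a placeholder response."""
    return f"[response to '{prompt}' for input '{payload}']"

@dataclass
class Agent:
    name: str
    prompt: str  # one narrow, specific instruction per agent

    def run(self, payload: str) -> str:
        return call_llm(self.prompt, payload)

# Each agent has exactly one job, so it is easier to prompt, QA, and audit.
extract = Agent("extract", "Extract the account details from this document.")
classify = Agent("classify", "Classify the request type: payment, query, or complaint.")
draft = Agent("draft", "Draft a response for a human colleague to review.")

def handle_request(document: str) -> dict:
    """Coordinator: runs the narrow agents in sequence, logging each step."""
    audit_trail = {}
    for agent in (extract, classify, draft):
        audit_trail[agent.name] = agent.run(document)
    return audit_trail

result = handle_request("Customer letter about a missed payment")
print(result)
```

Because each agent's scope is small, its output can be checked against a single expected behavior, which is the manageability benefit described above.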

Dayle Hall: And you said oversight and QA, and it feels like it is a continuation that you have to keep on top of. Maybe it's different skill sets that you need with people. Within that QA, within that oversight, what do you look for? What kind of things are you, as an organization, saying, look, we have to check for these, these are our metrics, or these are the things we have to make sure are still doing what we anticipated and what we intended?

David Holton: Yeah. We look at agentic AI almost like a digital colleague. So what would I do with a colleague? I would look at their work. I would train them up. I would give them the mandatory training that we have to do. There would be that constant retraining, QA, the monthly conversation where you go: what was it you did? Did you do what we expected? What was the output like? Etc. You do the same, but you just do that with an agent. So you should know what it's meant to do.

The good thing about agentic AI, and certainly the solution we're building alongside yourselves at Snap, is the auditing of that is real time. I've now got more data, and we'll have more data, on what we're doing with each of these individual agents than we would have done with individual humans doing those tasks, because I don't have to go and ask them the question. You should be able to see and highlight if you have anomalies in there. And you're always going to get anomalies. You get anomalies with people. You're going to get anomalies with tech. How detailed the anomalies are, or how serious, is really something you've got to have oversight of, I would suggest.

It's still really early days, and I think, again, we will learn so much from having these things live, with volume going through, to really test them. Do they keep doing what you programmed them to do at the start, a bit like an RPA agent of old would, where it does what you say? Or will agentic AI learn and evolve and start doing things you didn't program it to? Time will tell.
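As a rough illustration of the real-time auditing David mentions, a simple statistical check over per-agent run logs could flag unusual runs for human QA. The chosen metric (output length) and the z-score threshold are illustrative assumptions only, not the actual monitoring in place:

```python
from statistics import mean, stdev

def flag_anomalies(log: list[float], z_threshold: float = 2.0) -> list[int]:
    """Return indices of runs whose metric deviates strongly from the rest."""
    if len(log) < 3:
        return []  # not enough history to judge
    mu, sigma = mean(log), stdev(log)
    if sigma == 0:
        return []  # perfectly uniform behavior, nothing to flag
    return [i for i, v in enumerate(log) if abs(v - mu) / sigma > z_threshold]

# Hypothetical output lengths for one agent's recent runs;
# the sixth run produced far more text than usual and gets flagged.
lengths = [120.0, 118.0, 125.0, 119.0, 122.0, 480.0, 121.0]
print(flag_anomalies(lengths))
```

In practice a real deployment would track several metrics per agent (latency, refusal rate, reviewer overrides), but the principle is the same: continuous checks on live outputs rather than one-off pre-launch testing.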

Dayle Hall: Do you think- and you may not be there yet, but what we just talked about there was making sure that the AI agent is still looking at things the right way. It hasn't changed. It hasn't learned bad habits. Do you think you'll get to a point as well where you're putting safeguards or checks, QA, whatever, on outcomes to the customer? So not just making sure that what it's doing internally is right, but that the agents and the humans involved with the agents are still fulfilling the goal that you set, the strategy you set, which is customer first, which is to make sure that what you really wanted to solve in a business case is getting solved by the agents.

Because as you probably do as well, you read a lot, like people are putting 5,000 agents to work in their organization. I don't necessarily think that's a good thing because what are all these things doing? Are we losing what we've spent the last 30 minutes talking about so far, which is, are we losing the fact that it still should be about what you're providing to the customer, the experience, the journey and so on?

David Holton: Yeah. The checks and balances we've got to date, are the customers getting what they need? What's our insight telling us? What's the feedback we get at the end of every successful transaction? For example, the market research, all of that good stuff, I think, is as valid today and tomorrow as it was yesterday. Because, again, the agents themselves aren't out there front and center performing a big enough [inaudible] that has an outcome of its own in that regard. That outcome is, have you freed up enough time for our colleagues to do better things with their customers? And actually, that comes through in the metrics that we've got today, would be my view. Very different for other organizations who do put them front and center. So if you've got an agent effectively answering the phones and doing that, then absolutely you would need to know that outcome. We will not go down that road.

Dayle Hall: No, I like that. If we look at something, let's call it process rationalization- so before you actually decide what it is that you're going to put in place, and we talked about does it solve a customer challenge or improve that experience, I think something that you've said before is that process rationalization is as important as the technology itself. That's something that I know you've said. How do you actually put that in place practically? Because it's a great sound bite. And if someone's out there going, yes, I fully agree with what David said, but practically, how do you do that? How do you instill that internally? How do you map to it? What are the KPIs? How do you make sure that you're not jumping ahead? Again, we just talked about this: a lot of other companies have jumped ahead to an automation or an AI-first thing without really looking at the guardrails around rationalizing whether it should be done in the first place. How do you manage it?

David Holton: It's a great question, especially given the pace of AI and the need for organizations to go, I want to be on that train. I want to be on that train and let's just jump in. I think there's also a danger with a lot of firms who are bringing AI solutions to market are effectively just saying, we can digitize everything you do today and save you all that. Actually, if 20% of what I do today is rubbish, I don't want you digitalizing that.

You're not going to like this answer, but it's very old school, back to on a wall-

Dayle Hall: I am old school, David.

David Holton: Back to good old workshops with individuals in your organization who know how this stuff works, who know the bits where they've had to build a workaround because your system doesn't do that but customers always ask for it. Really, really understanding where your regulation and your rules and other things dictate that you have to do certain things, even if your customers might not want to hear that part, sorry, we're a bank, we've got to do it. And then understanding the limitations of your core banking systems or other things, and therefore what you've got to work around, and then effectively trying to be as efficient as possible without throwing away the value-add and what you need to do.

Like it or not, there are certain parts of the process that are just essential to be able to do. I need to get a financial analysis on a company before we can lend the money. You might not enjoy that part of the process, but it's absolutely critical. Understanding things like that and then building the most efficient route through, depending on what the outcome is. In some parts of our organization and in some of our customer cohorts, it's all about pace. The primary driver then is how can we get to the quickest decision we can that's still safe and still right. And you're not going to let the customer down six weeks later when you go, oh, actually I need more information. So there's certain things like that in there.

Other parts of our process and organization and customer need don't have pace as a big thing. It's more about getting to the right outcome for them and the right structure. Therefore, you can build in more time to do the right kind of analysis. So I think it really depends again, like a lot of what we've spoken about, what is the outcome you're trying to achieve? And then I would old-school sticky note it on the walls and go through and get to the very best you possibly can. And you'll never get it perfect. And the day you put it live, you'll think of something else that could have been better. And that's just life.

Dayle Hall: Yeah, no, I like the sticky notes; we still follow that same process when we're trying to do something new here. As AI has become more prevalent and the expectation is there are definitely advantages you can get within your organization, was there an expectation, either with the leaders of your organization or within the employees, that we have to do this and we have to show immediate return, or does your organization take a longer view?

We've talked about this for the most part, customer first, making sure it's proper business outcomes that drive it, which I think is great. But is there still some pressure internally, like, hey, we have to go faster, we have to show it? If you're going to invest in a technology, there is this level of, okay, when do I start to see the results? Has that been accelerated because of AI, because of expectations of what it can do, or does your organization still take a, look, if we keep on the customer outcomes and business outcomes, we know we're doing the right thing? How do you balance that internally?

David Holton: It's a challenge, I think. Even the most customer-centric organization can't ignore what's happening out in the technology space. And I think there is a worry that some firms feel like they might get left behind if they don't get something out there. You've seen it across the market, this big drive for- I see it a lot in the press where firms will be putting out, oh, look, we've just put this live and we can now do automated decisioning on blah. When you unpick it, it's the tiniest little thing. But it's there and it's live, and they can claim first mover to market.

I've been really pleased with the way we've approached it. There's no doubt that customer-centricity is the big driver. But the board especially has been challenging back to the executive: we want to see ambition. If you think that ambition is in the AI space, great. If you think it's elsewhere, great. But we want to see the bank continuing to be relevant and continuing to evolve in that space. And whilst I don't believe you've got to be either all technology focused or all human focused, I think the two work incredibly well together, and I think our board leadership really understand that.

Dayle Hall: It's great. Do you think in general- again, we talked a little bit about the controls around financial services organizations. It feels, because they have so much to be cautious of, so many regulations, that rightly or wrongly there's this expectation that maybe you move slower than other industries. But is that true, or is that changing now because of the access to technology, because of all the data that you can use to provide better customer experience? Is that a fair characterization of your industry, or is it changing?

David Holton: You're right. I think banks are generally viewed as being slower than others to grasp new concepts and certainly when it comes to tech, because there are formal checks and balances. You mentioned healthcare earlier on. Same there, heavily regulated for very valid reasons. And it can be difficult then to navigate all of the checks and balances that you need to do. So to put something live here will be very different to putting something live in a non-regulated organization.

But I don't necessarily see that slower pace as harming outcomes, because some of this stuff should be thought through a little bit more. What we can probably be better at is doing genuine, true MVPs: putting things out that are smaller, testing them in market, learning from them and then going again. I still think in banking there's a lot of big bang stuff, because by the time you've done all the checks and balances and other things, you're going to build a little bit more of the solution. And that tends to be the way things happen. Some smaller MVP-type releases out to market could probably help us with pace, and I think we need to learn that a bit more.

Dayle Hall: We could talk for another two hours, David. I know we could do that.

David Holton: That's easy for you. It's morning there.

Dayle Hall: I'm just getting going. I could do this till lunch. Has there been something that you've put in place, it could be AI, it could be just kind of an automation process, that surprised you either positively or negatively? Meaning, what's something that you've put in place and said, wow, that really worked out way better? And has there been something that you've done where you said, ah, that just didn't work? Again, it's an evolution, you can always improve, but is there something that would guide someone out there during one of these processes, something you learned was wildly successful or that didn't really hit the mark, that someone would be like, yeah, these are the things to think about, to learn from?

David Holton: I'll probably go with the latter in terms of what hadn't gone particularly well. One of the first big change programs I did when I joined this organization was looking at a new onboarding system. On paper, we did all the right things. It was probably one of the most successful technology deployments I'd seen. So I thought it had gone incredibly well.

What we underestimated was going from an organization where almost everything was paper based at the time to having this solution. It ticked all the boxes in terms of efficiency gains and far easier to use, etc., etc. But I think we'd underestimated the shift in personal behavior for people going from, I can effectively do this process how I want, I can write it on this, I can do it in Word, I can email somebody, I can write it on a Post-it and stick it on someone's desk. However I want to do that, we can do it. Now this guy's gone and said, no, you upload onto this system. You upload on that system because you get the standardization. You then get the economies of scale. We all know why you did things and the benefits for it, and you get the automation and the workflow.

We didn't spend anywhere near enough time bringing the colleagues along that journey. I spent a lot of time talking about how great it was going to be when it landed and making sure people knew exactly what was coming. But we'd underestimated the fact that Dayle does things very differently to David. And therefore, the training you need to give Dayle is different than the training you need to give David. And we did a kind of one-size-fits-all training of, this is what you need to do in the new world, rather than an A to B. And I think my biggest learning from that, and we'll take it through anything else we do, is the time you spend investing in the people who are going to be interacting with the new technology is never time wasted.

Dayle Hall: Yeah, I love that. That's a great quote to end on. I have one more question for you before we wrap up, which is, there is a lot of hype. You may think that to some extent, AI is a little bit overhyped. I know some of the vendors out there and what they claim they can do, and we know they're not actually doing that. But there's definitely a lot of potential. Is there something that you are really excited about with the potential of AI, whether it's within your specific industry or broader than that, next year, a couple of years? Is there something where you're like, I think with AI, we're going to be able to do XYZ significantly better and that will help us do something else? What gets you excited when you think about the prospect of this technology?

David Holton: Mine's probably a slightly generic answer to that. When you think about the future of work and you think about the things that we would like to be able to do, when you look at your job and you go, I'd love to be able to get to that, but I've got to write up 10 committee meetings and etc., my hope is that AI is able to do a lot of the things that we currently do but we don't attribute a huge amount of value to, but we just have to do it.

In your job description, it's a necessary evil because it's got to be done. If AI can do that and then we can spend more time doing things like this, Dayle, then I genuinely think you could get a happier, more content, more energized workforce. I would love to see that. Because if you get a happier, more energized workforce, you're going to get a far better delivery to your customers. Customers are going to enjoy that interaction far more. Everybody benefits. That's my hope, anyway.

I know a lot of people feel as if it's just going to come and effectively start replacing everyone. There's going to be no jobs left. I don't see that. I genuinely don't see that. I don't see that because I think there's far more oversight firms are going to need to do. I think customers will vote with their feet, genuinely. If you built a solution where it was pretty much just all AI bots, I think customers will go, I'm not interested in that. We all do it when we have to use an IVR or anything else like that and you get an automated bot. As soon as there's the ability to go talk to a human, you click on that button, because humans like human interaction. And I think firms that understand that and put that at the forefront will then start giving far better jobs to their colleagues, and I find that really exciting.

Dayle Hall: One of the things that I think you'll start to see is people making choices to work for companies and organizations based on this. If you're in a process and they start to say, look, this is the role, we have all these AI tools to help you do this part of your role faster, we want you to use AI, and that will free up your time to do these kinds of things, the inspirational, meaningful, customer-centric things, I think people will start to join organizations where they have that at their fingertips. I think people will choose to work for organizations where they're like, oh, my God, you have all these AI tools so I can do all this crap that I've been doing at my current job in a day or an hour rather than take a week. And then I can do all this inspirational stuff. I'll work for you right now.

David Holton: It's a compelling employee proposition, isn't it? 

Dayle Hall: Right. I think that's the kind of thing that we'll start to see more. I think about it now, even when I'm interviewing people for roles. Our SDR team, for example, have built a very simple agent. It helps them capture information about the person they're trying to get in touch with. And it feeds information, and it's generally pulled from reputable sources, so it's not wrong. It helps them be more successful. Being an SDR, for example, is a tough job, by the way, probably one of the toughest jobs out there. My two colleagues that run the teams, I wish them the best of luck. But if you know you're going to come into that role and they have all these AI tools, you're like, great, I can do all that, I can automate all that, and I can just spend my time being more creative, I'm going to join.

Well, David, I appreciate the time. You talked about a lot of things I think are inspirational: the shared risk, the shared outcomes. I loved when you talked about working with a partner, a vendor, whatever. You talked about humility, integrity, but also confidence. I think that is a great way to look at some of the relationships that I know we have with your organization. I'm very proud that we get to work together, SnapLogic and Cambridge & Counties.

What I really liked about what you said is we may be a half a page ahead of you, but we're in it together. I think we should do this podcast again in 12 months’ time. I think we should do it again and ask each other similar types of questions and see what's changed. But I have a feeling we're going to be talking about five or six other new technologies.

David Holton: Yeah, exactly.

Dayle Hall: But I'm glad we're on this journey and couldn't be more proud to have you as a customer. Thank you so much for being on the podcast. 

David Holton: Thank you. I've really enjoyed it, really enjoyed it. Well done.

Dayle Hall: Yeah. Great. Well, thank you everyone for listening. That's the end of our show, and we'll see you on the next episode.