AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology
AI deployment and adoption are complex — this podcast makes them actionable. Join top experts, IT leaders and innovators as we explore AI’s toughest challenges, uncover real-world case studies, and reveal practical insights that drive AI ROI. From strategy to execution, we break down what works (and what doesn’t) in enterprise AI. New episodes every week.
Why Ally’s AI Actually Stuck
Enterprise AI isn’t failing because of models, tools, or budgets.
It’s failing because people don’t actually use it.
While most organizations stall after pilots, Ally Financial broke the pattern — reaching over 50% AI adoption with nearly 90% retention.
In this episode of the AI Proving Ground Podcast, Ally Chief Information, Data and Digital Officer Sathish Muthukrishnan joins former Bank of America CTO David Reilly to unpack why adoption is a leadership and culture problem, not a technology one.
Ally’s breakthrough came from an augmentation-first mindset — positioning AI as a way to help employees do better work, not replace them.
In this conversation, you’ll hear:
- Why “stick rate” matters more than access
- How psychological safety accelerates adoption
- Treating internal AI tools like real products
- Turning AI pilots into infrastructure
If your AI investments aren’t translating into real usage, this episode shows how to fix it — without burning trust.
Support for this episode provided by: Illumio
More about this week's guests:
Sathish Muthukrishnan was named chief information, data and digital officer for Ally Financial Inc. in December 2019. In this role, Muthukrishnan is responsible for advancing Ally's technical and digital capabilities, including customer experience, data & analytics, cyber security and infrastructure, and accelerating the company's growth and evolution as a leader in the digital financial services sector. He reports to Ally's CEO.
Sathish's top pick: Accelerating AI Adoption: How a Bank Gained Early Insights
David Reilly is Chief Development Officer at WWT and previously served on Ally’s Board of Directors. He spent over a decade at Bank of America, most recently as CIO for Global Banking & Markets, after holding multiple senior technology leadership roles. Earlier in his career, David spent nearly three decades in technology and cybersecurity roles at major financial institutions including Morgan Stanley, Goldman Sachs, Merrill Lynch, Credit Suisse, and HSBC. He also serves on the boards of Data Dynamics and NPower.
David's top pick: Addressing Technical Debt in Financial Services
The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.
Learn more about WWT's AI Proving Ground.
The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.
Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.
From World Wide Technology, this is the AI Proving Ground Podcast. You've probably heard this before. Enterprise AI doesn't stall because the models aren't good enough. It stalls because most companies still can't get it adopted. In financial services, that gap shows up fast. The board wants momentum. Regulators want discipline. Employees want clarity on whether AI helps them or threatens them. And the CIO eventually gets to the same question: where is the value? So today's conversation sits right in that tension. Ally Financial, a leading digital financial services company with nearly $200 billion in total assets, has pushed internal adoption of its AI assistant, Ally.ai, well past the pilot phase, driven by education, frictionless access, and a set of guardrails that keep people moving without letting data slip outside the walls. So in this episode, you'll hear from Sathish Muthukrishnan, Ally's chief information, data, and digital officer, and David Reilly, former Bank of America CTO and current Ally Financial board member, on what it takes to do the hard things first, like data, tech debt, and control partnerships, so AI can scale without becoming theater. It's an enlightening conversation that makes one question hard to avoid. When the best tool can change overnight, how do you build for speed without locking yourself into yesterday's tech stack? So let's jump in. How are you doing today?
SPEAKER_00:I'm doing great. Good way to start the new year with a podcast with David and you.
SPEAKER_02:Absolutely. Mr. David Reilly, how are you this afternoon?
SPEAKER_01:I'm great, Brian. Great to see you, Satish.
SPEAKER_02:Good to see you, David. Excellent. I want to jump right in. For so many of the organizations that Worldwide comes into contact with when they're discussing AI strategy and AI implementation, adoption is a key hurdle. I've seen metrics anywhere from the low 20s to closer to 30% enterprise adoption rate as a standard across the industry. So, Satish, I'm going to start with you, because Ally launched Ally.ai last year with, at least from what I've seen, tremendous results. The company reported, toward the end of last year, 500,000 prompts and 50% employee adoption. Those are just great numbers. Any updates to those numbers, or what's the secret sauce you have going on where that adoption rate is at 50%, if not better?
SPEAKER_00:You know, the numbers have improved. So I'm happy to report we are now closer to 600,000 prompts. One of the key things to ensure that adoption happens and sustains is to support the employees and make sure they understand AI is here to augment them, not to replace them. And part of that is education, being patient, understanding how they're interacting and working with AI to make that better. So the Ally AI that you referred to, Brian, we are continually refining it and making sure that it's easy for employees to step into the platform, understand it, and use AI. When they step into the platform, obviously they have to authenticate and log in. As you know, Ally has about 10,000 employees. We have seen 165,000 logins to date. And that means people are logging in multiple times. But the key stat is, once they log in, the stick rate, and what I mean by stick rate is you log in and you actually do something with AI, is north of 90%. Very similar to how customers treat Ally. We have north of a 95% retention rate with Ally customers. And that's because once you experience us, you want to stay with us. And that's what we are seeing with our employees. And this hasn't come overnight. My team, along with support from enterprise leaders championing both the use of AI and the need to understand it, has helped a great deal in terms of teaching and educating our employees. In fact, we do what we call AI days across the enterprise every two months or so. And recently, my CEO, Michael Rhodes, opened the session and talked about the importance of AI for the enterprise transformation. That level of involvement and support is critical in making sure that not only do we introduce AI safely, but we also allow employees to learn from it and be patient so they can scale and sustain the operations and usage.
SPEAKER_02:Yeah, David, pushing that type of adoption — you've been in charge of big teams, whether at Bank of America or Morgan Stanley or almost any of the stops along your career path. Is pushing that adoption different than in any of the other transformative eras, whether it be cloud or whatever it might be, or is there a standard for driving adoption that cuts across all the playbooks?
SPEAKER_01:I think there are two things, Brian, that we'll get into through the conversation. And Satish has done incredible work on both of these that I think are difference makers. The first is a foundational one, and in any of those prior technology changes, cloud, SaaS, whatever it would be, there's always some foundational work that, if you do it right, is almost invisible to the people using that technology. But if you don't do the work, the friction is very, very visible. And in the implementation of AI and generative AI at Ally, that foundation is data. And we're going to get into the work that Satish and the team have led and continue to lead. The way in which our data has been organized, cleansed, and continues to be — by no means are we done — but that work, I think, has created a foundation that makes it easier for that stick rate to be as high as it is. And we'll get into that in the course of the discussion. And the second, which isn't really as foundational from an IT perspective, is business engagement. And what we'll talk about through the course of the next hour is the extent to which Satish has partnered across all of the support functions and all of the business lines to drive that adoption. So it isn't something that feels like it's being done to the business or to the support function. It's very much driven by those groups within the enterprise. And that, I think, also makes it sticky. It's not episodic; it's a much more secular change that's here to stay. I think without those two things done as well as they have been done at Ally, we wouldn't be so far along our journey. Now, Satish will tell you we're still in the very early innings, but compared to peers, I'd say we're much further along. But those foundations are going to mean we can continue to accelerate and continue to drive that adoption across support groups and lines of business.
SPEAKER_00:Yeah, if I may, I would also like to add the tone at the top. And I say the tone at the top starts with the board. And for context for the rest of the conversation, I'd love to share that David is part of Ally's board, and we are lucky to have him. He talks about the partnership that we have among the executive team members, but the role the board plays is super critical. And David, I would love for you to share some of the questions that you ask in our interactions in board meetings that can be publicly shared. It's not specific to technology, even though David comes from a technology background. The way he interacts with our executive team members and the questions that he asks spark interest and drive the momentum to create change and also to sustain it. So, David, I would love for you to share some of the things that you do behind closed doors.
SPEAKER_01:You know, the passion and the momentum and the energy behind this technology change, in my experience, is not like anything I've seen before. I've never seen a technology that moves this quickly. I've never seen a technology become as pervasive as this, and never seen as much appetite for use cases as we're seeing now. And I think what that means is that for all of us as practitioners, whether in the senior management execution group, the executive team, or on the board, there are some things that have always been true that are even truer now. So one of the things that's true is that historically, as technologists, we'd make a technology decision and maybe live with that decision for three years, sometimes five years, sometimes even longer. We would do all of the necessary work to ensure that that piece of hardware or software was embedded in everything we do, and the switching costs and the switching time were very high. This is very different. The large language model that you use today, which is best in class, may change tomorrow. The coding assistant that you use today may be best in class, but something may come along tomorrow that displaces it. And so you have to think through what some of those switching costs are. And I don't just mean expenses; it's about the technical integration that you would do, ensuring that you haven't accidentally painted yourself into a corner. And while that's always been true about embedded technology, it's even truer now. And so I think it's caused all of us — Satish and I talk about this a lot — to rethink the way we think about partners, and about that long-term investment in a particular tool. You want to get all the value you can from it, but you need to switch out, or have the ability to switch out, if something better comes along.
And of course, I'm sure we'll get into this in the hour: the things that always matter — risk, technical debt, end of life — those problems haven't gone away. So we have to ensure that we're always thinking about who has access to the data, the privileged access that exists; all of those things need to be held top of mind. And of course, if we don't, then we open ourselves up again to well-intended risk as we implement and move at the speed we need to move at. Ensuring that those disciplines continue to exist is another set of questions that we always make sure we think through, because those risks didn't go away. They may be a little bit masked now, but they still exist and they still need to be managed carefully.
SPEAKER_02:Yeah, all of those things — we'll touch on them in the conversations we have here in the next couple of minutes, but I do want to touch on ROI, on value, David, as you mentioned. Satish, as Ally went on its journey with Ally AI, how did it think about ROI? I saw a report, I think from Google Cloud, that said 75% or so of financial services execs are reporting that their organization is achieving positive ROI within the first year. But that makes it sound easy. Was your journey more complicated, or did you find success pretty quickly?
SPEAKER_00:It continues to be complicated. This is not an easy problem to solve. Measuring productivity might sound easy, but translating that to outcomes that are business relevant is super complicated. But what I have been blessed with is a group of leaders who understand that AI is here to stay, so you cannot stand still. We have to move forward. So we quickly jumped into a group of initiatives that would drive efficiency and a group of initiatives that would drive effectiveness. Efficiency, you can think of in terms of driving productivity, unleashing employee time, automating tasks, et cetera. Effectiveness, in terms of driving significant business outcomes — and some of it might be mitigating the cyber risk that we have. David talked about a number of those things. So when we narrowed that down into those two buckets, how do you approach it? We needed a solid set of guidelines that allow us to pick and choose which AI use cases to go after. Honestly, when we started talking about generative AI and went to the enterprise and said, who wants to be part of this, what ideas do we have, within a few weeks we had 464 different ideas, all of them valid. But obviously, you cannot execute all of them at once. So we narrowed it down to a simple set of guidelines. Number one: let us create use cases that face off to internal employees, because the technology is changing fast, so let's make sure we are testing it on ourselves first. Number two: technology is changing fast and is not very predictable at the moment, so let's make sure there is a human in the loop. And number three, to give us peace of mind and make sure that we are experimenting early and not waiting for all of the issues to be figured out: we will not allow any Ally data to go outside the Ally walls. We'll protect it vigorously and internally. And we did that.
On top of the foundational things that David talked about — bringing data centrally, making sure that our cyber capabilities are up to snuff, making sure that data access is monitored and guarded and given only to those who need it — all of that combined, doing those hard things, allowed us to establish these three simple guidelines to then figure out which use cases we would go after. And if you think about that as the science, the art behind it is where you implement AI. We wanted to do it in environments where technology is not quickly adopted. And while we have technology everywhere, because we are a fully digital bank and serve customers digitally, with our customer care reps we would historically wait till the end to make sure everything is tested before we roll it out, because they're customer-facing. But we had the simple guidelines, and we figured out a use case: every call that our customer care associates handle runs between 12 and 15 minutes, and at the end of the call, they write a summarization of that call. This technology intersected perfectly with that task they do for every call. So we listened in the background to the entire conversation, converted it to text, used generative AI to summarize it, eliminated the PII, and had a human in the middle — our customer care associates — giving us a thumbs up or thumbs down on whether the summarization worked. And we eliminated, eradicated, most of the issues that come with using this technology at an early stage, with the humans supporting us who are also the experts. We shaved off anywhere between 30 seconds and three minutes per call. We drove the accuracy of the summarization to close to 100%. And we converted the people you might think of as the last adopters of technology into the leading adopters of technology.
And they were the ones that were giving us use cases that would help our internal customers serve our external customers better. And soon enough, we were able to quickly find ROI for these use cases, not just from an efficiency standpoint, but from an effectiveness standpoint as well.
SPEAKER_02:Yeah, David, I love that Satish puts it in that easy-to-digest, three-point formula. Given what he's talking about and what Ally has done to date with AI, is that a game plan that can expand to other organizations, whether in financial services or some other industry?
SPEAKER_01:I think it can, Brian. And I do think it's a best practice, because one of the things you do hear from time to time with different AI implementations across different industry verticals is: we're in test-and-learn mode, nothing wrong with that. We're in experimentation mode, nothing wrong with that. We're finding productivity benefits, but not benefits that we can place a hard-dollar save on. And I think for a CIO, it's really important that even if someone in your business isn't asking you to quantify the benefit in hard- or soft-dollar terms, you realize they're just waiting to ask. They haven't asked yet, but they're going to. And you cannot be in respond mode. I think the proactive approach Satish has taken is a best practice. Whether somebody uses those three drivers or whatever drivers are right for your industry or your company, you should assume as a practitioner that eventually your CFO or your CEO is going to say: this is great, where's the money? Even if they're not asking you today and they're saying, that's okay, just learn — they're going to ask you eventually. And preparing for that, I think, is a smart move for every practitioner. And be clear, to the point Satish opened with: sometimes there's a hard-dollar save here; sometimes it's productivity, where everybody's going to agree we're better but we can't quantify it in hard-dollar terms. Having that conversation early, and you as an IT leader driving it, will make a world of difference as your company starts to figure out: how do we count this stuff? How do we figure out the business value we're getting in terms of lower risk, sometimes lower expenses, or even just "we're better than we were yesterday, and everybody agrees with that"? Finding that framework, Brian, I think is key. Not when you're asked for it, but before you're asked for it.
SPEAKER_02:Yeah. Yeah. So Tish, was that do you think that was preconceived? If you can kind of think back a little bit, was that preconceived, or you know, were you thinking beforehand about when do we demand near-term ROI versus when do we give a little bit more runway to get into that exploration and that innovation?
SPEAKER_00:You know, we have always tried to make technology a revenue center and not just a cost center. So I would say it's embedded in how we think about the investments we make in tech. And it's finite, right? You only want to invest a percentage of your revenue in tech growth. And as David pointed out, you have to take care of the tech debt, the vulnerabilities, end of life, et cetera. You have to continually modernize. You have to figure out where your operating cost is going. So when you think about all of that, the pie becomes even smaller for newer technologies like generative AI. And that is where the help from the board is super critical, in terms of asking us: how are you thinking about generative AI, where are you experimenting with it, where are you thinking about investing, et cetera. So part of it is in the DNA, but part of it is also that we're a regulated institution. We have to get the support of our internal control groups as well as our regulators, and it's our accountability and responsibility to say we are responsibly deploying newer technology. For that to happen, those guidelines not only served us in figuring out prioritization and looking at the use cases that would give us the biggest benefit, but also showcased how we are thinking safely and securely while implementing and adopting newer technology.
SPEAKER_02:Yeah, the two of you have mentioned tech debt several times, so we might as well double-click there. David, I've seen you give really engaging, compelling talks on the idea of technical debt and addressing it head on. Apply that to the AI conversation. How much does AI success hinge on just being honest with yourself in the mirror about your technical estate, your tech debt, and what's coming in the future?
SPEAKER_01:The efficacy of the results that any organization will get by using AI, generative AI, stands or falls on the quality of the data. And if you can't get to the data in a safe way, then your answers, your outcomes, are going to be impaired. They're going to be limited. Most of the data in large companies that you want to get after lives in operational systems and transactional systems. Unfortunately, Brian, it's not as well organized as we'd want it to be, not as well categorized as we'd want it to be, not as well labeled as we'd want it to be. And sometimes those systems are very rigid, maybe because they haven't been kept up with in terms of underlying investment in the platform. As a practitioner, you know about those risks and how much they sometimes hold you back. They're harder to remediate when there's a vulnerability; they're harder to change or adapt when you've got a new set of functionality, or even a new system you want to implement that will rely on them. And we always find a way as practitioners. We always figure out a solution. But sometimes that solution adds a little more complexity, a little more time, and more integration work that you need to do, care-and-feeding-wise, to keep it current. I think when you know that's the moment you're in — when you're implementing a contact center use case, or a business-facing use case, or a control use case that might help with compliance adherence or audit preparation or whatever it would be — as a practitioner, now more than ever, you have to raise that risk. To say: I can do it, but I'm not going to get great results. I'll get good results, but unless we make this underlying investment, not great results. No one's going to ask you to volunteer that, Brian.
But I think it's really important that as a practitioner you flag what the consequences are if you're not able to do all of that underlying remediation and what that means long term to the growth and adoption of some of these technologies.
SPEAKER_02:Yeah. And so, Satish, Ally is certainly not immune to any of what David's talking about here, but as we've mentioned before, Ally is a digital-native bank. That does allow you to be a little more nimble as a modern, digital-era organization. Is there anything that other organizations can learn from what Ally is doing, how it's building its IT stack and so on, that they can apply to drive their own AI success?
SPEAKER_00:Brian, you said it well. As a digital-native organization, we have to be extremely nimble so we can move at the speed of our customers, at the speed of a culture that is transforming so fast. We pride ourselves on doing the hard things first, as I mentioned before. David articulated the direct benefits of what that gets you: modernizing the systems, centralizing the data, making it easier so the outcomes are that much more relevant and beneficial to us. There is a bit of an unintended consequence if you don't pay attention to it. God forbid you have a vulnerability that gets exploited by a cybercriminal, and then your organization is in the news for all the wrong reasons. Think about what that does to the mindset of your own company, your own employees, in terms of experimenting with newer technology. All of a sudden, you become bashful about using this new technology. You are not so sure about what your next steps are. I realize this security foundation — this work of mitigating risk, of bringing everybody along so you can explain the actual risks you're taking and how you're mitigating them — is extremely hard. But once you do that, the trust and confidence you create with the stakeholders increase. And they are now there to support you if something ever happens or there is a misstep or two. That, I think, is an important lesson for us as well. Initially, we may have been anxious and excited to go use the technology immediately, because the barrier to entry is low and, as David articulated, the pace at which it's changing is so fast that you could use it easily. But have you checked all your boxes before using it? That is a critical question to ask. Then think in terms of: what am I missing? What could be exposed if I start to use this? What could go wrong? Balancing the mindset between risk and innovation is super critical. It's a hard one to solve and an important one to do.
And that was our biggest learning. I'm glad we focused on that first before we scaled the usage of Ally.
SPEAKER_02:Yeah, David, he's talking about doing the hard things first, which is such an easy way to explain it: you're going to have to do all the hard stuff first to get any of this right. As a board member over at Ally, what did you observe about Satish and the rest of the team? What exactly were those hard things that they did? And what was your perspective as a board member, knowing that the hard things often mean going a little slower now to move fast later? How do you reconcile the two sides, knowing that you want to get there as quickly as possible?
SPEAKER_01:I think the first point is this idea of bringing your stakeholders with you that Satish talked about. Risk, audit, compliance, the regulatory teams. For all of those teams, their job wasn't made easier with the introduction of these technologies; their hard job was made harder. Now there was something else they needed to understand on top of data, cyber, and all the other risks — business risk, liquidity, capital, market, credit — all the things they needed to do. This was one more thing. And I think what we've all observed as being a real accelerant is that as Satish has brought those control groups, those partner groups, with him on this journey, he's recognized the fact that they had, and still have, a lot of learning to do. This is new for everyone. We're fortunate as technologists that we've got a shot at understanding how these models work, but a regular human being, maybe not so much. And it's really important to bring those parts of the organization along with you. There is a pace that every one of these organizations can move at, and all of us have to understand what the pace is for the companies we're working with and go at that pace. That's the first thing, I think, because the acknowledgement that there's a lot of learning to do, a lot of understanding that needs to be created, is key. But I'd say right beside that, and this would be a good thing to talk about, Satish, it relates to the tone from the top. Your push with the IT team around the Find It First effort might seem a little behind the scenes or tangential. I don't think it is. This idea that, listen, we've got to be looking at where we've got risks that we need to mitigate — because every one of those that we mitigate now means we're paving the road for the future. That's the second one, Brian.
But I'd like Satish to talk about that, because as an IT leader, this proactive approach to identifying risk and being intentional about the treatment of that risk — not just for AI, but with a huge impact on AI — I think is another one of those observed practices that the people listening could really learn from.
SPEAKER_00:I'm happy to jump in on Find It First. You know, it served two different purposes. One of those is: how do I make it somewhat attractive for my teams to say, we will self-identify problems in our own house? It's literally airing the dirty laundry. But the way we articulated it was: can you help me put our risk and audit partners and our compliance partners out of a job by us finding our own problems and solving them? It's a simple one-liner: how can we be more responsible? So the trust gets deeper between us and the control partners. That's number one. Number two, the selfish motive behind it is that the people outside Ally who are trying to exploit our data, our brand, et cetera, are not looking for low-hanging fruit. They're working harder and harder to get as intrusive as possible. So if I take away the low-hanging risk items — if we self-identify them, solve them, and have a plan — our control partners can go deeper, they can understand how the technology operates more deeply, and collectively it makes us a much safer, more secure, and stronger organization. That was the idea behind Find It First. And I'm glad to say over 70% of the issues identified are self-identified by tech. Then we followed it up with simple three-word mantras like "get it done." If you find an issue, you don't have an excuse to say, oh, I'll wait for somebody to come and write up the issue and we'll establish timelines. No, you have three to six weeks to fix the issue, and if you cannot, escalate it to the leaders. I had support from my entire leadership team, and all of them had to come together. It was terrific to showcase it. Initially, I was shuddering at showcasing all of the issues we found ourselves. You know, it feels safer to stay behind the curtain and wait for somebody else to find them.
But once we did that, contrary to our normal thinking, where we expected somebody to say, you guys are not doing your job, you guys suck, it was exactly the opposite. Not only did our CEO and my business leaders say, oh, we need to prioritize this, let's get this fixed, the board was fully supportive. They were like, we are so glad that you are being proactive in finding it. And I will share a great board best practice, David, and I would love for you to talk about this. Sorry to throw the ball back at you, because this involves greater involvement from the board and spending more time. David actually suggested we create additional board meetings in between board meetings to specifically focus on how tech is addressing and solving these issues, which went a long way in creating trust with external regulators as well, because they see all of those interactions.
SPEAKER_01:I think it builds upon what you've created, which is that it's okay for the IT practitioners on Sathish's team to raise their hand and say, I've got this risk, I've got this risk, I've got this risk. Anything we can do to encourage all of those risks being surfaced means that not only do we make the company safer and protect our customers even more, we create the confidence that's needed across all the stakeholder groups. So if there's that degree of transparency, Brian, there's a lot more confidence that as we're moving quickly, we're doing so safely. And listen, from time to time it's inevitable: you're going to step on a rake, something's going to happen. But with all of that transparency, with all of that proactivity around the responsible use of technologies, AI and others, but we're talking about AI today, the responsible use of AI from a team that is looking around the corners to figure out all the ways we can win and all the ways we can lose, never completely perfect, it builds the confidence that when that rake now and again does get stepped on, everybody understands that's the price of moving at the pace we are. We'll learn from it, and we won't repeat it, because the culture exists to ensure that we're deploying these technologies as safely as we can. And Sathish drives all of that into the partner ecosystem as well: I'm not asking you to do something I don't do, but I expect you to operate as a SaaS partner, as a service provider, whatever it would be, in exactly the same way, and to hold the third parties that we all operate with to the same standard you hold yourself to.
SPEAKER_02:Yeah, we could certainly talk about adoption and building a culture; we could spend hours, multiple episodes, entire seasons on that topic. We only have a limited amount of time here, so I do want to slightly pivot and move into agentic AI. I think a lot of people probably said, and probably thought, that 2025 was the year of the agent, but it fell a little bit flat, if I'm being honest, at least from my perspective. Sathish, I'm wondering, how does Ally think about incorporating agentic capabilities, or has it already with Ally AI, and what are some of the motions that you need to go through to get ready for that?
SPEAKER_00:Yeah, the foundation has to be right. David talked about data being centralized, because in the adoption of generative AI, across companies and across industries, the barrier to entry is so low that anyone can jump in and adopt AI. How are you going to differentiate yourselves? That's the key question we were trying to answer. And as we discussed before, the technology is changing so quickly that we need a very simple thought framework that allows us to adopt and keep pace with this change. So from an agentic AI perspective, we categorize it into two. One: are you having the agents be process-oriented, or are you having the agents be goal-oriented? The difference is, if you ask the agent to be goal-oriented, you just say, hey, I need to do X, and the agent now has full autonomy and agency, hence the word agent, to figure out how it will execute that goal. How it executes it may not fit your guidelines and standards, et cetera. So that we are not ready for. So we moved to process-based agents, where we have clearly identified standards, clearly identified procedures; we know the guardrails, we know what outcomes we are trying to generate, and we can clearly tell the agent: these are the different paths we want you to take, and more importantly, we do not want you to do this, you cannot do any of this, stay within your lanes. Now, all of a sudden, you're replicating what a human might do, and we moved from human in the loop to human on the loop: you have humans verifying what the agents are doing. So let me give you a use case that we will work on in the future, but we're not there yet.
Today, for all of the calls that happen in our call centers, there is a sampling of calls that you take and do QA on, to ensure that the humans are aligning with your standards and policies. But if I have agents, I now potentially have the ability to QA every phone call. And I have clear processes and procedures to do that. Now I'm scaling AI responsibly and staying within the guidelines incredibly effectively. One of the things that we did, which is in production today, is called a design persona agent. For any use case that we define, we have a set of personas that interact with Ally, and we think in terms of how that persona would react to the experience we are creating. So we created a design persona agent that operates in the process world but slightly steps into the goal-oriented world, because it understands: I am creating this content and experience for this persona, and we want it to speak the Ally language, be Ally-like, and stay within these guardrails. Terrific, terrific results.
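The process-based pattern Sathish describes, pre-approved steps, explicit prohibitions, and a human on the loop reviewing outcomes, can be sketched in code. This is a hypothetical illustration of the idea, not Ally AI's actual implementation; the step names, the `CallQAAgent` class, and the simulated results are all assumptions for the sake of the example.

```python
from dataclasses import dataclass, field

# Process-based (vs. goal-based) agent sketch: the agent may only execute
# pre-approved steps, explicitly prohibited actions are blocked outright,
# and every outcome lands in a review queue for a human on the loop.
# All names here are illustrative, not from any real system.

ALLOWED_STEPS = ["transcribe", "score_compliance", "flag_for_review"]
PROHIBITED = {"contact_customer", "modify_account"}

@dataclass
class CallQAAgent:
    review_queue: list = field(default_factory=list)

    def run_step(self, step: str, call_id: str) -> str:
        if step in PROHIBITED:
            # Guardrail: the agent has no path, however clever, to this action.
            raise PermissionError(f"{step} is outside the agent's guardrails")
        if step not in ALLOWED_STEPS:
            raise ValueError(f"{step} is not a pre-approved process step")
        # Simulated outcome; a real system would call transcription/scoring services.
        result = f"{call_id}:{step}:ok"
        self.review_queue.append(result)  # human on the loop verifies later
        return result

agent = CallQAAgent()
for call in ["call-001", "call-002"]:  # QA every call, not just a sample
    for step in ALLOWED_STEPS:
        agent.run_step(step, call)
print(len(agent.review_queue))  # 6 outcomes awaiting human verification
```

The contrast with a goal-based agent is that nothing here lets the agent choose its own path: the allowed steps and the prohibitions are the whole contract, which is what makes QA-ing every call scalable without surrendering the guardrails.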
SPEAKER_02:Yeah, David, build on that a little bit based on what Sathish just said. You sit on a number of boards. What would you tell some of the organizations and leaders that you collaborate with? Is it more about thinking of agents not as autonomous and on their own, but as applied to processes? Or is it some of the design thinking he described? How would you translate what Sathish said into value for the rest of the industry?
SPEAKER_01:I think this role of the line of business, Brian, is key and a little bit different here. Woven through all of these use cases at Ally is the partnership that Sathish has with the line of business. We should chat a little bit about that, Sathish, because I think in the case of agents, it isn't like other kinds of software or services that you would create; it's a much more embedded piece of technology, owned by the business. What I see from others, and it's almost an emerging practice now, is that as I deploy an agent, generative or otherwise, it really should be treated similarly to the way we treat an employee. An employee gets hired, an employee gets trained, an employee has a hiring manager and an operational manager who oversees their performance. Sometimes that employee needs additional training, sometimes that employee needs disciplining, and sometimes that employee needs to be let go. Think about agents in the same way: they're owned by the entity that operates them and gets the benefit from them, whatever part of the business or support function or IT that would be. And be clear that IT can provide all of these capabilities, but the ultimate ownership for how the agent performs belongs with the owning entity in the line of business or the support function, whether it's finance or audit or tech. Thinking about it that way puts the accountability and responsibility for things like access, for training, for ensuring that you're getting the value you hope to get and that you continue to refine it, in the right place as all of us learn as we go. But Sathish, talk about your way of partnering here at Ally with all of the lines of business, how that partnership works and where that responsibility sits.
Chat a little bit about that, because it's very much the lines of business that are driving these capabilities in partnership with IT, which of course we think is the right operating model.
SPEAKER_00:David, it's very similar to what you said. The business shouldn't feel like technology is done to them. It's about how you add value to what they do on a daily basis and how it contributes to the bottom line. To the extent that when we talk about generative AI use cases, it is our business partners that are presenting to the board. It's not technology saying, we are presenting these use cases. Obviously, there are tech updates that we provide, and many times they are really boring, because we talk about all the risk stuff and all the tech debt, et cetera. Half joking here. But the business partners are fully invested. They're fully invested in understanding, and this is a culture that Michael, our new CEO, has also driven: it's not just tech that should understand all the risks and cyber issues we have. The business should understand them so they can prioritize investments, but also make room for innovation. And when they understand the roadblocks and risks, they can plan innovation more efficiently. To David's point, it becomes a great partnership, because we don't have to help them understand the difficulties we face. They already understand them. And they are now behind us and along with us in driving the innovation, all the way up to sharing it with the board and getting permission, saying this has to be invested in and we have to move forward. I will tell you: if I am the only technology leader pushing for this innovation across the enterprise, it's going to take far more effort. It's going to take longer. And it's probably going to drive a little bit of frustration, because you have to constantly explain it. But when you have that partnership and you bring the collective use case to the forefront, you have wind in your sails, and you look forward and move faster.
That's one of the ways to keep up with the pace at which the technology is changing.
SPEAKER_02:Yeah, Sathish, I'm going to bring up a quote that I heard you say a few months back, and I absolutely love it. It may not be verbatim, but it's pretty close. You said that to succeed in AI, or in this day and age from an IT standpoint, leaders need to have the courage to question their purpose and reimagine what they do. I'm just curious: over the last 12 to 18 months, as you've been on the AI journey with Ally, what have you had to relearn or reimagine that only now, heading into the bulk of 2026, is putting you on a path to succeed and move with speed?
SPEAKER_00:Yeah, thank you, Brian. I constantly remind myself that we need to have that courage to reimagine. One of the things is what I talked about: how do you explain technology? How do you bring forth technology? How do you partner with the business? When you see the pace at which technology is moving, and David brought it up, the best LLM today may not be the best LLM tomorrow. In the worst case, it may not even exist tomorrow. When you see that level of outside-in noise or influence, your instinct tells you to run the marathon at sprint speed and try to keep up. But you have to operate exactly the opposite way. You have to be patient. You have to make sure this is a sprint that you're running at marathon speed, not the other way around. It's okay to move. Pick your big wins, and this is where the partnership with the business comes in. You identify what has the biggest impact across the enterprise, pick a few things to work on, get your foundation right, and then move forward in a way that makes your technology platform scalable and fungible, so you can adapt easily. Which is where Ally AI came from: Ally AI was an intentional step for us to slow down and build that AI platform organically within Ally, so we can connect to any of the many LLMs coming on board, and we can add, replace, or combine these LLMs to serve our purposes. So I would say, from being the original disruptor back in the 2008 financial crisis, starting an all-digital bank when customer trust in financial services was at a low, to now, when everybody wants to do everything, we have picked a focus-forward strategy. We now have the courage to say that we are going to simplify our business. We are going to focus on the businesses that give the greatest return on investment to our shareholders and our customers.
When we have fewer businesses, we can serve our customers better and have deeper relationships, and obviously AI is going to be part of that. That is the courage to reimagine what we have done, what may have been successful for us in the past, and to say: I have the courage to shed some of what has been partially successful, and I'm going to double down on things that are going to be widely successful for all of the stakeholders, including the employees engaged with Ally.
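The "scalable and fungible" platform idea, one interface for the application, with LLMs added, replaced, or combined behind it, can be sketched as a simple router. This is a hedged illustration of the pattern, not Ally AI's actual design; the `LLMRouter` class, the provider names, and the stub providers are all hypothetical.

```python
from typing import Callable, Dict

# Fungible LLM platform sketch: the application talks to one interface,
# and model providers can be added, replaced, or combined behind it
# without touching application code. Stub lambdas stand in for real models.

class LLMRouter:
    def __init__(self) -> None:
        self.providers: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, provider: Callable[[str], str]) -> None:
        self.providers[name] = provider  # add a new model, or replace an old one

    def complete(self, prompt: str, name: str) -> str:
        return self.providers[name](prompt)

    def ensemble(self, prompt: str) -> Dict[str, str]:
        # Combine: fan the prompt out to every registered provider.
        return {n: p(prompt) for n, p in self.providers.items()}

router = LLMRouter()
router.register("model-a", lambda p: f"A:{p}")
router.register("model-b", lambda p: f"B:{p}")
print(router.complete("hello", "model-a"))      # A:hello
router.register("model-a", lambda p: f"A2:{p}")  # swap in a newer model
print(router.complete("hello", "model-a"))      # A2:hello
```

The point of the indirection is exactly what Sathish describes: when the best LLM today stops being the best LLM tomorrow, the swap happens at the registration step, and nothing downstream has to change.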
SPEAKER_02:Yeah, David, what Sathish is talking about here is basically being open to disruption, open to change, so that you can digest it and understand it and put your business on the best path forward. Maybe get out a little bit of a crystal ball here: what's on the horizon in 2026 that leaders need to be open to understanding so they can navigate those waters of disruption?
SPEAKER_01:Two things. One, I would say, Brian, and Sathish shared this with us in December when we were together: I think sometimes we as individuals think we know how people or customers will react and interact with certain technology solutions. I know I'm guilty of this; I bring my own biases to those interactions. We saw a demo in December that showed us a customer interacting with an agent in what was quite a difficult conversation. Going into it, I would have thought there's no way somebody would want to have that conversation with an agent. And in the demo we saw, where it was made very clear up front to the customer that they were talking to an agent, I couldn't believe the quality of the interaction for certain individuals in certain circumstances. I've had to relearn: my bias is that I want to talk to a human being. And don't get me wrong, there are some people for whom that's what they want, and we should allow for that. But not everyone. There's more openness and more willingness to interact with some of these agents than I certainly thought. And I think that's lesson number one for me, Brian: it's really important, while it's always been important, to leave your own biases at the door, because the way in which individuals interact with some of these technologies is quite surprising. People are very comfortable, way more comfortable than I thought they would be. And this demo really showed that while it's not for everybody, to get to this idea of a segment of one, there are certain customers who are very comfortable interacting with these technologies, and we need to provide that capability. The second, which may sound a bit negative: I do think, sadly for all of us, some of the most innovative users of these technologies are the bad actors that would do us cyber harm.
The bad actors are using these technologies in incredibly creative and innovative ways. When you see what they're doing, you'd hire them if you could make them do good things for you. It's incredible the way they test and learn, and the pace at which they change their use of these technologies. I think as an industry, we need to ensure that we're moving just as quickly in using these technologies to provide cyber defenses, whether it's pen testing, red team testing, or looking for vulnerabilities. I think we're probably not moving as quickly as an industry as we should in using these technologies to improve our defenses. And we should pick up the pace in 2026, because this is something the bad actors can use at scale. It makes the cost of an attack even cheaper than it already was. We have to ensure that we're responding in kind across the industry, across every single segment.
SPEAKER_02:Yeah, security is certainly another issue we could go on about for hours and hours. But we are running short on time on this episode, so I'll just encourage listeners: we've got a lot of episodes related to cyber and AI, and I encourage you to go check them out, anything from within the last couple of weeks to even a few months ago; it's still applicable. Sathish, David, thank you so much for joining. It was a fantastic conversation. I hope the listeners out there got as much out of it as I did. Thank you again to both of you.
SPEAKER_00:Thank you, Brian, and thank you, David.
SPEAKER_02:Okay, thanks to Sathish and David. As this conversation makes clear, AI progress inside large organizations isn't constrained by imagination; it's constrained by readiness. Teams that are moving fast, like Ally Financial, are investing in trust through clean data, transparent risk ownership, and business leaders who feel accountable for outcomes. This episode of the AI Proving Ground Podcast was co-produced by Nas Baker and Kara Kuhn. Our audio and video engineer is John Novlock. My name is Brian Phelps. See you next time.