AI or Not

E049 - AI or Not - Anusha Nerella and Pamela Isom

Season 2 Episode 49


Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.

Trust is not a buzzword in AI-driven finance; it’s the product. We sit down with Anusha Nerella, a senior fintech leader, to unpack how explainability, compliance, and human-centered design turn models into systems people can rely on. From reason codes in trading recommendations to precision-focused fraud detection, you’ll hear how building for auditability and clarity unlocks adoption with regulators, clients, and frontline teams.

We start with career roots in large banks and regulated environments, then tackle real-world use cases: liquidity routing that justifies every decision, fraud models that reduce false alarms by adding behavioral context, and governance workflows that keep shipping honest. You’ll learn why dashboards aren’t enough, why lineage and documentation must be embedded in the pipeline, and how to make “accountability by design” a team habit. The conversation also doesn’t pull punches on bias, outlining how a high-accuracy credit model still drifted toward unfair outcomes until the data pipeline was rebuilt with fairness checks, feature audits, and human review steps.

We also look at representation and leadership. Women are shaping core AI work, yet often lack access to high-stakes projects and senior roles. Sponsorship, clear role maps, and culture that rewards bias-aware engineering are not nice-to-haves; they’re risk controls. The guiding question runs through the entire episode: Can we build it? Should we build it? For whom? Purpose informs thresholds, monitoring, and the definition of “good enough” when real money, safety, and trust are on the line.

If you care about AI in fintech, fraud prevention, compliant deployment, and building systems that earn confidence, this conversation is your playbook. Subscribe, share with a colleague who owns model risk or data platforms, and leave a review to tell us how you design for trust.

[00:00] Pamela Isom: This podcast is for informational purposes only.

[00:27] Personal views and opinions expressed by our podcast guests are their own and not legal advice,

[00:35] neither health, tax nor professional nor official statements by their organizations.

[00:42] Guest views may not be those of the host.

[00:51] Hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed now to address issues and guide success in your artificial intelligence journey as well as your digital transformation journey at large.

[01:07] I am Pamela Isom and I am your podcast host.

[01:10] We have a special guest with us today, Anusha Nerella. She's a technology leader,

[01:15] a development expert in AI and ML automation,

[01:19] and a senior member of IEEE.

[01:22] Anusha,

[01:24] Welcome to AI or Not.

[01:26] Anusha Nerella: Thank you, thank you, Pam. I'm delighted to be here, and thank you for having me as one of the invitees to talk on your podcast.

[01:36] Pamela Isom: Yes, it's an honor to have you here and I do have a couple of questions that I want to discuss with you.

[01:43] But before we get started I'd like you to tell me more about yourself, your career journey and while you're describing that,

[01:53] tell me more about how you got engaged and involved with fintech and financial services.

[01:59] Anusha Nerella: Sure, absolutely Pam.

[02:01] So I started my career as a software engineer more than a decade ago, and honestly I have always been fascinated by how technology is making financial systems smarter and fairer.

[02:15] So most of my work has been in the fintech space, where AI is not just a buzzword anymore.

[02:22] So it's part of the system now.

[02:24] And I have worked at many global fintech corporations, including Barclays and Citibank, and prior to that I worked with the United States Patent and Trademark Office,

[02:38] leading projects in machine learning, AI-driven fraud detection, and regulatory automation,

[02:45] in the cloud, from consumer banking to institutional banking. What really excites me is this blend of AI, cloud, and compliance, because it's one thing to build a model and another to make it work in a highly regulated banking environment.

[03:05] Along the way I became a senior member of IEEE and a member of the Forbes Technology Council, where I share my thought leadership in the form of articles and expert panels.

[03:17] And I continue my mentorship through ADP; every month I spend at least a thousand minutes interacting with candidates at many levels, from college students to professionals working in the industry, across various mentorship programs.

[03:38] Mostly, I love interacting and leveling up, building my knowledge from various mediums, and I also share my thought process and how I built my perspective.

[03:50] So this is mostly about how AI is changing the way we look at finance, regulation and even trust. And I always say like AI is not just about making predictions, it's about earning confidence.

[04:03] So that's really been my journey.

[04:05] So building technology that people can depend on, not just be impressed by,

[04:10] Pamela Isom: Well, that's really,

[04:12] actually really good to hear because that's a different perspective on it, which is about earning confidence. Not just making predictions, but earning confidence. I like that.

[04:21] So you mentioned that you have some lessons learned, including use cases from building cloud-native, AI-powered platforms in highly regulated industries, and you were just starting to get into that.

[04:34] I heard you mention a few companies,

[04:36] but can you explain and share some use cases for our listeners?

[04:40] Anusha Nerella: Yeah, absolutely. And I'm glad you asked me that because earning confidence really became a theme now because across all these projects, one big lesson I have learned is that AI systems are only as good as they are accountable.

[04:55] So in fintech you can't just say the model predicted this, the model predicted that.

[05:00] So you have to show how and why.

[05:03] So every decision needs to leave a trail of explainability and compliance.

[05:08] For example, in one of our AI powered trading platforms we built models that optimized liquidity routing,

[05:16] basically helping traders find the best execution across markets.

[05:21] But it wasn't just about speed. We designed the system so that every recommendation came with a reason code.

[05:28] Like what data influenced it, how confident the model was, and what the fallback decision would look like.

[05:36] That's what made it trustworthy, not just intelligent.
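The shape of such a reason-coded recommendation can be sketched in a few lines. This is a toy illustration of the pattern Anusha describes, not her actual platform: the venue names, spreads, and the `route_order` helper are all made up for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Recommendation:
    venue: str           # recommended execution venue
    confidence: float    # model confidence, between 0 and 1
    reason_codes: tuple  # the data points that influenced the decision
    fallback: str        # documented action taken when confidence is too low

def route_order(spreads_bps: dict, confidence: float, threshold: float = 0.7) -> Recommendation:
    """Recommend the venue with the tightest spread, carrying its own audit trail."""
    best = min(spreads_bps, key=spreads_bps.get)
    # Every recommendation records exactly what data influenced it.
    reasons = tuple(f"spread[{venue}]={bps}bps" for venue, bps in sorted(spreads_bps.items()))
    fallback = "route to primary venue"
    # Below the confidence threshold, the documented fallback wins.
    chosen = best if confidence >= threshold else fallback
    return Recommendation(chosen, confidence, reasons, fallback)

rec = route_order({"NYSE": 2.1, "ARCA": 1.4, "BATS": 1.9}, confidence=0.92)
# rec.venue is "ARCA", and rec.reason_codes lists every spread that influenced it
```

The point of the structure is that an auditor can replay any single decision: the inputs, the confidence, and the fallback path are all part of the output, not an afterthought.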

[05:41] Another use case I can talk about is fraud detection.

[05:45] We used anomaly detection to flag suspicious transaction clusters across the millions of trades that happen on a day-to-day basis.

[05:53] In the initial stages it also raised a lot of false alarms, because our models were not yet ready to understand which scenarios to act on.

[06:04] That's where we had to fine-tune the contextual intelligence, and that teaching made a lot of difference between an unusual transaction and an actually risky one.
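As a toy illustration of that contextual fine-tuning (purely a sketch under simple assumptions; real fraud systems use trained detectors and far richer behavioral features), the idea is to flag a transaction only when it is both statistically unusual for that account and lacking context that would explain it:

```python
from statistics import mean, stdev

def is_suspicious(amount: float, history: list,
                  merchant_seen_before: bool, z_cut: float = 3.0) -> bool:
    """Flag only what is both unusual for this account AND lacks context."""
    if len(history) < 5:
        return False  # too little history to judge; leave it to other controls
    mu, sigma = mean(history), stdev(history)
    z = (amount - mu) / max(sigma, 1e-9)  # how far from this account's own norm
    unusual = z > z_cut
    # Context check: a large payment to a merchant the customer already uses
    # is unusual but explainable -- treating it as fraud is a false alarm.
    return unusual and not merchant_seen_before

history = [52.0, 48.0, 61.0, 55.0, 50.0, 47.0]
is_suspicious(500.0, history, merchant_seen_before=False)  # flagged: unusual, no context
is_suspicious(500.0, history, merchant_seen_before=True)   # not flagged: context explains it
```

Without the `merchant_seen_before` signal, both transactions look identical to the detector; the behavioral context is what separates "unusual" from "risky."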

[06:17] And I always tell our teams: the goal is not just to make AI look smart. It has to make humans feel safe using it,

[06:27] considering myself as a real consumer.

[06:30] So I always communicate the same.

[06:33] So once we build AI that earns the kind of trust from regulators,

[06:38] from clients,

[06:40] that's where it starts to truly transform how financial systems work.

[06:45] Pamela Isom: I do think that establishing trust is very difficult when it comes to capabilities like AI,

[06:56] only because there are a lot of misnomers about the capabilities themselves.

[07:05] Also because there is misuse.

[07:10] But what I want to talk about here, which you are doing is the good use cases.

[07:15] I think sometimes the good use cases get lost in the issues that have surfaced. You always have people who are good, and then you have the not-so-good behaviors, right?

[07:32] And so the abusive uses of the technologies, or of the capabilities, can sometimes overshadow the innovation. So what you described is valuable, especially when you think about AI-powered platforms for highly regulated industries.

[07:50] So the example that you shared is really good because for fraud prevention and fraud detection,

[07:57] I believe it is a really helpful capability that we should take advantage of and explore even further.

[08:05] And then I was also thinking about a group that has started using it. They put out testimonials saying that in the first half of 2024, just over $740 million was stolen in payment fraud in Britain,

[08:26] with scammers deploying increasingly creative ways to trap victims.

[08:33] This is according to data from the UK finance industry. This particular company focused on partnering with one of the AI solution providers to streamline the process of reporting suspected fraud, helping AI augment their existing tools so that reporting suspected fraud is faster,

[09:01] which allows them to better secure vulnerable accounts and free up call handlers and customer support staff to deal with other customer needs, and to handle these immediate needs faster.

[09:15] They take pride in using capabilities like AI to assist them with this effort. So I thought that that's really good and it's a good use case of how they are augmenting the customer handling and the call handling to support that.

[09:31] And so I believe that the examples you shared fit right along with that. I really appreciate you sharing all of that, and I just wanted to ask: does that tie in?

[09:45] Why do we think that in highly regulated industries AI would be consumed less, just because the industry is highly regulated? And is that a misnomer?

[09:56] Anusha Nerella: Yeah, it definitely is. In the fintech industry,

[10:01] using AI is not just about implementing ideas; it's about how trustworthily we release things into production. Even though we are very confident about how the agents are supporting us and helping us do our jobs efficiently,

[10:21] it has to strictly go through compliance. If it is not compliance-ready,

[10:27] then no auditor is ready to release anything that is completely AI-operated.

[10:33] So AI should be considered at every level of the software development lifecycle,

[10:40] especially in fintech industry.

[10:42] It has to be addressed mainly at the governance level, where the compliance reporting happens.

[10:47] Even if the application goes through every other check correctly, if it fails at compliance, then whatever we have done so far

[10:57] is not worth pushing forward.

[10:59] So we have to ensure everything is aligned with compliance before we can even confidently say that we are ready to release any solution with AI in it.

[11:13] That is what anyone in the fintech industry would say.

[11:17] Pamela Isom: How do you see women in emerging technologies like AI progressing?

[11:25] And can you tell me do you think we are,

[11:28] are we there? Are we meeting the moment and where are the opportunities?

[11:35] Anusha Nerella: Oh yeah, definitely. If you look at us, women are not behind in leading, inventing, or providing thought leadership in this industry.

[11:47] We have left a strong footprint in this industry, and we are still continuing that.

[11:53] I have seen great numbers across domains, but I would definitely say we are underrepresented in the tech industry, especially at senior levels.

[12:09] So for example,

[12:10] globally, women may make up only about 22% of AI talent, according to some analyses,

[12:15] and even fewer hold executive positions.

[12:21] There are also retention and promotion issues.

[12:25] So being part of the workforce is one thing, but moving into leadership or staying long term is another challenge for women.

[12:34] Skills and access gaps also remain, because even when women are in these kinds of roles,

[12:41] we may have less access to core AI projects, or, I would say, fewer opportunities to lead full-stack AI and ML initiatives.

[12:50] Bias is for sure real both in workplace and in the technology itself.

[12:56] Closing the pipeline gap is just step one. We also need to widen the door and build the stairs. I think that view has to be built from everyone's perspective.

[13:09] Given where things are, here is what I believe will move us from where we are to where we need to be.

[13:18] There is intentional leadership and sponsorship happening in tech and AI, and sponsors for women are stepping up: real people who will assign them to the big, visible projects, not just the safe ones.

[13:33] We can also build transparent career paths in AI and ML, defining what it takes to move into data scientist, AI engineer, or platform leader roles,

[13:45] making sure we have an equitable chance to acquire those skills and project assignments.

[13:51] I would say culture and engineering practices have to improve so that we get every sort of support and growth.

[14:01] That means good team culture, fair promotion criteria, and remote and hybrid flexibility, because we are building in a world that needs bias-aware AI design.

[14:15] And as more women build and lead AI systems,

[14:18] the less likely those systems are to embed bias or overlook diverse perspectives.

[14:26] So if the creator set doesn't mirror the world itself,

[14:30] the system will fail half the world.

[14:32] That's what I believe.

[14:34] Pamela Isom: I appreciate that perspective. So tell me, have there been any moments in your work that were particularly transformative or surprising?

[14:45] Anusha Nerella: Yes, definitely, there are many such moments that we go through, and personally,

[14:54] I believe this:

[14:56] I have been through quite a few.

[14:58] But one moment that really changed the way I look at things came when I was starting out with AI, on a project where we were deploying a machine learning model for credit decisioning at one of the clients I worked with.

[15:13] On paper the model looked amazing. This was in my early days, and it had high accuracy according to the statistics we had developed, and it was giving great performance.

[15:25] But when we started validating it against real world data,

[15:29] we realized it was unintentionally favoring certain application profiles because of historical biases in the training data. In financial firms we need historical data along with the real raw data in order to make models meaningful.

[15:45] So that was my moment.

[15:48] It reminded me that AI doesn't just learn from data,

[15:51] it inherits our decisions, our blind spots, I would say, and sometimes our biases.

[15:58] So instead of just tweaking the algorithm,

[16:01] I went back and redesigned the data pipeline,

[16:03] adding the fairness checks and human review loops.

[16:08] The end result was not just a smarter model, but a more ethical one.
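One simple form a fairness check like that can take is a four-fifths-rule style gate on approval rates across applicant groups. This is an illustrative sketch only: real credit-model audits use several metrics plus the human review loops Anusha mentions, and the group labels and numbers here are invented.

```python
def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def passes_fairness_gate(decisions, min_ratio: float = 0.8) -> bool:
    """Block release (return False) when any group's approval rate falls below
    min_ratio times the best-treated group's rate -- a four-fifths-style check."""
    rates = approval_rates(decisions)
    return min(rates.values()) >= min_ratio * max(rates.values())

fair = [("A", True)] * 8 + [("A", False)] * 2 + [("B", True)] * 7 + [("B", False)] * 3
skewed = [("A", True)] * 8 + [("A", False)] * 2 + [("B", True)] * 4 + [("B", False)] * 6
passes_fairness_gate(fair)    # passes: rates 0.8 vs 0.7 are within the 80% ratio
passes_fairness_gate(skewed)  # fails: 0.4 vs 0.8, so the release goes to human review
```

Embedded as a pipeline gate, a failing check stops the deployment rather than just logging a warning, which is what makes it a control instead of a dashboard.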

[16:14] And I remember thinking, this technology doesn't go wrong by itself, it goes wrong when we stop questioning it.

[16:21] So that experience changed my approach completely.

[16:24] Every AI project I have led since then starts with a simple question.

[16:28] Can we build it? Should we build it? And for whom?

[16:34] So when we attach those kinds of questions to each application we are going to build,

[16:40] we get a trail of questions and supporting answers, which ends up training the model with the proper context.

[16:53] That is what makes it meaningful when anyone tries to make the system work flawlessly.

[16:58] Pamela Isom: So, sometimes I like to play back what I heard, especially if I think there are really key moments in what you just said. So correct me if I'm wrong,

[17:08] but you mentioned the data pipeline and how important it is to know and understand that data pipeline.

[17:17] You mentioned that tech goes wrong when we stop questioning it. In other words, go ahead and verify the intents.

[17:26] I heard you: the intents. You said, can we build it,

[17:30] should we build it, and for whom. That's what you zero in on, right? Now, when it comes to projects and AI programs,

[17:41] I think that that's very valuable. I will tell you that I'm involved in some research for one of the government agencies and I'm honored to be one of the principal investigators.

[17:54] And so what's happening is we are at the data curation stage and we are really trying to understand and lay the groundwork for a strong data pipeline, from inputs to what we're going to use to outputs, as far as how that data is going to be reflected as part of the AI outcomes.

[18:18] Right. So,

[18:19] and we know the center of AI is data.

[18:23] We have to stop and ask ourselves,

[18:25] should we be building this, and

[18:30] how do we test to ensure that the outcomes are what we need, and how do we best train our models to get the outcomes that the government is going to be most interested in and that are the most accurate?

[18:47] So I do like what you said. I do like the association with the data pipeline and I just like the thought provoking process that you go through because again, it makes me think about your original conversations around the quality and the fact that we want our solutions to be trustworthy and you are describing things to think about and things to integrate to drive that trustworthiness,

[19:15] particularly in the financial services space.

[19:18] Okay. So I appreciate that and I want you to know that I hear you and I want the listeners to hear you. So I appreciate you sharing that transformative experience and perspective.

[19:30] Anusha Nerella: I would also love to congratulate you on leading that kind of research because it's so refreshing to hear like a principal investigator talk about data curation as the foundation.

[19:41] It's very interesting. I would say.

[19:43] Pamela Isom: I'll fill you in on it, but we're pretty excited. My organization is pretty excited about it. We were honored to be selected to do this. It is truly what you said.

[19:52] It is about understanding that data pipeline, not just how you're going to build it, but how you're going to ensure that it's consumed properly and maintained. Right. So we are spending quite a bit of time at that one area and it just goes along with what you're saying.

[20:07] And when you're dealing with critical and regulated industries like energy,

[20:12] like Fintech, as you're in,

[20:15] you definitely have to do this and it's not something you can skip past. You can be efficient at it, and you can practice speed,

[20:23] but you need to focus more on credibility and trustworthiness. And that's what I'm hearing from you. And I like it. I like that you're saying that, but thank you for the compliment.

[20:33] Anusha Nerella: And I think for everyone listening here, whether you are in government research or finance or healthcare,

[20:41] this rule applies, because trust doesn't start when we deploy; it starts in the design phase itself.

[20:48] So I often tell people not to focus only on model accuracy or the metrics that come from performance, because the real truth is that a model can be 99% accurate and feel legitimate to us,

[21:06] but if it has a low level of trustworthiness, then it won't stand.
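That gap between accuracy and trust is easy to demonstrate with a toy calculation: on imbalanced data such as fraud, a model that never flags anything can still score 99% accuracy. The numbers below are invented for illustration.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def fraud_recall(y_true, y_pred):
    """Share of actual fraud cases (label 1) the model actually caught."""
    caught = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return caught / sum(y_true)

# 1,000 transactions, only 10 of them fraudulent (1 = fraud, 0 = legitimate).
y_true = [1] * 10 + [0] * 990
never_flags = [0] * 1000  # a "model" that never raises an alarm

accuracy(y_true, never_flags)     # 0.99 -- looks impressive on paper
fraud_recall(y_true, never_flags) # 0.0  -- catches zero fraud, earns zero trust
```

This is why the headline accuracy metric alone cannot carry a deployment decision: the metric that matters depends on what the system is for.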

[21:13] So that's what I would say. Like every parameter we tune is a value judgment,

[21:19] whether we admit it or not.

[21:21] Speaking in terms of financial services, since that is mainly my domain, that means making sure the AI detects fraud fast, because the fraudsters are always weeks ahead of us.

[21:37] So in government or healthcare it means like building systems that reflect equity and transparency.

[21:43] Pamela Isom: Right.

[21:44] Anusha Nerella: And to your point, that's where the thought process matters most. Questioning the why behind every data point, every feature, every outcome.

[21:52] That's how we make people believe, and how we gather trust from people.

[22:01] Admiration alone doesn't help;

[22:03] trust does.

[22:05] Pamela Isom: Exactly, exactly. And it's not something we can circumvent.

[22:09] So trust should be a priority, and how you get there should be a priority too. And that is difficult for some, right? It's difficult, but it's worth pursuing, it's worth digging into.

[22:25] It's worth having discussions like this to help us think about how we really get to trustworthy outcomes, more than just talking about it. How do we get there? That's why I asked you about use cases, those life experiences.

[22:40] So that being said,

[22:42] can you share words of wisdom or a call to action? And before you do that, is there anything else that you wanted to share with me before we get into the closing comments of this discussion?

[22:55] Anusha Nerella: Absolutely, Pam. I really appreciate this conversation because it brings different perspectives into one place, and it's exactly the kind of dialogue we need more of. If I had to leave one thought with everyone listening, it would be this: don't chase innovation for speed,

[23:16] chase it for substance, for reality.

[23:19] And we are living in a time where AI can do incredible things like giving us more time to do our own things and reducing our time to achieve some tasks.

[23:30] But I would say the biggest responsibility is to make sure we are getting it right, whether you are a technology person, a policymaker, or even a researcher.

[23:42] Just ask yourself a few questions before even starting.

[23:46] Can we build it?

[23:48] Should we build it? And for whom are we building it?

[23:52] Because that last question,

[23:54] for whom, is where the purpose actually lives.

[23:59] And just as I have been emphasizing from the start,

[24:03] trust is not just built by technology,

[24:05] it's built by the intent behind it.

[24:09] So I would say let's keep designing AI with intent,

[24:13] integrity and empathy.

[24:15] Because that's how we move from innovation that impresses to innovation that truly matters, for everyone in this industry.

[24:23] And we gain trust from consumers only when we are able to show that confidence and trustworthiness.

[24:32] Pamela Isom: Okay, well, I appreciate that. I appreciate those words of wisdom and I kind of hear a call to action too in the middle of all of that. So we can just reflect on the things that you said.

[24:43] There is a call to action there, right? Which is: do what you said.

[24:48] Yeah. So thank you very much.