
AI or Not
Welcome to "AI or Not," the podcast where digital transformation meets real-world wisdom, hosted by Pamela Isom. With over 25 years of guiding the top echelons of corporate, public and private sectors through the ever-evolving digital landscape, Pamela, CEO and Founder of IsAdvice & Consulting LLC, is your expert navigator in the exploration of artificial intelligence, innovation, cyber, data, and ethical decision-making. This show demystifies the complexities of AI, digital disruption, and emerging technologies, focusing on their impact on business strategies, governance, product innovations, and societal well-being. Whether you're a professional seeking to leverage AI for sustainable growth, a leader aiming to navigate the digital terrain ethically, or an innovator looking to make a meaningful impact, "AI or Not" offers a unique blend of insights, experiences, and discussions that illuminate the path forward in the digital age. Join us as we delve into the world where technology meets humanity, with Pamela Isom leading the conversation.
AI or Not
E036 - AI or Not - Nihkil Kassetty and Pamela Isom
Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.
What happens when two AI agents start talking to each other without human supervision? How can artificial intelligence help the 1.4 billion unbanked adults worldwide access financial services? These questions sit at the heart of our fascinating conversation with Nihkil Kassetty, software architect and AI ethicist with deep expertise in digital finance.
Remember when banking meant standing in line at your local branch? Today, AI is transforming that experience—and potentially extending financial services to billions who've never had a bank account. Nihkil Kassetty explains how companies like Tala are using AI to analyze smartphone data (call logs, text patterns, app usage) to assess creditworthiness when traditional credit scores aren't available. With 67% of unbanked individuals owning mobile phones, this technology creates pathways to financial inclusion that simply didn't exist before.
Beyond accessibility, we explore the fascinating evolution of AI agents that can perceive their environment, learn from interactions, and make context-aware decisions. Unlike traditional automation following rigid rules, these agents provide an intelligent layer across financial functions—from fraud detection to investment advice. But this power comes with profound ethical considerations. When agents begin communicating with other agents, can humans maintain proper oversight? Who's accountable when AI makes mistakes? And how do we ensure these systems operate transparently?
The conversation shifts to green finance, where AI helps direct capital toward environmentally sustainable projects by analyzing satellite imagery, emissions data, and supply chain activities to provide a clearer picture than company self-reporting alone. This capability transforms how financial institutions evaluate climate risks and sustainable investments.
Whether you're fascinated by the technology itself or concerned about its ethical implications, this episode offers valuable insights into how AI is reshaping finance to make it not just faster, but fairer and more inclusive. Subscribe now to join the conversation about building a financial future that's both smarter and more human.
[00:00] Pamela Isom: This podcast is for informational purposes only.
[00:27] Personal views and opinions expressed by our podcast guests are their own and are not legal advice,
[00:35] nor health, tax, or professional advice, nor official statements by their organizations.
[00:43] Guest views may not be those of the host.
[00:51] Hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed at this very moment to address issues and guide success in your artificial intelligence and digital transformation journeys.
[01:07] I am Pamela Isom and I am your podcast host.
[01:12] And we have a unique and special guest with us today,
[01:16] Nihkil Kassetty. Nihkil is a software architect.
[01:21] He is an AI and digital finance thought leader.
[01:25] He's a cloud and payments expert.
[01:29] And I would say what's really cool about Nihkil is he is an AI ethicist and I can relate because I am as well. So Nihkil,
[01:41] thank you for joining me today. I certainly value you and your credentials. I'm so glad we had the opportunity to meet.
[01:49] Welcome to AI Or Not.
[01:51] Nihkil Kassetty: Thank you. Thank you so much for that introduction.
[01:53] And hi again, Pamela. So I'm really excited to be part of this conversation today.
[01:58] And I would say I'm someone who is passionate about how technology, and especially AI, can drive real change in finance.
[02:08] Pamela Isom: That's exciting. So we'll have you talk about that. Before you do, can you tell me more about yourself, your career journey, and what's in your future?
[02:17] Nihkil Kassetty: I would say, like, over the years I've worked on everything from digital payments to ethical AI systems.
[02:24] And in the recent times I've started working on the agentic AI because it's more likely, I wouldn't say it's like a trend, but it's more about what we need and how we have to evolve as the technology evolves.
[02:35] And also what fascinates me most is about how AI can make finance not just faster,
[02:41] but fairer and also more inclusive.
[02:44] And whether it's helping the unbanked get access to credit or ensuring sustainability, it's more than just a checkbox for me. So I believe we are at a turning point where AI can reshape the entire financial ecosystem for the better world out there.
[03:01] Pamela Isom: Is that what caused you to get interested in AI in the financial services space? Or, when you think back to your career...
[03:11] Nihkil Kassetty: Let me be honest, right. So I'll just give a bit of personal background here. My mom is a banker and she's going to retire next year and...
[03:24] Sorry, next month actually.
[03:27] So she has been in banking, and I've been with her a lot of years, definitely since I was a kid.
[03:35] So I've seen a lot of how banking works. I used to go to the bank along with her sometimes when I was a kid, when there was no daycare or something like that.
[03:43] So it's just a personal thing. I've seen how things have evolved over time. I've seen how my mom used to work at the bank and also at home sometimes, getting through documentation and all these things.
[03:54] But that's not the whole reason why I went into finance or something.
[03:59] It's more about it happened naturally.
[04:01] It's not that I had planned that I wanted to get into finance and payments or AI; it definitely just happened like that.
[04:07] Pamela Isom: That's pretty cool. So my mom was a good role model for me too. So it's always good when you can relate back; your childhood experiences in general set the best foundation for us.
[04:19] Nihkil Kassetty: Yeah, definitely. Right. I mean, I feel like the world goes round and round, and now even I wanted to get more involved in the fintech side of it. And just to say, I've been in fintech for the past 10 years
[04:34] or so. So it feels like I pretty much like it and I want to learn more out of it.
[04:41] Pamela Isom: We had talked about the concept of financial inclusion and so I want to go more into that but before I do I just have this statistic that I wanted to share with you.
[04:54] This is based on 2021 data from the World Bank, and they say 1.4 billion adults worldwide are unbanked.
[05:03] Yet 67% own a mobile phone.
[05:07] So, and you tell me if I'm off base here, but for me it seems like an example of financial inclusion, or at least of how AI can help empower those that need it in situations like this, is mobile payments and helping the unbanked through the use of mobile phones.
[05:34] Because what I was reading was that many of the unbanked they may not have banks, they may not have bank accounts, but many of those individuals do have mobile phones.
[05:47] So is AI an example of a tool or capability that can be used for financial inclusion? And when you answer that, give me some more examples of what you mean by that.
[06:02] What are some example success stories that you know of?
[06:06] Nihkil Kassetty: Do you want me to also help you understand what exactly financial inclusion is?
[06:11] Pamela Isom: Yes. To start out with because I have my own interpretation here that I try to share, but I'd like you to really elaborate more on that.
[06:20] Nihkil Kassetty: Sure. So I would say financial inclusion is, again,
[06:29] the same meaning we intend whenever we say inclusion, but here it's on the finance side.
[06:37] So what it exactly means is making financial services, at least the basic financial services like savings,
[06:45] credit, insurance, and digital payments,
[06:48] more accessible and affordable to everyone,
[06:52] and I would say especially for people who are traditionally underserved or excluded from the usual banking system. This can include low-income individuals, rural populations, migrant workers who move between places, and also women and small businesses without a good credit history.
[07:14] So ideally the goal of financial inclusion is to ensure that everyone has equal access to tools that help them manage money,
[07:23] grow income, and, more than anything, improve their quality of life. That's the most important thing here.
[07:31] So with AI this becomes more scalable, also by using alternative data.
[07:37] And as you are mentioning, mobile technology can definitely reach people where banks can't. And as you said, about 1.4 billion adults globally are still unbanked.
[07:48] So based on World Bank data,
[07:50] as you said, from 2021, that number still remains about the same, and in fact it has increased. But when we talk about financial inclusion, this is where AI and, again, mobile technology really come together.
[08:03] So I would say even if someone doesn't have a traditional bank account,
[08:07] they can still have access to credit, make payments, and save money just through their phone. That phone has become a day-to-day part of everyone's life. And you don't need to have a great phone, just some phone where you can make payments,
[08:24] with a good enough network to do the payments. And that would be more than enough to feel more included and more up to date with the technology
[08:33] that's out there in the world. And coming to some good stories, where these kinds of things have already been tried, I would say there are definitely a lot of examples.
[08:47] I would start with something called Tala, which is a bit more global.
[08:51] They offer instant microloans to underserved people in Kenya, Mexico, the Philippines, and India, where a lot of rural people live.
[09:03] What they did was use AI to analyze around 250-plus smartphone data points, things like call logs, text patterns, and how people use their apps, to really assess creditworthiness rather than the actual credit history.
[09:21] And they had a good result out of it. They were able to disburse around 3 billion-plus in loans and reach close to 6 million-plus people.
[09:34] That's a huge number, in fact, as a starting point to see how it works. And that's more global. But we also have something called Petal in the USA, which provides credit cards to users with little to no credit history in the United States.
[09:48] Here also, AI is used to evaluate cash flow from linked bank accounts instead of relying solely on FICO scores or other regular credit scoring systems.
[09:59] Pamela Isom: So instead of a FICO score, the AI is helping to assess creditworthiness.
[10:08] Nihkil Kassetty: That's right.
[10:08] Pamela Isom: Based on some criteria like maybe how frequently you pay your utility bills or other things.
[10:18] Nihkil Kassetty: That's true. So it's trying to evaluate some other data points. There can be different reasons that your credit score might have been low at some point,
[10:28] but it obviously takes some time for you to really recover it,
[10:32] or, I would say, to do it organically. I mean, there are some ways people do it, but organically getting back to a good credit score takes some time.
[10:41] So that's one reason they look at what overall income flow is coming in, how cash is flowing from account to account, and what exactly that specific person is doing from an overall financial perspective.
[10:53] And that is one way to do it instead of relying solely on the credit score, because sometimes it takes years to get back to a good credit score once it gets bad.
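For listeners who want to see the shape of this idea, here is a minimal sketch of alternative-data credit scoring of the kind Nihkil describes. The signal names, weights, and approval threshold are hypothetical illustrations, not Tala's or Petal's actual models, which use hundreds of data points, trained models, and fairness audits.

```python
# Hypothetical sketch: score creditworthiness from alternative data
# (cash-flow and bill-payment signals) instead of a traditional credit score.
from dataclasses import dataclass

@dataclass
class ApplicantSignals:
    avg_monthly_inflow: float      # money coming into linked accounts
    avg_monthly_outflow: float     # money going out
    months_of_history: int         # how long we can observe the applicant
    on_time_bill_payments: int     # e.g., utility or airtime payments made on time
    late_bill_payments: int

def alternative_credit_score(s: ApplicantSignals) -> float:
    """Return a 0-1 score from cash-flow stability and payment behavior.
    Weights are illustrative only."""
    if s.months_of_history == 0:
        return 0.0
    # Savings rate: does more come in than go out?
    savings_rate = max(0.0, (s.avg_monthly_inflow - s.avg_monthly_outflow)
                       / max(s.avg_monthly_inflow, 1.0))
    # Payment reliability from observed bills.
    total_bills = s.on_time_bill_payments + s.late_bill_payments
    reliability = s.on_time_bill_payments / total_bills if total_bills else 0.5
    # Longer observed history gives more confidence (capped at 24 months).
    history_factor = min(s.months_of_history, 24) / 24
    return 0.4 * savings_rate + 0.4 * reliability + 0.2 * history_factor

if __name__ == "__main__":
    applicant = ApplicantSignals(avg_monthly_inflow=800.0, avg_monthly_outflow=650.0,
                                 months_of_history=10, on_time_bill_payments=18,
                                 late_bill_payments=2)
    score = alternative_credit_score(applicant)
    print(f"score={score:.2f}, approve_microloan={score >= 0.5}")
```

The point of the sketch is only that cash-flow stability and observed payment behavior can stand in for a missing credit history.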
[11:01] Pamela Isom: Okay, yeah, that's a really good explanation, because we can interpret financial inclusion to mean so many different things.
[11:12] I'm glad you pointed out that the data is still the same: 1.4 billion adults.
[11:17] Nihkil Kassetty: It's even more, actually. And another interesting thing I would like to share here: usually the World Bank sets up meetings along with the International Monetary Fund every year.
[11:29] So this year the World Bank and the International Monetary Fund are having their Spring Meetings in Washington, D.C.,
[11:37] so I'm going to be there next week for those meetings, and I'm also going to speak at one of the events there about this theme: how financial inclusion has to be built, how AI is getting used, and how resources are not being shared across different units.
[12:01] So there are a lot of topics I'm going to discuss during the World Bank Spring Meetings next week.
[12:07] Pamela Isom: So one of the things that's come into my mind is can we trust it?
[12:14] So I think there needs to be,
[12:17] we still need to work on building up that trust.
[12:20] But I think that the things that we have shared so far, some of the things that you have shared so far are strategies to help build that trust. Right, because the whole intent is good.
[12:34] So with the intent being good,
[12:35] some of the things that you shared can help contribute to building trust. But there is this concern about AI tools.
[12:44] So I imagine that in your conversations,
[12:47] and I know in my conversations I'm having discussions around being cautiously optimistic and being prudent about still validating the outcome.
[12:57] Is that where you're at on this?
[12:59] Nihkil Kassetty: Yeah, so I totally agree with that. We still have work to do in building trust, especially with communities that have historically been underserved or even harmed by financial systems.
[13:11] But what you are saying is that intent, when backed by ethical AI design, with more transparency,
[13:19] more fairness audits, and more explainability of what exactly is happening, can really help. And as you see, the examples we have discussed aren't just about access,
[13:31] they're about building credibility. Right? Because if people understand how these decisions are made and they see consistently fair outcomes, they can definitely trust the system and say, hey, okay, whatever has been done, it is giving us some good output.
[13:45] So that is where we have to show the results. And accordingly, people also gain some trust in it.
[13:51] Pamela Isom: So let's talk about AI agents.
[13:55] I'd like to better understand in the financial services space,
[14:00] from your perspective now,
[14:03] how do you see AI agents in the mix? Right?
[14:08] And what's the distinguishing factor as opposed to traditional forms of automation in finance?
[14:16] Nihkil Kassetty: So, AI agents. As we see, AI is not something that has only been around for a few years or so.
[14:28] AI has been in place for a long time, but it was working in a different way, in terms of how the machine learning models were working.
[14:36] So, a small example before we actually get into that:
[14:40] even when you go out of the country, or when you go to a store and make some bigger payment, or even a small payment,
[14:49] let's say you go to some store and make a payment,
[14:53] at the store you usually go to,
[14:55] right? You used to always get this notification saying, hey, we have noticed this transaction. Did you really make this payment, or did you really make this transaction?
[15:05] So this was already happening, but it was happening in the back end, in a different way. But now, with agents coming in,
[15:14] they are more dynamic, so they can perceive the environment and also learn from the interactions.
[15:22] They make more context-aware decisions, and they evolve over time. They learn from what's been happening and get better and better.
[15:31] So coming to financial services, as I said, we are already seeing them in action. There are some examples, like Bank of America has a tool called Erica.
[15:44] I mean, I've used it, so I really remember the name too. It really helps customer service get better.
[15:52] And it also helps you resolve your issue without even having to talk to a real person.
[15:57] But I definitely understand there's always a human touch that is also required. But there are some places where agents are definitely doing great. And there's another thing, Morgan Stanley's GPT-powered assistant, which does the same kind of thing.
[16:12] What I would like to add as an additional point here: these agents don't just respond to queries, like when we are using LLMs such as ChatGPT, but they provide proactive insights, handle complex questions, and improve over time based on user behavior.
[16:31] There are chances that if you talk to just a human customer service person,
[16:39] there might be some things that customer service person can't answer, because they are not really aware of so many things. But an agent,
[16:49] the agent can also help you get more information beyond what exactly that customer service agent is supposed to do. So if you ask something a bit outside of it, it can still try to gather more information and get it to you.
[17:02] And the last thing you were asking about is automation, right?
[17:07] I would say the key difference here is that automation is more about rules. It has to follow rule by rule;
[17:16] you have to write a list of rules, and if one doesn't fit well, it fails on the next one.
[17:22] AI agents also use rules, but along with that they also learn and do reasoning. That is the more important aspect of an agent. So they are becoming an intelligent layer across everything.
[17:39] Everything as in, since we are talking about financial systems, whatever is important there. For us it's fraud detection, or giving your personalized financial assessment,
[17:50] also investment advice, because not a lot of people know a lot about how to invest and do all these things.
[17:56] And it can also help with customer support, as we were talking about before.
[18:01] So these are not just simple things. They fold in more naturally, like a friend beside you who can really help with a lot of things.
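To make the distinction concrete, here is a small, hypothetical sketch contrasting a fixed rule with a check that adapts to each customer's history, in the spirit of the fraud-alert example above. The thresholds and the simple running-average "learning" are invented for illustration; real fraud systems use trained models over many more signals, plus human review.

```python
# Hypothetical sketch: a fixed rule vs. a check that adapts from history.
# Thresholds and the learning rule are illustrative only.

def rule_based_flag(amount: float, limit: float = 500.0) -> bool:
    """Traditional automation: a hard-coded rule that never changes."""
    return amount > limit

class AdaptiveFlagger:
    """Keeps a per-customer running history and flags unusual amounts.
    It 'learns' in the minimal sense of updating its baseline over time."""
    def __init__(self, sensitivity: float = 3.0):
        self.sensitivity = sensitivity
        self.history: dict[str, list[float]] = {}

    def observe(self, customer: str, amount: float) -> None:
        self.history.setdefault(customer, []).append(amount)

    def flag(self, customer: str, amount: float) -> bool:
        past = self.history.get(customer, [])
        if len(past) < 3:                       # not enough context yet
            return rule_based_flag(amount)      # fall back to the static rule
        avg = sum(past) / len(past)
        return amount > self.sensitivity * avg  # context-aware threshold

if __name__ == "__main__":
    f = AdaptiveFlagger()
    for amt in (40, 55, 60, 48):
        f.observe("alice", amt)
    print(rule_based_flag(900))   # True: static rule fires on any large amount
    print(f.flag("alice", 900))   # True: far above Alice's usual spending
    print(f.flag("alice", 70))    # False: normal for Alice, so no alert
```

The design point is simply that the second check uses context that accumulates per customer, whereas the first never changes its behavior.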
[18:12] Pamela Isom: So I like that. I agree with you too. I think that the agents are complementary to the humans.
[18:23] I believe that we are going to experience working with agents more and more. I believe that there is this intelligence layer that is introduced because of agents.
[18:36] What I haven't totally settled on is agent swarms where humans are not in the loop at all. So I haven't totally settled on that because I still think that there needs to be that capability that's integrated.
[18:50] But I am in favor of agents communicating with agents, AI agents communicating with other AI agents because they. That is then how we're going to get more reasoning and more responses back to the user that is more comprehensive and more related to what we're looking for.
[19:12] So that's how we're going to get more insights. I find that when I'm using AI today as a user,
[19:20] I find that it is definitely retaining more information about previous conversations.
[19:30] So. And I do see more of the reasoning starting to come forward.
[19:35] So I like that. I still have to remind the tools that yes, I'm talking about this, right? I'm talking about X, I'm not talking about Y,
[19:46] but it does try to pull back. It uses memory and it tries to pull back, and it's doing a better job than it has in the past of retaining information and queries that I had previously made.
[20:01] Right. And conversations that we had been having.
[20:05] So. And I believe that that is what is making the AI agents more powerful and more informative. So I think that that's good. So the key for us, and you tell me the key for us is to figure out the right use cases,
[20:20] especially in the financial services space,
[20:23] and then use it accordingly. Right? So make sure we figure out the right use cases for AI agents,
[20:32] definitely.
[20:33] Nihkil Kassetty: So I completely agree.
[20:35] I would say agents are not really replacing humans.
[20:40] As I want to repeat, they're just complementing us. So what we are really seeing here is the rise of an intelligent layer: a more adaptive, always-on support system for how we make decisions, manage risk, and also interact with the different financial systems.
[20:59] So again, I'm also not really completely in favor of agents just communicating with other agents.
[21:07] What I've seen, I'll just give you an example from a video that I observed recently.
[21:12] I've seen a video
[21:14] of two agents.
[21:16] The way the video goes,
[21:19] there's an agent talking from a laptop and there's an agent talking from a mobile. So the conversation started this way, saying,
[21:29] like,
[21:30] hey, how are you? How can I help? The other agent says, hey, my boss is looking to book some tickets to here and here, so can you make a plan?
[21:39] So they realized that they are both agents.
[21:42] And what happened is, when they realized that they are both agents, they went into a different zone, kind of a gibberish language which a human cannot understand.
[21:54] So there are a lot of things people are trying to develop, and we have to be more careful. When two agents are talking, if you don't really understand what they are talking about, it's really harmful for us.
[22:06] Pamela Isom: How did you run into this? Was this through one of the tools?
[22:10] Nihkil Kassetty: This was a YouTube video I saw. I mean, I don't think this is fiction; someone recorded this, how two agents are talking and what happens when two agents talk.
[22:20] I mean, I don't know if this happens everywhere, but it's more likely a kind of setup that someone was trying to implement.
[22:26] Pamela Isom: I think those are good insights, because there's a natural tendency. So think about the tools today.
[22:34] And we're still talking financial services, but let's say I query one of the tools for some information on financial services agencies in my general vicinity that would likely give me a loan.
[22:49] Right? So let's say I come up with something like that and I query one of the tools because I'm trying to find out, and the tool comes back and tells me,
[22:58] yeah, this place would be good. Probably not so much here, stay away from here. But more likely this would be preferable based on your situation or circumstances.
[23:10] Nihkil Kassetty: Right.
[23:10] Pamela Isom: If the tool is, is communicating with me and it's using some of my historical information,
[23:17] some of the previous chats and things that I've been having,
[23:20] then that would be good. If the tool starts to communicate with other agents,
[23:25] I don't know what information that other agent has that it's basing its information on.
[23:31] So then I could see where things could start to go.
[23:34] Nihkil Kassetty: So we never know what information the agent is also sharing. I was thinking of something; it's a common trait, I would say. Let's say there are three people: person A speaks language A, person B speaks language A, and person C speaks some other language
[23:59] that the other two people don't understand.
[24:03] So now what happens is, it's a common trait that when you see someone who feels like they belong to the same group, or really, when you see that the other person also speaks a language that you know,
[24:16] you tend to get into that mode and you talk in the same language.
[24:21] Sometimes we don't bother about whether someone else beside us really understands what we are talking about or not.
[24:28] So this is a common trait even humans have. I think that is where even agents feel, hey, you are an agent, I'm an agent, let's get into our own comfort mode and talk in our own language.
[24:40] Pamela Isom: Yeah. So the difference being like in the example that I had,
[24:44] the two agents could start communicating with one another because they both realized that it's AI talking to AI and the communication is different.
[24:52] And so, from what I can hear and interpret, what you are conveying is that this is something that you saw a video about,
[25:03] but you also are concerned.
[25:06] So it may talk about financial concepts that are relatable to me, but in talking and conversing with the other agent, it may be just far over my head or something that I just don't really understand.
[25:19] And so we have to be mindful of that. And to me that goes to how we train our models.
[25:26] And it also goes to governing the agents and ensuring... I mean, this is an element of governance that we're going to have to get into. Right. So how do you govern different agents?
[25:40] Particularly when we're starting to deal with automation, where agents are only talking to one another.
[25:46] Right. Where they're only communicating. So when do the humans get involved and to what extent do we get involved? And is the timing set so that we are involved at the right time so that we can influence the conversations?
[26:01] Because I definitely wouldn't want that to happen to one of my clients.
[26:05] I would want my clients to definitely understand the communications that are coming back. And I would want my clients to know that if there are multiple agents involved,
[26:15] the information that those agents have is reliable and trustworthy,
[26:22] and that it is all seamless to me. Whether there's more than one agent involved or not, it's all seamless. I just need to know that.
[26:32] I need my clients to know that with the AI responses, they are getting quality outcomes.
[26:39] Nihkil Kassetty: Definitely. I mean, quality-wise, it will definitely be great.
[26:41] But again, it's not just about the quality, as we have discussed. Based on the other points, we also need to make sure there's always
[26:49] a human in the loop; that is something we need no matter how much our technology evolves. We definitely need that human touch, to at least have a bit of a personal sense of feeling.
[27:04] Pamela Isom: Okay, so now let's shift gears for a minute. Let's talk green finance.
[27:09] So tell me more about green finance. What's up with that, and where are we?
[27:16] Nihkil Kassetty: I would say green finance, again, green as in, let's just make sure what exactly it is. Green finance is definitely gaining more momentum, right?
[27:27] At its core, I would say it's about channeling capital toward projects that are environmentally sustainable: things like renewable energy, clean transportation, or climate-resilient infrastructure and tools that are being built.
[27:46] But when AI comes in, it is making all of this more measurable for us, with better tracking and also more accountability in general, right? Traditional
[28:00] ESG scoring often relies on company reports. There are different companies, and they give some reports based on what their overall contribution toward sustainability is and how they are maintaining it.
[28:14] So traditionally we have to rely on all the reports that the companies provide. But now, as artificial intelligence is coming up and gaining momentum, it can definitely analyze at a higher level.
[28:26] It can analyze satellite images, real-time emissions data, news sentiment, and even a lot of supply chain activity, which provides a clearer picture of a company's true impact.
[28:42] If a company is trying to build something, we want to really know how much impact it is having, not just on the economy, right? When a company is trying to do something, it also needs to make sure it takes responsibility toward the environment and overall sustainability.
[28:57] I can just give you a simple example. There's a firm called Satelligence which uses satellite data plus AI to detect illegal deforestation in different supply chains.
[29:12] It also easily catches issues that some companies don't disclose, because if they disclosed them it would be a problem for them, so they don't disclose them.
[29:22] That's pretty obvious. But with artificial intelligence, it's getting pretty easy that even if the company doesn't disclose something, it doesn't get lost somewhere.
[29:31] We can still find that data and see the overall impact that company is bringing in. This is also where financial institutions come in. I've seen some financial institutions use artificial intelligence to run climate stress tests. There are tools, like one called Jupiter Intelligence, that help banks model how rising sea levels or wildfires could affect real estate
[30:02] portfolios, because that is important as well, and also the insurance risks, because we have seen a lot of disasters in recent times, right, out west because of the fires and a lot of things.
[30:14] So it also provides estimates of how these are really affecting things here in the United States, where climate-related financial risk is becoming a bigger part of overall regulatory discussions in finance.
[30:32] I know I've said a lot, but in short I would like to say green finance isn't just a trend or a catchy word or a buzzword.
[30:43] With AI it's becoming more evidence-driven and also high-impact, so we can see how capital is being allocated and how it is really being used.
[30:55] Pamela Isom: Okay, so speaking of that, let's talk ESG. How is AI improving ESG assessments, for instance?
[31:07] Nihkil Kassetty: So, ESG assessments. If you look at ESG assessments, right, there are different kinds of assessments coming in here. As it says: environmental, social, and governance, as we see.
[31:20] Right.
[31:21] They evaluate companies more in terms of overall long-term sustainability and ethical practices.
[31:30] So traditionally, ESG assessments relied, as we were discussing before, on the static reports that the companies provide. And that can definitely be misleading, because we never know if it is correct or not unless we hire someone and again spend a lot of money to really understand if the given reports are right or wrong.
[31:52] So as we see, AI can again pull from thousands of sources. As I was mentioning, it can definitely tell you, from the news reports, or, in terms of finance, from the earnings calls and the regulatory filings, how the governance is really running.
[32:11] And it also raised some red flags there.
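As a toy illustration of the kind of red-flag scan being described, here is a hypothetical sketch that adjusts a company's self-reported ESG score using signals found in external text such as news or filings. The keyword list and penalty weights are invented for illustration; real ESG systems use trained NLP models plus the satellite and emissions data mentioned above.

```python
# Hypothetical sketch: adjust a self-reported ESG score using red-flag signals
# found in external text (news, filings, earnings calls). Keywords and penalty
# weights are invented; matching is naive substring search, for illustration.
RED_FLAGS = {
    "deforestation": 0.15,
    "spill": 0.10,
    "violation": 0.10,
    "fine": 0.05,
    "lawsuit": 0.05,
}

def adjusted_esg_score(self_reported: float, documents: list[str]) -> tuple[float, list[str]]:
    """Lower the self-reported score (0-1) for each red flag found; return the
    adjusted score plus the triggering flags, so the result stays auditable."""
    hits: list[str] = []
    penalty = 0.0
    for doc in documents:
        text = doc.lower()
        for keyword, weight in RED_FLAGS.items():
            if keyword in text:
                hits.append(keyword)
                penalty += weight
    return max(0.0, self_reported - penalty), sorted(set(hits))

if __name__ == "__main__":
    docs = [
        "Regulator issues fine over emissions reporting violation.",
        "Satellite analysis suggests deforestation near supplier sites.",
    ]
    score, flags = adjusted_esg_score(0.82, docs)
    print(score, flags)  # the "hidden" signals pull the polished number down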
[32:14] Pamela Isom: So you think that AI then is helping with the hidden, I often call it hidden insights.
[32:21] So AI is helping with the insights and providing deeper insights when it comes to ESG and some of those ESG reports.
[32:31] Nihkil Kassetty: Yeah, definitely. I love that when you call it like hidden insights because that's really where AI shines also in the ESG space. It also goes beyond just the polished reports and find signals that might otherwise be more hidden or buried.
[32:48] So it is just bringing out that hidden things which we can't really see or evolve unless we get. We spend more money,
[32:56] more workforce to really understand if the, the data that is coming out is right or wrong.
[33:02] Pamela Isom: So, I'm from the Department of Energy, so I did a lot of work around energy and climate, and around understanding, for instance, using AI to understand, from a subsurface perspective,
[33:17] where it makes sense to store carbon. Right. So where does it make sense to store things in a safe way, from an environmental perspective?
[33:29] So if you find contaminants in the air, first of all you use it to discern and detect that these contaminants are in the atmosphere. But then what are you going to do with them if you're able to draw them out, as an example?
[33:43] So when I think of green financing, I think of work to support activities like that. Right. So financial support for activities like that to keep the environment safe,
[33:56] to address the climate crisis. And so that's what I heard you say. I'm just kind of reiterating what you said to point out that it is important. And so we need to continue to ensure that these types of activities occur.
[34:13] And when we get to the ESG discussion,
[34:16] I mean you did give good examples because what we need is for the assessments to be accurate,
[34:23] we need them to be trustworthy, we need that data to be reliable, and we need it in a timely fashion.
[34:30] And so this helps with all of that,
[34:33] which are points that you iterated there.
[34:38] Nihkil Kassetty: Exactly.
[34:40] And I think that's where financing really becomes a force multiplier. Definitely. It's not just about capital. Right. It's about channeling how this capital is being put into the right activities.
[34:53] So as you are mentioning, whether it can be carbon capture or renewable energy or like other climate resilient infrastructure, these are just kinds of initiatives that protect the overall environment and helps us with the overall climate crisis.
[35:08] Because I haven't previously seen so many climate-related or sustainability-related activities, or people focusing so much on it, as has been happening in recent years.
[35:19] I feel there's really something happening and we really, really need to work on it. And that is where AI can also help in a smarter and more targeted way, by deploying this financial support and identifying where the impact is real, where the risk is high, and where we need to act fast and make sure it is really being capitalized on.
[35:45] Pamela Isom: Well,
[35:46] yeah, yeah, I appreciate that. Okay, so let me, let's talk about the regulatory challenges.
[35:53] I was in a discussion with a client maybe a couple months back and we were talking about how regulatory stipulations are a burden, right? So they were saying that,
[36:11] yeah, but it's a burden like we want to open up to,
[36:14] to the government for instance,
[36:16] and tell the government what's going on. So I was dealing with someone about the Salt Typhoon crisis, right, the Salt Typhoon incident, and they were telling me that we want to be more open, we want to be more transparent with the government, we want to be more transparent with other companies that are experiencing these problems.
[36:39] But we are concerned that it's just going to turn into more regulation,
[36:44] more stipulations that's going to weigh us down.
[36:48] And so I'm sitting here going, okay, we need the insight, we need the intel on what you're experiencing,
[36:57] we need you to talk about it, we need you to get with other like companies and discuss this so we can come up with
[37:05] solutions that are repeatable, sustainable.
[37:09] We need this. So I was sitting back, I'm a problem solver, right? So I'm sitting back and I'm trying to figure out, okay, well what can we do to promote the sharing.
[37:18] And so I ended up writing about it, right? And I did an op ed on it.
[37:24] But I said all that to ask a question. So we've got the advancement of AI capabilities, and I want to know what new regulatory challenges, and they can be ethical challenges, you anticipate in financial services.
[37:42] And I kind of said that to kind of lay some context in case that would help with your thinking there.
[37:48] But that is something that I personally am looking into is how do we make governance leaner so that people will want to open up and share what's really going on.
[37:59] But tell me your perspective. With the rapid advancement of AI capabilities,
[38:05] what new ethical and/or regulatory challenges do you anticipate in financial services?
[38:13] Nihkil Kassetty: That's actually a broad topic but we can try to make it a bit shorter and see what we can discuss here.
[38:21] Nihkil Kassetty: I would say, with AI coming into the picture and all the industries trying to implement AI into their systems, we have to make sure explainability works at scale, and think about how we are scaling it.
[38:36] So as financial institutions deploy more complex and multi-agent AI systems,
[38:42] one thing is, it definitely becomes harder to trace who made the decision and why.
[38:47] So when something backfires, how do we know which agent at which step was really responsible? It has to become better and better, and we have to get more traceability here.
[39:01] Otherwise we don't even know. Let's say at my work, if I make a mistake, when we trace back it's known that, okay, this person made some change.
[39:12] It might be, hey, can you take a look, we are seeing some issue or something. But what exactly happens when AI agents work, like 10 or 20 AI agents?
[39:20] We don't know.
[39:22] It gets hard to trace what exactly is happening. And that is one thing, and that is where regulators will demand more transparent and auditable AI pipelines, especially for the more important things like lending, insurance, and fraud; that matters more than anything in general.
[39:40] In finance, accountability is the most important aspect as well. This is just my opinion, but right now it's really unclear who is liable when AI makes a mistake.
[39:52] Right.
[39:52] So we don't really know what to say. As a customer, what do we say when AI makes a mistake? Or, for the system as a whole, is the mistake from the bank, is the mistake from the vendor, or is the mistake from the developer?
[40:05] In fact, we don't know, because everyone wants to get AI agents into use, but we don't know what exactly is happening there. So that is where financial regulators are trying to push for clearer lines of responsibility, especially under the different frameworks that we have,
[40:23] like the EU AI Act, where they want more responsibility in general. So that is the most important thing. And another thing I would talk about is the ethical challenges.
[40:36] I think you already mentioned it multiple times: when AI agents start interacting without humans in the loop, who in fact monitors the ethics of that dialogue? Right.
[40:45] So we have to make sure that new standards are put into place, and that autonomy boundaries are there, so that communication protocols are defined and anything that happens gets escalated to humans, so that they can have oversight on top of it.
[41:06] Pamela Isom: Do you think that there would ever be an agent that is the lead agent, where things get escalated to this AI agent and it never makes it to the human?
[41:21] Do you think that we are getting there? Because I can see the traversal where agents are collaborating. I always say orchestration, right? So there's this primary orchestrator, this orchestrator agent, and this orchestrator is saying which agent addresses this situation, which agent addresses that.
[41:42] And then this agent brings all the information back together and then decides whether there should be an escalation to a human or whether it is enough to just move forward without that.
[41:58] Do you think we're headed there or would you say agent always?
[42:05] Nihkil Kassetty: That's really great.
[42:07] I would say that's a very good thought-provoking question, because we are actually almost there. We are already doing it. Even now I see
[42:16] multi-agent systems where there are a lot of agents: a lead agent takes care of the main task and tries to distribute it into subtasks to agents who can work individually, more asynchronously, and then push their results back to the lead agent.
[42:37] This is where there is a possibility that, if the lead agent becomes too autonomous, it might resolve or even suppress an error before it ever gets to the human side.
[42:48] We don't even know. And all we see at the end is that the lead agent tries to resolve everything. It just tries to give the result that we asked for.
[43:00] And at that point, if we want to evaluate everything again, we don't need the agents again, right? So we have to spend more time to evaluate whether what's coming out is right,
[43:10] or whether there is a mistake or something. And that is hard to really know.
[43:14] So we are already getting there.
[43:17] I think the next frontier should be building logic that's more explainable, at the same time auditable, and also grounded in human values. Some things are fine to be done completely by agents, small things like trip planning, or, hey, just choose a restaurant and
[43:37] order a food item from that restaurant or something. Those smaller things are fine, but for the bigger things I definitely feel that we need more explainability, auditability, and more consideration of human emotions as well.
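To make the orchestration pattern being described concrete, here is a minimal sketch of a lead agent that farms out subtasks and decides whether to escalate to a human. The risk rules, task names, and worker functions are hypothetical illustrations, not any production framework; the point is that the escalation decision and its reasons stay visible and auditable.

```python
# Hypothetical sketch: a lead "orchestrator" that distributes subtasks, gathers
# results, and escalates to a human when confidence is low or the action is
# high-stakes. Names and thresholds are illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SubtaskResult:
    name: str
    answer: str
    confidence: float      # 0-1, reported by the worker agent
    high_stakes: bool      # e.g., lending or large-payment decisions

def orchestrate(subtasks: dict[str, Callable[[], SubtaskResult]],
                min_confidence: float = 0.8) -> dict:
    """Run worker agents, then decide: auto-complete or escalate to a human."""
    results = [run() for run in subtasks.values()]
    needs_human = [r for r in results
                   if r.high_stakes or r.confidence < min_confidence]
    return {
        "results": results,
        "escalate_to_human": bool(needs_human),
        "reasons": [r.name for r in needs_human],  # audit trail for oversight
    }

if __name__ == "__main__":
    # Two toy worker agents: one routine, one high-stakes.
    workers = {
        "fraud_check": lambda: SubtaskResult("fraud_check", "no anomaly", 0.95, False),
        "loan_offer": lambda: SubtaskResult("loan_offer", "approve $5,000", 0.90, True),
    }
    decision = orchestrate(workers)
    print(decision["escalate_to_human"], decision["reasons"])  # True ['loan_offer']
```

In this sketch the lead agent cannot quietly "suppress" a problem on the high-stakes path: anything marked high-stakes or low-confidence is routed to a person along with the reason, which is one way to keep the human in the loop that both speakers call for.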
[43:52] Pamela Isom: Yeah, I agree,
[43:54] I agree. I feel like this is an area that we need to tap into more.
[44:03] I know we're excited about agents and the swarming concept and the agents talking to each other and the more automation, the better.
[44:13] I get all of that. But I do think that there still is great opportunity for applied ethical principles and evolving, even those. Because as we talked about on here, these agents are now talking, communicating with one another.
[44:29] And so we need to be mindful of what that can turn into and permeate.
[44:37] Nihkil Kassetty: Yeah.
[44:38] Pamela Isom: And then you think about financial services and I asked earlier,
[44:44] can we trust this?
[44:46] And so that's what we have to work on: how do we continuously monitor so that we build and maintain trust? And that only gets amplified, because now we have agents talking to other agents.
[45:03] So responsibility is going to be on that lead agent.
[45:08] Nihkil Kassetty: Right?
[45:09] Yeah,
[45:10] Yeah, exactly. Right. I mean, if that's happening, that's a good way, as I said, for smaller tasks; it's fine. But for something where a human,
[45:22] a human mindset, is definitely required to really be put into it,
[45:26] even for that kind of task, if the lead agent is trying to do everything, that's where it becomes a problem and...
[45:34] Pamela Isom: Where it can turn into drift.
[45:36] Nihkil Kassetty: Yeah.
[45:37] Pamela Isom: Where drift can get like really serious.
[45:40] Nihkil Kassetty: That's right.
[45:41] Pamela Isom: So this has been a great conversation. I'm not sure if there's anything else you want to talk about before you share your words of wisdom or provide a call to action for me and my listeners.
[45:53] And I think you've done a great job of helping to convey why ethics is so important.
[46:00] We have given examples of why and how ethical practices can be applied in a business's code of ethics. Right. So why that's so important and how we want to make sure that that's infused in all that we do.
[46:15] So then my last question to you is: can you share words of wisdom and/or a call to action?
[46:22] Nihkil Kassetty: That's a good thing.
[46:24] I would say, as we think about the future of finance here, right, I truly believe AI gives us an opportunity to not just optimize what we have, but also to reimagine what's possible.
[46:38] And I always feel that imagination is the biggest thing. Until you imagine, you don't really know what to build or what you have to come up with in the future.
[46:48] So from making banking accessible to the last mile, to people everywhere, even in rural places where there is no real access to the technology, to helping us invest with both intelligence and more of a conscience,
[47:03] I see that there's a lot of potential,
[47:07] which is also very much incredible here. And at the same time, I would say, be more responsible; it's also a responsibility to make sure this power is guided by values. We definitely need to have more fairness and, again, transparency.
[47:24] And we also need to make sure everything is inclusive, that we are taking care of everything that we are doing and also taking care of the people in whatever we do.
[47:33] At the end of the day, what I would say is, it's not just about what AI can do, or what we choose to do with it. It's more that, if we get this right,
[47:43] we are building smarter systems, and we are also building a more human financial future, which can also help bring about a better world. It can be AI today, but it can be something else tomorrow.
[48:00] Because there are a lot of things coming with quantum as well now; I think that's going to be the next biggest thing, and people are probably going to flock to it in the future.
[48:09] So things keep coming. It's not just about AI; it's about how we are serving people and how we are trying to make people's lives better. That is always the thing that we have to consider.
[48:21] And I really appreciate you having me here today. I really enjoyed it.
[48:26] Pamela Isom: Well, thank you for being here.
[48:28] Nihkil Kassetty: Yeah.
[48:29] Pamela Isom: Okay, so I, I appreciate.