Trading Tomorrow - Navigating Trends in Capital Markets
Welcome to the fascinating world of 'Trading Tomorrow - Navigating Trends in Capital Markets,' where finance, cutting-edge technology, and foresight intersect. In each episode, we embark on a journey to unravel the latest trends propelling the finance industry into the future. Join us as we dissect how technological advancements and market trends unite, shaping the strategies that businesses, investors, and financial experts rely on.
From the inner workings of AI and ML to the transformative power of blockchain technology, our host, James Jockle of Numerix, will guide you through captivating conversations with visionaries who are not only observing the future but actively shaping it.
AI, Risk, and Regulation: How Institutions Are Responding
Artificial intelligence is reshaping how financial institutions manage risk, compliance, and data, and regulators are moving just as fast. As AI becomes embedded in core decision-making systems, firms are navigating new expectations around explainability, privacy, accountability, and trust.
In this episode of Trading Tomorrow – Navigating Trends in Capital Markets, attorney Maryam Meseha, founding partner and co-chair of Privacy and Data Security at Pierson Ferdinand, explains how regulatory frameworks, board-level governance, and evolving data protection requirements are influencing AI adoption inside financial institutions.
It's a wide-ranging look at how financial institutions are balancing innovation with oversight as AI accelerates, and one you won't want to miss.
Speaker: Welcome to Trading Tomorrow, Navigating Trends in Capital Markets, the podcast where we take a deep dive into the technologies reshaping the world of capital markets. I'm your host, Jim Jockle, a veteran of the finance industry with a passion for the complexities of financial technologies and market trends. In each episode, we'll explore the cutting-edge trends, tools, and strategies driving today's financial landscapes and paving the way for the future. With the finance industry at a pivotal point, influenced by groundbreaking innovations, it's more crucial than ever to understand how these technological advancements interact with market dynamics. Regulators are responding with new frameworks, while institutions are under pressure to balance innovation with governance. To help us unpack how these forces are shaping finance, we're joined by Maryam Meseha, founding partner and co-chair of Privacy and Data Security at Pierson Ferdinand. Maryam is an experienced data privacy and cybersecurity attorney who advises clients on the responsible use of technology, data protection, and risk management. At Pierson Ferdinand, a tech-enabled law firm, she focuses on cybersecurity, data governance, AI, and regulatory compliance. Before founding the firm, Maryam co-chaired both the cybersecurity and data privacy and the crisis and risk management practice groups at major regional law firms, where she guided organizations through complex privacy litigation and incident response matters. Maryam, thank you so much for joining us today.
Speaker 1: Oh, thank you so much for having me, Jim. I'm really excited to get to talk to you.
Speaker: When you look across the financial industry right now, what's standing out to you about how firms are actually approaching AI governance?
Speaker 1: It's an exciting time. AI is finally becoming a board-level conversation. Financial institutions are realizing that AI touches risk, compliance, operations, and reputation. So the more mature firms are creating cross-functional AI governance committees that often include legal, compliance, and data science leaders, along with other functions across the enterprise. The shift we're seeing is from "can we use this?" to "should we, and how do we use it responsibly?" That cultural evolution is important because it determines whether AI adds long-term value or just ends up creating new blind spots for risk.
Speaker: So this is something I've never said before, but the regulatory side seems to be moving just as fast as the technology. What changes or proposals are having the biggest real-world impact so far?
Speaker 1: Yeah, we're seeing some really significant movement on the regulatory front. For example, the SEC's cyber disclosure rules that came out about two years ago have already changed how financial institutions approach incident response and board reporting, and, as a byproduct, the technologies they use internally for data privacy and AI. The other significant change of late is the passage and implementation of the EU AI Act, which is influencing at a global level what best practices look like when deploying AI functionalities. Even US firms that aren't in Europe and don't touch that space are using the EU AI Act as a benchmark, even though they're not technically bound by it. So what we're seeing is really a movement toward accountability by design, which means that companies must document their risk assessments and their mitigation steps before deploying certain technologies. It's no longer about being compliant reactively; it's about proving that you've embedded safeguards proactively.
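To make "accountability by design" concrete, here is a minimal sketch of what a machine-readable, pre-deployment risk assessment record might look like. The field names, risk tiers, and sign-off rule are hypothetical illustrations, not terminology from the EU AI Act or any specific regulator.

```python
# Minimal sketch of a pre-deployment AI risk assessment record, kept as a
# versioned artifact so a firm can show its safeguards existed before launch.
# All field names and tiers are hypothetical, not regulatory terminology.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskAssessment:
    model_name: str
    use_case: str
    risk_tier: str                      # e.g. "minimal", "limited", "high"
    assessed_on: date
    identified_risks: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)
    approved_by: str = ""               # accountable human owner

    def ready_to_deploy(self) -> bool:
        # Deployment is blocked until every identified risk has a documented
        # mitigation and a named human has signed off.
        return (len(self.mitigations) >= len(self.identified_risks)
                and bool(self.approved_by))

assessment = RiskAssessment(
    model_name="credit-screening-v2",
    use_case="pre-qualification of loan applicants",
    risk_tier="high",
    assessed_on=date(2024, 5, 1),
    identified_risks=["proxy bias in training data", "model drift"],
    mitigations=["quarterly fairness audit", "drift monitoring with alerts"],
    approved_by="chief.risk.officer@example.com",
)
print(assessment.ready_to_deploy())  # True
```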
Speaker: So the technology is moving very quickly and regulators are moving very quickly, but when you have departments like legal and cybersecurity involved, how are regulations, as well as internal stakeholders, impacting the development and rollout of new tools?
Speaker 1: That's a great question. The frameworks themselves are forcing firms to treat AI model development just like any product development, complete with documentation, testing, and life-cycle oversight. It's reshaping procurement, for instance: vendors that can't demonstrate responsible data practices are getting screened out early, before they even engage with internal stakeholders. So the governance framework isn't about slowing innovation; it's about putting up guardrails that allow entities to turn responsible deployment into a market differentiator.
Speaker: We're hearing more about AI being built directly into trading and decision-making systems. From a governance standpoint, what kind of oversight do regulators really want to see here?
Speaker 1: Regulators want to see traceability, and what I mean by that is a clear record of how decisions are being made, how the functionalities are being tested, and ultimately corrected where they need to be. They're less interested in banning AI outright than in accountability. Who's validating the models being used? Is bias being monitored over time? What human checks exist before major actions are taken through the AI model under consideration? Financial regulators specifically are taking lessons from other industries like aviation and healthcare, where automation has to be explainable and have a human fallback.
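That traceability idea can be sketched as a decision log with a human gate in front of major actions. The following is an illustrative shape only, assuming a hypothetical trading-signal model; the threshold, field names, and roles are invented for the example.

```python
# Illustrative sketch: every model-driven decision is logged with enough
# context to reconstruct it later, and "major" actions require a human
# sign-off before execution. All names and thresholds are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = []
MAJOR_ACTION_THRESHOLD = 1_000_000  # notional above which a human must approve

def record_decision(model_version, inputs, output, validated_by, human_approver=None):
    notional = inputs.get("notional", 0)
    if notional >= MAJOR_ACTION_THRESHOLD and human_approver is None:
        raise PermissionError("major action requires human approval before execution")
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the inputs so the exact decision context is reproducible
        # without storing sensitive raw data in the log itself.
        "inputs_sha256": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "validated_by": validated_by,       # who validated the model itself
        "human_approver": human_approver,   # who approved this specific action
    }
    AUDIT_LOG.append(entry)
    return entry

record_decision(
    model_version="signal-model-3.2",
    inputs={"instrument": "XYZ", "notional": 2_500_000},
    output="reduce exposure",
    validated_by="model-risk-team",
    human_approver="desk.head@example.com",
)
```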
Speaker: Now, you mentioned the EU AI Act and the work the SEC is doing, but not all countries are necessarily playing by the same rules, and in some ways there is a competitive arms race going on around the introduction and use of AI. Do you see some of these frameworks potentially creating a competitive disadvantage for institutions in some countries and a competitive advantage in others?
Speaker 1: That's a great question, and it can be overwhelming for institutions to keep track of which regulations they might or might not come under. What I've seen is that privacy and compliance used to run on parallel tracks; now they have to converge. So the challenge is identifying which laws apply to my use of the data, or to the AI implementation I'm looking to roll out enterprise-wide. It's about balancing good privacy hygiene with compliance and marrying the two, rather than keeping them separate as we historically did.
Speaker: A couple of years ago, I was doing some research on the derivatives market, and I had a great database in which I was looking at financial institutions across the globe. One of the things I found interesting was that while the overarching institution, a diversified global financial institution domiciled here in the United States, was under Basel III rules, some of its entities in other countries were under different regulatory regimes. And the fascinating thing, when I was looking at their derivatives holdings, was that their derivatives were all domiciled in a country in South America that was under Basel II standards. So the question is, given the way countries are implementing different rules, is there an opportunity for AI regulatory arbitrage?
Speaker 1: Can you clarify what you mean by arbitrage?
Speaker: Well, if there are data privacy rules here in, say, the US or the EU, but another country doesn't have those kinds of data privacy restrictions, could an organization domicile something in one of its entities in that other country, with a different set of guardrails, a different set of rules, and perhaps different advantages than those here in the US or in the EU?
Speaker 1: Yeah, that's a great point. I have not seen that unfold practically, because the reality is that the enterprises and institutions we're talking about touch multiple jurisdictions. And keep in mind, it's not about where the enterprise itself is located or domiciled; it's about the data flows. I can have a US institution that is domiciled here in the United States and squarely does business here, but is taking data from European residents, and the argument is that it would come under the GDPR in that sense. So unless you're a business that is siloed within a specific jurisdiction, it becomes very overwhelming, like I said, to keep track of the patchwork of regulations that might apply to you. So what's the solution? How do we handle that? The way we like to handle it is to standardize by what I like to call trust by default. What does that mean? It means we try to align our practices and standards with the strictest standards we come under, whether that's the GDPR, the EU AI Act, what have you. When you benchmark against the most stringent standards, almost by default, more likely than not, you've covered the less stringent regulations in the jurisdictions where you have to comply. If you think about the GDPR, a lot of people call it the gold standard for data ethics. If you set clear rules around the GDPR, even though you're not technically bound by it, I think you're way ahead of the game: clear rules around consent, for instance, or data minimization principles when rolling out certain technologies, how you handle user rights, how you handle consents. Many enterprises voluntarily opt in to those principles, even though, from a strict legal perspective, they might not come under them.
Speaker: Everyone talks about balancing innovation with compliance, but in practice, what makes that balance either work or completely fall apart?
Speaker 1: That's a good question. It really depends on which of your internal stakeholders you bring to the table and what governance frameworks you're deploying to put guardrails around the system you're considering. What makes it work versus not work is how I'm approaching the technology. Am I thinking about privacy at the forefront, or do I roll the technology out and think about privacy later? In my experience, the companies that are behind the game are the ones that implement the technology and ask questions later, without the proper framework, the proper policies, what have you, acting as guardrails, so that when they do deploy the technology, they understand the risk and can comply with whatever legal regimes they come under.
Speaker: On the customer side, trust is everything. I don't have a single conversation about AI without saying the word trust. How are financial institutions trying to weave compliance and data protection into the digital experience so it feels seamless and not restrictive?
Speaker 1: The things we've been talking about throughout this podcast can be couched under one term: privacy by design. The best experiences consumers have are with technologies deployed with privacy at the forefront. Not necessarily calling attention to it, but with privacy embedded in the digital experience the consumer is having. If you think about the way you interact with certain banking institutions, for example, they have integrated real-time consent dashboards and data-use explanations at the points in your interaction with the website where data will be pulled from you. They adopt authentication methods that feel secure but not intrusive or burdensome to the user experience. From our perspective, transparency builds the trust, and trust drives the adoption of certain technologies. That's really the balance these institutions are trying to master.
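As a minimal sketch of the consent gate that would sit behind such a dashboard, the following checks recorded consent before any processing and reuses the same record to generate the user-facing explanation. The purposes, wording, and default-deny rule are invented for illustration.

```python
# Minimal sketch of privacy-by-design as code: data use is checked against
# recorded consent *before* processing, and the user-facing explanation is
# generated from the same record. Purposes and wording are hypothetical.
CONSENT_RECORDS = {
    "user-123": {"fraud_detection": True, "marketing_analytics": False},
}

DATA_USE_EXPLANATIONS = {
    "fraud_detection": "We use your transaction history to spot suspicious activity.",
    "marketing_analytics": "We analyze your activity to personalize offers.",
}

def process_data(user_id: str, purpose: str) -> dict:
    consents = CONSENT_RECORDS.get(user_id, {})
    if not consents.get(purpose, False):  # default deny: no record, no consent
        return {"allowed": False, "reason": f"No consent on file for '{purpose}'."}
    return {"allowed": True, "explanation": DATA_USE_EXPLANATIONS.get(purpose, "")}

print(process_data("user-123", "fraud_detection"))
print(process_data("user-123", "marketing_analytics"))
```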
Speaker: One thing that always comes up is algorithmic bias. And part of that, even though it's not popular to say anymore, is DEI, and some firms are dropping the D, others the I, some the E, who knows. But with algorithmic bias, how are regulators and the firms you work with trying to get that right, especially in financial decision-making, in things like lending?
Speaker 1: That's a very real consideration, especially for financial institutions. It all starts with data integrity: biased inputs create biased outputs. So regulators expect institutions to audit the data and the models they're implementing, not just at launch, but continuously throughout the life cycle of the product. Then it's having that human in the loop, vetting the outputs against the data being ingested by the model, making sure it's minimally biased or has accounted for biases that might exist in the context where the model is being used. I've seen financial institutions in particular invest in what we call explainable AI: tools that can translate complex models into human-readable logic, explaining the decision in almost plain language. That's particularly essential when it comes to the lending and credit decisions you mentioned in your question. From a regulator's perspective, the more you can explain why, the stronger your position is, not just with regulators, but with consumers as well.
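To show the flavor of "human-readable logic" for a credit decision, here is a minimal sketch using a simple linear model, where each feature's signed contribution to the score doubles as a ranked reason code. The feature names, toy data, and approve/decline cutoff are all hypothetical; a real lender would use a validated model and regulator-approved adverse-action codes.

```python
# Minimal sketch of explainable-AI "reason codes" for a credit decision.
# Feature names, training data, and the cutoff are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["debt_to_income", "utilization", "years_of_history", "recent_delinquencies"]

# Toy training data: rows are applicants, label 1 = repaid the loan.
X = np.array([
    [0.20, 0.30, 10, 0],
    [0.60, 0.90,  1, 3],
    [0.30, 0.40,  7, 0],
    [0.70, 0.80,  2, 2],
    [0.25, 0.20, 12, 0],
    [0.65, 0.95,  1, 4],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

def explain(applicant):
    """Return the decision plus each feature's signed contribution to the score."""
    contributions = model.coef_[0] * applicant          # per-feature logit contribution
    score = contributions.sum() + model.intercept_[0]   # full logit
    decision = "approve" if score > 0 else "decline"
    # Rank features by how strongly they pushed the score down (reason codes).
    reasons = sorted(zip(FEATURES, contributions), key=lambda fc: fc[1])
    return decision, reasons

decision, reasons = explain(np.array([0.55, 0.85, 2, 1]))
print(decision)
for name, contribution in reasons:
    print(f"  {name}: {contribution:+.2f}")
```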
Speaker: I was having an interesting conversation last week. We were hosting a conference with AWS, and it seems the sentiment now with some of these AI tools is to almost treat them like an intern, right? An intern's going to make mistakes; it's not going to be the end-all, be-all. And that becomes a point of human intervention. Is that something you're seeing in the way tools are being deployed, with that human overlay? And at some point, when does the intern become the expert?
Speaker 1: Yes, I've heard that sentiment expressed in the past, and what I always caution is that the buck stops with you. When we speak about risk, when we speak about potential avenues of liability, you can't really blame the AI for getting it wrong. The default question is always going to be, well, where were you in that process? So yes, can we develop models that are exponentially faster and, I'd say, better than the human input? Absolutely. But there's always going to have to be that human oversight to ensure the outputs are ethical, unbiased, and accurate. So although the models themselves can lend a certain level of expertise, which affords us the business reason for implementing them and gives us that ROI, from not just a regulatory perspective but a practical and a trust perspective, we always have to have that human in the loop, vetting the entire life cycle to ensure the outputs are what we want them to be.
Speaker: What would you say is the next frontier in AI governance and data protection that you think the industry perhaps isn't paying enough attention to yet?
Speaker 1: We're seeing more and more of an uptick in supply chain risk, especially when it comes to AI. Financial institutions use hundreds of vendors, many of which embed AI into their platforms without even disclosing it. That creates hidden exposure, and regulators are starting to pay attention to those risks. I think we'll soon see more requirements around vendor-level AI attestations, especially from firms that aren't able to verify the models themselves, the models embedded inside other products they deploy. So the recommendation is always that institutions start mapping and managing their vendor relationships now, so they can stay ahead of the regulatory curve.
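One way to start that vendor mapping is a simple inventory that flags embedded AI with no attestation on file. A minimal sketch, with vendor names and fields invented for illustration:

```python
# Hypothetical sketch of a vendor AI inventory: record whether each vendor's
# product embeds AI and whether an attestation is on file, then surface the
# hidden exposure. Vendor names and fields are invented for illustration.
vendors = [
    {"name": "MarketDataCo",  "embeds_ai": True,  "attestation_on_file": True},
    {"name": "KYCPlatform",   "embeds_ai": True,  "attestation_on_file": False},
    {"name": "PrintServices", "embeds_ai": False, "attestation_on_file": False},
]

unverified = [v["name"] for v in vendors
              if v["embeds_ai"] and not v["attestation_on_file"]]
print("Vendors with embedded AI and no attestation:", unverified)
```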
Speaker: So, Maryam, unfortunately, we've made it to the final question of the podcast. We call it the trend draw; it's like a desert island question. If you could follow only one trend in AI regulation or data privacy over the next few years, what would it be, and why?
Speaker 1: I think the trend I'd follow most closely is the rise of trust tech, and I think it's apt given what we've been talking about for the last half hour or so. We've seen FinTech, we've seen EdTech and RegTech. Now we're starting to see technology that actually measures and operationalizes trust itself: technology that automates bias detection, creates real-time AI audit logs, and offers privacy dashboards that give enterprises and users real insight into what the technology is doing within their enterprise. In a world where customers and regulators expect proof of integrity and transparency, trust will become the ultimate metric of value, in my opinion.
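As one concrete example of the kind of check such "trust tech" might automate, here is a minimal demographic-parity sketch: compare approval rates across groups and alert when the gap exceeds a tolerance. The group labels, data, and 10% tolerance are hypothetical.

```python
# Minimal sketch of automated bias detection: compare approval rates across
# groups (demographic parity) and flag when the gap exceeds a tolerance.
# Group labels, decision data, and the 10% tolerance are hypothetical.
decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def approval_rate(group):
    subset = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in subset) / len(subset)

gap = abs(approval_rate("A") - approval_rate("B"))
if gap > 0.10:  # the tolerance would be set by policy, not hard-coded
    print(f"Bias alert: approval-rate gap of {gap:.0%} between groups")
```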
Speaker: Trust tech. Well, you heard it here first, a new term. Maryam, thank you so much for your insights. I really appreciate the work you're doing and your sharing it with us today.
Speaker 1: Thank you so much for having me, Jim. This was fun.
Speaker: Thanks so much for listening to today's episode. If you're enjoying Trading Tomorrow, Navigating Trends in Capital Markets, be sure to like, subscribe, and share, and we'll see you next time.