
Oyster Stew - A Broth of Financial Services Commentary and Insights
Our financial services industry insights come from on-the-ground experience successfully navigating complex business and regulatory challenges. Our insights are your opportunities.
The Realities of AI Implementation: What Every Firm Needs to Know
Powerful AI tools are rapidly transforming financial services, but beneath the promise of efficiency lurks significant risk. For firms eager to harness AI’s power while avoiding costly missteps, strategic planning and sound governance are non-negotiable. In Part 3 of our special Oyster Stew podcast series with Morgan Lewis, our panel of experts uncovers the practical realities facing firms implementing artificial intelligence solutions today:
· Are your policies and strategies setting you up for success, or a compliance nightmare?
· What are the legal repercussions when AI systems fail, like placing erroneous orders?
· How can strong governance, data quality, and human expertise drive successful AI adoption?
Oyster Consulting has the expertise, experience and licensed professionals you need, all under one roof. Follow us on LinkedIn to take advantage of our industry insights or subscribe to our monthly newsletter.
Does your firm need help now? Contact us today!
Welcome to a special Oyster Stew podcast series presented by CRC Oyster and Morgan Lewis. This week's episode is part three of the series. Make sure you listen to the other episodes, where you'll hear essential insights from industry veterans with decades of hands-on experience in wealth management, trading technology and SEC enforcement. If you'd like to learn more about how CRC Oyster and Morgan Lewis can help your firm, visit oysterllc.com and morganlewis.com.
Pete McAteer:With the evolution of AI and the vast frontier facing the financial services industry, Oyster Consulting and Morgan Lewis have partnered to bring to light some of the critical challenges, threats, risks and opportunities that we're all thinking and talking about. The team of experts joining me today are Carolyn Welshhans, a partner at Morgan Lewis in the Securities Enforcement and Litigation Group, where she advises and defends financial institutions, companies and their executives in investigations by the SEC and other financial regulators; Dan Garrett, a managing director at Oyster Consulting with 30 years of wealth management experience working at RIAs, broker-dealers and clearing and custody firms, running operations and technology groups; Jeff Gearheart, also a managing director at Oyster Consulting, with 35-plus years of capital markets experience in senior leadership roles at institutional broker-dealers. And me.
Pete McAteer:Pete McAteer, a managing director with 25 years in financial services, leading many of Oyster's management consulting engagements, which include firm strategic direction, platform strategy decisions and execution, with a focus on readiness and change management. Thank you for joining us. So, Dan, the first question is coming to you: have you encountered instances where AI deployments led to unintended consequences? What lessons were learned?
Dan Garrett:The one that's been in the papers and has been around for a while is somebody using AI and not verifying the information it provides back. We all know that AI hallucinates; it makes things up. That's not so much a bug as it is the model being creative, like a human brain. Sometimes it gives you creative things to say or talk about, which is great when you want it to be creative, and not great when you're looking for facts and presenting what you get out of AI as facts. So the recommendation we have is: trust, but verify anything that these generative AI models provide to you. One of the tips I like to give is to ask the AI model for its sources and double-check them. It can provide hyperlinks to sites; you can go there, look at them, and that's a great way to confirm that what it's providing to you is accurate. But the other thing I wanted to talk about is very specific to our industry: some stories I heard from financial advisors.
Dan Garrett:In our space there's been a plethora of AI agents being used for note-taking. Financial advisors use them to listen in on the phone calls they have with clients, which helps with note-taking and summarizing the call. They can chat with the agent afterwards and ask what was discussed and what the takeaways from the phone call were, and the generative AI will present back a nice summary of the call. It'll provide a to-do list you can take away, so there are potential time savings. What you hear a lot, and it might be some hype from some of these providers, is that this can save five to 10 hours of time for financial advisors: they don't have to take notes anymore, they don't need to transcribe notes, they don't need to provide notes to their sales assistant. The sales assistant can go in, talk to the chatbot about the conversation that was had and get the takeaways. So there is a potential opportunity, and I've talked to financial advisors who say, yeah, it's a game changer for them. It's been wonderful.
Dan Garrett:However, I talked to another group of financial advisors who said, no, it's an absolute nightmare. It's terrible, because right now what they have to do is get the transcript from the AI conversation, review it to make sure everything it says is correct (again, comparing their actual notes to what the AI produced), verify it, make changes to it and then provide it to compliance to be recorded. So how can something that is saving one group 10 hours a week be costing another group time because they're going through and reviewing it?
Dan Garrett:What I'll say is that it's all about adoption and the way some of these applications are put into place. So I put it out there as a warning: when you think about implementing generative AI and using it in different models and so forth, really think about the consequences for the process, the flow and the requirements around how you're using it. Now, some firms' compliance departments may not require their financial advisors to review, correct and store those notes, and others may. We can get into whether they should or not, but it's about implementation and then understanding the consequences of what happens there. So those are the two examples I wanted to point out.
Pete McAteer:Hey, Dan, just a quick clarification on that second piece. Do you think it has more to do with the firm's policies and procedures around managing the AI tools? I guess Carolyn might want to weigh in on this when we turn to her as well. I didn't want to step on toes, but it just feels like you could really create some onerous oversight and review if you didn't trust, and didn't have experience with, the tool set.
Dan Garrett:Absolutely correct. And we could get into an entire discussion around whether these recorded phone calls are admissible and whether they're things you should be storing in your books and records. Some firms argue that no, there isn't a transcription; the AI has simply learned about the call and you can talk to it about it. Yes, you can ask it to create a transcription, but only if you ask it to do that, right? At that point the transcription of the call exists, and then it should be stored, and you should make sure it's accurate. So there's a lot of gray there and different ways to think about it, but it absolutely comes back to policies and procedures. Thinking these things through before you run out and implement a system means thinking about the policies and procedures you're going to put in place and really asking: is this going to make things better or worse in terms of operational efficiency?
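To make the review-then-archive workflow Dan describes concrete, here is a minimal sketch in Python. The record fields, the approval flag and the archiving function are hypothetical illustrations, not a description of any particular vendor's product or any firm's required procedure:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CallRecord:
    client: str
    call_date: date
    ai_summary: str                  # produced by the note-taking agent
    ai_transcript: str               # generated only if requested
    advisor_approved: bool = False   # set True only after human review
    corrections: list = field(default_factory=list)

def archive_for_compliance(record: CallRecord, books_and_records: list) -> None:
    """Gate: only advisor-verified records reach the books-and-records store."""
    if not record.advisor_approved:
        raise ValueError("advisor must verify the AI output before it is archived")
    books_and_records.append(record)

# Usage: the advisor reviews the AI output, logs corrections, then approves.
store = []
rec = CallRecord("Client A", date(2025, 1, 15), "Discussed rebalancing.", "...")
rec.corrections.append("AI misstated the target allocation; corrected to 60/40")
rec.advisor_approved = True
archive_for_compliance(rec, store)
```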
Pete McAteer:Okay. So, Carolyn, I'll turn it over to you for your feedback. This is right up your alley now.
Carolyn Welshhans:People generally have been able to identify efficiencies and positive things that AI can provide to a business, including in the financial area. But at the same time, you've got to think about the regulatory requirements. What is this going to mean for our governance? What do we have to think through about how this fits into what we're already doing, or does it create a new obligation on our end that we didn't have to deal with before? And that's not to say you shouldn't adopt the AI if it makes sense for your business. It's just that you've got to think all of this through, think through the different regulatory regimes that might apply to your business, and then decide what you do as a result.
Pete McAteer:Okay, awesome. Thank you, Carolyn. Just a quick question, Jeff, in case you've thought about this or maybe seen something out there in this space: high-touch trading desks, where they're talking with client firms. Have you seen this rear its head in that space?
Jeff Gearheart:In high-touch trading desks? I would say not as much as on the algo market-making desks and the model-driven desks. That's really where you see heavy use of AI: heavy use of data, with AI to manage that data and make trading decisions. That's where it's really coming into play. The high-touch desk is still a lot of good old-fashioned voice communication, providing guidance to the clients and moving on from there.
Pete McAteer:And I guess those are already recorded lines, and the transcripts would just be additive to the existing policies and procedures, right?
Jeff Gearheart:Fair. When you think about a user coming into a high-touch desk, they're seeking guidance on bringing a large position into the market to liquidate it or accumulate it, or something of that nature. So they're looking for consultation, and I'm pretty sure they're still going to want to talk to their trader, or sales trader if you will.
Pete McAteer:So, Carolyn, what legal repercussions can firms face if AI systems fail or cause harm?
Carolyn Welshhans:So, just like Dan before, I'm going to pick one situation to focus on here, and I think the one that people have maybe thought about the most, or the worst-case scenario they've thought about, is AI hallucination when it comes to trading. What does that look like, what are those risks and what could result? I've thought about that, and I think the closest analogy is algorithmic trading. We've already seen that; it's in some ways a very close cousin. The SEC has brought cases there where there have allegedly been runaway algos, trading algorithms that didn't perform the way they were supposed to and resulted in, for example, a flood of orders going to the market. In those situations the SEC has brought cases against the broker-dealers involved under Rule 15c3-5 under the Securities Exchange Act of 1934.
Carolyn Welshhans:It's sometimes referred to as the Market Access Rule, and what it generally requires is that broker-dealers have some pretty specific types of controls in place. They come down to financial risk and regulatory risk controls designed with the intent of preventing what people in the past have referred to as a fat-finger error: somebody enters an order for a million dollars when they meant a dollar, or a million orders when they meant one, because they put in too many zeros. These controls are supposed to make sure that if an erroneous order would exceed, for example, a credit or capital threshold for that specific customer and for the broker-dealer itself, the order gets blocked. It doesn't ever get placed.
Carolyn Welshhans:So you can see how that's something that might be looked at if there were an algo that hallucinated and then placed a bunch of orders that run contrary to the financial risk model of a broker-dealer or its customers, for example, or something else about their trading. And again, like we were talking about a moment ago, that's not necessarily a reason not to adopt AI if it makes sense for your trading model and your business. I think it just means you've got to think about that sort of rule if it applies to you as a broker-dealer: have you thought about how algorithmic trading in the past, if you've done it, or even if you haven't, might now be implicated by AI under this sort of rule?
Carolyn Welshhans:How do you make sure that your automated controls are keeping up with, for example, generative AI that might be changing over time? Are you thinking about how to surveil those controls once you have them in place, so that you're comfortable you've got that control? I think that's one very specific example of the legal repercussions that could come about when we're talking about AI, trading and financial firms.
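As one way to picture the kind of pre-trade control Carolyn describes, here is a minimal sketch in Python of a credit-threshold check that blocks an erroneous order before it ever reaches the market. The data structures, limits and function names are hypothetical; a real Rule 15c3-5 control program is far more extensive:

```python
from dataclasses import dataclass

@dataclass
class Order:
    customer_id: str
    symbol: str
    quantity: int
    limit_price: float

# Hypothetical per-customer credit limits; a real control program would also
# enforce firm-wide capital thresholds and regulatory checks.
CREDIT_LIMITS = {"CUST-001": 1_000_000.00}

def pre_trade_check(order: Order, open_exposure: float) -> bool:
    """Return True if the order may be routed, False if it must be blocked."""
    notional = order.quantity * order.limit_price
    limit = CREDIT_LIMITS.get(order.customer_id, 0.0)
    # If the order would push the customer past their credit threshold,
    # block it so the erroneous ("fat finger") order never reaches the market.
    return open_exposure + notional <= limit

# A million-share order entered when 100 shares were intended gets blocked:
fat_finger = Order("CUST-001", "XYZ", 1_000_000, 50.00)
assert pre_trade_check(fat_finger, open_exposure=0.0) is False
```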
Pete McAteer:Terrific. Thank you, Carolyn. Jeff, I'm going to turn to you. Anything else to add there?
Jeff Gearheart:I think that is actually an excellent example. We do a lot of work around the market access rules, and we're well aware that there are a lot of large penalties and fines that can be imposed. That's just the regulatory aspect; then there are also the trading losses, the true financial losses you're incurring. It's a big deal, and when you're using these models, or AI to guide the models, things can go haywire pretty quickly. So you've got to have the right controls in place, not just on the credit and capital side but also the erroneous-order controls and the testing that's involved, which the rule actually requires, along with the certification. There's a lot firms have to do when you have direct market access and you're using trading models and AI to make decisions. So, excellent point.
Pete McAteer:So, Jeff, how can firms proactively identify and mitigate risks associated with AI in trading operations?
Jeff Gearheart:Thanks, Pete. I think there are lots of ways to answer this, and I'll give some specific examples, but all the core risks have an underlying theme: industry knowledge and expertise is essential. It's key to managing and mitigating the risks. In other words, artificial intelligence is great, but somebody needs to know what it's doing and be able to evaluate the results. I think it's going to become a larger problem when you talk about trading, operations and settlement functions.
Jeff Gearheart:It's not the glamour part of the industry, and that's where we're losing a lot of industry and institutional expertise. People are retiring or moving on, and, to be clear, nobody wants to go into the securities industry to be an operations professional; they're all looking at the sexy side of trading and model development, things like that. So you need to make sure you have the right people there. Key staff who understand the basics of the process and can evaluate the AI results, the trends, the data analysis, things of that nature, are essential. First and foremost, then, it's knowledgeable, well-trained industry professionals. Second, and this is where a lot of companies need to evolve and where I think we're seeing more work, firms need to have an AI framework that defines governance and accountability. Simply put, you need to make sure the company knows how AI is being used within the firm and that there's an approval process, so that people aren't just inserting it into the process and moving forward from there. Those are what I think are my priorities.
Jeff Gearheart:When you get into the specifics, such as model risk, the model could be producing incorrect output, so you need the right level of model validation in place, stress testing and, honestly, regular retraining, reviewing the results and making sure they're meeting your expectations. A couple of other risks I think are really key: data quality and integrity. That's been a big deal for me; I've been in this field for over 34 years, and data quality is key. These models can analyze huge amounts of data very quickly, but you'd better have regular, rigorous data cleaning: make sure the data is valid, make sure it's accurate, make sure it's not corrupted, those types of things.
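As a small illustration of the rigorous data cleaning Jeff recommends, here is a sketch of an automated data-quality gate for a trade feed, assuming a simple tabular layout; the column names and checks are illustrative only:

```python
import pandas as pd

def validate_trade_feed(df: pd.DataFrame) -> list:
    """Return a list of data-quality problems; an empty list means the feed passes."""
    problems = []
    required = {"symbol", "quantity", "price", "trade_date"}  # illustrative schema
    missing = required - set(df.columns)
    if missing:
        # Without the expected columns, the row-level checks below cannot run.
        return [f"missing columns: {sorted(missing)}"]
    if df["symbol"].isna().any():
        problems.append("rows with no symbol")
    if (df["quantity"] <= 0).any():
        problems.append("non-positive quantities")
    if (df["price"] <= 0).any():
        problems.append("non-positive prices")
    if df.duplicated().any():
        problems.append("duplicate rows")
    return problems

# Feeds that fail validation would be quarantined before any model sees them:
feed = pd.DataFrame({"symbol": ["XYZ"], "quantity": [100], "price": [50.0],
                     "trade_date": ["2025-01-15"]})
assert validate_trade_feed(feed) == []
```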
Jeff Gearheart:And then, when it comes to the use of AI for operational risk, you need to make sure there's transparency, there are audit trails on what it's doing, and there's some type of metrics you can use to review the results and make sure they're reasonable, along with an escalation process. The last thing I'll mention, even though there are probably a bunch of other risks you need to focus on, such as cybersecurity, is change management. We've all worked in large companies, and they get content doing things one way. Well, AI continues to learn and evolve, so you have to provide training and ongoing management of anything that changes in the process that could affect the models, and involve not just the technology team but the end users and the people who can actually evaluate the results. So there's a lot, I guess, to answer your question in terms of how you can mitigate the risk, but those are the keys for me.
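And as one way to picture the audit-trail and escalation point, a minimal sketch that records every AI output and flags low-confidence results for human review; the threshold, field names and logging destination are hypothetical:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_audit")

ESCALATION_THRESHOLD = 0.80  # hypothetical confidence floor for human review

def record_ai_output(model: str, inputs: dict, output: str, confidence: float) -> None:
    """Write an audit-trail entry for an AI output and escalate outliers."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "inputs": inputs,
        "output": output,
        "confidence": confidence,
    }
    # In production this would go to durable, tamper-evident storage,
    # not just an application log.
    log.info(json.dumps(entry))
    if confidence < ESCALATION_THRESHOLD:
        log.warning("escalating for human review: confidence %.2f", confidence)

record_ai_output("summarizer-v1", {"call_id": "123"}, "Client asked to rebalance.", 0.62)
```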
Pete McAteer:Yeah, thanks, Jeff. I agree. Much more to come; we're just getting through the door with this right now. Carolyn, anything to add on the trading operations side?
Carolyn Welshhans:I think what Jeff said was really thoughtful, and for me it helped clarify the thought that AI isn't plug and play. Obviously we've been talking about that, and I also think it's not necessarily correct to think of it as a substitute in a lot of the uses we've just been talking about. It might be an enhancement, it might make things better, but as Jeff was talking, it was clear that at each of the steps he described you still need the people, you still need the knowledge, whether that's in terms of oversight, or the training of the model, or thinking through what you really want it to be doing and the knowledge you want to impart to it. It's still a partnership with the people who have that knowledge and those contributions, and I think that might be a good way to think about it: not a substitute or plug and play, but an enhancement, if that is in fact what it would be for your business.
Pete McAteer:Yeah, the plug-and-play piece, as I see it, is where AI inserts itself into the middle of the analysis and digestion of large amounts of data, pulling together summary information that can be leveraged and considered, right? And it has to be considered by a human before it can be put to use.
Libby Hall:Thanks for joining us for this episode of our AI series with Morgan Lewis. We hope this conversation gave you new insights into how AI is shaping the future of financial services. This podcast series was recorded prior to the merger of Oyster Consulting and Compliance Risk Concepts. Be sure to subscribe so you don't miss upcoming episodes as we continue exploring the legal, compliance and operational impacts of AI. For more information about our experts and our services, visit our website at oysterllc.com.