Trading Tomorrow - Navigating Trends in Capital Markets

The AI ROI Debate in Capital Markets

Numerix

Artificial intelligence has quickly moved from experimentation to enterprise discussion across capital markets. But as the initial excitement fades, senior leaders are beginning to ask tougher questions.

Where is the measurable return? What actually scales? And how do firms move from pilot programs to real operational impact?

In this episode of Trading Tomorrow – Navigating Trends in Capital Markets, Jim Jockle speaks with Cubillas Ding, Director and Markets Insights Consultant at Celent, about the evolving debate around AI’s return on investment.

Setting The Stakes For AI ROI

SPEAKER_01

Welcome to Trading Tomorrow, Navigating Trends in Capital Markets, the podcast where we deep dive into technologies reshaping the world of capital markets. I'm your host, Jim Jockle, a veteran of the finance industry with a passion for the complexities of financial technologies and market trends. In each episode, we explore the cutting-edge trends, tools, and strategies driving today's financial landscapes and paving the way for the future. With the finance industry at a pivotal point, influenced by groundbreaking innovations, it's more crucial than ever to understand how these technological advancements interact with market dynamics. Artificial intelligence has been at the center of that conversation, but now the tone is shifting. Senior leadership is asking tougher questions. Where is the measurable return? What scales and what stalls? And how do firms move from experimentation to sustainable advantage? A recent Celent study titled AI Frontiers and the ROI Dilemma in Capital Markets takes a closer look at this shift and introduces a framework for thinking about AI investment zones, distinguishing between quick wins, operational scaling opportunities, strategic bets, and areas that may warrant containment. To help us unpack this evolving ROI debate around AI in capital markets, we're joined by Cubillas Ding, Research Director in Celent's Capital Markets Practice. Based in London, Cubillas specializes in global financial markets, securities IT strategy, asset management workflows, and enterprise risk management. His work focuses on how financial institutions implement technology at production scale, reliably, safely, and with measurable business impact. Cubillas, thank you so much for joining us today.

SPEAKER_00

Great to be here, Jim, and good to talk to you again.

SPEAKER_01

So in your research, you describe the industry as entering a zone of disillusionment. What does that look like in practice, and why do you use that phrase now?

SPEAKER_00

Well, I use the term zone of disillusionment in our research to reflect the point where AI in capital markets shifts from broad excitement to a state of harder, more grounded decision making. Over the past 12 to 18 months, we saw LLMs effectively launch to the world, so generative AI became ubiquitous. Anyone with a mobile phone, including the most senior executives, could experience what comes across as relatively impressive capabilities firsthand. That accessibility drove a sharp spike in expectations, and naturally firms assumed that the trajectory would continue, with rapid productivity gains and faster time to market. Now, when I say that firms are entering a zone of disillusionment pertaining to AI, in practice this isn't necessarily implying that firms are abandoning efforts. Instead, it's more about friction meeting reality. And the phrase is particularly relevant now because many firms are reaching the point where the easy wins and the demo effect have already been captured. Some firms are facing pilot fatigue, with plenty of proofs of concept but fewer scaled deployments. There may be a high level of ambiguity in how value is really demonstrated, so teams struggle to quantify payback beyond anecdotal time savings. And then you also have data and workflow constraints, where models are impressive on generic use cases, but integration into core front-to-back workflows, internal data sets, and real-life production controls becomes a real bottleneck. Legacy core systems and data quality may not be able to support these new generative and agentic AI demands. So really, the next phase will require more difficult internal conversations, specifically about where the economic payoff lies, which use cases, what operating models, and how long it's going to take to show up at scale.
But in my view, this recalibration is healthy: it separates experimentation from real industrialization in the longer term.

SPEAKER_01

So have other technologies in capital markets gone through similar phases like this before?

Parallels And What Makes AI Different

SPEAKER_00

Well, I would say yes and no to that question. Yes, because other technologies and the industry do go through similar cycles and phases of adoption. But I would also say no, because at a purely technological level, I can't think of one that's specific only to capital markets. With AI, you can draw parallels with past technology waves such as cloud migration, big data, process automation, and then blockchain and DLT, where similar adoption and disillusionment dynamics have played out. You typically start with initial excitement driven by visible demos and vendor narratives, followed by a more sober phase where core integration becomes more difficult, and that's where disillusionment sets in. Now, more broadly, there are other technology-led changes in the past 30 years in capital markets, such as the electronification of equity and then fixed income markets, algorithmic trading, and the current developments around accelerated settlement. But what I think is different about this AI wave is that, in my view, it comes down to what I would term the three U's: the ubiquity, the utility, and the uptake of the technology. It's just another level compared to prior waves of technology. So even though the pattern around AI adoption is similar, in terms of hype, hard integration, and then real scale, AI is different in the sense that its ubiquity, utility, and uptake make this less a capital markets technology upgrade and more a foundational operating-model shift altogether, and the possibilities are broader and more profound.

Where Firms Sit On The Maturity Curve

SPEAKER_01

I think one of the key differences, and you hit on this a little bit, is that DLT, blockchain, et cetera, don't change our lives, right? But every employee now has this on their phone and is interacting with it. So it almost seems as if the workforce is going to accelerate adoption, because at the end of the day, who really cares about blockchain? I'm going to come to work whether it's on a chain or not; what do I care? But here, today, I can interact with it, at every level of the institution. So now that we're deep into this first wave of experimentation, where do you think most firms realistically sit on that AI maturity curve?

Data Debt And Integration Bottlenecks

SPEAKER_00

Yeah, I think most are still in the early to ramping-up part of the AI curve. It's past the idea stage, and some firms have gone deeper into experimentation and are now building what I would term the foundational aspects required to scale, rather than having the capability to scale in a repeatable manner. So for the majority, the center of gravity is really around foundational enablement. You see firms putting in place the building blocks that make AI usable in a controlled enterprise setting. For example, data readiness: ensuring stronger data governance, lineage, quality, that kind of thing, toward an AI-ready data world rather than ad hoc extracts. You're seeing firms beginning to address core systems modernization, in terms of reducing dependence on brittle legacy platforms and batch processes, moving toward event-driven, STP-type workflows or fit-for-purpose wrapper APIs so that AI can actually sit inside real processes. And these often require rationalizing platforms and decommissioning duplicate systems altogether. On the frontier, there are pockets of genuine scale: a smaller set of firms moving from pilots to end-to-end, production-grade initiatives that are repeatable, where workflows are actually redesigned and adoption is more measurable. For firms in this phase, there are significant efforts to drive enterprise change, integration, and workflow embedding, in terms of integrating into the tools people already use: order management systems, execution management systems, CRM, research platforms if you're in the front office, or integrated development environments if you're a developer. All of these require stronger risk and control frameworks, along with a fundamental rethink of the operating model and skills required.
And many firms are not there yet in this process.

SPEAKER_01

So you mentioned data. It's one thing to drop a file or a PDF into an LLM and have it scan it and give you a result. It's another thing to say, search these thousands, which means the data needs to be vectorized; it has to be structured in a way that can be consumed and adjusted. How many flaws are companies experiencing in the way their infrastructure is set up? Would you argue that is almost the first barrier, that the reason a lot of these AI projects are failing is simply not having the right structure in place, whether it's the data structure or otherwise, to achieve success with these projects?

SPEAKER_00

I totally agree. Because of the siloed nature of how technology has developed over the years, or been merged and integrated, you have patchworks of different systems at different levels of maturity and capability, and also different data structures; even the same nomenclature for data may not mean the same thing. And AI requires the data to be of good quality, but you also have to be able to trust the data. If the human doesn't trust the data, how are they going to trust the decisions coming out of the AI?
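The retrieval problem Jim describes, searching thousands of documents rather than dropping one into an LLM, is what vectorization addresses: each document is turned into a numeric vector so a query can be matched by similarity. A minimal sketch, using a toy bag-of-words vector in place of a real embedding model (the document names and contents here are invented for illustration):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a term-frequency vector over lowercase tokens.
    # A real pipeline would use a trained embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, docs: dict, top_k: int = 2) -> list:
    # Rank document names by similarity of their vectors to the query vector.
    qv = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, embed(docs[d])), reverse=True)
    return ranked[:top_k]

docs = {
    "risk_report": "counterparty credit risk exposure limits",
    "trade_blotter": "executed trades prices and quantities",
    "policy_memo": "data governance lineage and quality policy",
}
print(search("data quality and lineage governance", docs, top_k=1))  # → ['policy_memo']
```

The point of the sketch is the shape of the pipeline, embed then rank, not the toy scoring: at enterprise scale the hard part is exactly what the discussion flags, getting consistent, trusted data into the vectors in the first place.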

SPEAKER_01

So where is the skepticism showing up most? Is it about the technology, the AI itself? I go back 12 months and it was all about hallucinations. Or is it more about prioritization and, ultimately, discipline?

Expectations Versus Enterprise Reality

SPEAKER_00

Well, I think the skepticism is showing up less in the form of "the technology doesn't work" and more as a prioritization and execution problem, in terms of alignment, discipline, and realistic delivery expectations. As you alluded to earlier with the phone, consumer-grade tools have raised the bar. Delivery misalignment and unclear prioritization happen when consumer-grade experiences, say via your mobile phone, set the executive tone and expectations for how it should or can work. This is similar to when internet search first came into the world and everyone could use Google or Yahoo or what have you. Now everyone can use ChatGPT or Claude or Gemini or Perplexity. So the frustration tends to surface when there isn't realistic alignment between the CXO-level narrative, which could go along the lines of, oh, this will transform productivity quickly, versus the reality faced by business heads and senior and middle managers who are accountable for delivery. Those teams run into the unglamorous work: data access, workflow redesign, change management controls, legacy core systems integration, things like that. And that's where confidence can really dip if expectations aren't set properly, especially as firms try to reproduce a public-facing gen AI experience inside a regulated, permissioned, high-stakes environment where accuracy, provenance, and the like matter quite a lot, and where the last mile of integration and governance probably constitutes most of the work. That's where skepticism tends to build up if expectations aren't met.

Cost, Value, And Investment Zones

SPEAKER_01

And one other thing we haven't touched on is cost, right? It's one thing as a consumer, if you're playing with Claude or ChatGPT, asking fairly complex questions and interfacing with it. But the minute you start getting into complex things, you're hitting limits really quickly, and depending on what plan you have, it's another 20 bucks, and another 20 bucks. I would assume if you're going to do this at scale, cost has to come into the conversation. Are you hearing people say, this is great, but it's too expensive to do what we want to do?

SPEAKER_00

Yeah, I think the conversation needs to shift to balance value versus cost. You can talk about cost and say it's too expensive, but that's only part of the equation. You have to look at what value the AI actually delivers for the cost you're incurring. That's why we introduced the whole concept of AI investment zones in the study we did. At the moment, many firms are throwing resources, I wouldn't say indiscriminately, but without a lot of questions, at the challenge, without thinking about where the value sits versus the cost and how to structure that conversation.

SPEAKER_01

So let's stay on the four AI investment zones in your report. Can you explain them and what led you to structure the conversation in that way?

The Four AI Investment Zones

SPEAKER_00

Yeah, so we introduced the four AI investment zones because across many conversations with sell-side institutions, buy-side firms, and the vendor ecosystem, we repeatedly saw the same challenge: firms have an expanding inventory of potential AI use cases, but prioritization is often ad hoc and usually influenced by what teams hear at conferences, from consultants and vendors, and in the broader media news cycle. Now, the problem with that approach is that it makes it very hard to have the necessary, meaningful, and often difficult internal conversations across CXOs, business heads, senior and middle management, and risk and compliance teams about what to do next, why it matters, how to scope it, what you're actually building, what it costs, what the risk and control implications are, and finally, when you should expect payback. So the four investment zones are meant to be a common language and decisioning framework, one that helps firms separate different types of AI spend and recognize that ROI shows up through different building blocks rather than a single monolithic business case. We structured the conversation this way to help firms move from "we have a hundred ideas" to a disciplined portfolio view of what's foundational, what's near-term, what's transformational, and what's a strategic bet, and to make the necessary trade-offs and risk-return profiles more explicit. That's how we went down that route in terms of the structured conversations we think firms should have.
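As an illustration only, a portfolio view like the one described can be sketched as a simple classification over use cases. The zone labels echo the four zones named in the episode intro (quick wins, operational scaling, strategic bets, containment), but the decision rule and every number below are invented, not taken from the Celent report:

```python
from dataclasses import dataclass
from enum import Enum

class Zone(Enum):
    QUICK_WIN = "quick win"
    OPERATIONAL_SCALING = "operational scaling"
    STRATEGIC_BET = "strategic bet"
    CONTAINMENT = "containment"

@dataclass
class UseCase:
    name: str
    expected_value: float  # annualized benefit estimate, in $m (illustrative)
    cost: float            # build-plus-run cost estimate, in $m (illustrative)
    payback_months: int
    risk: str              # "low" or "high"

def classify(uc: UseCase) -> Zone:
    # A toy rule: high-risk bets are contained unless the upside clearly
    # exceeds the cost; low-risk cases split on payback horizon.
    if uc.risk == "high":
        return Zone.CONTAINMENT if uc.expected_value < uc.cost else Zone.STRATEGIC_BET
    return Zone.QUICK_WIN if uc.payback_months <= 12 else Zone.OPERATIONAL_SCALING

portfolio = [
    UseCase("meeting summarization copilot", 0.3, 0.1, 6, "low"),
    UseCase("front-to-back workflow agents", 5.0, 2.0, 36, "high"),
]
for uc in portfolio:
    print(uc.name, "->", classify(uc).value)
```

The value of framing it this way is exactly what the discussion argues: the portfolio becomes an explicit, debatable artifact, with trade-offs and risk-return profiles on the table, rather than a list ordered by conference buzz.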

Copilots As Stepping Stones

SPEAKER_01

With any new technology, it's always that mindset of crawl, walk, run. And so many firms are really starting with horizontal co-pilots and low-risk use cases. How should they be thinking about the longer-term value of those initiatives?

SPEAKER_00

Well, in my view, firms should treat early horizontal co-pilots and low-risk use cases, as you put it, more as a deliberate investment in the capabilities required to unlock larger, domain-specific impact. This can come in the form of, for example, building quote-unquote operating muscle, in terms of cultural and skills-based ROI. These initiatives can accelerate cultural adoption, prompt engineering, and AI literacy; they can help form the organizational norms around how employees work with AI, and the governance habits. So the long-term value is really a workforce and leadership team that can reliably identify, test, and operationalize AI opportunities, not just a set of point solutions. You can also use these early-stage horizontal co-pilot initiatives to identify where horizontal tools plateau and where the real value is. Although general co-pilots often result in productivity gains, the larger returns typically come from domain- or role-specific co-pilots and AI agents embedded in the work, which can be specifically tuned to enterprise knowledge, connected to systems of record, and designed around specific decisions and tasks. We have seen some firms use early deployments to expand the inventory of use cases and define what success looks like, in terms of time saved, errors, quality outcomes, and things like that. But the bottom line is that horizontal co-pilot initiatives are a stepping stone. They are valuable because they build adoption momentum, they can highlight governance issues, and they help the organization frame how it measures success as a whole.

SPEAKER_01

So, as stepping stones, are you seeing any meaningful economic impact? Is there measurable ROI emerging, or are they just baby steps?

SPEAKER_00

I think they can be both, but the mix really depends on how deliberately the firm measures and operationalizes them. Continuing from what I said previously, early pilots and co-pilots can absolutely help with adoption, in terms of reducing fear, building comfort and trust, creating momentum, and generating a backlog of ideas through feedback. That cultural socialization can be really valuable. Now, from an economic impact point of view, the picture can be mixed and often uneven at first. Some firms do see near-term gains in time saved on things like drafting, summarizing, searching, and maybe first-pass analysis. But many don't see clean, immediate ROI, because usage is inconsistent, workflows aren't redesigned, and outputs aren't integrated into systems of record, so benefits stay at the personal productivity level rather than translating into enterprise performance. I think one of the most important benefits of these early projects is building what I would term organizational IQ and EQ for AI. These projects teach the organization how to select use cases, how to govern risk, how to manage change, and really to understand where AI is strong or weak. That learning reduces the cost and risk of every subsequent deployment and accelerates the path to higher-value use cases, especially if you're going down the route of domain-specific use cases or workflows as a whole.
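The measurement discipline described, tracking time saved, error rates, and how heavily outputs get edited, can start with even a small usage log. A minimal sketch of that kind of aggregation, with entirely invented numbers:

```python
def roi_summary(tasks):
    """Aggregate measurable gains from pilot usage logs.

    `tasks` is a list of (baseline_minutes, assisted_minutes, heavy_edits)
    tuples: how long the task took before AI, how long with AI, and whether
    the output needed moderate-to-significant manual editing.
    """
    saved = sum(baseline - assisted for baseline, assisted, _ in tasks)
    heavy_edit_rate = sum(1 for *_, edits in tasks if edits) / len(tasks)
    return {"minutes_saved": saved, "heavy_edit_rate": round(heavy_edit_rate, 2)}

# Illustrative log: note the second task actually took LONGER with AI,
# mirroring the mixed, uneven picture described above.
log = [(60, 20, False), (30, 35, True), (45, 15, False)]
print(roi_summary(log))  # → {'minutes_saved': 65, 'heavy_edit_rate': 0.33}
```

Even a crude log like this moves the conversation from anecdotal savings to something a business head can defend, which is exactly the gap between personal productivity and enterprise performance raised in the answer.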

SPEAKER_01

What tends to separate firms that are moving beyond pilots from those who might be stuck?

SPEAKER_00

That's actually a very good question. In my view, there are a few characteristics that separate serial prototypers from AI scalers. The first is that firms that stay stuck make the mistake of running everything from the center: innovation teams, IT, analytics, all central. Business units often feel like customers, not owners, so pilots don't survive the handoff and adoption stays shallow. Scaler firms, on the other hand, germinate from the center and expand outward. The center sets reusable foundations: governance, security, privacy policies, model and tool standards. Then the functional and business-unit domains lead delivery: they prioritize use cases, redesign workflows, drive adoption, and of course commit resources. So that's one, germinating momentum and capability from the center. The second factor is active and engaged leadership. You see pilots stall when leadership support is passive, when they just say to employees, you know, try some AI. Firms that scale have active CXO engagement: explicit goals, funding, risk appetite, and accountability. So those are two important things, and none of it is really technology related. It's more cultural, and about how the firm builds capabilities for AI around its people.

SPEAKER_01

Well, it's interesting in terms of leadership. I think of my own team: the time I'm given to produce something or create value keeps shortening, and there's only one way to meet that, work through the night or utilize the tools we're giving you. So I think the leadership component is critical there. Do you think expectations around AI might be running ahead of fundamentals?

When Competitive Advantage Emerges

SPEAKER_00

Yeah, it's an interesting question. It is running ahead of fundamentals, in the sense that much of what's happening now is still at the layer of implementing generic AI, and firms are only just getting into the meat of the matter. So in short, yes, it's running ahead. And even though firms are trying different things and there are challenges ahead, I would say most firms are coming to the realization that there's still some way to go in terms of being able to extract value from AI in the longer term.

SPEAKER_01

So when do you think we'll see truly undeniable AI-driven competitive advantage in the capital markets?

SPEAKER_00

Yeah, this is going to be a crystal ball. From my standpoint, if you talk about early, undeniable advantages, I would say probably after three years, in specific pockets: for example, areas such as research production, maybe surveillance, and some aspects of trading and structuring. But for the more defensible, firm-wide advantages, this is likely to be a five-plus-year story, as operating models and controls mature. A key point to underline here is that the differentiator, the competitive advantage, won't be the models or the tools themselves, because those will commoditize. In my view, it will be culture and adoption: how employees and partners are trained, incentivized, and expected to work with AI every day. Now, I have quite an interesting story, an example of a public sector organization that conducted a 12-week co-pilot trial in 2024, across different departments. I present this almost as a cautionary tale to what you're asking about competitive advantage. The findings from that particular study cited no definitive evidence of productivity gains from the use of the AI co-pilot, despite high user satisfaction. For Excel and data analysis tasks, these were reported to take longer and to be less accurate with the co-pilot. At the same time, users had remarkably low engagement with the AI tool, and there was heavy manual editing, with 60 percent saying they would make moderate to significant edits to some of the outputs the AI produced. Now, when I look at these less-than-positive conclusions about AI usage, I would ask some questions. Were those employees motivated and incentivized to leverage AI to deliver greater productivity?
Did they hold latent fears that AI could replace, displace, or disintermediate their jobs? These sorts of questions determine how strongly an organization can attain different levels of differentiation and advantage from AI. That's the point. The example here suggests that without the right incentives, trust, and workflow redesign, AI becomes an occasional helper rather than a competitive lever. So really, something to think about: in the longer term, the advantage is going to come from how people use it, and less from the tool.

SPEAKER_01

It's like a desert island question. If you could only watch one trend in AI adoption over the next few years, or, the way the world's moving, the next few days, what would it be and why?

The One Trend To Watch

SPEAKER_00

Well, the trend I'd watch is the convergence of traditional machine learning, gen AI, and agentic AI into an AI client operating layer, one that redefines the client journey end to end. Traditional statistical machine learning can power prediction and control: execution costs, risk, personalization. Gen AI becomes the natural language interface that explains research and products in a client-specific way. And then agents actually execute the workflows: RFQs, pre-trade checks, approvals, post-trade services, with full audit trails. The shift here is really from clients navigating systems to simply stating intent and goals, with the platform orchestrating everything around outcomes and controls. Think Star Trek, the Enterprise, and the captain giving commands.
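The intent-driven pattern described, where a client states a goal and the platform executes the workflow under controls with a full audit trail, can be sketched at its simplest as follows; every name, step, and the sample intent below are illustrative, not a real trading workflow:

```python
from datetime import datetime, timezone

class AuditTrail:
    """Append-only, timestamped log of workflow steps for later review."""
    def __init__(self):
        self.events = []

    def record(self, step: str, detail: str):
        self.events.append((datetime.now(timezone.utc).isoformat(), step, detail))

def handle_intent(intent: str, trail: AuditTrail) -> str:
    # The client states a goal; the platform orchestrates the steps.
    # 1. Predictive layer (stand-in for a trained ML model): pre-trade control.
    trail.record("risk_check", f"pre-trade limit check for: {intent}")
    # 2. Workflow agent: execute the steps the intent implies.
    trail.record("rfq", "quotes requested from dealers")
    trail.record("execution", "best quote accepted")
    # 3. A gen AI layer would then narrate the outcome in client language.
    return f"Completed: {intent} ({len(trail.events)} audited steps)"

trail = AuditTrail()
result = handle_intent("buy protection on a 5-year index", trail)
print(result)  # → Completed: buy protection on a 5-year index (3 audited steps)
```

The design point is that the audit trail is built into the orchestration itself rather than bolted on afterward, which is what makes this pattern viable in a regulated, permissioned environment.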

Closing And Where To Learn More

SPEAKER_01

We're getting there; we're halfway there. It's no longer "computer, change my speed," it's "hey Alexa, go buy me a cup of coffee" or something. But Cubillas Ding, Director and Markets Insights Consultant at Celent, I want to thank you for your insights. The summary of the research is available on the Celent website, and you can follow Cubillas on LinkedIn for more of his insights; he publishes blogs and research on a regular basis. So, Cubillas, thank you so much for your time today.

SPEAKER_00

My pleasure, Jim. Good talking to you.