Trading Tomorrow - Navigating Trends in Capital Markets
Welcome to the fascinating world of 'Trading Tomorrow - Navigating Trends in Capital Markets,' where finance, cutting-edge technology, and foresight intersect. In each episode, we embark on a journey to unravel the latest trends propelling the finance industry into the future. Join us as we dissect how technological advancements and market trends unite, shaping the strategies that businesses, investors, and financial experts rely on.
From the inner workings of AI and ML to the transformative power of blockchain technology, our host, James Jockle of Numerix, will guide you through captivating conversations with visionaries who are not only observing the future but actively shaping it.
Exploring How AI May Change Identity Security in Finance
As AI systems become more autonomous, they are making real-time decisions, managing sensitive data, and interacting across increasingly complex identity environments. That shift raises new questions about access, control, and accountability.
In this episode of Trading Tomorrow – Navigating Trends in Capital Markets, host Jim Jockle speaks with Raz Rotenberg, co-founder and CEO of Fabrix Security. Rotenberg discusses how identity protection must evolve in AI-driven financial systems. He explains what an “AI native” approach to security means in practice, why context and reasoning are becoming central to identity management, and how regulated institutions can balance autonomy, privacy, and oversight as their AI capabilities advance.
SPEAKER_00:Welcome to Trading Tomorrow, Navigating Trends in Capital Markets, the podcast where we deep dive into technologies reshaping the world of capital markets. I'm your host, Jim Jockle, a veteran of the finance industry with a passion for the complexities of financial technologies and market trends. In each episode, we'll explore the cutting-edge trends, tools, and strategies driving today's financial landscapes and paving the way for the future. With the finance industry at a pivotal point, influenced by groundbreaking innovations, it's more crucial than ever to understand how these technological advancements interact with market dynamics. Artificial intelligence is becoming more autonomous, making real-time decisions, managing sensitive data, and powering new kinds of financial systems. That progress also introduces new questions. How do we secure access, define identity, and maintain control when machines start making decisions on their own? To help us explore that, we're joined by Raz Rotenberg, co-founder and CEO of Fabrix Security, a company developing what it calls an AI native approach to identity protection. Before founding Fabrix, Raz spent more than six years at Run:ai, where he helped build its AI infrastructure product from the ground up, work that ultimately led to the company's acquisition by NVIDIA. With experience across both government and commercial AI development, Raz brings a unique view into how identity security must evolve as AI becomes more autonomous inside regulated industries like finance. So, Raz, first and foremost, so nice to meet you and thank you for joining us today.
SPEAKER_01:Thank you for having me here, Jim.
SPEAKER_00:So, from your perspective, how is AI changing the conversation around identity and access inside financial institutions?
SPEAKER_01:So I think it does change the conversation very significantly, and I think it boils down to adding context at scale. The biggest shift comes from having context around everything in identity and access management. One of the things that has been happening in the world, especially around finance, is the explosion of identities. You have so many human identities, you also have service accounts and non-human identities, and we're entering the era of AI agents. Manual controls around identity security just cannot keep up, because to manage and secure those identities you have to answer a lot of questions, and those questions require context to answer. No organization is able to manually piece together the context behind so many identities and access-related decisions, but AI makes it possible to analyze massive amounts of data and information very quickly. So it can actually understand context, surface key insights, and turn them into actions.
SPEAKER_00:So what do we actually mean when we talk about an AI native approach to security?
SPEAKER_01:So an AI native approach is something that is built for AI from the ground up, and it also uses AI in every single piece of the system. It's not taking an existing legacy tool and adding AI somewhere along the process, like at the input layer or at the output layer. It's essentially redesigning the system and adding AI into every single piece of it, whether in the beginning, the middle, or the end. I always like to think in analogies, and it's like building an electric car: you don't take a regular car and just add electricity to it. To build a Tesla, you have to redesign the car entirely, in a different way. This is exactly what happens now in identity security in particular, and in all the other places where we see AI native tools. It's about embedding reasoning capabilities and AI-based automation into every single process of identity security, and allowing the process to continuously learn what's happening in the organization and improve over time.
SPEAKER_00:And some firms are exploring systems that adjust permissions automatically. What opportunities and concerns does that raise?
SPEAKER_01:So there's a huge opportunity. Essentially, they will be able to hugely reduce the amount of manual reviews, cut the time it takes to fulfill access requests, remediate over-privileged access faster, and significantly reduce the attack surface. The challenge, though, is that adding automation alone is not enough. You have to make sure that the AI automation comes with explainability. You don't want to put a black box that just makes access-related decisions into regulated environments like financial institutions, right? So the goal is to put AI in place but keep the humans in control, so that they are the authority, supercharged by AI-powered tools, and not the other way around.
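To make the "AI proposes, humans stay in control" pattern concrete, here is a minimal sketch in Python. It is not how Fabrix or any specific product works; the names (PermissionChange, review) and fields are illustrative assumptions. The point is simply that every automated recommendation carries a human-readable reason and nothing is applied without an explicit human decision.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PermissionChange:
    """A hypothetical AI-generated recommendation to adjust an entitlement."""
    identity: str                    # who the change applies to (human or service account)
    entitlement: str                 # the permission being granted or revoked
    action: str                      # "grant" or "revoke"
    reason: str                      # human-readable explanation produced by the AI
    evidence: list = field(default_factory=list)  # signals the AI relied on
    status: str = "proposed"         # nothing is applied until a human decides

def review(change: PermissionChange, approver: str, approved: bool) -> dict:
    """Humans stay the authority: record who decided, when, and the outcome."""
    change.status = "approved" if approved else "rejected"
    return {
        "change": change,
        "approver": approver,
        "decided_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: the AI flags an unused admin entitlement and a reviewer confirms the revoke.
proposal = PermissionChange(
    identity="svc-reporting",
    entitlement="prod-db:admin",
    action="revoke",
    reason="No activity using this entitlement in the last 90 days.",
    evidence=["usage logs", "peer-group comparison"],
)
decision = review(proposal, approver="jane.doe", approved=True)
print(decision["change"].status)  # approved
```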
SPEAKER_00:And the term identity fabric is being used more and more often. Perhaps first, can you explain the term identity fabric, what the concept represents to you, and why it might be gaining traction, especially in regulated sectors?
SPEAKER_01:Yeah, that's a very good point. So essentially, identity fabric is the concept of taking multiple systems that are siloed and managed separately, each with its own identity and access management model, and combining them into what's called an identity fabric: this huge spider web of identities, access, permissions, roles, and activities. It gives the organization a single pane of glass where they can immediately see and visualize all the access and permissions across their estate. And more importantly than visibility, they can make decisions that are organization-wide, taking one piece of information and using it to improve decisions on another system.
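As a rough illustration of that idea, here is a minimal sketch assuming two hypothetical siloed systems that each expose (identity, entitlement) pairs; the "fabric" is just the merged, cross-system view that can be queried from one place. The system names and entitlement strings are invented for the example.

```python
from collections import defaultdict

# Hypothetical exports from two siloed systems, each with its own access model.
hr_system = [("alice", "payroll:read"), ("bob", "payroll:admin")]
trading_platform = [("alice", "orders:submit"), ("svc-pricing", "marketdata:read")]

def build_fabric(*sources):
    """Merge per-system entitlements into one cross-system view per identity."""
    fabric = defaultdict(set)
    for name, records in sources:
        for identity, entitlement in records:
            fabric[identity].add(f"{name}/{entitlement}")
    return fabric

fabric = build_fabric(("hr", hr_system), ("trading", trading_platform))

# Single pane of glass: everything one identity can touch, across all systems.
print(sorted(fabric["alice"]))
# ['hr/payroll:read', 'trading/orders:submit']
```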
SPEAKER_00:And clearly oversight is the recurring theme we have in all AI discussions. How can organizations ensure transparency and accountability when AI systems are making or influencing security decisions?
SPEAKER_01:So essentially, every AI-driven decision must be explainable: why, for example, an access request was approved, flagged, or denied, or why a particular recommendation was made. You want to make sure that you, as the organization, have access to the behind-the-scenes of the AI. So you must build systems that have human-readable reasoning and audit logs. That way you can go and research every single decision that happened, or every single thing the AI recommended, and see why it did so: what information it took into account when it came up with that recommendation, and what the implications of those recommendations or actions are.
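One way to picture "human-readable reasoning and audit logs" is an append-only record kept for every AI-influenced decision. The sketch below is a hypothetical schema, not a real product's format; the field names are assumptions chosen to cover the questions raised above (what was decided, why, and based on which inputs).

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AccessDecisionRecord:
    """Hypothetical audit-log entry for one AI-influenced access decision."""
    request_id: str
    identity: str
    resource: str
    decision: str            # "approved", "flagged", or "denied"
    reasoning: str           # human-readable explanation of why
    inputs_considered: list  # the signals the model reasoned over
    model_version: str
    decided_at: str

record = AccessDecisionRecord(
    request_id="req-1042",
    identity="bob",
    resource="settlement-api:write",
    decision="flagged",
    reasoning="Request is outside bob's peer group and normal working hours.",
    inputs_considered=["peer-group entitlements", "historical usage", "time of request"],
    model_version="reviewer-v0.3",
    decided_at=datetime.now(timezone.utc).isoformat(),
)

# A human-readable, append-only entry that auditors can replay later.
print(json.dumps(asdict(record), indent=2))
```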
SPEAKER_00:And what would the experience for the end user begin to look like with these types of controls and checks in place?
SPEAKER_01:Yeah, so I believe that AI tools should supercharge humans and should essentially integrate with the way they work today. They should not change the way that people work; they should seamlessly integrate and supercharge them with capabilities that weren't possible before. So imagine that you do the same processes you do today, but you have these smart recommendations, you have all these pieces of information coming right where you need them, and you also have this AI that reasons over them and creates a single coherent story from the scattered facts, so you as the end user can make a better decision.
SPEAKER_00:And many banks and trading firms still rely on legacy infrastructure and systems, for that matter. What challenges do they face when they're trying to modernize identity systems for AI-driven environments?
SPEAKER_01:So there are two main challenges that we keep hearing from banks and trading firms, for example. The first one is connecting to those legacy systems. Many of those systems do not have well-defined APIs, nice automation workflows, or a good enough user interface to connect to. But let's say that you are able to connect to them. The second challenge is to actually understand the permission model. Say you have a legacy application or infrastructure that was created 15 or 20 years ago; the person who created that software has probably already left the organization. Thankfully, AI can be beneficial for both things. AI can help a lot with integrating and connecting to those systems and applications. For example, it can find the right API to use, or use what's called a computer use agent to export information easily from those systems. And AI can also iterate over the entitlements and permission models, reason over them, understand what every entitlement actually means, and identify which ones are the most important, to help the humans focus on them when they're making access decisions.
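To sketch the second point, reasoning over entitlements to surface the important ones, here is a toy scoring pass in Python. The entitlements, fields, and risk heuristic are all invented for illustration; a real system would reason over far richer context, but the idea of ranking entitlements so reviewers look at the riskiest first is the same.

```python
# Hypothetical export of entitlements from a legacy system, after connecting to it.
entitlements = [
    {"name": "gl:read",          "grants_write": False, "holders": 420, "last_used_days": 2},
    {"name": "wire:approve",     "grants_write": True,  "holders": 6,   "last_used_days": 1},
    {"name": "legacy:superuser", "grants_write": True,  "holders": 37,  "last_used_days": 400},
]

def risk_score(e):
    """Toy heuristic: write access, broad assignment, and staleness all add risk."""
    score = 0
    if e["grants_write"]:
        score += 5
    if e["holders"] > 20:
        score += 3
    if e["last_used_days"] > 180:   # granted but barely used: likely over-privilege
        score += 4
    return score

# Surface the entitlements a human reviewer should look at first.
for e in sorted(entitlements, key=risk_score, reverse=True):
    print(f"{e['name']:18} risk={risk_score(e)}")
```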
SPEAKER_00:And what should the fintechs who sell into these institutions be thinking about, as those institutions put in place these types of AI security features?
SPEAKER_01:So whoever is selling to them needs to understand that they're coming to organizations that have been operating for decades. I've spoken to organizations that are more than a hundred years old, right? So they have all those systems and all those processes, and a lot of the time they have many, many employees. So you have to make sure you don't come with a rip-and-replace approach. You don't want to come to those financial institutions and tell them to just forget everything and start working with the new model you're bringing. You want to make sure that you integrate with what's happening today, deliver a significant improvement over time, for example within a few months, and help people do their existing work, freeing them from the repetitive, labor-intensive day-to-day tasks to focus on higher strategic initiatives.
SPEAKER_00:And how do regional regulations across the globe shape how firms can use AI for identity management?
SPEAKER_01:So regulations around AI are a new thing and they're still progressing, and organizations need to make sure that they follow the right regulation. But one of the things that I want organizations to keep in mind is where the data is located, and probably most importantly, what happens with the training of the model. For example: what model is running on my data? Is it a frontier model, for example from OpenAI or Anthropic? Is it something that is fine-tuned? Where are the actual GPUs and hardware located? And is the vendor using my data to train their model, which will eventually run on top of other organizations' information, or not? So you have to make sure you take all of that into account and get the right answers before integrating an AI system.
SPEAKER_00:There's often tension between innovation and control. How have you seen institutions trying to strike a balance as they experiment with AI and security and compliance?
SPEAKER_01:So successful AI projects always start small. They start with a small portion of the environment, where you run a proof of value, and you want to make sure that the proof of value demonstrates both the transparency of the AI and the measurable ROI and effectiveness of incorporating these AI systems. So essentially you start with that, you see the value, and you make sure you understand the AI: you can see the behind-the-scenes, the chain of thought, things like that, until you gain trust in the system. And you gain that trust by looking at what the AI said, providing feedback, and using a human-in-the-loop validation process. You want to make sure that you have clear guardrails and continuous monitoring. Only after that do you increase, over time, the portion of your environment in which the AI is running, as well as the autonomy that you give the AI.
SPEAKER_00:And as AI expands across trading and risk and client platforms, what should financial leaders understand about the evolving link between autonomy, privacy, and oversight?
SPEAKER_01:Yeah, that's a very good point. Because you do want to give your AI more autonomy: the more autonomy and agency you give your AI, the more impact it will have and the higher the ROI of that AI system. But you have to remember that autonomy doesn't come without cost. If you just give your AI complete autonomy without the right guardrails, you put the organization at operational and reputational risk. So you want to make sure that you give the AI autonomy over time, only after gaining enough trust in the system. And this trust, as we discussed before, comes with the explainability of the system. You want to make sure that the humans are in charge, that they feel they are in charge and they understand the AI, and only then do you increase the level of agency and autonomy of your AI.
SPEAKER_00:Some argue that letting AI work autonomously is the way to get the best performance out of it, and that requiring it to work in a human-explainable way almost degrades that performance, because AI, for lack of a better term, thinks differently than humans and therefore can come to decisions in a different way. How does that concept affect things, especially around security?
SPEAKER_01:So I think it's very similar to hiring a new employee. For example, if you're a decades-old organization and somebody new joins the company today, it would take time until you give this person full autonomy in their role, right? I've never heard of somebody joining an organization and on their second day of work completely changing the architecture of anything. The same goes for AI. You want to make sure that you give it the right amount of time, feedback, and guardrails to make sure the AI fits your organization, rather than your organization having to change for that particular AI to work. And this comes with a feedback loop. You can provide feedback for every single decision or recommendation of the AI: if it's okay, a simple upvote; if it's not, a simple downvote, or you can actually explain to the AI why you're not happy with that recommendation, essentially teaching the AI how you work within your particular organization.
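As a rough sketch of that feedback loop, here is a minimal Python example of recording upvotes, downvotes, and explanations against the AI's recommendations and summarizing them. The function names and data layout are assumptions for illustration, not a description of any particular product.

```python
from collections import defaultdict

# Hypothetical in-memory store of reviewer feedback, keyed by recommendation type.
feedback_log = defaultdict(list)

def record_feedback(recommendation_type: str, vote: int, note: str = "") -> None:
    """vote: +1 for an upvote, -1 for a downvote; note optionally explains why."""
    feedback_log[recommendation_type].append({"vote": vote, "note": note})

def approval_rate(recommendation_type: str) -> float:
    """Share of upvotes; low values signal the AI needs retraining or tighter guardrails."""
    votes = [f["vote"] for f in feedback_log[recommendation_type]]
    return sum(v == 1 for v in votes) / len(votes) if votes else 0.0

# Example: reviewers weigh in on automatic revocation suggestions.
record_feedback("revoke-unused-admin", +1)
record_feedback("revoke-unused-admin", -1, "This account is used only at quarter-end.")
print(f"approval rate: {approval_rate('revoke-unused-admin'):.0%}")  # 50%
```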
SPEAKER_00:And looking ahead, what developments in AI or cybersecurity could most influence how financial institutions define and protect identity?
SPEAKER_01:So I think there are two main aspects. One of them is that we're going to get more identities. We're entering the era of AI agents, and organizations need to ensure that they are ready to protect and secure those identities. But the second thing, which is more interesting the way I see it, is that we're going to get more agentic decisions. There's going to be an industry-wide shift away from looking at security as static, rule-based policy enforcement and permission modeling that happens once in a decade and never changes, towards a continuous, ever-learning process, powered by an artificial intelligence brain that always works in the background and makes sure that the organization's permission model, across their entire identity fabric, is the best one for them, the one that suits their needs at the right time, and essentially takes their permission models and makes them a lot better for their organizations.
SPEAKER_00:So, unfortunately, we've made it to the last question of the podcast, which we call the Trend Drop. It's like a desert island question. If you could only watch or track one trend in AI security, what would it be?
SPEAKER_01:So for me, it would definitely be AI reasoning capabilities in identity security. I believe the missing piece in identity security is not another tool, and it's not visibility; it's the reasoning and the context. It's about moving from pattern recognition and static rule enforcement to actually understanding what's happening, why it's happening, and what should be happening, and seeing the shift from a static to a more dynamic permission model that is AI-based, AI-powered, and AI-driven. That's what I'm going to keep my eye on, and I'm sure we're going to see very interesting things there.
SPEAKER_00:Raz, I want to thank you so much for taking the time and sharing your insights. Thank you.
SPEAKER_01:Thank you for having me here. Yeah.
SPEAKER_00:Thanks so much for listening to today's episode. And if you're enjoying Trading Tomorrow – Navigating Trends in Capital Markets, be sure to like, subscribe, and share. And we'll see you next time.