Trading Tomorrow - Navigating Trends in Capital Markets

Dr. Merav Ozair on Whether Blockchain Could Enable Responsible AI

April 10, 2024 | Numerix | Season 2, Episode 15

All around the globe, AI is not just advancing; it's sprinting forward. As we stand on the edge of what could be the largest technological revolution of our time, we're facing some crucial questions. How do we navigate the development, deployment, and impact of AI while mitigating its associated risks, especially in a highly regulated industry like finance? And could the answer to implementing AI responsibly lie in the use of another popular technology, blockchain? Dr. Merav Ozair, a leading global expert on emerging technologies, argues yes. She joins Jim Jockle, host of Trading Tomorrow - Navigating Trends in Capital Markets, for a riveting conversation on AI, blockchain, and the future. More from Dr. Merav Ozair, including where to buy her upcoming book, can be found at https://www.doctorblockchain.io/


Speaker 1:

Welcome to Trading Tomorrow - Navigating Trends in Capital Markets, the podcast where we deep dive into the technologies reshaping the world of capital markets. I'm your host, Jim Jockle, a veteran of the finance industry with a passion for the complexities of financial technologies and market trends. In each episode, we'll explore the cutting-edge trends, tools and strategies driving today's financial landscapes and paving the way for the future. With the finance industry at a pivotal point, influenced by groundbreaking innovations, it's more crucial than ever to understand how these technological advancements interact with market dynamics, from the transformative power of blockchain in secure transactions to the role of artificial intelligence in predictive analytics. We're here to ensure you stay informed and ahead of the curve. Join us as we engage with industry experts, thought leaders and technology pioneers, offering you a front-row seat to the discussions shaping the future of finance, because this is Trading Tomorrow - Navigating Trends in Capital Markets, where the future of capital markets unfolds. All around the globe, AI is not just advancing, it's sprinting forward, and many are hopping on the bandwagon, trusting the technology more and more. But as we stand on the edge of what could be the biggest technological revolution of our time, we're facing some crucial questions. How do we navigate the development, deployment and impact of AI while mitigating its associated risks, especially in highly regulated industries like finance? And could the answer to implementing AI responsibly lie in the utilization of another emerging technology?

Speaker 1:

Joining us to discuss this further is Dr. Merav Ozair. She's known globally as a leading expert on emerging technologies and responsible innovation. She also has a data science and quant strategist background, making her the perfect person to speak with us. Dr. Ozair has in-depth knowledge and experience in global financial markets and their market microstructure. She has developed innovative methodologies to evaluate digital assets and crypto markets, including cryptocurrency indexes, valuation and risk metrics, ratings and tokenized products. Currently, Dr. Ozair primarily consults organizations on how to responsibly innovate and strategically implement emerging technologies, in particular AI and Web3. She's also the founder of Emerging Technologies Mastery, a Web3, AI and metaverse consultancy shop. Dr. Ozair also has a book coming out about responsible technological innovation. It is set to be released in summer 2024 and is called Responsible Innovation: How to Responsibly Innovate and Strategically Implement Emerging Technologies. Welcome, Merav, so great to have you here today.

Speaker 2:

Thank you for having me, Jim.

Speaker 1:

So let's just start with the basics. You've done a lot of research on AI systems' risks and challenges. Should we be monitoring the rapid evolution and deployment of this technology?

Speaker 2:

Well, there are a lot of risks and challenges with AI. I mean, it's fun, it's hype, we all experiment with it, with ChatGPT, but AI has been around for a long, long time, since the 50s. So, if you think about the recent challenges, first we have to define what an AI system is. The Organisation for Economic Co-operation and Development, the OECD, it's an international organization, the US is one of the members, and they have defined what an AI system is. In a nutshell, the definition is everything that takes in input data and produces output data. That's an AI system. It could be anything from predictions to recommendations, to content, to insight, you name it. And if you think about it, it could be something very simple, like a regression model, which I'm sure you at Numerix have been using, to something that is more advanced, like what we have now with generative AI, the LLMs and GPT and all of that, which is a little bit more advanced.
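
Even the simplest statistical model meets that input-data-to-output definition. As a concrete illustration (not from the episode; the numbers are made up), a two-variable linear regression in Python:

```python
import numpy as np

# Hypothetical input data (illustrative only): feature x, target y.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([40.0, 52.0, 61.0, 70.0, 84.0])

# Data in: fit a straight line y = slope * x + intercept.
slope, intercept = np.polyfit(x, y, deg=1)

# Data out: a prediction for a new input -- which is exactly what
# the OECD definition counts as an AI system's output.
print(f"prediction at x=6: {slope * 6 + intercept:.1f}")
```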

Speaker 2:

So the core of an AI system is data. This is how it works: bring data in, get data out. So the data is, you know, the main problem, if you think about it. There's the question of where the data is coming from. I mean, who created the data? Us, every one of us. This is where the data is coming from, and there's the sensitivity of the data, the privacy issues, the security issues. There's also a problem with the data itself, in terms of whether it's complete or incomplete, inaccuracies, errors, bias, which everyone is talking about.

Speaker 2:

So there are all kinds of challenges that have to do with the data itself, but also with the way that these models are built. We think about AI as automation, but in order to push a button and have something happen, there's software behind it, there's code. Who creates that code? A human. So humans develop it, and it also depends on how they develop it, how they train the model, how they monitor the model, how they take care of all the things in the data that we just mentioned.

Speaker 2:

So there are a lot of challenges with the data itself and the AI systems. It's not like, you know, deploy it and forget about it. You have to always monitor it and update it and make sure that everything is running smoothly. And now, with Gen AI and GPT, there's another problem, which is hallucination. They may sound very confident in the answer that they're giving, but it could be completely false. So that's another problem that we have with the new AI. Someone was saying that if you think about garbage in, garbage out, with LLMs and generative AI it's garbage in, garbage out on steroids. It's not just, you know, a matter of an error here and there. It's like you're giving me false information and you're confident about it. So I know that the companies who are developing it are trying now to figure out how to solve these problems. Let's put it this way: I'm a great believer in technology. I know technology can do great things, but as great as the benefits are, so is the harm and the damage it can do.

Speaker 1:

Is there a technology over the past 50 years where the same fear existed as it does around AI? Can you think of an example?

Speaker 2:

I know that around the internet, you know, the year 2000, people thought that something would happen, you know, with the zeros and everything, the Y2K bug. I remember that was an issue, some kind of, like, the dark ages were going to happen and we had to get prepared for that. So I think that's the only time I can think of that was similar to this. But when I try to think about the history of all the technologies that have developed, I think that's really the only one that had the excitement and the fear at the same time. And some people are only in the camp of the excitement and some people are only in the camp of fear. You can find somewhere in between, but it's either like this or that in most cases.

Speaker 1:

And I guess perhaps the fear is driven by the human nature of AI. Computers have been around and things have been automated forever, and perhaps that was an evolution rather than a revolution compared to where we are today with AI. So it seems maybe that's feeding into this.

Speaker 2:

Yeah, I mean, it's definitely an evolution. We feel like, oh, AI just came to the surface. Obviously not, because AI has been around since the 50s. This is where it all started and it evolved. And the financial industry has been using it for decades. The tech industry has been using it for decades. If you think about email recommendations and corrections, if you think about spam filters, we're using it every day without even thinking about it. So it's around, but I think what made it a little bit more personal is generative AI. It's as if you are communicating with the machine. You're asking questions, it returns answers as if it's talking to you, and I think this is where things are morphing into something else, which carries a lot of the excitement and the fear. Some will accept it and embrace it, and some will say, oh, this is a little bit too much for me.

Speaker 1:

So, there's a lot of talk about regulation globally around AI. What do you think are some of the key components that need to be included in any AI regulation framework? And, obviously, the EU AI Act is moving faster than the US. Do you think that's a good framework that should be adopted here in the United States?

Speaker 2:

So let me start from the last one and move to the first. Let's think about the EU AI Act. They started working on that somewhere in 2019, five years ago. The way it started, they developed risk levels for how these systems can impact the safety of users. So they had minimal, limited, high and unacceptable. It was about how, after a system is deployed, it's going to impact the user. They didn't even think about the LLMs and all of that.

Speaker 2:

And then the LLMs came out, with ChatGPT in late 2022, and with all the hype around that, they decided, oh, there is a new animal, so to speak, which we didn't take care of. So they created a completely new article that just deals with generative AI. They talked about the deployment of this, the transparency, asking for assessments of the systemic risk that can come out of it, and all of that. If I'm looking at that, I can compare it to the executive order that came out from the Biden administration at the end of October of last year, and the focus there is on the developers, whereas in the EU the focus was originally on the users. Only when it comes to the foundation models and the LLMs is the EU consistent with the Biden executive order, where the focus is also on the developers, because they want all the transparency and understanding of the data collection and the training before deployment. Other than that, with the other AI, they also talk about the users.

Speaker 2:

Now you can ask, okay, should we focus on developers or should we focus on the users? I think it should be a combination, because we do have to take care of how a system is developed before it is deployed. That is important because of all the things we just mentioned: the data, the code, the training and all of that. And now we know that it can do good and bad, so we need to take care of that. Hallucination, for example, is part of the model, how it is built, how it is trained. So if we want to minimize that, we need to take care of it before the model is deployed. That's the development aspect of it. But also after it is deployed, there are things that you may not have thought about when you were building it, and that's where what the EU is talking about comes in: how it's impacting the safety of the user.

Speaker 2:

So I think both are important, and some kind of an integration of the two. You know, we have to think about how to protect the input and the output of this. I think it's important that regulation takes care of the development part, which is very important, before a system is deployed. And then after deployment it's not over. Things can go wrong. You have to monitor it, and this is where the risk comes in, and they want information about that.

Speaker 1:

I truly understand the concept of looking at developers, right? You know, obviously, the bias, the data, the way the models are working, providing transparency. But what is the driving concern for the users on behalf of the regulators? What is the thesis they're really concerned about in protecting us from ourselves?

Speaker 2:

So there's one major problem, which the executive order was dealing with extensively, which is bias and discrimination. Now, why is bias so important and why is it a big problem? What is bias? Bias is driven by the data. Bias is when the data, and the model that is built on that data, favors or disfavors a certain group, and when you favor or disfavor a certain group, that's discrimination. It could be because of gender, age, income, you name it. It could be anything from something very subtle to something more extreme, and this is what the Biden administration is really focusing on: discrimination, especially when it comes to healthcare, finance, real estate and also education.
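
One common way practitioners quantify the kind of group favoring Dr. Ozair describes is a selection-rate comparison, such as the "four-fifths rule" used in US employment contexts. A minimal sketch on hypothetical hiring outcomes (the data, group labels and threshold are illustrative assumptions, not from the episode):

```python
from collections import Counter

# Hypothetical hiring outcomes, (group, hired) pairs -- illustrative only,
# not data from the episode.
records = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

hired = Counter(group for group, was_hired in records if was_hired)
total = Counter(group for group, _ in records)
rates = {group: hired[group] / total[group] for group in total}

# Disparate-impact ratio: lowest selection rate over highest. A common
# rule of thumb (the "four-fifths rule") flags ratios below 0.8.
ratio = min(rates.values()) / max(rates.values())
print(rates)                   # {'A': 0.75, 'B': 0.25}
print(f"ratio = {ratio:.2f}")  # 0.33 -> worth investigating for bias
```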

Speaker 2:

So it's very important to take care of that aspect of the data, the bias. Remember, there is a human element here. These models, these systems, are not built out of thin air. It's not like some alien came and built them. The data is data that we created, and the models are developed by humans, developers, and trained by those developers. So there's a human element here, and the bias can also come in unconsciously. I'm not saying the developers are consciously developing it this way and training it this way, but sometimes we have subconscious biases that we are not aware of, and these subconscious biases end up somewhere in the data that we collect.

Speaker 2:

That's the interesting part of AI models: they pick up on every subtlety of bias that you have, even if you are not aware of it. It's like holding a mirror up to your face. For example, and this is just an extreme case, let's say you use AI systems for hiring, and you make all of the statements: we are an equal opportunity workplace, diversity, and the rest. And then it happens that, for some reason, what the AI shows is that the profile of the person you hire is a male under 35 from Connecticut. Why? Because that's what, unconsciously, you've been doing.

Speaker 1:

Some argue the data is the data, right? It's a matter of fact. So, you know, if I'm tracking my calories on MyFitnessPal and I decide I want to eat a pizza and chocolate cake, AI is going to predict that I'm going to gain weight, and it would be right. Now, it might hurt my feelings telling me that, but the data is the data. At some point, how do you trust the data? Is the bias coming in from the programmer, or is the bias in the data? Where is that living?

Speaker 2:

Exactly. So that's the main question. It could be, from the example that I gave you, that it's coming from the data, because you just have to look around and see who the employees are and say, oh, this is what's happening. And there has been research showing that if you use AI systems for hiring, it's very biased. So you have to be conscious of that. You're probably doing it without even being aware of it.

Speaker 2:

So that's one possibility. Or maybe it's not in the data; it's in the way the model was trained or developed. And this is why it is important to really have transparency, traceability and auditability of the development, for the data, for the code, the training and so on and so forth, because this is the only way you can pinpoint where the bias is coming from, whether it was the data, or the way it was trained, or the way it was coded. If you don't have this traceability and auditability, it's hard to know.
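
One concrete way to get the traceability she describes is to fingerprint each development artifact (dataset, code, trained weights) with a cryptographic hash, so a later audit can tell exactly which piece changed. A minimal sketch with hypothetical file names (stdlib only; not her specific method):

```python
import hashlib
import json
from pathlib import Path

def fingerprint(path: str) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Hypothetical artifacts from one training run -- the file names are
# illustrative, not a real pipeline.
record = {
    "dataset": fingerprint("train.csv"),
    "code": fingerprint("train_model.py"),
    "weights": fingerprint("model.bin"),
}

# Store the record; re-hashing the artifacts later shows which piece
# changed (data, code, or trained model) when an audit finds bias.
Path("provenance.json").write_text(json.dumps(record, indent=2))
```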

Speaker 1:

Regulators and legislators are requesting organizations to adhere to responsible AI, but not suggesting how to practically implement it. You've written about this before, and you have a revolutionary idea for how to implement responsible AI utilizing blockchain. Perhaps you can explain.

Speaker 2:

Going back to what I just said, you need to trace it, you need to audit it, and they want the transparency. They're asking for the data, they're asking how the model was trained, they're asking for information about the data that was used for the training and the development of the models, but they're not saying how. They just want you to do that. And then companies come with policies and guidelines, waving hands, so to speak, but no concrete, actionable solutions. If you want to be actionable and practical, then you have to have a solution, rather than just policies that are very wishy-washy, if I may say, where someone comes with this policy and that policy. We have to have something more concrete. Blockchain, in a sense, is everything that AI is not. It's transparent, it's traceable and it's immutable. Whatever goes into the blockchain cannot be erased. AI has been called a black box, and sometimes these models are black boxes. It's hard to interpret them, hard to explain them, and you need to explain them.

Speaker 2:

You need to explain these models for two reasons. One is that if you want to make a decision, you need to understand why you're making the decision this way or that way, why the output says, okay, do X. Why should I do X? You need to understand that. And also it helps you understand how to evolve the model, because the models are not static. They are updating and evolving, and you need to know what changes to make in the model to make it better. So if you don't know how the output came out, you don't know where to start. So this explainability and interpretability are important, and part of it is understanding how the model was developed. You need to understand the code, you need to understand the data, to trace and audit everything, and blockchain allows you to have this transparency.
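
The immutability she attributes to blockchain can be illustrated with a simple hash chain: each audit entry commits to the previous one, so editing any past record breaks every later link. A minimal, single-node sketch under that assumption (a real deployment would use an actual distributed ledger; the event names are illustrative):

```python
import hashlib
import json
import time

chain: list[dict] = []

def append_event(event: str, detail: dict) -> None:
    """Append an audit record whose hash covers the previous record."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"ts": time.time(), "event": event, "detail": detail, "prev": prev}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)

def verify() -> bool:
    """Recompute every hash; any edit to an old entry fails verification."""
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if body["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

append_event("dataset_registered", {"sha256": "<digest here>"})
append_event("model_trained", {"epochs": 10})
print(verify())  # True; altering any past field makes this False
```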

Speaker 1:

So, you know, one of the biggest, scariest things, and we've seen examples with celebrities and whatnot, is the rise of deepfakes. With these concerns, do you think blockchain technology, with authentication and an ability to verify digital content, is a potential way forward to eradicate this deepfake issue?

Speaker 2:

Yes, because one of the things the Biden administration is asking for in the executive order is to have a watermark, so to speak, for anything that was generated by generative AI. And I think the EU is also asking, in certain situations, especially with Gen AI, that content have this mark. And with blockchain it is even better, because it's cryptographic, it's sealed and you can't remove it. There are some companies working on that, like Adobe, for example, and Truepic. They're working on something in that vein: when some output, whether it's a video or a picture or a photo or whatever, is created by generative AI, it immediately gets a mark. The blockchain creates that mark and seals it with cryptography.
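
A simplified version of that cryptographic seal: hash the generated media and sign the digest, so anyone holding the public key can verify origin and detect tampering. A minimal sketch using the third-party `cryptography` package (the key and payload are illustrative; real content-credential systems, like the ones Adobe and Truepic work on, use richer signed manifests):

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical AI-generated media bytes -- illustrative only.
media = b"...generated image bytes..."

# The generator signs the content's digest at creation time.
signing_key = Ed25519PrivateKey.generate()
digest = hashlib.sha256(media).digest()
signature = signing_key.sign(digest)

# Anyone holding the public key can verify the provenance mark.
public_key = signing_key.public_key()
try:
    public_key.verify(signature, digest)
    print("authentic: signed by the claimed generator")
except InvalidSignature:
    print("tampered or unsigned content")
```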

Speaker 1:

So I'm going to have one more question, but before I get to that, I know you have written a book. So maybe, just for our listeners who want to learn more about some of your thinking on these technologies, what is the name of the book and what is it about?

Speaker 2:

The name of the book is Responsible Innovation: How to Responsibly Innovate and Strategically Implement Emerging Technologies. People think that innovation and responsibility contradict each other. They don't, because the definition of responsible innovation in general, I mean not just for AI, is how we can make sure that new technologies are working for us, for society, making sure that they're helping more than they're doing harm. That's the idea. We know that no technology is perfect. They can do great things, but they can also do a lot of damage, and we've seen some of it, as you just mentioned: the deepfakes, the hallucination, the discrimination, etc. The idea is that we enable more of the benefits and try to mitigate the harm as much as possible. It's hard to bring it to zero, but we can work on mitigating all the risks and challenges and all the harm it can deliver. So that's what responsible innovation is all about, and it applies to every technology, whether it's AI, Web3, the metaverse, quantum computing, etc.

Speaker 2:

And the reason why I'm thinking about emerging technologies as a whole is because every solution is an integration of technologies. It's not just AI. You have AI, and you have IoT, and you have quantum computing, and you have edge computing, and blockchain, which we just mentioned. So the solutions are basically holistic, what I call a full-system solution. We have to think holistically about how we are developing the solution. If you think about the Vision Pro, which just came out, you think, oh, it's just goggles, just hardware. No. Within that, you have AI. Within that, you have IoT. Within that, you have biometrics, and AR as well, and so on and so forth. So there are all kinds of other technologies incorporated in that, so to speak.

Speaker 1:

Sometimes it feels like all these technologies are getting built to build something, but we just don't know what they're building yet.

Speaker 2:

It seems this way because it's evolving. You know, in all the sci-fi movies, people think all these robots will take over the world, and I've written about that. Maybe part of the responsibility of innovation is to understand how that may happen and to prevent it from happening.

Speaker 1:

Well, I'll tell you, those Boston Dynamics dog robots are pretty scary. If you haven't seen those, go on YouTube. They're amazing. So, sadly, we've made it to the final question of the podcast, and we call it the Trend Drop. It's like a desert island question. If you could track only one trend in this intersection of blockchain and AI, what would it be?

Speaker 2:

So, going back to what I was talking about, how to manifest responsible AI with the Web3 solution. Since there are no standards out there, I believe, and I hope, that this Web3 solution will become a standard for how to practically implement responsible AI. So that's something I will follow and something that I believe will become a standard.

Speaker 1:

Well, I want to thank you so much. I had about 25 more questions that I didn't even get to, so thank you for your time and for enlightening us. And, of course, her book is available on Amazon or wherever you get your books.

Speaker 2:

Thank you so much for having me. Pleasure.

Speaker 1:

Coming up next week, we're joined by Alex Yavorsky, Managing Director at Jefferies, to discuss the future of fintech from an M&A perspective. It's a conversation you don't want to miss.
