First Trust ROI Podcast

Ep 38 | Mandeep Singh | What’s Next for Artificial Intelligence? | ROI Podcast

First Trust Portfolios Season 1 Episode 38

Mandeep Singh, Global Head of Technology Research at Bloomberg Intelligence, joins the podcast to explore how massive investments in artificial intelligence made over the past few years could drive future innovations and profits. From humanoid robots to self-driving vehicles, Mandeep shares his expert analysis on the potential impact of these technologies in the years to come.

----------------------------------------------------------------------------------------------
Subscribe Here to the ROI Podcast & other First Trust Market News
Website: First Trust Portfolios
Connect with us on LinkedIn: First Trust LinkedIn
Follow us on X: First Trust on X
Subscribe to the First Trust YouTube Channel
Subscribe to the ROI Podcast YouTube Channel

Ryan:

Hi, welcome to this episode of the First Trust ROI podcast. I'm Ryan Issakainen, ETF strategist at First Trust. For today's episode, I am very excited to be joined by Mandeep Singh, Global Head of Technology Research at Bloomberg Intelligence. There is a lot going on in the world of technology. Mandeep just returned from the Consumer Electronics Show in Las Vegas. We're going to talk all about innovation. We're going to talk about artificial intelligence and where some of the opportunities may be for those companies that are investing heavily in AI in the years to come. Thanks for joining us on this episode of the First Trust ROI Podcast. Mandeep, it is great to meet you, to put a face with a name. I have, of course, seen a lot of your research on the Bloomberg Terminal, and you are, for those that are watching, the Global Head of Technology Research at Bloomberg Intelligence. Before we came on, I was asking you about the Consumer Electronics Show in Las Vegas. You were there last week.

Mandeep:

Yes, it was really, you know, quite a spectacle in terms of the number of launches, especially the focus around robotics. And really, I mean, the Consumer Electronics Show is always interesting because they try to deploy a lot of the technology we hear about into these cool devices. So, yes, quite a show.

Ryan:

So I'm sure you've been before. It's something I've never done. Have you been going for, like, you know, a long time?

Mandeep:

Yeah, I mean there was a gap during the COVID phase. I didn't go for two or three years, but I have been going there a long time, and I would say this year's show was more interesting than the 2024 show. And the reason I say that is because of all the new things that, I guess, were talked about, from robotics to self-driving cars, and, in general, the emphasis on using LLM technology, the large language models, and generative AI across a broad swath of devices. We did put out a long report on our takeaways from the show, so happy to get into the details, but a lot of new stuff around LLMs and AI agents. I think that was a big focus this year.

Ryan:

That's great, yeah, and I definitely want to talk more about LLMs, about AI, and really what actually is going to generate profits. I mean, there have been massive investments made in the infrastructure and the development and the build-out, so I think that's definitely a topic we want to dig into. Before we do, though: 140-plus thousand technology people in Las Vegas, what is that like?

Mandeep:

Well, I mean, look, these are people from different backgrounds in terms of their interests around the show. Not everyone is looking to, you know, analyze companies or look at investments. Some of them are there just to formulate those partnerships that can help incorporate the latest technology in their products, and some of them are looking for product ideas. So, a lot of people, you're right, and they are very high-caliber people in terms of the exhibitors, what they know and what they are trying to showcase, and that's what makes it interesting. You have to pick your tracks and focus on, you know, things that you care about, but, at the same time, you can come across people that are very hard to find otherwise.

Ryan:

So was there any one thing that you can think of, whether it's AI-related or not, that just kind of blew everyone's mind at the show this year? Like, what's the coolest thing that you saw at CES this year?

Mandeep:

Yeah, I would say the coolest thing was a demo for a robot. Again, there were a lot of robotic demos, but I think, just in general, this robot could interact with you. It could really give you a sense of the future in terms of the humanoid form factor and what it could do in terms of, you know, the level of personalization and engagement. It could understand emotions, it could read your body language, and it could really help you do some of the things in your home that you're otherwise doing manually. Maybe it's not going to hit mainstream this year or the next couple of years, but it kind of gave you a glimpse of what's possible using AI.

Ryan:

So we're living in the era of the Jetsons, finally. We've got Rosie the maid. So I was thinking about one of the areas, because I'm trying to imagine having a robot in my house, and I'm really not sure exactly what it would do to change my life, to incentivize me, you know, to make that investment. But then I was thinking about all the help that people need as they get older in terms of, you know, home health care aides, that sort of thing. That seems like something you'd want a robot for.

Mandeep:

Yeah, look, and what the LLMs have shown us is these LLMs have knowledge of the internet, knowledge of the world that's all digitized, and if you distill that knowledge into, let's say, smaller models which can run locally on a humanoid form factor or any other type of edge device, it can be quite powerful, because it can understand general instructions. It can, you know, fetch you things in terms of doing a mundane task that it could be trained on. And, you know, it wasn't possible before, because when you look at Alexa or some of the prior, you know, conversational devices, they didn't have that level of AI embedded in them that they could be generic or go beyond talking about the weather. So I feel, with AI, we have come a long way in terms of training these chatbots, to a point where they can be quite intelligent in terms of understanding human language, and then they can be trained on tasks that are repetitive.

Mandeep:

I think that was the takeaway over there.

Ryan:

So it seems, as I've watched and listened to companies as they disclose their financial results and have their conference calls afterwards, that everyone really wants to be part of the glow of AI and large language models, and I've often wondered, you know, where does the line get drawn between when something becomes AI versus just maybe some evolution of technology getting better? Do you have any way that you think about that?

Mandeep:

I mean, the best example I can offer is self-driving cars, right? So we've all heard about Waymo launching in five cities last year. They're doing about 150,000 to 200,000 rides a week now. And look, when you think about how that inflection point has come.

Mandeep:

And Waymo is not all generative AI. I mean, it's a complex system that's augmented by, you know, LLM technology now, but you need almost 100% precision, because you're talking about, you know, somebody sitting in the car trusting that system to drive safely. And that's where, if it can solve that problem, if AI can be deployed for a problem that requires 100% precision, then you feel like it can do a lot of other things. And that's why, at CES, you could see humanoids being possible now, because we have solved the self-driving problem. I mean, if you're talking about a scale of 5 million autonomous rides a year, then I think the system is ready and the technology is there. Yes, there will always be edge cases that need to be solved for and addressed, but I think we all can agree that, you know, a lot of people are trusting these systems every day and riding. And Zoox had a demo in Las Vegas, where the show was, where you could ride from the airport to the convention center in an autonomous vehicle, and everyone trusted that system and it worked beautifully.

Ryan:

So anytime there's something new... I started my career right before the internet bubble burst, so maybe I'm a little bit overly sensitive. I'm always concerned and worried about hype cycles. You know, new technologies come out, you know they're going to be disruptive and change things, but I'm always worried about hype cycles and where we are in the hype cycle. So when it comes to, you know, some of these LLMs and AI in general, where do you think we are in that hype cycle?

Mandeep:

I mean, still in the early innings, because when you look at GPUs and the general availability of accelerated computing, the technology is quite expensive to deploy, and it's very evident from the spending of these hyperscalers in terms of their CapEx, and the ROI is still not very well established in terms of what kind of returns these companies are getting on that AI spend. With that being said, there are some noticeable use cases, you know, when it comes to the difference this technology is making, and so I wouldn't call it all hype, because there are tangible proof points when it comes to this technology, whether it's on the ad targeting side, or, you know, having campaigns created by AI, or just, you know, synthesizing intelligent summaries from a bunch of documents, or generating things based on a prompt. All this wasn't possible before, and these are the things that can make a difference in terms of how we work on a day-to-day basis, how much more productive we can be as knowledge workers, and I think that's the potential that everyone feels there is with generative AI and LLMs, and customer service. I mean, I came across a lot of examples at the show around deploying AI agents.

Mandeep:

Now, AI agents are nothing but, you know, chatbots that can do things end-to-end in terms of having a conversation and taking a follow-up action, because they have all the knowledge, you know, from the internet as well as at the enterprise level, because they have been trained on those sorts of documents, and they can find that needle in the haystack faster than a human agent can. And so that's the promise. But look, there will always be challenges. Can we overinvest in the near term? It's possible, but at least right now, from what I'm seeing, we are still supply constrained when it comes to these chips, and we are nowhere close to being on the side of having overcapacity or things sitting idle. So I definitely don't see that for now.

Ryan:

Do you have a sense of how many years out the spend, the capital spending, will need to slow down? Because, you know, it'll increase, but it can't increase forever. So at some point it needs to slow, and I just don't know when, and I don't think anyone really knows, but do you have a sense of when that could slow?

Mandeep:

I mean, I'm looking for that digestion phase, which I think will come in the next two, three years, for the companies that are spending. You know, Microsoft said they will be spending $80 billion in AI CapEx this year, a big number. And when you think about that scale, you know, going forward, it starts to eat into all the free cash flow that the company is generating. So investors will want to see how much that's contributing to the top-line growth. They've said AI inferencing is a $10 billion run-rate business. That's expected to double, in my opinion, to $20 billion this year.

Mandeep:

And look, I think as long as they keep being transparent with investors on how that's being used. And AI training and scaling laws are a big factor in this, because if the next version of the model is at least 20% better than the previous version, then you know that intelligence is being created through the model training, and so companies have an incentive to train. The moment we start to see a plateauing of the improvement in LLMs, that's where you could expect a pause, but right now I feel companies are being very creative when it comes to training these models. There is, you know, scaling at the time of inferencing, so that's the other thing that is keeping interest high when it comes to some of the novel approaches that these companies have come up with. But digestion in CapEx, and you could call it a slowdown, that's where you could expect that to happen in the next two, three years, because I don't expect years of 50%, 60% CapEx increases to continue for the next two, three years.

Mandeep:

There will be a flattening.

Ryan:

Most of that CapEx spend has been from the largest companies in the world, essentially the hyperscalers, building out data centers. Is that where the spending will continue, or will it flow downstream to smaller companies? Or is there really no incentive not to just use the services that some of the data centers have already set up?

Mandeep:

I mean, look, I think right now the constraints are in multiple places. So, yes, the data center capacity is a constraint, but also the power, and we know the power infrastructure is not easy to expand.

Mandeep:

You really have to make long-term decisions in terms of how do we build data centers that can have 10 times the power that they have right now? And it's not easy, because the power doesn't grow like that. It grows more in line with GDP. Plus, these chips need more power for training. So the other constraint you have is the infrastructure accompanying that data center, whether it's on the cooling side or the cables that are transmitting the power. Everything needs to be changed. Now, whether we start using nuclear or other options.

Mandeep:

I mean, all that is on the table, because the clusters of these chips are growing. I mean, Jensen talked about, you know, an AI factory with one million chips put together. Right now, the largest cluster we have, or have talked about, is a 100,000-GPU cluster. So to 10x that requires a 10x in power, which I don't think we have available right now. So there are all these practical constraints that require CapEx spend on different fronts, not just in terms of getting data center real estate and getting the chips, but also all the accompanying infrastructure, and I think that's where different companies will have to participate. The governments obviously will be involved, and that's why this is a much bigger theme than anyone imagined, at least a couple of years back.

Ryan:

Yeah, the power. I'm glad you brought that up, because that's something that we've often wondered about. It seems like nuclear power would be a great solution, and obviously, you know, Constellation reopening Three Mile Island to supply some power to Microsoft, that's something that can be done relatively quickly, within a few years. But, you know, you look at the most recent built-from-scratch nuclear plant in Georgia. It took something like 13 or 14 years to get that built, and it was something like $30 billion plus. And I know there are small modular reactors, but those aren't really commercialized yet, and they seem like they are years out. So this is something where I just wonder where the power is going to come from.

Mandeep:

Well, and that's the million dollar question in terms of how do we go about, you know, adding that power and you know how quickly can we do it?

Ryan:

Because it's a race, isn't it?

Mandeep:

It is a race, but the only caveat I would throw in there is if the model stops scaling. So all this is contingent on, you know, building a one-million-chip cluster. Why do you need to build a one-million-chip cluster? Because that could help train your next version of the model, which is smarter than the previous version.

Mandeep:

Now, if all the experts see, oh, we are plateauing in terms of model intelligence because we are running out of data. And look, all these models are trained on the entirety of internet data that's available right now. So, granted, the amount of data will grow, but it's not going to grow at the same pace at which they've trained these models so far, because that included the entirety of the internet. Now the pace of growth is much slower than all the data that's already available. So that's the big risk: can we rely on synthetic data, which is a term that's used a lot, where we curate data to train these models, or can we use inference-time scaling, where we basically use a different approach in terms of how the model answers the question, and it can use different paths to answering a question as opposed to just one prompt-based response? So all this is dependent on intelligence continuing to scale.

Mandeep:

Now, as we know, machine learning was there before large language models and generative AI came onto the scene. And what was great about generative AI and transformer-based models was that they were much more sophisticated than machine learning, which required a lot of supervised training. This could be unsupervised: you could pass it, you know, large amounts of data, and it still had the potential to generate a model that is in the billions of parameters. And it scaled beautifully. So the scaling laws are probably the single most important thing when it comes to carrying this wave and keeping that spend and interest in deploying generative AI. The moment we start to hear about a plateauing of that, that's the big risk. That's when everything will kind of come to a pause. I mean, I'm sure governments and companies will keep spending in terms of adding and deploying that infrastructure, but that sense of urgency, I think, will go away if everyone realizes there are limits to how much these models can scale.

Ryan:

So you mentioned synthetic data. Is that just basically derived data that is produced, where you've got some conclusion that's been reached and you assume that that's the data you then build upon? Is that kind of it?

Mandeep:

Yeah, I mean, basically, this is not data that's coming from any transaction. So a good example would be, I mean, Waymo and Tesla have collected real miles of data based on, you know, the algorithm-driven driving that has been done on the roads, and I mentioned five cities for Waymo, same thing for Tesla FSD. Now, if they had to use some other type of data that is currently not generated from real-life studies, you know, like if you're studying a protein structure, these are the different combinations that you could use to train an AI.

Ryan:

So, like the AlphaFold project, is that considered synthetic data?

Mandeep:

Yeah, I mean, exactly. And so you consider all permutations and combinations, and at the same time you have to weight the real-life data more, because that is the data that has been observed. But then, if you have to cover the edge cases, the AI has to know about all the other potential combinations.

Ryan:

Yeah, I think the whole biotech area of AI is fascinating, and it seems like there's a lot more there that can be known that's currently unknown. Compared to, you know, an agent that I call up because I need my flight changed, or something like that, where there are some diminishing returns to making that better and better.

Ryan:

You know, it gets to a point where it's good enough, right? But when you're trying to come up with a cure for diseases, I mean, and understanding how biology works, it seems like there's a lot more that you could actually understand.

Mandeep:

And I would keep the expectations low. I don't think AI will help you find a cure for diseases. It's more of an augmentation, something that will augment your knowledge work, whether you're sitting at a desktop and doing your work, be it any kind of work. So if you're a researcher, AI will augment your research, but I doubt it can find a cure for diseases on its own.

Ryan:

Yeah, it's a tool, right? Yeah, it is a tool, because if you have that AlphaFold library of proteins, then you don't have to spend months and months discovering what that protein's three-dimensional structure is, because you're 99% of the way there with that part of the puzzle. But then you have to do something with that.

Mandeep:

Yeah, and also it can understand that lingo a lot better. One of the things we've seen with AI agents is these agents are trained on all the conversations people have had over the years, whether it's audio or text conversations, and then they can understand what you are asking now as a result of that, because of all that wealth of training data. So it's the same concept everywhere. These agents build on what has already been observed. I mean, think of the different types of customer service conversations we have in our lives. If AI can, you know, learn from that, and they know how we ask a question and what the typical responses are, from an enterprise standpoint, that's huge. And the scale of these models allows them to learn from pretty much any and every type of conversation because of the number of parameters. I mean, we are talking about 400-plus billion parameters when it comes to, you know, the Llama model or some of these frontier models, and that's what gives them that wealth of knowledge in terms of understanding what someone is asking.

Ryan:

It seems like they have to. I have had the pleasure of being on a number of calls with customer service agents, and some are really good, some are really bad. So there has to be some sort of fine-tuning that goes on as these models are learning, right?

Mandeep:

Oh yeah, I mean, that's where all the time is spent. I mean, otherwise these models would be ready to, you know, be deployed for a wide variety of use cases right now. The reason why an OpenAI model is still not production-ready is because it needs to be fine-tuned according to your particular use case. If you're an airline, you have to fine-tune it with your customer service data set. If you are into other types of use cases, whether it's sales or a service desk, I mean, these are different types of conversations. So an LLM has that generic knowledge, but it needs to have those customized, fine-tuned versions of the conversations. And then you have to account for the hallucinations and make sure the responses are in line with your compliance goals. So all that is kind of very iterative, and that's why it takes time to deploy.

Ryan:

Is that the most difficult part, do you think, of developing a workable large language model?

Mandeep:

I mean, look, I think that's what the companies are spending time on, and some of them, because of the compliance aspects, will take longer just to test it out. And others are more out there, because, you know, even if an algorithm or, you know, a chatbot cannot answer all the questions perfectly, they're okay with that, because they just want to deploy this technology faster. So that's where, you know, every sector is different, every customer service use case is different, but I would say it's very iterative. There's no way you can think about all the edge cases right from the get-go, and you have to, you know, iterate on it.

Ryan:

That's very interesting.

Ryan:

So I wanted to ask you a little bit about regulations. It's shifting gears a little bit, but this is something that always comes up, and, you know, it's somewhat related to concerns about what decisions AI is going to be allowed to make at some point in the future and what potential harm could come from that, and balancing between adding too much regulation, so that it stifles innovation, and, on the other hand, making sure that safety is addressed. So do you have any thoughts on, I don't know, where you think that's going to go, or where it should go, even?

Mandeep:

Yeah, look, I mean, one of the things about these models is the prompt can be anything. You know, every company talks about multimodality, which is basically that you can give text-based prompts, video, audio, and the length of prompts has really grown manyfold since the time the first version was launched. So you can really game the chatbot into giving responses which obviously haven't been tested for, and that's where, you know, putting those guardrails in place is paramount. So every regulator, I'm sure, will be focused on what sort of guardrails these LLM companies have to implement, and then the companies which are deploying these LLMs in their products will do the same from their side. But I think the AI Act in the EU, and there are so many different approaches that the regulators are thinking about. Obviously we have a new incoming administration here in the US.

Mandeep:

I would be interested to see what the AI czar, I think David Sacks, would be looking at in terms of deploying AI. But nobody wants to curtail the ambitions of these companies when it comes to AI, which is why the hyperscalers are spending so aggressively, because I feel they are confident that the right guardrails can be implemented, and obviously these companies have a lot of data to customize these LLMs. I mean, think of the hyperscalers as having the data that you need to customize the LLMs, and so they are the right partners when it comes to deploying this technology, and they have a vested interest in making sure regulators are convinced that these LLMs can function in a way where they're not putting anything at risk and won't cause any harm when it comes to sensitive intellectual property.

Ryan:

It seems like these companies that are making these massive investments have a pretty secure moat. I don't know how you could have a new competitor, and add to that the potential for some regulatory capture, you know, where you're putting in regulations because you can comply with them, but there's no way a startup is going to be able to. Do you agree with that? Do you think there is a pretty wide moat here? It doesn't seem like new competitors can come in, at least for the hyperscalers.

Mandeep:

I mean, what's good is, if you think about even the hyperscalers, not everyone has their own large language model. So Microsoft is relying on OpenAI, which is a company with an independent $150 billion-plus valuation; Anthropic was recently valued at $60 billion on its own, and now it's partnered with Amazon. But that's the good part about this technology: we've already picked some winners, including the two names I mentioned.

Mandeep:

And then you have got Mistral and Cohere, so there are a lot of companies that are training their own foundational models and then partnering with the hyperscalers. Now, you also have a couple of hyperscalers that do everything. I mean, Alphabet has their own chip, they have the largest data set, and they have their own large language model. So I do think every company has to be looked at from a different lens, but the fact that we have got at least, you know, four or five foundational model companies, even though they are partnering with the hyperscalers, that's a good thing. I mean, these are already, you know, large companies when it comes to their private valuations.

Ryan:

What about intellectual property when it comes to just kind of using everything on the internet? I know that applies especially to certain forms of media, if you're doing something creative, making movies or something like that, but even getting news from news sources. I'm not really sure what to think about that, to be honest. Any thoughts on whether there is a potential that some of these AI companies will have to pay some sort of licensing fee to the companies that are producing news or something like that? I mean, I don't know, I'm not even sure what my question is for you.

Mandeep:

Yeah, no, actually, that has already started. If you think about OpenAI and Alphabet, they are paying for content right now.

OpenAI is paying for New York Times content. I know Alphabet has a big contract with Reddit to pay for their content, and so all these LLM companies, I mean, look, the way they trained the original version of their large language model, I'm sure they used a lot of content which wasn't licensed properly, and that's how they trained their first version and that's how the model got so good in terms of understanding everything on the internet. But going forward, and given these companies are well established now, they are paying for content, and I'm sure that will be the case more and more, because that's how you keep the model up to date, and there's more awareness about how important the original content is when it comes to deploying generative AI. So these companies that own content and have the intellectual property for that content are very conscious of the fact that they need to monetize the content through licensing now. So no longer can you scrape a website or some other source to train. That was the case, maybe, a few years back, but now it's not possible.

Ryan:

So put yourself five years in the future. Is there a specific application that you are most excited about, that you think five years from now we'll look back on and say, man, this is life-changing, this is so disruptive? You know, I think of what the iPhone turned into, I mean, the smartphone industry. We didn't know five years before the iPhone that that was going to be a thing, but it used a lot of technology, used the internet and so forth. So five years from now, is there anything you can think of that we'll look back on and just say, man, that changed everything?

Mandeep:

Yeah, I think we probably are in one of those moments, simply because the technology has so much underneath it in terms of new capabilities, and look, our form factor for smartphones and PCs may evolve as a result of that. I am excited about that humanoid device, or robot, however you may want to call it, because if it can do those mundane tasks at home, I'm sure people will be willing to pay for it. And self-driving cars, I mean, look, I think we enjoy our driving, but at the same time, when there are long lines and traffic, no one enjoys that. So if there was a way to trust these autonomous vehicles for a large portion of our driving, I think people wouldn't mind doing that. So all these are changes that I think will be a result of LLMs and generative AI and the advancements in AI, and that's what makes me excited about, you know, the changes going forward.

Ryan:

So that just brings to mind: if I'm riding in a car, no one's driving it, and I'm relying on a system to drive it, it makes me worried about cybersecurity. Should I be worried about cybersecurity? Because someone could hack into that car and want to crash me into another car. I guess my more general question, though, is: do we need to be worried about cybersecurity when it comes to some of these applications?

Mandeep:

I mean, look, there will be more data generated from all these different machines, and that was the promise of IoT and big data, but I think more so now, given we are talking about, you know, AI that will permeate our lives, and I think protecting the identity, protecting the data, is always very important when it comes to the cybersecurity side of things. I mean, there is a race going on between the nation states when it comes to developing AI and deploying AI, whether it's competing with China or any other nation. I think that will be paramount, and, as part of that, you have to protect your intellectual property if you're a nation that is investing a lot of dollars in R&D and developing your technology. So I think cybersecurity has always been important when it comes to digital information, and given it's going to grow manyfold with these technologies, I do expect the importance to grow.

Ryan:

All right, I'm going to throw you a curveball here as we start to wrap things up. One of the questions that I've really enjoyed asking over the last year and a half, since we've been doing this podcast, is: okay, you've been at Bloomberg Intelligence for a while, but let's say you went a different route and you weren't in finance, you weren't an analyst. What do you think you'd be doing right now?

Mandeep:

I mean, I would love to be involved with using these LLMs for developing an application and really reimagining how we can make better software or more intelligent systems. I think it's fascinating, yeah.

Ryan:

Yeah, very cool. All right, final question for you: any book recommendations? It doesn't have to be related to technology. What are you reading these days? Is there anything that you've read, or that's on the Mandeep Singh book list, that maybe you could recommend to viewers of the ROI podcast?

Mandeep:

I mean, I read a lot of journals. I don't get time to read a lot of fiction, so I don't have any good recommendations, but really you know a lot of journals related to tech, and that's something that I really enjoy because they come up with a lot of cool use cases that I can't imagine on my own.

Ryan:

So, you know, it's almost like that science fiction. Honestly, I think we're living in an age that's very close to science fiction, or at least, if you were to tell people 20 years ago some of the things that are being developed today, it's basically science fiction from 20 years ago. And my hope is it doesn't get into, like, the Terminator scenario, you know, where the machines turn against us. I don't think we have to worry about that, do we?

Mandeep:

No, it'll be a lot slower than what people expect. So, look, I mean, we will continue to talk about infrastructure, and then that great use case or application will come about in the next two, three years. But we're still running mainframe systems, you know, for a lot of the critical applications. So these sorts of things, even though, you know, they are transformative, it takes a lot longer to deploy these applications and bring about the change. But at the same time, I mean, that's existential, so you can't really ignore it. At the same time, it's not going to just change everything overnight. You know, people are smart. The ones who are making these decisions know the pitfalls of these technologies, and I have faith in, you know, the companies that are in charge, that they will make the right decisions. And, look, the governments will have the oversight and the regulators will do their thing. But on the whole, I'm hoping for a more productive future.

Ryan:

Well, that's a great place to leave the conversation. Mandeep Singh, Global Head of Technology Research at Bloomberg Intelligence, it's been great talking with you. Maybe we can do this again sometime. I really appreciate your time, and thanks to all of you who have joined us on this episode of the First Trust ROI podcast. We will see you next time.
