AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology

Building Sovereign, National-Scale AI with Core42’s Edmondo Orlotti

World Wide Technology: Artificial Intelligence Experts Season 1 Episode 44

Core42 Chief Growth Officer Edmondo Orlotti dives into how Core42 and G42 are building sovereign, national-scale AI infrastructure and what enterprises can learn as they face their own challenges with data governance, GPU scarcity, fragmented environments and global regulatory pressure.

Support for this episode provided by: Thales

More about this week's guests:

Mike Trojecki is a technology leader with 25+ years of experience spanning security, networking, cloud and AI. A former U.S. Air Force Tech Controller supporting White House and Air Force One missions, he brings a deep commitment to precision and reliability. He now leads WWT's AI Practice, helping organizations build high-performance architectures and data-driven solutions that unlock real business value.

Mike's top pick: AI Use Cases: Balancing Speed and Risk for Real-World Success

Edmondo Orlotti is Chief Growth Officer at Core42, part of G42, an AI company headquartered in Abu Dhabi, UAE. His previous assignments were in AI & HPC at a global level with Hewlett Packard Enterprise and with NVIDIA's Professional Systems Group. He has been addressing the advanced simulation and analytics needs of industrial and academic environments since the very beginning of the deep learning age, from data center to edge. With over 25 years of experience in the IT sector and an interdisciplinary background covering marketing and sales roles, he has always focused on IT innovation. In the automotive industry, working with everyone from Formula 1 teams to large automotive OEMs, he has addressed their HPC & AI challenges from the manufacturing floor up to autonomous driving.

Edmondo's top pick: A Guide for CEOs to Accelerate AI Excitement and Adoption


The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.

SPEAKER_03:

From Worldwide Technology, this is the AI Proving Ground Podcast. Today, we're stepping into a part of the AI world that most people never get to see. Over the past year, AI has surged from pilot projects and proofs of concept into something much bigger, something closer to national infrastructure. Countries are racing to build sovereign clouds, enterprises are scrambling to secure GPUs, and everywhere you look, leaders are asking the same questions. How do we scale? How do we stay in control? And are we building something durable, or are we living through another bubble? To make sense of this moment, we're joined by someone who sits at the center of all of it, Edmondo Orlotti, Chief Growth Officer at Core 42, who has spent his career inside the world's most powerful compute systems, from early high-performance computing at HPE and NVIDIA to national-scale AI infrastructure in the Middle East. Few people understand the difference between hype and hard reality the way he does. Of note, Core 42 is the sovereign cloud and AI engine inside G42, the Abu Dhabi-based technology group behind some of the world's most ambitious AI programs, all built around a single idea: that nations and enterprises need their own secure, scalable foundation to thrive in an AI-driven world. Edmondo is joined by WWT's Senior Director of AI, Mike Trojecki, to translate what this means for enterprise leaders who are trying to stitch together cloud, data center, and edge environments into something that can actually support AI at scale. So let's jump in. It's great to be here. Thanks for having me back. Yeah. And Edmondo, welcome for the first time. Welcome to Worldwide Technology Global Headquarters. How are you? Good. Thank you for inviting me here. Absolutely. We're here to talk about a lot of things that have to do with AI today, with G42. And, you know, we'll be talking a lot about scale, sovereignty, so on and so forth. Edmondo, I do want to start on a little bit more of a newsy talking point. Your position with G42 has you in a unique spot where you're across government, you're across hyperscale, you're across enterprise. The idea of an AI bubble, which we're hearing more about these days, whether it's in the news cycles or just from leaders across the industry: do you think we're approaching an AI bubble, or what do you think about the environment right now?

SPEAKER_01:

Well, the future is hard to predict, but there are a few things that I believe I can say. The first thing is that the AI business is real. The demand is there. The transformation that is driven by AI is happening. Now, the transformation is obviously happening at different speeds according to the different markets. There are markets that are already deeply into AI, and in particular we can notice that in the consumer market. We don't realize it, we don't even see it, but AI is ubiquitous in the experience that we have as consumers. In the enterprise, it's obviously taking more time, but we see really the difference in the transformation that enterprises go through. It takes more time, but it is there and it's really happening. I would say really happening without us noticing that much, but we get used to it. Then obviously I'm speaking from a privileged standpoint, but I do see that, even if in different ways and at different speeds across the world, AI is effectively coming. So I wouldn't say at all it's a bubble. Obviously there is speculation, and obviously people want to make money quick on a stock market that has been going to the sky, but it's a real transformation. It's a real revolution, a new industrial revolution.

SPEAKER_03:

Yeah, absolutely. Mike, it's a little not apples to apples here, but from an enterprise side, expectations. Do you think we're starting to grow a bubble here in terms of what leaders are expecting from AI versus what they're getting now or maybe getting in the near future?

SPEAKER_02:

I don't think this AI bubble thing is gonna happen at all at this point. If you look at the way we consume the internet and the adoption of the internet, we're adopting AI at a much faster pace. Even though the investments being made are outpacing the investments made from an internet standpoint, we're still consuming AI at a higher rate. So I don't think that's gonna lead to an AI bubble. We're seeing a lot of enterprises now. We talk about building these AI factories. What's happening is we've built a lot of these AI factories, they're gonna continue to build these AI factories, but now enterprises are starting to consume AI. So I think that consumption is going to bring us to a point where it's not an AI bubble.

SPEAKER_03:

Yeah. Edmondo, a lot of the organizations that we deal with here at WWT talk a lot about an AI-first organization or an AI-ready organization. You're talking about an AI native nation. So I'm curious about the work you're doing at G42. What does an AI-first or an AI native nation actually entail, and how are you working towards that?

SPEAKER_01:

So an AI native nation is a nation that has embedded AI as part of its core activities. And this means obviously a regulatory framework, so countries that have defined how to approach AI, but it means also having embedded AI into, for instance, any of the governmental services. It is about making AI a component of any workflow in the operations of a country. And there are a few countries that have embraced that, and in those countries you can see the advantages of having an AI native approach.

SPEAKER_03:

Yeah. During your work doing this, have any challenges popped up that may have surprised you? I mean, you've been in the industry now for 30-plus years, I believe. So you want to think that you've seen it all, but has anything surprised you along the way here?

SPEAKER_01:

Well, I wouldn't say surprised, but AI is all about data, right? So if you don't have the right strategy on data, it's very difficult to do anything with AI. What I did notice, for instance, if I take Europe as an example, is in speaking with the governments. I will not mention the country, but speaking with one country about digitalizing and applying AI and making that country AI native, in the conversations they came back saying that in the public administration they had something like 12,000 different data lake sources. All of them not communicating with each other. So you can imagine that for that country to step into an AI native situation, it will take years.

SPEAKER_00:

Yeah.

SPEAKER_01:

Right? You have systems that don't talk to each other, you have data that is not properly managed, and the governance on the data in many cases is at best fragmented. So these are the big challenges. And when speaking in general about AI native countries, and I would say this applies also to the enterprises, having the right strategy on data is really the first step. It's the enabling step. And that's why, for instance, you see that countries that are AI native are countries that have had a proper strategy and the proper policy in place on how to manage their data, the citizens' data, and everything that's related to that. And in the UAE we are in a privileged position, because this was part of a strategic plan that started more than 10 years ago.

SPEAKER_03:

Yeah. I mean, Mike, fragmented systems, a messy data estate, regulation and policy. This sounds like a similar playbook to what we hear a lot with customers. What are you seeing on the enterprise side? Give us a little bit of a translation of what Edmondo's talking about and apply it to what that means for an enterprise right now.

SPEAKER_02:

Yeah, there's not much of a translation that needs to be had. I mean, it is clearly about the data pieces here. The one thing that I continually talk to people about is, you know, data is a big issue. You need that data, you need good clean data to actually get use out of AI, but it doesn't mean you need to have everything in one single giant perfect data lake, right? You want to be able to start with the data that you have and go from there. Don't let perfection stop progress from a data standpoint. So I would agree with everything you're saying. And even within an enterprise, and even in the public sector, the governance piece as well: you've got to have the regulation around what to do with the data, how you protect your citizens' data, how you're using it, but then also managing the scale of AI and doing it properly. That's an important part. If you do that, that'll stop some of the fragmentation. If you can manage the scale, use the data that you have, and don't try to boil the ocean, as they say, with your data, I think you're gonna be in a good spot. But I love the talk about data, because especially with what you guys are building from Core 42, a lot of people just think about the infrastructure and building out the infrastructure at that point. So the fact that you guys are thinking about the data piece, I think, sets you apart from some of these other organizations that are building these large GPU clouds.

SPEAKER_03:

Yeah, and as you're hearing Mike say that: you've had experience on kind of all sides of this, whether it be from the vendor perspective or enterprise, and now certainly what you're doing here with Core 42. Any lessons that you are learning and taking away from what the enterprise is doing on how to successfully scale AI?

SPEAKER_01:

Well, one good point is the fact that in the enterprise space, what we see is that if you want to wait to apply all the digital transformation to all the areas of the business and so on, you will never take off. And one of the problems is that very often enterprises run POCs, maybe on some data and so on, and maybe the results are not that great, and then because of that they stop all the projects, right? Or they slow them down because they don't see the real value. Implementing and embedding AI into the workflows and transforming the companies with AI obviously takes time. And I think it's more a cultural issue; people are not used to it. But at the same time, if we look at, for instance, OpenAI and ChatGPT: ChatGPT is the king in the world of consumers. But if you go and analyze those consumers, those consumers are employees at companies that use ChatGPT to increase their productivity, so they're effectively applying AI in the enterprise, but they're doing it individually, because the enterprise is not ready to do it. So they're exposing themselves to other types of risks and inefficiencies and so on. So AI is a journey, but there has to be a strong motivation and a strong direction to keep on going and understand that this does take time. I think that the advent of the agentic platforms will help enterprises a lot, because starting to have agents that support individuals is something that I believe we will get used to very, very soon. It will be much easier to consume, let's say, to leverage agents rather than directly managing the interfaces by themselves.

SPEAKER_02:

Yeah.

unknown:

Yeah.

SPEAKER_02:

And I was gonna say, on the agentic piece of this, we're now giving not just employees here at WWT, but across all of our customer base, the ability to create agents on your own. You don't need to be a coder to create these types of agents. So we're putting that power into the hands of employees and letting them, I don't want to say play with the technology, but create the things that are important to them that help them get their job done. Then you can look at that, and if it's something really interesting or unique, it's something you might be able to scale across the entire corporation. So the agentic piece, I think, is going to take off, and you talked about the bubble. I think the agentic piece is being adopted faster than even generative AI right now. So I think that is one of the key points.

SPEAKER_03:

Yeah, you've seen that spike in agentic on your end too, Edmondo? Absolutely, yeah.

SPEAKER_01:

Absolutely. And we see the proliferation of the agentic platforms as well. That's definitely, I think, what will make AI more successful, specifically in the enterprise world. Even at the personal, at the consumer level, I can't wait to have my agents for everything, right? It will be really like having dedicated personal support that will live with us all the time. Then, let's say, this could be a good problem to solve for some. What happens with agentic platforms is that your consumption of AI increases dramatically. It's basically an order of magnitude. And that is the motivation behind, at least part of the motivation behind, these massive deployments of AI infrastructure. Because if we think about, let's say, 10 agents per individual, and we think about how large, how wide we can go in the world, then you can do the math and check that effectively the infrastructure that is deployed today is nowhere near what's needed.
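
A back-of-envelope version of that math, in Python; every number below is an assumption chosen for illustration, not a Core 42 or G42 figure:

```python
# Rough estimate of how agentic workloads multiply inference demand.
# All inputs are illustrative assumptions.
users = 1_000_000              # assumed population of knowledge workers
chats_per_user_per_day = 20    # assumed chatbot-style usage today
agents_per_user = 10           # the "10 agents per individual" scenario
calls_per_agent_per_day = 200  # assumed background + interactive agent calls
tokens_per_call = 2_000        # assumed prompt + completion tokens

chatbot_calls = users * chats_per_user_per_day
agent_calls = users * agents_per_user * calls_per_agent_per_day

print(f"chatbot calls/day:    {chatbot_calls:,}")
print(f"agentic calls/day:    {agent_calls:,}")
print(f"demand multiplier:    {agent_calls / chatbot_calls:.0f}x")
print(f"tokens/day (agentic): {agent_calls * tokens_per_call:,}")
```

Even with conservative inputs, the multiplier lands around two orders of magnitude, which is the gap Edmondo is pointing at between today's deployed infrastructure and an agent-per-person world.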

SPEAKER_03:

Yeah. Well, I'm glad you mentioned platforms, because I've seen you say that the move from AI pilots to platforms is an important one. Does that tie into what you were just saying right there? Or maybe expand a little bit on why moving towards an AI platform is a good route to go, not only for you at G42, or at Core 42, I'm sorry, but for enterprises across the world.

SPEAKER_01:

Well, enterprises need, as I said, so it's about a journey, right? You need to start embedding AI into your workflows in a native way. At Core 42, for instance, we ourselves run a program. There is a team that is going and evaluating how we work on a daily basis, and where we can apply AI in our processes. But what this team is doing, they are redesigning the workflows to embed AI. They're not just applying AI on top as a patch, because that wouldn't work. It obviously takes time, but we go one use case at a time and we implement that, because we are also a company that has existed on the market for a while, so we have to, let's say, justify AI in our own company before going out. So the transition to platform is extremely important because, for instance, there are very nice examples of platforms that are now available in the enterprise world, and, as you were just saying, the employees can build agents on top of these platforms without any code, and those good agents that will be useful for more people then get adopted all around. There is in particular one experiment that is happening with the IHC group in the UAE, where all 1,400 companies have access to an agentic platform and they build on it. This is how you bring AI to fruition.
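
For readers who want to picture what an agentic platform is doing under the hood, here is a minimal, hypothetical sketch of the core loop: route a request to a tool and return the result. The tools and the toy planner are stand-ins for illustration, not any specific vendor's API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    """A capability the agent can invoke on the user's behalf."""
    name: str
    description: str
    run: Callable[[str], str]

# Two toy tools; a real platform would register connectors to internal systems.
TOOLS = {
    "lookup_policy": Tool("lookup_policy", "Answer questions about internal policy",
                          lambda q: f"Policy note for '{q}': expenses under $50 need no receipt."),
    "file_ticket": Tool("file_ticket", "Open an IT support ticket",
                        lambda q: f"Ticket opened: {q}"),
}

def plan(request: str) -> str:
    """Toy router; in practice an LLM chooses the tool from its description."""
    return "file_ticket" if "broken" in request.lower() else "lookup_policy"

def run_agent(request: str) -> str:
    tool = TOOLS[plan(request)]
    return f"[{tool.name}] {tool.run(request)}"

print(run_agent("Do I need a receipt for a $30 lunch?"))
print(run_agent("My laptop screen is broken"))
```

The point of the no-code platforms discussed above is that employees compose the tools and routing through a UI rather than writing a loop like this by hand.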

SPEAKER_02:

Yeah, I mean it takes not just one company, right? It takes a ton of different organizations to bring this together. And you mentioned what you guys are doing with the agentic AI stuff. It's similar, Brian, to what we're doing here with Atom and some of the things that we're building internally on the agentic platforms, from network assistants to RFP assistants as well. But I think one of the things that we're going to see from an agentic standpoint, especially when it comes to consumption: right now, when you're using gen AI and chatbots, it's really spiky. But when you get into agents, you're gonna have this consistent consumption across the board. You always want to be using those GPUs. From an agentic side, you look back and say, everybody's using these as chatbots for the most part. But we're starting to see people turn to AI, agentic AI, as a mentor, a thought leader. I think you saw one of the life sciences companies: their CEO said he uses a platform, I won't say which platform, to actually serve as a mentor for him. So he interacts with it on a daily basis. And I think those are the things we're going to see. As people start using AI as a mentor and start looking to it for advice rather than just to do something, I think we're gonna see adoption increase tremendously.

SPEAKER_01:

Yeah, no, absolutely. And without getting too technical, we believe that LLMs have now become a commodity, right? What we believe is that we are gonna see a proliferation of what we call the SLM, the specialized language models. They will be our advisors, and we will consume those SLMs through agents. I've seen this, for instance, in Germany with some very large manufacturing companies. They are basically collecting all their know-how into these SLMs, and the employees have access to the entire knowledge of the company, and this is a tremendous help. It does change the way you work. And then you can start having SLMs specialized in different fields. I was speaking to a startup here in the US that has, let's say, the backing of some major players, and they were telling me that they are building SLMs for the engineering world. So you're gonna have engineers that talk with other engineers; the difference is that these other engineers will be AI engineers, there to check and validate and speed up their workflows. I think in a way it's true. This has been discussed in the press for quite a while: if you look at the consultancy world, for instance, this is gonna change dramatically. Because you can have, and Sam Altman has referred to this many times, your own personal consultant in the specific domain that you need, straight from an AI model. And again, these are things that we will all get used to. If we think that smartphones are only 20 years old, I mean less, and now we ask ourselves how we could live without them. This is going to be exactly the same effect. In two or three years we will use agents and we will say, oh, how was the world before?
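
One common shape of the "company know-how in an SLM" pattern Edmondo describes is retrieval plus generation: fetch the most relevant internal documents, then hand them to the specialized model as context. A minimal sketch of the retrieval half with scikit-learn; the documents are invented, and the final step is a placeholder for whatever SLM endpoint an enterprise actually runs:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented stand-ins for a manufacturer's internal knowledge base.
documents = [
    "Torque specification for the gearbox housing bolts is 45 Nm.",
    "Paint shop line 3 requires a 20-minute cure cycle at 80 C.",
    "Supplier audits must be renewed every 24 months.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (TF-IDF cosine)."""
    vec = TfidfVectorizer().fit(docs + [query])
    scores = cosine_similarity(vec.transform([query]), vec.transform(docs))[0]
    ranked = sorted(zip(scores, docs), reverse=True)
    return [doc for _, doc in ranked[:k]]

query = "What torque do I use on the gearbox housing?"
context = "\n".join(retrieve(query, documents))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # in practice, send `prompt` to your specialized language model
```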

SPEAKER_02:

Yeah. Exactly. And Brian knows this, I'm a big movie buff. I talk a lot about movies and technology. And if you look right now, it's like The Matrix, where you could download all the knowledge you need to fly a helicopter or whatever. We'll be able to do that with an agent. Are we gonna be able to download it into a human? I don't think so. But can we download that knowledge into an agent, and can that agent become an expert in it? So that's the way I look at this: I can take these models and train these specialized language models in these certain areas.

SPEAKER_01:

Yeah, but you mentioned science fiction, so let me deep dive into that. Now we can have some fun. We cannot get into humans yet, but we can get into robots. Yep. So the robots are the arms, the physical extension of the LLMs, of the models. And we have dedicated resources working on the robotics world. We believe that robotics is going to be one of the additional next big markets coming in, because we're gonna have robots at home. We're gonna have assistants. Again, last week I was discussing with another company that has developed a specific skill set to be basically a nurse for elderly people. And that skill set is downloaded onto a robot, right? So what you describe from The Matrix, the skill set for piloting a helicopter, that's exactly what's happening. It was science fiction, and it's becoming a true reality. And I think this is gonna happen sooner than we expect.

SPEAKER_00:

This episode is supported by Thales. Thales delivers data protection and cybersecurity solutions to secure critical information. Trust Thales to safeguard your digital assets with advanced security technologies.

SPEAKER_03:

So we're getting into physical AI here. How does that change your mentality leading Core 42? What needs to happen from an IT perspective differently, if anything, or is it just a natural progression?

SPEAKER_01:

So it's important to mention that we have a company inside the G42 group that focuses on the robotics track. They're doing amazing work there. And robotics has been with us for a very long time. Now the real difference is the evolution from a mechanical standpoint. So we have robots that can really behave like humanoids. But robots have been with us for a long time. The Tesla car is a robot, right? The difference is that now, with the LLMs, the LLMs are a natural interaction between the human and the machine. The robot is the last physical piece. So the robots enable us to create an intimacy with that machine and also to get the machine to do physical things, like loading the washing machine or other tasks. If you look at the enterprise, robots have been in the enterprise for a long time. If you think of the manufacturing floor, robots have been there for a long time. I don't know how many people know that any iPhone is built completely by robots. There is not one single human hand that touches the iPhone before it's made. And these are semi-humanoid robots that build the phones, because you don't have fingers that are small enough to assemble all those components and so on. So robots have been in the industry for a long time. What you will have, for instance, if you go back to the IT world: these AI systems, these GPU systems that we deploy, starting next year already, are gonna have a significant amount of power going into the servers. You're gonna have 500 kilowatts, and maybe soon one megawatt, inside one single rack. This becomes dangerous for humans to operate, because you have liquid, and you have a lot of electricity and a lot of power in there. Robots will do the work. Robots will replace humans in those areas where either it's dangerous or it's a polluted environment, and so on. And this is happening heavily in the enterprise. If you go to Abu Dhabi, it's now quite common at a restaurant to have a robot as a waiter, taking the orders and everything. It sounds a little bit fancy or exotic, but it's effectively becoming the norm. So with all these digital receptions and all these things, we are gonna live in a world of robots. There is no question.
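
To make the rack-density point concrete, a rough calculation with assumed numbers (not vendor specifications) shows how quickly a single rack reaches the figures Edmondo mentions:

```python
# Back-of-envelope rack power; every input is an illustrative assumption.
accelerators_per_rack = 144     # assumed dense, liquid-cooled future rack
watts_per_accelerator = 3_500   # assumed GPU plus its share of CPU, network, losses
pue = 1.15                      # assumed facility overhead for liquid cooling

it_load_kw = accelerators_per_rack * watts_per_accelerator / 1_000
facility_kw = it_load_kw * pue

print(f"IT load per rack:       {it_load_kw:.0f} kW")   # ~504 kW
print(f"Facility load per rack: {facility_kw:.0f} kW")  # ~580 kW
```

At those assumed densities a single rack draws more power than many small office buildings, which is why hands-on service work in the hot aisle starts to look like a job for robots.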

SPEAKER_03:

Well, Mike, are you seeing that right now with clients that we engage with here at WWT? And assuming the answer is yes, apologies for making the assumption there. But what does that mean from the IT perspective? How do companies have to shift to enable it?

SPEAKER_02:

He's absolutely right. This is here, it's coming, it's going to be a part of our everyday lives. We've actually brought robotics experts onto the AI practice now. So from AI robots to digital twins, we're now bringing that expertise in. And in fact, we've got a number of customers that we're working with already on some of the bipedal humanoid-type robots. And hopefully we'll actually see a couple here, and maybe we can do an interview next time on an AI Proving Ground podcast with a robot, and it's all robots. We'll take a vacation. Exactly. There you go. So from an IT standpoint, an enterprise standpoint, what this means is inference becomes a lot more important. We train these specialized language models, like Edmondo said, but inference becomes more important. And that's where companies like G42 and Core 42 come in, because you've got to have the ability to execute at the edge. I'll use this kind of example; I used it in a conversation yesterday about the difference between training and inference. I coach my daughter's 10-year-old soccer team. Training is all of your practices: you're coaching, teaching them what to do. Inference is what happens on the field and how the players react on the field. Well, we need that reaction to happen in organizations now, to be able to act on all the training that has been done, and we can use these robots as the physical extension, as Edmondo said, to do it. So I am excited about it. We go back to movies, we talk about I, Robot. That's another one. But I'm excited about what that world brings for us. And you talked about advanced medical, being able to have robots for care, being able to go into dangerous situations. Anywhere you've got an environment where somebody could be hurt, you send in a robot to do it. And we have moved from science fiction to a reality. And as we continue to do that, I've always been a fan of saying that, hey, the people who write the books and come up with the movies, they're the ones that create this stuff. We as technologists just execute on it.

SPEAKER_03:

Go achieve it. Well, Mike, I'm loving all the movie references. So as I keep asking more questions, keep dropping more movie references. Edmondo, with all the robotics stuff and just what we've covered so far, we're talking about an explosion of data at the edge, in different locations. I'm gonna shift that into the concept of sovereign AI right now. Why is that becoming more and more of a conversation, at least as I understand it through the clients that we deal with? Why is that shift happening right now with sovereign AI and data sovereignty? What's the value there? Just walk me through where we're at with that.

SPEAKER_01:

So sovereignty is critical because AI works on data, right? Let's say that we start implementing robots in enterprises or even at home. You want to be absolutely certain that the data involved in all the processes stays secure and stays inside the boundaries of the countries whose regulations you have to be compliant with. That's critical, because you want to have a framework that explicitly lets you know how your data is gonna be used, for instance, and who controls that data, who has the authority on that data. That's why sovereignty has become an even more important topic than in the past, for countries and down to the enterprise level. I was mentioning this large manufacturing company in Germany. They are building their SLM, and this company has thousands, if not tens of thousands, of IPs. Where should they put that SLM when they have to run inferencing? If I were that company, I would be very reluctant to bring all my IP, everything, all my value, somewhere in the cloud, in a place that I don't have control of. So I think sovereignty is here to stay, for sure. The more the enterprises and the citizens understand what it means and what the value of data is, the more sovereignty will be a relevant topic. This goes along with security, because you can have sovereignty, but if you don't have security, your data will flow away anyway. And because of this, in G42 we have a company that is dedicated to cybersecurity, which is called CPX. They work on different areas of the cybersecurity world, but one of the areas is: how do you make AI secure? And when I say secure, it means a lot of things. For instance, in the case of the robots, the robot is a physical extension. Imagine that someone can hijack that robot. Bad things can happen. We have seen many cases in the automotive sector: cars that are connected, cars where there are critical flaws in the security, cars that can be remotely controlled. So security has to be designed in an embedded way into any AI workflow. And this is, again, something that in my opinion has not been given the right attention. Just to give you an example, we are working on a project on our side where we go beyond the traditional confidential compute type of environments. We protect the weights, so we protect these LLMs and SLMs inside the GPU memory. You don't have any way to get access there, and whatever happens, you can switch off access to anyone on the entire chain, from where the LLM is executed down to the network. These are technologies that are being developed and are not widely adopted yet, but this is a critical point. The moment that inferencing becomes the primary task for AI, and it's becoming that already, then protecting the execution of AI, protecting all the communication channels, the links and everything, becomes critical. There are countries, for instance, that have already embedded quantum-safe networks at the country level, so the entire backbone of the connectivity is quantum safe. You have to start thinking in that way, right?
So enterprises: I was in DC last week, and one of the conversations we were having is that basically, in three to four years or even before, RSA encryption may be gone.
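
As one concrete layer of the weight-protection idea, here is a minimal sketch, assuming weights are encrypted at rest with a key held outside the inference host. It uses the Python cryptography library as a generic illustration; it is not Core 42's confidential-compute design, which protects weights inside GPU memory:

```python
from cryptography.fernet import Fernet

# In practice the key lives in a KMS or HSM that the data owner controls.
key = Fernet.generate_key()
fernet = Fernet(key)

weights = b"\x00\x01\x02 stand-in for a serialized model checkpoint"
encrypted = fernet.encrypt(weights)   # only this ciphertext lands on shared storage

# The inference host (ideally a confidential-compute enclave) decrypts in memory
# just before loading; revoking the key revokes access to the model.
restored = fernet.decrypt(encrypted)
assert restored == weights
```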

SPEAKER_00:

Yeah.

SPEAKER_01:

I mean, imagine you are an enterprise: how much RSA are you using? Tons, in tons of areas. It means you need to start rethinking. This is going to be a very tough job for the CISOs, to revise security. But this is something that with AI becomes critical, because AI then becomes really an embedded portion of the workflows.
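
The first practical step of that CISO job is usually a cryptographic inventory. A minimal sketch, with placeholder hostnames, that records which endpoints still present RSA (or ECC) certificates and therefore need a post-quantum migration plan:

```python
import ssl
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

ENDPOINTS = ["internal-api.example.com", "portal.example.com"]  # placeholders

for host in ENDPOINTS:
    try:
        pem = ssl.get_server_certificate((host, 443))
    except OSError as err:
        print(f"{host}: unreachable ({err})")
        continue
    cert = x509.load_pem_x509_certificate(pem.encode())
    pub = cert.public_key()
    if isinstance(pub, rsa.RSAPublicKey):
        print(f"{host}: RSA-{pub.key_size} -> flag for post-quantum migration")
    elif isinstance(pub, ec.EllipticCurvePublicKey):
        print(f"{host}: ECDSA ({pub.curve.name}) -> also quantum-vulnerable")
    else:
        print(f"{host}: {type(pub).__name__}")
```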

SPEAKER_03:

Yeah, I was gonna follow up on that. I mean, you're talking about Q Day here with quantum. Exactly. And you're talking about AI being there to act as a tool. Mike, just build on that a little bit. Is that a natural progression, as we're talking about data and AI and sovereign AI, that it's just leading into the eventuality of quantum?

SPEAKER_02:

I don't think it's a path to get there. I think they're kind of running in parallel, not in serial. So if you look at Q Day, they're predicting Q Day now is going to be sometime in 2027, and we talk about that a lot. We've been spending a lot of time on the team on, okay, how are we reacting to Q Day with all the things we're doing in AI and on the security side and infrastructure? All these things will be vulnerable if we're not doing the right things, not just as WWT, but as a country, as a world. That is going to be an important play. Going back to the sovereign AI piece, that sovereignty is critical in an enterprise as well. That data is critical to their IP. And one of the things that we've tried to help our customers with is this concept of build versus buy and where they put their data. And typically, if it's your critical data and it's things that are important to you, like your IP, then you want to put that in a secure, on-prem, kind of sovereign-type environment. So we're seeing a lot of that. The second thing that you mentioned, Edmondo, was the security piece. And there are two components to that: there's securing AI and then using AI for security. When it comes to securing AI, you talked about this, especially at the GPU level. We have to make sure that these LLMs are secure. We have to make sure that they're not vulnerable, that the infrastructure is not vulnerable. And there are decisions where people are going to say, I've got to put this data either in a GPU cloud or in a hybrid-type environment; those are tough decisions for these companies to make, because there are financial differences between each of those. And I could talk about the robot stuff all day long, but keeping those safe from being hijacked, whether a robot or a car or something like that: if we don't have this stuff figured out before Q Day, then that becomes a real, real eventuality. So we've got to be able to move beyond that.

SPEAKER_03:

Yeah. I would love to talk about quantum for another several hours, but I know we don't have that much time here. So I'm gonna shift a little bit. We've mentioned a little bit about security, but I'm gonna talk about the element of speed here. Organizations around the world trying to advance their own AI strategies are in a constant struggle to balance trying to move fast with trying to move safely. Edmondo, what you're doing at Core 42 is certainly winding up to be a playbook for how to do that. How are you thinking about that balance between speed and security, where you're being pushed to move as fast as you can to win this race, but also doing it in such a way that the car doesn't come apart as you're driving it?

SPEAKER_01:

So let me start from two points. I just want to make a comment on quantum, because this week at Supercomputing you could start seeing quantum becoming a popular topic. And many people were making the comment, okay, quantum is the next bubble, right? I've always been very cautious on quantum, but from what we are seeing, quantum is coming: slowly, surely. There are obviously a lot of technology hurdles still to solve, but it is coming. The day that we're gonna have tens of thousands of qubits available from these machines is not that far. So when talking about Q Day, it's gonna happen. I don't want to frighten anyone, but it's definitely gonna happen. So better to be ready and to leverage the benefits of quantum. Now, when it comes to speed and safety, I start from one point. The AI race is not a race to implement AI, it's a race for survival. In this case I'm very blunt, very direct, straightforward. If an enterprise doesn't embed AI, it's gonna be dead. It's not gonna be around as a company anymore. Because the competitors that adopt AI will outpace those enterprises that fall behind. So I don't think it's even a question of deciding, okay, I slow down because I want to be safe and so on. Unfortunately, it's a tough race, and speed and safety have to go in parallel. Then obviously you start from the areas that are lower risk and so on. But I'm sorry to say, you can't wait. That's it. Obviously, you can say I'm biased, I work for a company that deploys infrastructure, but in my experience those companies that have implemented AI have massive advantages. So how can you survive? Let's take retail banking. How can retail banking survive if they don't adopt AI massively? They will just simply disappear, right?

SPEAKER_03:

To sum up your answer, it's kind of like I'm asking you, how do you balance speed and safety? And you're saying: tough, you have to do both. There is no choice. I mean, that's the reality many organizations might find themselves in. Understanding that the situation is tough and you're gonna have to deal with it, what's the go-forward?

SPEAKER_02:

Yeah, it's speed and safety, but it's also culture within a business on the enterprise side. Are people going to trust it? Are people going to learn how to use AI? And if they don't, then, like Edmondo said, they are going to fail eventually. But going back to this concept of speed and the adoption piece: the reason why we view Core 42 as so important, not just for what they're doing in the UAE but globally, is that companies like Core 42 enable the speed. And you already mentioned the security part of this, down to the GPU level. Being able to make decisions quickly: you don't have to wait to build an infrastructure. You're going to run into power and cooling issues, you're going to run into space and weight issues, but being able to work with somebody like a Core 42 enables that speed. It's not necessarily meant to be a pitch here, but the reality is that you guys are leading kind of worldwide in helping customers increase the speed at which they adopt AI and use AI.

SPEAKER_03:

Well, maybe just a little bit of that. You're enabling the speed; in a little bit of a summarized answer, how? How are you able to do that? What can you share with some of our listening audience that would help put them on the right path, or feel like they're on the right path, to accomplish that?

SPEAKER_01:

Well, we do that at multiple levels. Inside Core 42, we obviously have the arm that deploys and operates infrastructure, but we also have professional services as well. So you need to sit down with the customer and define what workflows to start with, what their strategy is on data. We help them through the journey. There are many companies where we have to go and explain what an agentic platform is, what agents do. They don't get it. And the only way for them to get it is when you show how it works. So we help them in this aspect, also with the contribution of our partners like WWT. That's the only way. And as I said, usually it takes time. It takes time to operate at the full scale of the enterprise. But I can't imagine a company that, for instance, today has not even started. Because those that have not even started, again, I'm sorry to say, will have big challenges. They will be outpaced pretty quickly. And when it comes to speed and security, also for the enterprises, I like to think of something like a Formula One Grand Prix. You have to go fast. You have to make sure that you stay on the track and you don't hit the wall, but you need to run very fast. That's the type of situation where we are now. There is a lot of safety embedded into Formula One right now, compared to a long time ago. But still, it's a race for survival. I've seen really tons of enterprises all across the world, and those that are more advanced in the journey are already experiencing very, very big advantages. The reason why I was mentioning the retail banking sector is that it is a critical sector where, really, if they don't use AI, they will go out of business. At the same time, they have to provide a high level of safety, of security. So that gives you a good example. I do want to have my financial advisor be an AI agent, and a few are already providing it. As for those that are not providing it, I don't even want to interact anymore with those old interfaces and clunky stuff and so on.

SPEAKER_03:

Yeah. Well, I know we're coming up on the end of our time here, and the two of you have been gracious enough to spend an hour with us to this point. Let's just do a little bit of a future-facing close. If we're talking again in 12 months, this time next year, what is gonna be that central talking point? Today we're talking, certainly, data, data sovereignty, AI strategy, move fast. Are we gonna be talking about the same things again? Are we gonna be leaning more into quantum? Are we gonna be talking about something new that I don't even know about yet? Mike, we can start with you, and then Edmondo, you can close us out.

SPEAKER_02:

This conversation around agentic is going to be around for quite a while. So I think we'll be talking more about agentic AI and the actual deployment of these agents, agents becoming part of the workforce, not just a simple assistant. That's where I see it. And I will tell you that with the team I have now, the stuff that we're building from an agentic standpoint and what we're doing here at WWT, we're just scratching the surface of what's going to be possible.

SPEAKER_03:

Yeah. Edmondo, agents too, or something else?

SPEAKER_01:

No, agents too, agents too. And maybe to add on that, I think we will see some technological evolutions. There will be many more AI factories 12 months from now that will enable these agentic platforms. I think another thing that we will start getting used to is the fact that LLMs will dominate the world of multimodal content. So videos and everything related to multimodality will become extremely popular. We will get used to that. Which is scary in a way, but we will have to look at any video, anything we see through a screen, and really question whether it is real or not, because it will be very difficult to distinguish. That will be the additional thing. But the agents becoming part of the workforce, that's definitely what I see in 12 months. Quantum, I think we need to wait another two or three years to really see the impact.

SPEAKER_03:

Yeah, but prepare now.

SPEAKER_01:

Well, I mean it's a rising wave, no? It's a wave that is rising. It's coming, and quantum is even more difficult. I mean, it's quantum physics. People need to get used to a completely different landscape of things. But there is already quantum memory, there is already work at the network level. There are amazing things that really bring us into science fiction movie territory. Yeah.

SPEAKER_02:

I'm walking into a meeting right after this one to discuss quantum computing. So yeah, it's something to keep an eye on, like you said. But we probably need a couple more years.

SPEAKER_03:

Yeah, I mean there's an equal parts exciting, scary, humbling future ahead of us, but we'll get there and tackle it together with the two of you.

SPEAKER_01:

It's exciting though. Yes, it's exciting. That's the other element. Definitely. We are transforming something.

SPEAKER_03:

I'm happy you're ending on an optimistic note there with exciting. Well, to the two of you, thanks again for joining. We'll have you on again soon. Thank you. Thank you. Appreciate it, Brian. Okay, thanks to Edmondo and Mike for joining us today. The race toward AI native organizations is clearly accelerating, and the window for thoughtful decision making is narrowing. Now is the moment to ensure that your architecture, your data, and your strategy are ready for what's coming next. This episode of the AI Proving Ground Podcast was co-produced by Nas Baker, Kerr Kuhn, Batool Kurik, Carolyn Stees, and Helen Messer. Our audio and video engineer is John Noblock. My name is Brian Felt. Thanks for listening, and we'll see you next time.
