AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology

From AI Demo to Production Systems You Can Trust

World Wide Technology: Artificial Intelligence Experts Season 1 Episode 80


Most AI strategies look good in a demo. The real challenge is getting them to run at scale.

Recorded live at NVIDIA GTC 2026, this episode with Ragu Chakravarthi from Core42 and Kraig Ecker from WWT explores what it actually takes to move from AI pilots to production systems.

They discuss sovereign AI as critical infrastructure, what control really means across data and operations, and why trust has to be built into every layer. From governance and observability to regulated environments, this is what it takes to run AI in the real world.

The conversation also looks ahead to agentic AI, rising token demand, and the growing need for traceability and access control.

If you are working to operationalize AI at scale, this episode breaks down what most teams underestimate.

Support for this episode provided by: Forescout

More about this week's guests:

Ragu Chakravarthi is Chief Technology and Product Officer at Core42, leading AI infrastructure, platforms, and operations. He oversees large-scale AI cloud deployments across NVIDIA, AMD, and Cerebras, and has driven innovations like Compass and JAIS. His work focuses on enabling secure, scalable AI adoption, with emphasis on sovereignty, performance, and enterprise readiness.

Kraig Ecker is EVP of Global Service Provider Sales at WWT, leading teams that help the world’s largest telecom and hyperscale companies adopt next-generation technologies. With deep industry relationships, he has driven growth across AI, 5G, cloud, and edge, expanded WWT into new markets, and helped establish the company as a leading partner across the global service provider ecosystem.

The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions. 

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments. 

AI Hype vs Reality

SPEAKER_02

A lot of companies say they're doing AI. But if we're being honest, very few have answered a harder question: what exactly has to be true for AI to be trusted inside an enterprise, so that it's deployed and leveraged at real scale? Because for most organizations, the immediate task at hand isn't chasing the next model. It's figuring out how to build an AI environment that's resilient, observable, governed, and secure enough to support real workloads across clouds, jurisdictions, and, in the very near future, across fleets of agents. So on today's episode of the AI Proving Ground Podcast, which was recorded live on the show floor at NVIDIA GTC 2026, we're talking with Ragu Chakravarthi, Chief Technology and Product Officer at Core42, and Kraig Ecker, Executive Vice President here at World Wide Technology. Ragu is helping build the kind of AI platforms most enterprises are still trying to define, one designed with sovereignty, resilience, performance, and trust from the start. Meanwhile, Kraig works with organizations facing that same challenge from the enterprise side, where the question is no longer whether to adopt AI, but how to build the architecture, governance, and operational discipline to make it real. So this conversation is really about the gap between AI excitement and AI execution, and what it takes to close that gap. So let's jump in.

SPEAKER_01

Doing wonderful. Thank you. Yeah, appreciate it. Great show so far. Jensen was spectacular as usual. Yeah, well, no surprise there.

Sovereign AI Is Power

SPEAKER_02

Well, let's get right into the meat of it here. Ragu, a couple of months ago we had Edmondo Orlatti on, chief strategy officer over at Core42 with you. And one of the big themes he talked about was the idea of AI as a national capability, not just an enterprise feature or application. From where we sit here today at GTC, I'm wondering how you see that unfolding in terms of AI lending itself to that national capability.

SPEAKER_03

Yeah, I think this is a very important capability that we are building. And given the current situation, it's a lot more relevant right now, looking at the geopolitics and everything. I think it's got the equivalency of building a power grid. If you think about how power grids are built, they're not built for one or two people, right? They're built for an entire nation, they're built for various populations. They're also built with three things in mind. First, resilience: I need to be extremely resilient regardless of what happens. Second, it's all about scale: populations grow, and AI tokens are going to grow, as you heard about today, right? It's all about tokens now. And third, it has to be highly performant. Those are the kinds of things we are building into what we call the AI intelligence grid. Like a power grid, it's a grid of connected data centers, connected compute capacity, and connected infrastructure that's going to bring the resilience, scale, and performance that an AI economy needs. And this is 100% relevant right now.

The AI Power Grid

SPEAKER_02

Yeah, so resilience, scale, performance. Kraig, what does that mean, practically speaking, from an architectural perspective? Take what Ragu is saying and apply it to the enterprise setting or the service provider setting.

Who Controls Your AI?

SPEAKER_01

The reason I think it's so important, and where it really touches back down on the enterprises: think about competitiveness globally. Every country wants to be competitive. So AI is going to bring capabilities to education that are going to allow your students to get more educated, smarter, faster. Same thing with healthcare: allow you to cure disease, help the population out. Obviously, defense will fall in there. But as Ragu said, having this overall architecture that the rest of these industries can tap into actually raises the entire country's profile as it relates to AI. And so it's really important that there's that connection. I think a lot of times when we talk about sovereign AI, some people go directly to defense and competing with China or others. And really, it's more about how you raise the rest of the society up in all these different verticals. Even if you just raised healthcare and education, think how much better off we would be by having this kind of overall architecture. And as you said, having that resiliency and scale built out, just like the electric grid, is so important if we're really going to distribute AI across all industries.

SPEAKER_03

That's an important topic, right? Sovereignty. Yeah, like you said, a lot of people jump to assuming sovereignty means national security, right? But there are three types of sovereignty that we care about. One is obviously data sovereignty: data needs to be resilient and needs to be controlled. That makes sense. But there is also operational sovereignty: operations need to stay within certain boundaries. And there is a third thing around technological sovereignty, so that countries and other nations feel independent and are not tied to any particular tech. So we focus on all three, and then we deliver that, through a project we currently call Green Shield, to various nations. Every nation has its own constraints; they want to build sovereignty around their own people and within their own boundaries. So we deliver this in terms of what we call digital embassies, and that's an important topic I think we should get into.

SPEAKER_02

Yeah, unpack that a little bit and tell us, you know, kind of how you think about it.

The Rise of AI “Embassies”

SPEAKER_03

Exactly. So for us, as G42, we have a very audacious goal. Our vision is to bring AI to everyone, to democratize AI. And what does that mean? I think just like way back when oil was controlling a lot of the power, now it's data. Imagine putting data in the hands of everyday people and how much improvement it'll bring to their lives. That's the goal: how do we bring AI to everybody? There are obviously sovereign constraints around bringing AI to just any nation. So we are delivering this concept called digital embassies. What is an embassy? Going back to the definition: an embassy is a private jurisdiction in a foreign land for my country, where I have full control over that compound or that particular land, and whatever happens in there is not governed by that foreign country. Now think about that concept for digital assets. We in the UAE can build a digital asset in the UAE because we have the capability and we have the resourcing. More importantly, we also have the government-to-government relationships with various other nations to deliver their AI capability from the UAE. Our goal is to build and perfect in the UAE and deliver that AI capability, or compute capability, to various nations. And we have it right there: we have the power, we have the resources, and we have the people to build the embassies right there. So that's the concept we are trying to get into.

SPEAKER_01

Yeah, you know, I had the opportunity and the fortune to go over to the UAE, and one thing that you talked about and just touched on briefly is the leadership. From the top down, I was impressed, even before AI really opened up to the world, by the digital transformation that's going on in the UAE. Where we struggle with things around our government, from getting a license to buying a house to titles, everything is digitized there. I went there once, and the next seven times I was there, I never pulled a passport out, right? Good or bad, I loved it. Sounds great. So there's the ability to just flow through, and then you start to unpack their applications, how marriage licenses are done or other things you'd have to interface with the government for, and it's all digital. And I think there are a lot of similarities back to when you look at a CIO or a CEO or a board: they really have to look at those same principles that you just talked about, that the UAE is applying from a country perspective. It's the same thing with a corporation. How do they make sure that they're putting the right governance in place, the right trust capabilities, making sure that their infrastructure is resilient? They're not just buying a bunch of GPUs and picking a model. I hate to say it, that was two years ago. And by the way, models are important. You see Anthropic's done something new, and, you know, Grok will be tomorrow. But if you're just focused on chips and the model, you're missing all of the architecture and the infrastructure that has to go with it to actually deliver the outcome that you want.

SPEAKER_02

How did you break through there? You're a certain level ahead of what Kraig was describing just a moment ago. What types of challenges did you encounter along the way? And how did you break through to make that shift, so that you are bringing AI to the people?

One API to Run It All

SPEAKER_03

Yeah. So multiple things, right? For me, it's about multiple use cases. Every person is different, every industry is different, and every country is different. So one of the main challenges was that one thing doesn't fit all. One size doesn't fit all, one silicon doesn't fit all, one network doesn't fit all, one location doesn't fit all. So we believe in agnostic, heterogeneous environments, and I think that will solve a lot of the problems in terms of use cases. What I mean by that is: NVIDIA is top tier, obviously. NVIDIA is going to be the multi-use-case-oriented silicon. But we also believe in low latency, and Cerebras, where we have a lot of investment, can really bring 20x the performance with its silicon. And at the lower end, there are use cases where I really don't care about latency so much, but I'm really cost sensitive, maybe in some of the nations. So we also have the lower-end-performing but highly cost- and power-effective silicon in there. And then, at every layer, not one model is going to solve all the problems. GPT is wonderful, Anthropic is great, but then there is a whole bunch of open source models, both from this nation and the US as well as from other nations, which help solve a lot of the problems. And open source and open weights are better. So we bring that heterogeneous environment where you can get multiple silicon, you can get multiple models, and also multiple locations. Do you want a public cloud? Most of the workloads can run in the public cloud. But when it comes to sovereignty, which was our main problem, you cannot run it in the public cloud. So we also have heterogeneous locations. And all of this sits behind a single API. You don't need to rewrite any logic or any code just because you're going from a US-based GPT model to an open-source UAE-based model. It's all the same code.

SPEAKER_02

Yeah. I mean, Kraig, from your perspective, how important is that diversity in model and infrastructure? What's the value that diversity brings to any number of organizations?

SPEAKER_01

Yeah, I mean, I think the value in what he was just saying is that it's not about the model, and it's really not, forgive us, just the chips. It's: have you built the right orchestration? Have you built the right governance? Maybe it is latency, but think about the ability to scale inference. It was a clear sign to me when NVIDIA bought Grok. You know, a year ago they weren't talking about inference as much, right? And now inference is that next frontier, because it's about how you get that workload closer to the customer and really model out that use case. But it's building that platform around it, as you said, a single API, being able to tap into different models. Models are going to keep leapfrogging. There are going to be dozens, if not hundreds, if not thousands, of new models coming out. And so this is the thing when you take this back to an enterprise: they need to make decisions on how they're going to run this infrastructure. All the infrastructure, not just where they're going to put their compute and storage and network. That's obviously important. But when I say infrastructure or architecture, it's really all those other pieces. How is security layered on? How is governance layered on? Can they operate this? Can they move between models, as you just articulated? That is really the importance of having this stack and this architecture. And I think right now people are focused on power, people are focused on what chip they might have or what model, but they really have to get higher up in that stack if they're really going to scale AI inside their organization.

SPEAKER_02

Yeah. Ragu, that interoperability and that diversity: is that becoming easier or more difficult as AI advances at such a rapid clip?

Why AI Networks Break

SPEAKER_03

I think we are on a mission to make it easier. It's very difficult to make this happen, but we are on a mission to make it easier. And the most important thing in all of this, as I see it, is the connectivity. The networking. That is the second biggest challenge that we are having. For example, in inferencing, when it's day over here, it's night somewhere else in the world. When it's night over there, it's day over here. And obviously, a lot of real-time workloads don't need to run during the night; people go to sleep. So how do we move workloads around to make the infrastructure more efficient and still serve the entire world's needs? That is what we are looking at. And the main thing there is connectivity. Not many of our competitors are focused on interconnecting their data centers and providing a single plane from which you can serve a lot of the tokens. That's one of the main things we are focused on. And that interconnection is a hard problem to solve. That's one of the main issues we are thinking about right now.
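The day-night workload shifting Ragu describes is sometimes called "follow the moon" scheduling: deferrable inference moves to wherever capacity is idle overnight. A minimal sketch, assuming a fixed set of regions with simplified whole-hour UTC offsets (real schedulers would also weigh capacity, data residency, and transfer cost):

```python
# Illustrative only: route deferrable work to a region where it is currently
# night and GPUs are likely idle. Region names and fixed UTC offsets are
# simplifications for the sketch.
from datetime import datetime, timedelta, timezone

REGIONS = {"uae": 4, "us-east": -5, "singapore": 8}  # UTC offsets in hours


def is_night(utc_now: datetime, utc_offset_h: int) -> bool:
    """Night window: 22:00 to 06:00 local time."""
    local_hour = (utc_now + timedelta(hours=utc_offset_h)).hour
    return local_hour >= 22 or local_hour < 6


def pick_region(utc_now: datetime) -> str:
    """Prefer a region where it's night; fall back to the first region."""
    for name, offset in REGIONS.items():
        if is_night(utc_now, offset):
            return name
    return next(iter(REGIONS))


# At 18:00 UTC it is 22:00 in Abu Dhabi, so the batch lands there.
region = pick_region(datetime(2026, 3, 1, 18, 0, tzinfo=timezone.utc))
```

This only works if the data centers are interconnected with enough bandwidth to move the workload, which is exactly the fabric problem the conversation turns to.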

The AI Trust Layer

SPEAKER_02

Absolutely. A little bit of a pivot here, but maybe not much. You talked about the three things: resilience, scale, and performance. Another key aspect for you, and I've seen you author a couple of articles on it, is trust. You've talked about how trust might be one of the biggest factors right now, not just model choice or infrastructure. Unpack that a little bit in terms of what you mean by trust. And I've heard it from other places too. Why is it becoming more of a piece of the conversation?

SPEAKER_03

I think fundamentally, there is a lot of talk around responsible AI. There is a lot of talk around how you build boundaries and help people, right? We believe we build trust in a couple of ways. One is that we built a proper governance layer. You were mentioning governance. So we build guardrails so people don't hurt themselves, knowingly or unknowingly. We have a product called Insight. This Insight product helps you not only apply sovereign controls, but it also enables security guardrails, and it creates an easy way for you to land yourself in a public cloud or a private cloud. With Insight, we have this concept called landing zones. You can deploy Insight in a private cloud or a public cloud, and you can start with a template that's already pre-built, or you can have your custom rules and custom controls built into it, so that you can say: whenever I go out of bounds with anything, warn me or block me. So we give you that level of trust, both in a public cloud and in a private cloud, so that you cannot hurt yourself. That level of trust is needed. And we also give them flexibility in terms of: do you want to be more cautious or less cautious? Maybe at the beginning of an AI program you want to be less cautious, more open, so that you can innovate. And then as you go to production, you build trust with customers as well as with the infrastructure provider, because they're a key factor, and we can tighten up more of those controls. So that's one way we are tackling that, in terms of flows.
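The "warn me or block me" guardrail idea can be sketched as a small rule engine: each policy rule inspects a request and fires either a warning or a hard block. The rule names and request fields below are invented for illustration; the episode does not show Insight's actual configuration.

```python
# Hedged sketch of warn/block guardrails. Policy shape and field names are
# assumptions made for the example, not a real product schema.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Rule:
    name: str
    violated: Callable[[dict], bool]  # inspects a request descriptor
    action: str                       # "warn" or "block"


RULES = [
    Rule("data-must-stay-in-region",
         lambda req: req["data_region"] != req["allowed_region"], "block"),
    Rule("large-export",
         lambda req: req.get("export_mb", 0) > 100, "warn"),
]


def evaluate(req: dict) -> tuple[bool, list[str]]:
    """Return (allowed, warnings). Any 'block' rule that fires denies the request."""
    allowed, warnings = True, []
    for rule in RULES:
        if rule.violated(req):
            if rule.action == "block":
                allowed = False
            else:
                warnings.append(rule.name)
    return allowed, warnings


# A compliant request with a large export: allowed, but with a warning.
ok, warns = evaluate({"data_region": "uae", "allowed_region": "uae", "export_mb": 250})
# A request whose data leaves the allowed region: blocked outright.
blocked, _ = evaluate({"data_region": "us", "allowed_region": "uae"})
```

The cautious/open dial Ragu mentions maps naturally onto this: early in a program, more rules run as "warn"; closer to production, the same rules are flipped to "block".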

Sponsor: Forescout

SPEAKER_01

Yeah, let me jump in on this too. You think about compliance, which you'll hear a lot about, and then truly having this trustful state around AI. Compliance will always be: can you run something? Trust would be: should you, or do you feel comfortable enough to run it? Right. And we were talking earlier about having the tool sets that allow you to review what AI is doing within your environment. So you have a hundred agents out there making decisions, or maybe you're even chaining multiple agents together in an agentic kind of way. Well, how do you go through and audit that? There will be mistakes, right? And I equate this to the fact that we will be hypersensitive about AI making a mistake, but yet we have new employees every day who make mistakes. Long ago, 30 years ago, I brought down a network by putting the wrong things in, right? We've all done that if you've ever been in technology. So you bring a new employee in, and you train them. And it takes three months, six months, nine months, twelve months to continue to allow them to understand what their job capability is. It's going to be the same thing with the agents. But where the human may say, well, this is what I did, here's what I typed in, here's how the mistake was made, you now need the ability to audit and have that traceability of what AI is doing. And so again, it goes back to this foundational architecture. And, you know, you were talking a little bit about how some of this is built inherently into the stack that G42 has, that Core42 has. Enterprises need that across their workloads as they roll out.
And that's how you'll move from proofs of concept to really start moving this at scale: everything else that you referenced about inference at scale and resiliency and security, but then also having this model where you can truly trust what's happening inside your AI architecture.
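The traceability Kraig describes, asking an agent the same "what did you type in?" question you'd ask a new employee, amounts to an append-only audit trail of agent actions. A minimal sketch, where the record schema is an assumption made for the example:

```python
# Sketch: every agent action is appended to an audit log so a mistake can be
# reconstructed later. The entry schema here is illustrative, not a standard.
import time


class AuditLog:
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, agent: str, action: str, inputs: dict, output: str) -> None:
        """Append one immutable record of what an agent did."""
        self.entries.append({
            "ts": time.time(), "agent": agent, "action": action,
            "inputs": inputs, "output": output,
        })

    def trace(self, agent: str) -> list[dict]:
        """Replay everything one agent did, in order: the audit question."""
        return [e for e in self.entries if e["agent"] == agent]


log = AuditLog()
log.record("billing-agent", "issue_refund", {"order": "A-17", "amount": 40}, "refunded")
log.record("triage-agent", "route_ticket", {"ticket": 9}, "queue=vip")
log.record("billing-agent", "issue_refund", {"order": "A-18", "amount": 900}, "refunded")
# trace("billing-agent") returns both refund entries for review.
```

With chained agents, the same log lets you follow a decision across agents by timestamp, which is the difference between "the AI made a mistake" and "here is exactly which step made it, with which inputs."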

SPEAKER_00

This episode is supported by Forescout. Forescout is the only automated cybersecurity company that continuously identifies, protects, and ensures the compliance of all managed and unmanaged assets: IT, IoT, IoMT, and OT. So you can more effectively manage cyber risk and mitigate threats.

SPEAKER_02

Yeah, I'm happy you brought up agents. I'm going to put a pin in that for just a second. Ragu, one of the things that you talked about when we were talking last week was how that trust can lend itself to adoption. We were looking over that Microsoft diffusion report, which said the US may be leading in infrastructure and capability, but it's not necessarily leading in adoption. And the UAE, in fact, had just phenomenal numbers. I'm wondering if you can talk about how that trust parlays itself into adoption, which is ultimately such a central question for so many organizations out there.

SPEAKER_03

Yeah, I would recommend everybody go through the latest Microsoft Diffusion report, which was published late last year, in December. They really go through the entire world, and they did a good survey. The US, obviously, is the producer of all of the AI technology. We make the best products, hardware, software, everything, right? But from a consumption standpoint, the rest of the world seems to be catching up, or in fact leading. In the UAE, for example, 64% of the population consumes AI on a daily basis, whether it's through applications or through ChatGPT in whatever format. 64%, and that's the leading nation. And there are other nations in there; what I heard is South Korea has jumped up like 16 spots recently, and South Korea is catching up too. The adoption and the usage is what G42 and Core42 are focused on. How do we enable the usage? What we found out is that the trust UAE citizens have in the applications and the infrastructure, and the weight the government puts behind it, is a major factor in adoption. So for AI to be adopted more practically in every place, I think trust is what will make that difference. And anybody, whether a CSP or any software provider, as long as they build that trust with the end user, they'll be the winners.

SPEAKER_02

Yeah. I want to get back to that passport example that you mentioned earlier, when you went to the UAE however many years ago and had that seamless experience. With the UAE, and Core42 by extension, making that experience more seamless for AI, do you think that lends itself to that adoption rate?

SPEAKER_01

It does. And you know, as you said, the government is not only pushing it and demanding it; it's really been part of the fabric of the nation for many, many years. The UAE might be, if not the fastest-growing nation, close to it over the last 40 years. When I would go and visit, you'd meet people who'd say, if you look out this window, there was nothing here, and now it's one of the most spectacular cities between Abu Dhabi and Dubai. So I think people are growing up with that digital aspect of technology, and it's pulling the citizens along because the government is investing in these areas. And I think right now, even in the US, AI somewhat has a PR problem. People are scared of it. And by the way, because of the political divide, someone might be using that for their advantage. I was just listening to a podcast; they said there were 23 different data centers shut down because of protesting, right? There's a lot of negative being pushed out there that may or may not be true. All of that is going to impede our ability to adopt AI and have that absorption, and that goes back to trust. People don't trust that it's not going to hurt the environment. People don't trust that it's not going to cost them their jobs. And I love this because it's been proven out time and again: it's Jevons paradox. Essentially, the more you give to someone, especially as these technology changes have happened, you won't need fewer coders. You're going to need a thousand times more than what you have today because of the growth you're going to have with this technology. No one wants to go back. When the PC came out, no one wanted to go back to pencil and paper, right? Like, no, I'm good, I don't want to use the PC,
I'll go back to pencil and paper. It's the same thing we have here with AI. But you do have to put it out in a way that consumers, really at the business level, not the CEO, not the board, not even the VP level, but the consumer level of the business, feel is safe, feel is going to help them in their jobs and their capabilities, not cost them their jobs. So there is a total trust aspect to AI that I think we're missing, that the UAE has. Singapore too; I haven't been to Singapore, but I've always heard it's very progressive in technology over the last 20 years. I think we in America need that same thing, and probably across Europe as well. That's true.

AI Meets Regulation

SPEAKER_02

Politics aside, Ragu, we've talked about that experience lending itself to driving trust in AI and thus driving adoption. What role does making the right infrastructure decisions play in driving all of that forward as well?

SPEAKER_03

Yeah, infrastructure. To me, I look at it more top-down. To build trust, to build sovereignty, to build these kinds of capabilities, what kind of infrastructure would you need? In all of this, one half is definitely what the infrastructure provider does; the other half is what the regulator does. So there needs to be a proper marriage between the regulator and what the infrastructure provider does independently. We are actually influencing the regulators in terms of how exactly they should think about what is protecting versus what is curtailing the innovation that's going to happen in the country. So that is an important thing. From an infrastructure standpoint, to build trust, I talked about our one product, which enables controls, whether sovereign controls or security controls. That's one thing. The second thing is all about being very transparent. We have a lot of observability metrics and telemetry that we collect, and we automatically feed that back to the end user to say: this is where your model rests. So important. This is exactly where your model ran, this is how we produced the result, and this is where the data came from. And by the way, we kept the data in a sovereign manner in your choice of location. Then we feed that same thing to the regulator. Now the regulator knows how this platform is operating. And there are two types of regulators that we deal with. One is the local regulators, whether in the UAE or the individual nations we serve. Second is the US regulator, because we have implemented a very innovative regulated technology environment. And this was actually called out by the US Department of Commerce as a very novel idea.
So our regulated technology environment collects a lot of the data and feeds that information back to the US as well as to the local regulators: who is the end user using it, what is the origin, and what kinds of models do they use? Obviously, the US has a vested interest in the regulated technology not getting into the wrong hands, whether that's rogue actors or bad nations. So we enable that, and it builds a lot of confidence, not only with the end users but also with the regulators. So that's really helping out. And that's a major criterion for us. Yeah.
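The dual reporting Ragu describes, the same telemetry record serving both the customer and the regulator, can be sketched as two filtered views over one record. The field names below are illustrative assumptions; the episode does not specify the actual telemetry schema.

```python
# Minimal sketch: one telemetry record per inference, with a customer view
# (full detail about their own run) and a regulator view (usage and origin
# only). All field names are assumptions made for this example.

RECORD = {
    "model": "jais-70b", "model_location": "abu-dhabi-dc2",
    "data_location": "abu-dhabi-dc2",   # data stays in the chosen jurisdiction
    "end_user": "acme-health", "origin_country": "AE",
    "prompt_tokens": 512, "completion_tokens": 128,
}

CUSTOMER_FIELDS = set(RECORD)  # customers see everything about their own run
REGULATOR_FIELDS = {"model", "end_user", "origin_country", "model_location"}


def view(record: dict, fields: set[str]) -> dict:
    """Project a telemetry record down to the fields one audience may see."""
    return {k: v for k, v in record.items() if k in fields}


customer_report = view(RECORD, CUSTOMER_FIELDS)
regulator_report = view(RECORD, REGULATOR_FIELDS)
# The regulator sees who used what and where it ran, not the workload itself.
```

The design choice worth noting is that both reports derive from one record, so the customer's view and the regulator's view can never disagree about where a model ran.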

Your AI Foundation Isn’t Ready

SPEAKER_02

I mean, Kraig, is that a blueprint that you think extends to other organizations? Because so many organizations that we deal with are not dealing with as modernized a stack as Core42, maybe. So is there a sequence to modernizing that infrastructure so you're not dealing with so much tech debt? How should organizations think about modernizing to get to where Core42 is?

Building AI From Scratch

SPEAKER_01

Yeah, I mean, uh to me, I think it is the most critical piece of this. And again, as I said earlier on, people are focused on the chips, maybe and the models are choosing. But in order for this to really scale for an enterprise, they truly have to build a complete architecture, kind of have a view of the end game. And that may in the in the short term slow them down a little bit because look, boards are telling CEOs, CEOs are telling their teams, we've got to move on AI. So they might move to a pilot or they might move to a certain project. But without that foundation in there, that true architectural foundation, it's probably not going to scale. And what happens is it's going to fall, kind of tip over. And then at that point, it's well, maybe we're not ready for AI, or maybe it doesn't have the ROI that we thought. And the truth is they just didn't build the right architecture around it, thinking of all these points. Every point that Regu just mentioned, we need all you need all of that infrastructure built out in order to keep layering on these AI workloads on top of it. And you know, I think back to, you know, not maybe the best analogy, but think about the way apps ran for, you know, forever in IT. And then a sales force showed up. And so SAS showed up on the scene and they they built that entire architecture and had now that was a single platform and this won't come from a single vendor. But because that architecture was built out as a SaaS model, its ability to then be scaled out to customers, and customers trusted that their data would be secure, trusted that they'd have their metrics inside, observability, all of those things that we're talking about. So the analogy is there, and the same thing needs to be with AI. AI is just it, you know, a bunch of GPUs is just a bunch of GPUs. And so they really have to be thinking about the architecture more. 
And I think if enterprises do that, and obviously G42 can help them take that off-prem and do it, and WWT can help you do both, really sitting down with customers and helping craft that architecture will allow AI to scale, versus just buying some GPUs and running a pilot or a proof of concept. I think that's where some of the press about AI falling down has come from.

Agents = More Risk

SPEAKER_03

We leverage a lot of WWT expertise in this area, right? Building out an AI cloud. We're a neocloud, so we're not retrofitting existing infrastructure to build this thing. You've seen all the infrastructure dependencies, right? You need water-cooling; it's not available in a traditional cloud. The racks these days are 25 kilowatts, and people are used to three- or four-kilowatt racks in the compute cloud, right? How do you redo the power? How do you redo the infrastructure and the plumbing to get all of this ready? And then connectivity, as I talked about: the whole east-west, north-south fabric is becoming very critical. We've built with WWT a very innovative networking fabric, so every GPU can talk to every other GPU in a low-latency manner without having to wait on any kind of data transfer. So that's very important. We've thought about all of this from the ground up, and we had the advantage of working with WWT to build it all up. And I think that's showing in terms of what's a difference maker for us.

SPEAKER_02

I'll go ahead and take the pin out of agents, as I mentioned earlier. This will probably end up being one of our last questions; I know we're running short on time. How do agents change the equation here, if at all? Does everything remain true and we power through with agents, or are agents going to have massive implications, Kraig, on the infrastructure?

SPEAKER_01

Yeah, I would say it puts even more importance on the things I've talked about. Obviously resiliency, because your business becomes more dependent on what agents might bring; think of them as employees. So you need to make sure you have an infrastructure that's always on, an infrastructure that's always improving. And then you have to have not only the resiliency, but the observability, the traceability of what these agents are doing for you. As you start to chain agents, an agent talking to an agent talking to an agent, and something goes wrong, how are you able to map that back through and understand what those agents are carrying out for you? You can see this is becoming more important, because just today NVIDIA launched its NeMo Cloud, and it's really about making sure you understand what an agent is doing in your infrastructure. So there's a lot of importance there. I think it becomes more important, not less, as you move into agents. How are you guys thinking about agents?
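The chain-of-agents traceability Kraig describes can be sketched in a few lines: give each chained run a single trace ID and log every hop, so when something goes wrong you can map the bad output back to the agent that produced it. The agent names and the logging shape below are hypothetical, purely to make the idea concrete:

```python
# Hypothetical sketch: propagate one trace ID through a chain of agents
# so any bad output can be mapped back to the step that produced it.
import uuid

def run_chain(agents, task):
    """agents: list of (name, fn) pairs; each fn transforms the running result."""
    trace_id = uuid.uuid4().hex
    log = []  # one record per hop: (trace_id, agent name, output)
    result = task
    for name, agent_fn in agents:
        result = agent_fn(result)
        log.append((trace_id, name, result))
    return result, log

# Three toy "agents" chained together, standing in for real model calls.
chain = [
    ("summarizer", lambda s: s.upper()),
    ("translator", lambda s: s + "!"),
    ("reviewer",   lambda s: s.strip()),
]
result, log = run_chain(chain, " hello ")
print(result)  # HELLO !
for record in log:
    print(record)  # same trace_id on every hop
```

Because every hop carries the same trace ID, a monitoring system can reconstruct the full path of any request, which is the property the conversation is pointing at when "something goes wrong" three agents deep.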

SPEAKER_03

Yeah, let me think about it in three dimensions again. The first one is around infrastructure. What we've seen is that agents are going to produce 10x the tokens of a human-driven chatbot. 10x the tokens. So when an agent is talking to another agent, just that one interaction needs 10x the tokens to be produced. But now there are agents talking to agents, right? If you look at that grid, say there are 10 agents talking to 10 agents, that's going to be a thousand times more tokens being produced, or that need to be processed. And now, in this world, everybody's got an agent. OpenClaw has just opened up this whole floodgate, right? Everybody's going to have a personal agent, everybody's going to have a corporate agent. So agents talking to agents is going to be the biggest challenge. From an infrastructure standpoint, you've got to have a scale that's just unbelievable to me. That's something we're planning for, whether it's silicon (that's why we need a Cerebras, for example), connectivity, storage, everything. We're rethinking all of it. The second thing is around security. You mentioned a lot about this. An agent could leak information, or it can be easily spoofed into producing information it should not be giving out to another agent. How do you apply RBAC, role-based access control, to all of the agents, even when the other agent isn't in the same ecosystem as yours? So we're thinking about it from a security standpoint: how do we keep information from leaking out in an agent-to-agent transaction? And the third thing is about governance. When an agent comes into an ecosystem, how do you know it's not a rogue agent?
What kind of governance do we have to establish so that agent can be accepted and there's trust? We always come back to trust, yes. So how do you establish that trust between agents? It could be two-way trust, or it could be one-way trust, too. We're thinking about it from those three perspectives: infrastructure, security, and governance. Those are the three main things I think we've got to focus on. And it's going to be an interesting world, man.
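The back-of-the-envelope scaling Ragu describes can be written down directly: with N agents each talking to N agents, you get N x N interactions, and each interaction consumes roughly 10x the tokens of a human chat session. A minimal sketch, using the episode's round numbers (the 10x multiplier is the speakers' estimate, not a benchmark):

```python
# Back-of-the-envelope token demand for agent-to-agent traffic.
# The 10x per-interaction factor and the agent counts are the
# conversation's illustrative round numbers, not measured values.

def token_multiplier(n_agents: int, per_interaction_factor: float = 10.0) -> float:
    """Relative token demand versus a single human chat session.

    n_agents agents each talking to n_agents agents yields
    n_agents * n_agents interactions, each consuming roughly
    per_interaction_factor times the tokens of one human chat.
    """
    interactions = n_agents * n_agents
    return interactions * per_interaction_factor

# One agent replacing one human chat: about 10x the tokens.
print(token_multiplier(1))   # 10.0
# Ten agents talking to ten agents: about 1000x.
print(token_multiplier(10))  # 1000.0
```

The quadratic term in the agent count is what drives the "unbelievable scale" point: doubling the number of agents roughly quadruples the token demand before any per-agent growth is counted.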
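The agent-to-agent access control problem Ragu raises can also be made concrete with a small deny-by-default RBAC check: before serving a request from an outside agent, look up its role in a policy table. Everything here (the role names, resource classes, and request shape) is invented for illustration; real deployments would establish the role via signed credentials rather than trusting the claim:

```python
# Hypothetical sketch of role-based access control (RBAC) between agents.
# Roles, resources, and the request shape are invented for illustration.
from dataclasses import dataclass

# Which roles may read which resource classes.
POLICY = {
    "partner-agent":  {"public-docs"},
    "internal-agent": {"public-docs", "customer-records"},
}

@dataclass
class AgentRequest:
    agent_id: str
    role: str       # in practice, established via a verified credential
    resource: str

def authorize(req: AgentRequest) -> bool:
    """Deny by default: unknown roles and unlisted resources get nothing."""
    return req.resource in POLICY.get(req.role, set())

# An external partner agent can read public docs...
print(authorize(AgentRequest("a1", "partner-agent", "public-docs")))       # True
# ...but not customer records, even if it asks politely.
print(authorize(AgentRequest("a1", "partner-agent", "customer-records")))  # False
```

The deny-by-default lookup is the key design choice for the cross-ecosystem case in the conversation: an agent from outside your ecosystem simply has no row in the policy table, so it gets nothing until governance explicitly grants it a role.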

SPEAKER_01

This year is all about agents, obviously. And they just don't go to sleep. They don't take holidays, they work on the weekends, they're 24/7. It's always on. Just think about the demand that's going to create on the infrastructure.

SPEAKER_03

I have four coding agents running at home right now. Want me to show you?

SPEAKER_01

Yeah, it's incredible. And well said. Again, the governance and security slash trust are so important, and both countries and enterprises are focused there. They really are. Okay.

SPEAKER_02

Well, we'll end on this, a bit of a crystal ball question. If we're back here again in 2027, and I hope we are because this has been a fantastic conversation, and thank you both for the time, what will we be talking about? Will we be talking about how we're still struggling with that inference inflection point, or with how to make agents work? Or do you think we'll solve for that and move on to whatever the next big thing is?

SPEAKER_03

I think the next big thing is what Jensen talked a lot about today in terms of restructuring applications. How do we bring up an application ecosystem? So far, we're still at the token factory level, right? It's still infrastructure and PaaS. I think the SaaS layer is what we'll be talking a lot more about next year. And there's a lot of innovation to happen there, whether some legacy SaaS applications go out the door or reinvent themselves.

SPEAKER_00

Yeah.

SPEAKER_03

But we're going to get past this whole token factory thing and get into more of the SaaS layer.

SPEAKER_01

Yeah, the crystal ball. I agree. He touched on it; it's about moving up that five-layer stack he talked about. We've all spent a lot of time in technology, and when did you ever talk about a chip before? It just never came up, right? You had this vendor or that vendor, I won't say who, in the server, and now it's become the most critical aspect. And Jensen even says we're not going to talk about the chip as much; we're going to move into these architectures and ultimately the applications. So I'm in agreement. I think as this continues to roll out, you'll have companies that make it, that reinvent themselves, that change how their architecture or application works. And then you're going to see hundreds of new players that this opportunity, this architecture, creates. I think that's going to be the next frontier: inference at scale and applications.

The One Thing That Matters

SPEAKER_02

Yeah, exciting to see how it will unfold. Certainly a lot of work ahead of us, but it'll be fun to see how we get there. To the two of you, thank you again for taking the time. I know GTC is a busy time for all of us, so thank you again. We appreciate it. Yeah, really appreciate it. Thank you. Yeah, thank you for the partnership. Thank you. Okay, thanks to Ragu and Kraig for joining. The emerging challenge for enterprise leaders is whether your organization can use AI in a way that is resilient, governed, observable, and secure enough to support real decisions, real workloads, and, soon, real autonomy through agents. So the takeaway is simple: if you want AI to spread across the business, trust has to be designed into the system from the start. This episode of the AI Proving Ground Podcast was co-produced by Nas Baker and Kara Kuhn; our audio and video engineer is John Knoblock. My name is Brian Phelps. Thanks for listening. See you next time.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

WWT Research & Insights

World Wide Technology
WWT Partner Spotlight

World Wide Technology
WWT Experts

World Wide Technology
Meet the Chief

World Wide Technology