AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology
AI deployment and adoption are complex — this podcast makes it actionable. Join top experts, IT leaders and innovators as we explore AI’s toughest challenges, uncover real-world case studies, and reveal practical insights that drive AI ROI. From strategy to execution, we break down what works (and what doesn’t) in enterprise AI. New episodes every week.
OpenClaw, Inference, AI Factories: What We Learned at NVIDIA GTC 2026
At NVIDIA GTC 2026, the conversation shifted from model novelty to enterprise reality. In this episode of the AI Proving Ground Podcast, WWT Chief AI Advisor Tim Brooks explains why agentic AI, inference-heavy workloads and physical AI are forcing CIOs and CTOs to rethink infrastructure, governance, trust and leadership.
Brooks, WWT's Chief AI Advisor, said the industry is moving beyond fascination with model releases and into the harder work of managing inference, agents, infrastructure and trust at scale.
The shift matters because enterprise AI is no longer defined primarily by training. Brooks points to an inference inflection point, where production systems, multi-step reasoning and autonomous or semi-autonomous agents are driving exponential token growth. That creates immediate pressure on compute, memory, networking and cost decisions that many organizations are still underestimating.
The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.
Learn more about WWT's AI Proving Ground.
The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.
Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.
AI Enters The Operational Era
SPEAKER_00 From building AI factories to governing agentic, inference-driven, secure AI operations at enterprise scale, NVIDIA GTC served as a clear signal that AI is entering its operational era. And Tim Brooks sees that shift clearly. In this episode, he'll explain why GTC 2026 felt less like a future-gazing keynote and more like a blueprint for what enterprise leaders need to do next. The key insight: AI is no longer a strategy slide. It's becoming an infrastructure, governance, and leadership decision all at the same time. So let's jump in.
SPEAKER_01 I mean, have you been sprinting nonstop since you arrived here in San Jose?
SPEAKER_02 It's been pretty busy here at GTC. It's always a busy time, but this year more than any I've seen in my eight or nine years coming to this conference.
SPEAKER_01 Yeah. So we're here at NVIDIA GTC in San Jose, as good a time as any to take inventory of what's going on around us as it relates to AI, anchored as always in Jensen's keynote. What did you see or hear in the keynote that really made you think, or gave you a sense of where the industry is in terms of AI adoption, AI scale, and so forth?
SPEAKER_02 Yeah, there are so many individual takeaways, and of course you can always watch the entire two and a half hours of Jensen's keynote; his stamina is amazing. But overall, I think there's a cultural shift underway, and to me the OpenClaw call-out he made is significant in a couple of ways. Just like ChatGPT showed us the birth of what was possible with artificial intelligence. Think back to that magic moment a few years ago when you first used ChatGPT, and how primitive it seems now. That's how far we've come. OpenClaw is the next inflection point for human understanding of what AI is capable of doing. We were awed when it completed our sentences, found things for us, summarized documents, gave us research insights, all the things ChatGPT, Perplexity and, early on, Claude could do. And now that seems simple. Likewise, I think in just a few months, no more than 18 months, we will look back at OpenClaw and go, oh, that's cute, that was a nice start. Because we're going to be adopting and using those sorts of assistants. I don't even want to call them coding assistants; they're life assistants, and we'll be using them everywhere. So that's a really big insight, aside of course from the product announcements that came with Jensen's keynote. I think that's a cultural shift.
OpenClaw And The Agent Wave
SPEAKER_01 Yeah. So if we're staring down the barrel of a tidal wave of claws, of agents coming here, how does that change the conversation from an enterprise IT and architectural perspective? Are organizations ready for this, or is there a lot of work to be done? I suspect there's a lot of work to be done.
SPEAKER_02 There is. And I use this word intentionally: exponential. We hear a lot about exponential growth, and we are going to see truly exponential growth in token consumption. Token costs continue to plummet while models' sophistication and power already exceed PhD-level reasoning and PhD-level subject matter expertise. But the byproduct, what makes it all go, is the amount of compute those tokens require, the amount they'll consume, the memory they need, and we know there is at this moment a memory shortage. I don't think many enterprises are fully prepared for the compute intensity they're going to need. That's a shift they need to begin considering, because you may be using several billion tokens a week right now as an enterprise, depending on your intensity of usage. You're soon going to be in the hundreds of billions of tokens. Where do you want that compute to operate? Is it going to stay in cloud? Will you burst to a neocloud? Is there a tipping point at which on-prem becomes a truly cost-effective solution for you? Companies are beginning to work through that, but I don't see it yet where it needs to be. And with budgeting season coming up in a few months, I think people need to take a hard look at that.
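As a rough sketch of the cloud-versus-on-prem tipping-point math Brooks alludes to, every dollar figure below is an illustrative assumption, not a quoted price:

```python
# Hypothetical back-of-envelope: at what weekly token volume does
# on-prem inference undercut per-token cloud pricing?
# All rates are illustrative assumptions.

CLOUD_COST_PER_M_TOKENS = 2.00    # $ per million tokens, cloud API (assumed)
ONPREM_WEEKLY_FIXED = 150_000     # $ per week: amortized hardware, power, staff (assumed)
ONPREM_COST_PER_M_TOKENS = 0.20   # $ per million tokens, marginal on-prem cost (assumed)

def weekly_cost_cloud(m_tokens: float) -> float:
    """Weekly cloud spend: purely usage-based."""
    return m_tokens * CLOUD_COST_PER_M_TOKENS

def weekly_cost_onprem(m_tokens: float) -> float:
    """Weekly on-prem spend: large fixed base plus small marginal cost."""
    return ONPREM_WEEKLY_FIXED + m_tokens * ONPREM_COST_PER_M_TOKENS

def breakeven_m_tokens() -> float:
    # Costs are equal when m * cloud_rate == fixed + m * onprem_rate,
    # so m = fixed / (cloud_rate - onprem_rate).
    return ONPREM_WEEKLY_FIXED / (CLOUD_COST_PER_M_TOKENS - ONPREM_COST_PER_M_TOKENS)

if __name__ == "__main__":
    be = breakeven_m_tokens()
    print(f"Break-even at about {be:,.0f} million tokens per week "
          f"({be / 1000:,.1f} billion tokens/week)")
```

With these made-up rates, on-prem breaks even at roughly 83 billion tokens per week, which is why the question becomes urgent once an enterprise moves from billions toward hundreds of billions of tokens.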
SPEAKER_01 Yeah, that starts to hint at one of the main things I heard Jensen say: the idea of an inference inflection point. Maybe unpack that for us. What does he mean by it, and what does it signal for the rest of the year?
The Inference Inflection Point
SPEAKER_02 There's a lot of conversation about new model releases. Those models require training, and that training often requires a billion dollars or so of compute time and resources, not even counting all the developers and data scientists that went into it. A lot of focus has been on that, and that made sense because AI was relatively new. But AI has now reached a level in enterprises where production environments are cranking out answers, and they're doing it with multi-layered reasoning. That creates more tokens, and thus more compute usage. Now add agents acting either under human command or, in some cases, autonomously, and multiple agents generating tokens. If they're doing it autonomously, that means they're doing it without us supervising it: generating a response, an answer, an action, and we think that's great. But that's inference, and it's already greater than 50% of the workloads we see. It will easily reach 80% of those workloads as it expands. And I haven't even gotten to physical AI and robotics yet. So that's an inflection point in terms of what data centers are purposed to do. And when inference gets that big and impacts your enterprise, you need to make resource decisions about how you're going to accommodate it.
SPEAKER_01 So if we're talking about going from millions to billions to potentially trillions of tokens, and who knows what physical AI will bring to the table, you mentioned organizations are having to have these conversations now. To the extent you've had them with organizations you deal with day to day, what's shaping those conversations? How are they starting to think about this? Give us a glimpse into what the result might be.
SPEAKER_02 It's a great question, and we see it quite a bit at the board level and the C-level. They're thinking about: what does my organization look like? That's people. What is the operating model for my organization going forward to unlock the power of AI? We all know about coding assistants, and they're using coding assistants. How can I then take the step toward creating an AI-native organization? And what is an AI-native organization? It's one that makes great use of artificial intelligence to expand the quality and productivity of the work, to achieve more as a team and as an individual. And what does that look like for the business? Typically margin expansion or growth. So we've said people and process; the third piece is technology. What technologies serve those purposes? This is what boards and C-level execs are concerned about, and it becomes a real fork in the road when it hits the CIO and the CTO and the decisions they have to make: what do they fund, how do they resource, where do they put workloads, and how do they secure it?
SPEAKER_01 Yeah, your answer may be "it depends," but what are the considerations at that fork? How would you guide a customer that comes to you and says, hey Tim, we're at this fork right now, we've got to figure out what to fund? What do you say?
Funding An AI Native Organization
SPEAKER_02 It always does depend: on their strategy, their appetite, and often their own data maturity. You can't leap from zero AI apps to physical AI; it's just not going to work. AI is in part a trust-building exercise. So, in terms of what to tackle first, I say optimize the things you know: things that are manual, things that can be optimized. Next, look at the AI applications that will accelerate your growth and make your people and teams more productive. Those use cases fall into various families. Then there's another part that is truly transformational: how do we redesign workflows, and how do I have AI help me redesign them? Agents are great at finding the optimal route through a workflow in order to optimize it. These are areas to explore with the right governance and the right security. As for the people part, it means: what do I need to do to unlock the value in my people? Think about it right now: if I took your AI away, how would you do your job?
SPEAKER_01Well, I wouldn't have any set of questions to ask you right now.
CEO Leadership As The Success Factor
SPEAKER_02 Yeah. But you'd be encumbered more than you're used to, because you're now so used to using AI, and AI isn't even as good as it's going to be. So when you think that through: what can I do to fully unlock the value of my teams and my people? Right now, many people use something like OpenClaw at home, then go to work, and the tools just aren't as good as what they're used to. Organizations need to up-level the tools their people use in order to unlock the value that lies within those people. And then that comes down to technology choices. What's right for me? There are many choices. Which do I make? How do I avoid sprawl? How do I make those decisions intelligently? Fortunately, World Wide Technology has many of those answers, one in particular being our AI Proving Ground and our expertise in AI-native engineering, which can help customers make the leap from initial optimization through acceleration to transformation. WWT can help with some aspects of that, or all of it.
SPEAKER_01 You covered a lot there. I want to go to the leadership aspect. We had Jim Kavanaugh on an episode that aired a couple before this one. He had a ton to say about top-down leadership and how organizations can leverage culture to drive change. Like you mentioned, you're talking to a lot of executives and boards. Are they seeing that same need for strong culture and strong leadership, or do they want to jump right to that transformational period, which could risk alienating some of their people?
SPEAKER_02 You're absolutely right. The biggest indicator of AI success is engaged, consistent CEO leadership. Where we see that, we see AI successes. Those CEOs won't accept less than outstanding artificial intelligence, broadly distributed throughout their organization with intensity. Currently you see that in some law firms, some consulting firms, and quite a bit in media and entertainment. You see it in some elements of finance, particularly personalized finance and wealth management. You don't see enough of it yet in American healthcare, understandably, because it's a highly regulated business and it's hard to make that aircraft carrier turn on a dime. But the companies that are successful have engaged CEO leadership. Now, Jim is currently a rare bird in that he has moved this along aggressively. Just two years ago, as you know, we had maybe 12 effective AI applications. Now we have hundreds of agents and many applications, and it's providing great growth; it's unlocking the value of what I do and probably of what all of our colleagues do. Jim will be less rare in a year or two, as more CEOs are either put into position or something clicks and they become personally engaged in these AI initiatives the same way they're engaged in finance, mergers and acquisitions, and the other strategic parts of their job. That's really a CEO's role: guide the ship, build the culture, set the priorities, and determine where the resources are applied.
NVIDIA’s Ecosystem And More Choices
SPEAKER_01 Let's bring it back to GTC. This GTC, and I'm curious to get your take, seemed less about big announcements and more about getting back to: here's a factory blueprint we think organizations should adopt. Is that a fair assessment?
SPEAKER_02 That's fair; I would agree with that. One observation here: there is a massive proliferation of NVIDIA Inception partners, which are startups that receive NVIDIA credits or, in some cases, financing from NVIDIA. NVIDIA has been seeding, for years now, an ecosystem of companies creating successful AI, and that leads to more compute consumption and broader distribution of AI. It's enormous now. This hall used to have a little section with maybe 50 Inception partners, and now I can't count them all. It's two halls. It's massive, and they're all doing great things.
SPEAKER_01 Is that part new, or has it just gone unnoticed in years past?
SPEAKER_02 Well, the number is new; it's grown. I don't know that it's gone unnoticed by those of us who have come here for years, because we've watched it, and some of those NVIDIA Inception partners are now really big, viable companies. Some of the ones here now will become that way. What this means for enterprises is that you have more choices than ever. You have so many choices that you probably need some help deciding which are the right ones based on your strategy, your resources, and your AI maturity.
SPEAKER_01 Yeah. And I'm going to guess that's where the namesake of the podcast comes in, the AI Proving Ground; we tend to help with that. If NVIDIA is turning into that platform, how should organizations think about the ecosystem? Ecosystems, as they build, have a tendency to consolidate or fragment. It's confusing. How do we stay on top of it?
SPEAKER_02 How would I put this? I'm not a psychic, so I can't tell you how it's going to shake out, but I know that however it shakes out, it's good for the consumer. It really means lower cost on everything in the AI stack, except energy at this point. If you think of the AI stack, energy is foundational. You can't fight physics; you're not going to do anything without energy to power your data center and your compute somewhere. The cost of compute, in terms of throughput and density, continues to decline. Jensen talks about Moore's Law, but it has now accelerated in terms of what AI and compute can do to give you that throughput. The network has improved tremendously, and it needs to continue to improve to provide the large pipes needed to scale up, scale out, or scale across, in some cases with multiple data centers working on the same jobs at the same time. All of that is new, and it leads to innovation with other technology partners. We tend to think of developers and software and data, but fundamentally there's technology evolving to meet the need.
Physical AI And Trust Building
SPEAKER_01 Yeah. You mentioned physical AI earlier in our conversation, so let's go there for a bit. We're starting to see more physical AI on the show floor here at GTC. As physical AI becomes more of a reality, how does it change the equation for enterprise leaders?
SPEAKER_02 Physical AI is currently effective in pockets. It will be more broadly distributed in an assistive way, but what's required is continual trust building. Take the example of autonomous vehicles. I rode a Waymo over here today from another appointment. I've ridden dozens of Waymos now, fewer than a hundred, but a significant sample size. And they've been great: accurate, safe, consistent, reliable, cost-effective. All of that works. But if one autonomous vehicle backs into a parking meter, it makes the news. It's a big thing. Meanwhile, there are accidents by human drivers every day. How Waymo avoids them, I'm not quite sure, but they're really good at it. Similarly, with physical AI, trust will need to be developed. Physical AI that assists humans probably requires scrutiny, regulation, safety checks, things like that. Then there's AI that produces things, factory AI, and that can take off like a rocket ship. A little factoid: per factory worker, the country with the most advanced robotics is not China, it's South Korea. South Korea is leading the world in advanced machine robotics. In the US, the most roboticized company we have is Amazon, whose logistics robots are doing a lot of the workload. That's well understood, and there will be more and more of it in our very near future. So let's separate the physical AI that assists humans, which requires a lot of trustability and perhaps some regulation, from the robotics that can be manufacturing things, servicing things, moving things around. That second category is set to explode. It has really taken off.
SPEAKER_01 Yeah, I like the autonomous vehicle example: one incident can destroy trust. Trust is fickle. Is there an analogy there for today's AI strategy, for how organizations should approach developing AI and driving adoption with a focus on trust and a trustworthy experience for the end user?
SPEAKER_02 Yeah, absolutely. In particular if your AI is touching a customer. Think about a contact center. Trust is built step by step, but it can be lost in one act. When it comes to AI, we have expectations from using OpenClaw, Anthropic, Google Gemini, or whatever we're using. Those expectations are set, and sometimes it still fails, not nearly the way it used to, but sometimes it hallucinates or gives you a sketchy response. You don't want that happening in your work environment. At work, as humans, we build trust: there are people we trust and people we need to rebuild trust with. With AI, we don't give it the benefit of that doubt.
SPEAKER_00 Yeah.
SPEAKER_02 With AI, it's, oh well, it didn't work, so I move on. It was Daniel Kahneman who made this observation many years ago: humans by nature are forgiving of other humans. We've done that as a species to survive. But humans are incredibly unforgiving of machines. We think machines are rule-based and should just work, so we don't forgive them. And AI, in human perception, is effectively a machine. So we're going to need to build that trust carefully. Hence I always ask boards and C-level execs what sort of AI they want and what sort of AI they can do. What we usually end up with at first pass is the optimization of things that are well understood and that they can control, precisely because they need to build trust.
SPEAKER_01 Yeah. And I would think it's not just a security thing; it's trust in the data, trust in the underlying infrastructure.
SPEAKER_02 It is. Trust in the effectiveness of the AI, which has technology dependencies and data dependencies. The data has to be in good enough shape, and, hint, many people don't have their data in good enough shape, or it's siloed. That's a lift right there. And I don't know many CEOs who started as data scientists. So when I talk about CEO leadership, you don't need to become a data scientist, but you do need to know the importance of data, and you should get coaching from the data scientists who are typically already in your organization. Seek them out to learn what's needed for your AI to be effective.
Networking And Energy As Constraints
SPEAKER_01 Yeah, let's get back to GTC and Jensen's keynote. Considering the industry's focus on GTC, is there anything you think might go underreported, overlooked, or under-emphasized as it relates to enterprise AI?
SPEAKER_02 I don't recall a whole lot of time being spent on networking or innovations in the network, but I think that's an important consideration. With the data volumes and token throughput we're talking about, all the compute in the world doesn't help if the network can't accommodate it. You really need to think about what your network looks like and how it will perform when these big data loads come. I don't think that was a big part of the emphasis. NVIDIA has a significant networking business, and it has partners, as we do, that are deeply engaged in networking and the R&D behind it. Things like photonics are imminent over the next few years of adoption and will improve network performance as well as the heat efficiency and cooling required for data centers.
SPEAKER_01 What Jensen talks about each year in this keynote is always visionary; it sets the pace and the standard. I'm always curious how organizations interpret that, knowing that very few are at that cutting edge. Do we need to consider what he said last year and the year before as a whole, or how do we start to make what he's envisioning actionable?
SPEAKER_02 I would look at it this way. He has been consistently right about a lot of things. He and this company have achieved performance, growth, and margins unlike anything people have seen. Yet it's sometimes taken for granted: is it real? How is that being achieved? Is there a bubble? I see no signs of a bubble in terms of demand. There are data centers where the only thing holding them back from being fully functional at this point is energy access. That's sort of the upper bound on what this industry can do: access to energy. Other than that, they're cranking out chips, they're cranking out new Vera Rubin systems, and there's an industry ecosystem poised to dive in. Data is as big and as well governed as it's ever been. There's a proliferation of development and models unlocking creativity, OpenClaw being just one example. So I think the future is very bright. Enterprises need to invest their time, even more than their money, their time and energy, into developing the right sort of AI to fit their strategy.
North Star Takeaways And Closing
SPEAKER_01 Yeah. So Jensen and NVIDIA set that North Star. He's been right time and time again, so it's up to us to get there as quickly as we can.
SPEAKER_02 Yeah, they've done great with supply. It's really up to enterprises, and companies like us that help them, to build, if you will, the demand. And the demand is really working, successful AI applications that generate value.
SPEAKER_01 Well, Tim, I know you're short on time because you've got meetings to go to, probably 24 hours a day here. So thanks for taking the time. I appreciate it. We'll have you on again soon.
SPEAKER_02 Pleasure, Brian. Great to be here.