AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology
AI deployment and adoption are complex — this podcast makes it actionable. Join top experts, IT leaders and innovators as we explore AI’s toughest challenges, uncover real-world case studies, and reveal practical insights that drive AI ROI. From strategy to execution, we break down what works (and what doesn’t) in enterprise AI. New episodes every week.
Why Enterprise AI Keeps Stalling
Enterprise AI didn’t fail.
It hit the wall.
In 2025, pilots multiplied, copilots spread, and expectations skyrocketed. Then reality caught up. Scaling AI turned out to be less about model quality — and more about data, security, cost visibility, and how organizations actually work.
In this episode of the AI Proving Ground Podcast, Chris Campbell and Jason Campagna break down what enterprises learned the hard way, why most AI initiatives stall between pilot and production, and what leaders must fix to make AI deliver real impact in 2026.
We get practical about:
- Why AI breaks when fundamentals aren’t ready
- What agentic systems expose at scale
- Why focus beats hype when AI becomes infrastructure
- How winning teams design, govern, and measure AI like a core system
If your AI strategy looks impressive but hasn’t changed outcomes yet, this conversation explains why — and what to do next.
Support for this episode provided by: Graphiant
More about this week's guests:
Chris Campbell is Senior Director of AI Solutions at World Wide Technology, where he leads strategy and delivery for AIaaS/GPUaaS and data center facilities and infrastructure solutions. He brings deep experience across executive engagement, customer advocacy, and large-scale engineering leadership. Prior to WWT, Chris held senior leadership roles at Forsythe, Red Hat, BEA Systems, and AT&T. He holds a BA from Columbia University and an MBA from the University of Maryland, where he was a Dingman Entrepreneur Scholar.
Chris's top pick: AI and Data Priorities for 2026
Jason Campagna is a strategic technologist at World Wide Technology, where he leads AI solution strategy and helps enterprises navigate the next wave of intelligent systems, from AI assistants to autonomous agents. With deep experience spanning cloud, automation, and platform architecture, Jason focuses on turning emerging technology into operational reality. He brings a pragmatic, execution-driven approach to scaling AI in complex enterprise environments.
Jason's top pick: AI Agents: Scaling Your Digital Workforce
The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.
Learn more about WWT's AI Proving Ground.
The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.
Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.
From World Wide Technology, this is the AI Proving Ground Podcast. By the numbers, AI looks like it's everywhere. But take a closer look inside most organizations, and the story becomes more complicated. Because experimenting with AI is easy, but scaling and operationalizing? Well, that's a whole different story. Many organizations spent 2025 running pilots, chasing proof points, and wrestling with data, security, and cost. In the process, they learned some hard lessons. One of them centers on a word you can't escape right now: agents. They're overhyped, overused, and often misunderstood. What actually works today, what doesn't, and what does it really take to move from individual AI use to institutional AI impact? On today's show, we're talking with Jason Campagna and Chris Campbell, two AI leaders and experts here at WWT, who've worked with hundreds of organizations looking to make sense of this complicated landscape we've found ourselves in. They'll tell you what we've actually learned over the past few years that will drive AI adoption within your organization and set you up to succeed as AI becomes part of the core operating model. This episode isn't about where AI is going in theory, it's about what enterprise AI adoption has taught us and how to use those learnings to get it right moving forward. So let's jump in. Chris Campbell and Jason Campagna, welcome to the studio. How are you?
SPEAKER_02:Great, doing well.
SPEAKER_03:Awesome. We got a lot to get to today. We're going to be talking about 2026 priorities as it relates to driving AI adoption, driving AI scale, and success overall. Jason, I'm going to start with you. I was reading a Gartner report, it had to be within the last week or so, that talked about how more than 80% of organizations are now testing or building AI applications at some level, and that's up from a very small amount, like 5%, just a year or two ago. So obviously there's a lot of activity increasing here, but adoption still seems to be middling or a little bit low. So I'm just curious, over the last couple of years, what have we actually learned about enterprise AI and how it works and how companies are going about it?
SPEAKER_02:Yeah, I think, you know, the hot button topic today, of course, is agentic. Everything is run through this washed word of agents and agentic across the enterprise. But I think at the end of the day, we're seeing real actualized use cases coming out of the bottom of the organization, everyone experimenting with AI, as well as, of course, top-down initiatives coming from leadership. I think especially in 2025, we learned quite a bit around data and security and what that means versus the traditional approach, and it's pretty funny that traditional here is 12 months ago, right? Right. You know, the idea of an individual user interacting with ChatGPT or interacting with an AI is vastly different than what it takes to institutionalize, industrialize the usage of AI in an enterprise. And that's, for me, a lot of what agentic means. And so as organizations kind of surround those different use cases and try to figure out how to plug all the pieces together, I think we're really seeing the reality come forward of what it means to digest AI into the enterprise. Yeah.
SPEAKER_03:Chris, anything to build on there? I mean, what have you learned about AI through the lens of the organizations that we're dealing with on an everyday basis?
SPEAKER_01:Yeah, I think that 2025 had a lot of folks trying to do proof of concepts, maybe getting a single use case off the ground and trying to, you know, get board approval, because they have a lot of things to consider. Now, as we head into 2026, I anticipate more of those budgets coming off the sideline, getting more use cases going. We've seen some successes that are published out there. A lot of folks tried feature-based AI last year, maybe using a copilot or something like that. Well, now they may look at something that's a little more impactful to the business as opposed to something that's just a feature. So it should be an exciting year for AI. And as people continue to do these use cases and develop them, I think you're gonna see more and more people jump on board and try them out as well.
SPEAKER_03:Yeah, Jason, I'm gonna rev the engine here a little bit and dive into agents. You mentioned them already, and I know you could talk for probably hours on end about agents. 2025 fell a little bit flat in terms of agents. We heard a lot early on in 2025 about how it was gonna be the year of the agent, and I'm not sure if it delivered on the promise or not, but I do know that I'm not booking any of my own travel autonomously through agents or anything like that. What are agents today as we head into 2026? Is it still that autonomous feature, or is it just automating processes, or where do we sit with agents?
SPEAKER_02:I think it's easily the most overused, overhyped buzzword that exists in the market. Yeah. It is really quite obnoxious how overused it is. I think the average individual doesn't necessarily care about the pure play definition, which, you know, if you look at the research and you look at how it's being used, truly an agent needs to have a goal, needs to have the ability to act autonomously to some degree. But the products that are out there, the capabilities that are out there in these various platforms and what have you, wash that entirely. So I might build an agent that only has, you know, basic retrieval augmented generation with data and has context that it can provide me, but can't even take action, doesn't even really have a goal, but it's being called an agent. Right. And so we've quickly learned to not overanalyze that as much as: what can AI do for me? What are the capabilities that it's got? What level of agency does it have, as that word's being used quite a bit? And then separately, what level of autonomy can it provide for me? And there's a bit of a maturity curve that I think we're going through in the marketplace with what an agent is, how do I actually use that agent, and then how do I string together multiple agents. I think that's probably top of the list for, well, how do you do things successfully with an agent? Yeah. And breaking apart and realizing you're not going to have one, you're not going to have two, you're probably going to have 80 agents that are working together as an agentic swarm, to use the buzzword and make fun of it a little bit.
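To make that goal-and-autonomy distinction concrete, here is a minimal sketch in Python with a stubbed model call. Every name and structure here is illustrative, not a product or a WWT implementation; it simply contrasts an agent that can act toward a goal with a retrieval-only helper that cannot.

```python
# Minimal sketch of the "goal + autonomy" distinction, using plain Python
# and a stubbed decision step. Names and structure are illustrative only.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Agent:
    name: str
    goal: str                                  # a true agent has an explicit goal
    tools: dict[str, Callable[[str], str]]     # actions it may take autonomously
    history: list[str] = field(default_factory=list)

    def decide(self, observation: str) -> tuple[str, str]:
        """Stand-in for a model call that picks a tool and an argument."""
        # A real implementation would prompt an LLM with the goal, history,
        # and observation; here we just pick the first tool deterministically.
        tool_name = next(iter(self.tools))
        return tool_name, observation

    def step(self, observation: str) -> str:
        tool_name, arg = self.decide(observation)
        result = self.tools[tool_name](arg)
        self.history.append(f"{tool_name}({arg}) -> {result}")
        return result


# A retrieval-only helper that can look things up but never acts is not
# agentic by this definition: it has no goal and takes no actions on its own.
def lookup(query: str) -> str:
    return f"stub search results for: {query}"


retrieval_agent = Agent(
    name="research",
    goal="Answer questions using internal documents",
    tools={"search": lookup},
)
print(retrieval_agent.step("What did we learn about enterprise AI in 2025?"))
```

Stringing together the "swarm" Jason mentions would mean composing several such agents, each with its own narrow goal and tool set, behind an orchestrator.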
SPEAKER_03:Well, Jason, you talk about how we'll have a swarm of agents. Any ideal use cases right now on where organizations can quickly apply that agentic approach and realize value, or maybe even at the same time, use cases that aren't ideal for that approach right now?
SPEAKER_02:Yeah. And I think it's maybe key with how you started, asking, you know, how was it last year, how is it this year? You know, there definitely was some pressure from business leaders to pick that shining star use case, highly impactful, highly visible, but often a lot of risk and also a lot of work to get there. And I'm not suggesting at all slowing down those types of use cases. It's very important to have a small subset of those that are top-tier, custom, where you're going to build everything against the intellectual property of the organization, et cetera. But what we're also noticing is there's a lot of bottom-up effort occurring. And that average individual or team's workflow, and things in your day-to-day that save time, we're seeing a lot of energy from, well, we could save time by doing this. Suddenly I've created an agent that does a small subset of that workflow, a set of tasks, perhaps, or maybe plays a role as part of a team. That's where a lot of the energy is around agents. And starting small like that and learning how they operate, and also realizing that you shouldn't have this all-encompassing agent. It's really more a subpart of what you've got to get accomplished. We joke at our AI days about what AI can do versus what business leaders hope for versus the hype. There's a bit of a cycle there of, well, hang on, let's be very pragmatic about this. But also learning how data and security and the models and the platforms and the infrastructure and all these things swirl together, and how they actually achieve that, is important to do from the ground up. No code or low code, as it's often being called. Sure.
SPEAKER_03:Yeah, Chris, I read a recent research piece, I think it came from Sierra Labs. It was 83% of enterprises use AI, but only 13% have strong visibility into how AI interacts with their core data assets. Do you think that's accurate with the organizations that we're coming across? And what's the antidote there?
SPEAKER_01:Yeah, you know, with the idea of data and what you're seeing from those data sources, oftentimes, if you're not directly related to that project, they may not really quite understand it. I think more and more users are at that point of finding what's the utility of the AI, and we might understand what that does. So here within World Wide as an example, you might look at our teams that are using the tool that we have for, sorry guys, I'm blanking, I'll come back to it. Yeah, the one that we're using for the RFP assistant that we have. And in that RFP assistant, they very much understand what the RFP process is doing, but they may not understand the underlying data. I think more and more, as you're seeing these job-focused AIs coming out and the models coming out that are supporting that, you're seeing it much like people would with an iPhone. I certainly understood how to use my iPhone in the beginning, but it was more like pictures and music and other things; I didn't really quite understand the apps or other things. We're very much like that in AI at this time as well. I think there's a discovery mode and usability that the 10% of us that are living in the AI world understand, and we live in it every day, and the other 90% are still playing catch up.
SPEAKER_03:Yeah, Jason, I mean, that gets me thinking about the idea of structured versus unstructured data. Unstructured data seems to be a vast opportunity that organizations can take advantage of, but it's not that easy. Those things typically sit in silos. So where are we at in terms of organizations tapping into that unstructured data, which may have that massive potential, but it's just hard to get to and build towards?
SPEAKER_02:Yeah, I think it's kind of funny. There's been data, data lakes, data mesh, data conversations for years. Yeah. Now there is a significant pressure point to do it right. And for organizations that have organized their data, both structured and unstructured, there's a real payoff here with AI. And if it doesn't exist today, it is necessary to achieve the value out of AI. We're learning this firsthand in our own environments where we're building against a particular use case, and it really isn't about the agent or the tweaking of the agent. It's much more about the data substructure we're feeding it, and less is more has become a real interesting artifact of that. Curated data that has entity mappings, that shows up the right way, that is agent digestible, pays dividends as that agent evolves. And especially as you add additional agents and try to get them to work together, it becomes a real challenge of context engineering, to use the term. And, as Chris mentioned, AI literacy, how do you actually interact with all these things or what have you, leads quickly to that idea of context engineering: not just the prompt of the user, the trigger, perhaps we'll call it, but then also the system prompt that is with the agent that is defining its goal, and then of course all the data that it's actually working with as a part of that particular use case.
SPEAKER_03:Yeah. And apologies here, are you saying contact engineering? Context. Context. Okay, yeah, just describe context engineering a little bit more.
SPEAKER_02:Yeah, context engineering is a big kind of an umbrella term for building the world around uh one or more agents, or really AI in general.
SPEAKER_04:Yeah.
SPEAKER_02:I kind of jokingly refer to this at our AI days, for those that have seen a TV show like Young Sheldon: that show has, you know, a brilliant young man, super, super smart with the books. He's brilliant at a PhD level, even as a 10-year-old or what have you, but he doesn't understand the world around him. Right. Context engineering for me is, let's build what the agent, what the AI, understands as the world around it in very deliberate terms, so of course it can help make the right decisions and act accordingly on our behalf.
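For listeners who want to see what that deliberate world-building can look like in code, here is a minimal, hypothetical sketch of assembling the system prompt (the goal), a small set of curated data chunks, and the user's trigger into the context for one agent call. The function name, roles, and sample strings are all assumptions for illustration, not the structure of any specific platform.

```python
# A minimal sketch of "context engineering": deliberately building the world
# the agent sees (system prompt, curated data, user trigger) before any model
# call. All names and strings here are illustrative placeholders.
def build_context(goal: str, curated_chunks: list[str], user_trigger: str) -> list[dict]:
    """Assemble the full context window for one agent invocation."""
    system_prompt = (
        f"You are an internal assistant. Your goal: {goal}\n"
        "Only answer from the provided context; say 'not found' otherwise."
    )
    # Less is more: pass only curated, entity-mapped chunks, not the whole data lake.
    context_block = "\n\n".join(f"[doc {i}] {c}" for i, c in enumerate(curated_chunks))
    return [
        {"role": "system", "content": system_prompt},
        {"role": "system", "content": f"Context:\n{context_block}"},
        {"role": "user", "content": user_trigger},
    ]


messages = build_context(
    goal="Answer RFP questions from approved proposal language",
    curated_chunks=["Approved security response v3 ...", "Pricing boilerplate ..."],
    user_trigger="Draft a response to question 4.2 on data residency.",
)
for m in messages:
    print(m["role"], ":", m["content"][:60])
```

The design point is that the goal, the data, and the trigger are each constructed on purpose rather than inherited by accident, which is what separates context engineering from just prompting.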
SPEAKER_03:And Chris, are you thinking that, in regards to context engineering, it's more on the user and the prompting side, or does it also have a lot to do with data structure and how organizations are organizing their data so that the context might be, you know, a little bit baked in?
SPEAKER_01:Well, you know, they go hand in hand, in my opinion. I mean, you can't really achieve the objectives that you want without that data having a lot of sanitization, a lot of cleansing behind it. So there was an article that came out that said MIT found 80% of these projects are failing. And everybody in the industry said, you know, no, if you really look at it, the reason they failed is that they didn't have the right data underneath them. And that was the reason. So we very much look at that data cleansing side. Now back to the context engineering side. Absolutely, as these things get smarter, they're gonna accumulate more data they can respond to. So I think that's gonna make it better, and the engine's gonna run more efficiently. So I think overall that's just gonna continue to drive it. But unstructured data is almost the enemy of quality AI projects, it feels like, in some of these cases, because you just have to be able to gather it together and make sure that it's accomplishing what you want with the project.
SPEAKER_02:Well, and perhaps one of the things we're finding quickly is that what data do I need versus what data do I have is an interesting way to look at it. Because we all have too much data, arguably, all over the place. Yeah. Do we have the right data to feed this particular AI agent? And then can we use the output of that to measure success? You know, can we see the feedback loop that's occurring? Are we able to measure the data to see if the agent is successful or not? Building measurement into the agent, building that idea of the feedback loop, is a critical aspect; if you don't start with it, it becomes pretty difficult to troubleshoot all the various hallucinations and odd stuff that occurs. Yeah. And it all points back to simplification.
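As a rough illustration of building that feedback loop in from day one, here is a small, hypothetical sketch that logs every agent run with a simple success signal and reports a grounded-answer rate. The file name, field names, and the scoring rule are placeholder assumptions, not WWT's actual telemetry or any product's API.

```python
# A minimal sketch of building measurement into an agent from the start:
# log every run with a success signal so hallucinations and regressions are
# visible. Field names and the scoring rule are illustrative placeholders.
import json
import time


def log_run(path: str, user_trigger: str, answer: str, grounded: bool) -> None:
    """Append one agent run to a JSONL log with a simple success signal."""
    record = {
        "ts": time.time(),
        "trigger": user_trigger,
        "answer": answer,
        "grounded": grounded,   # e.g. did the answer cite a retrieved document?
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


def success_rate(path: str) -> float:
    """Fraction of logged runs judged grounded: the feedback-loop metric."""
    with open(path, encoding="utf-8") as f:
        records = [json.loads(line) for line in f if line.strip()]
    return sum(r["grounded"] for r in records) / len(records) if records else 0.0


log_run("agent_runs.jsonl", "Summarize Q3 churn drivers", "Churn rose 2%...", grounded=True)
print(f"grounded answer rate: {success_rate('agent_runs.jsonl'):.0%}")
```

Even a crude signal like this, captured from the first pilot onward, gives teams something to trend before they try to scale the agent or add more agents around it.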
SPEAKER_03:Yeah. Before we move off agents, and I'm sure agents will pop up throughout the conversation, what else do leaders need to know right now to put themselves in position for success, whether it's building agents or getting their organization or employees to understand the value of agents and how to build them in something like Copilot Studio? Like what else do we need to know?
SPEAKER_02:Well, you know, we've talked about data and we kind of got into the thick of things pretty quickly. I think it's critical to look at agents both in how they're impacting various disciplines in IT, and then how those disciplines, from a technology perspective, are building better customer experiences, better employee experiences, partnership experiences, et cetera. And so if you look across applications and how agents are used in software development, AI engineering, AI software engineering is evolving the discipline of how we build the app to begin with. Data is actually the same thing. We've talked a little bit about data, but with agentic retrieval augmented generation, or agentic GraphRAG, as it's called, there's this evolution occurring where we're using agents in the data substructure. The same applies to infrastructure, IT ops, AI ops, and the same thing applies to security. How am I using AI to actually secure things, not necessarily only securing AI? And so each of these disciplines, traditional enterprise architectural domains, is being directly impacted by agents and AI. And then how are we using those to deliver experiences? To the end user, that may be a no-code solution in a platform, where I'm playing around with an agent for my workflow. That may be something more low-code where I can prototype the app in the line of business that I never could do before. That macro, I think, is really critical to digest and also try to accelerate as an organization: that idea of an agentic technology transformation. It's really meta, and I almost want to make fun of myself for saying it, but it's absolutely what we're seeing in enterprise architecture, evolving to be this AI-driven world to achieve the outcomes and experiences that we all, you know, want to strive for.
SPEAKER_03:Yeah. Well, Chris, I mean, help us understand how to put that into practice. I mean, what are the first several steps that need to happen so that we're working towards that reality?
SPEAKER_01:Yeah, you know, obviously for us we have this process, and I'm gonna hit it quickly, but the idea of identifying that use case has been super important. Having a center of excellence even before that is a step to make sure that you have executive alignment. And if you don't really know what a center of excellence is, that is typically a cross-organization group that is helping with the AI project definition and leadership. So those folks identify a use case, they then identify the data that they want to be able to use for that and cleanse it. They have a model they choose, and then they want to determine where to put it. And as they decide where to put it, they have the architecture: it can sit on-prem, it can sit in a public cloud, it can sit in a private cloud, whether that's an Equinix or Digital Realty, or a GPU cloud or neocloud, as they call them, like a CoreWeave, Lambda, or Nebius, one of those folks. So I think there's plenty of options for that. Now, let's get back to the beginning. I think a lot of folks are still in that use case identification phase. And as they're doing that, you've got people trying to figure that out in '26. There's a lot of internal conversations I've had with clients that talk about these projects that they're looking at, but really looking to see how they can flesh them out internally before moving forward. So we're seeing that effect. And once they do it once, they have the flywheel effect of, well, we've done it once, we've done it really well, how do we do this again? We can do it faster, better, quicker, and more efficiently.
SPEAKER_03:Yeah. Well, let's dive in a little bit more with use cases here. I mean, one of the things that I feel like I learned over the course of 2025 was just, hey, you don't need to work on every single use case right now. Sometimes it benefits you, and as a matter of fact, maybe almost every time it benefits you, to identify that handful of use cases that either offer a slam dunk opportunity or are going to drive quick value. Chris, are we seeing any commonalities or consistencies in the types of use cases that are, you know, tailor-made for that type of quick-win action?
SPEAKER_01:Uh yeah, in some cases, and I'll use this: there are four things that people look at with these use cases. Can they make money or save money? Can they decrease human middleware in some way, which means they make things more efficient? Or do they protect us from risk in some way? So I think these are all things that really matter. And then as you look at the specific use cases, I think there are things where they know they might see some efficiency. And I'm gonna use some of the life sciences ones, but life sciences seems to be very far ahead because they've got research and they've got a lot of opportunity to be able to go and change the world using some of these models. I think with financial services, they're looking at how do we protect our customers? So tech fraud and some of those elements. So I think these are the ones we've seen traction on early with the larger companies. The smaller ones might look at something like how do we work with the contact center? We have retail folks that are working on how to make their customer experience better. So those are all things we've been working with customers on in '25, and I see an expansion on that in '26. You know, Jason, you probably have a few others as well that you could think through. Yeah.
SPEAKER_02:Yeah, one in particular, and you started with the data side, so it's definitely related. Enterprise search, or traditional enterprise search, has been essentially crossbred with the idea of AI. We see this with Atom. In the early days of Atom, you know, we birthed an LLM and pointed it at some data from our website, and okay, cool, we can do some interesting things, but mostly for generation-of-content style use cases versus, well, wait, does it understand data from a capabilities perspective? Does it understand data around the room, if you will, for different traditional internal enterprise search kinds of use cases? But funny enough, that idea of an AI assistant, that idea of Atom for us, or whatever our listeners might call the assistant they're building, tends to be a fundamental for a lot of other use cases. Like, our proposal assistant talks to Atom to get information. Use cases that keep showing up on our executive leadership team calls, where we analyze level one, two, and three, or no-, low- and high-code style use cases, those all tend to have a backdrop of, well, if I'm looking for certain kinds of information, where do I go? They often start with that idea of an AI assistant or a subcomponent of that AI assistant. I believe we have over 50 agents now underneath the banner of our AI assistant; it's got all these little sub-areas in there, broken up and tuned, if you will, to understand all those different data sources. I think that's almost like the organization digitizing its intellectual property in a usable way, an AI-driven way, and that snowballs. The flywheel effect, as it's of course referred to, and as Jim talks about often. You know, I think that idea of a flywheel effect is building on the success of what is working. What are we actually seeing tangible benefit from? Don't overanalyze all the things, but at the same time, get very specific on, is that one working? Is that one working? Don't get too complicated with 50,000 use cases. I've seen some analyses that have 300, 400, 500. Okay, hang on. What are the one or two that are really important for us to get right? And then allowing experimentation, particularly in a decentralized way, and pulling out of the ether what is working and elevating that, I think, is something I'm observing our executive leadership team doing that I think is brilliant and necessary in order to move AI at speed.
SPEAKER_03:Yeah, you mentioned Atom, so Atom AI being our AI-powered chatbot that we have on WWT.com, which is available for our internal users, but also external partners and users of WWT.com. So you can certainly go out there and check it out if you're listening and wondering how to get involved there. So are you saying, Jason, that that AI-powered chatbot is a really solid, like, you know, first use case to go after because it can spawn into so many others?
SPEAKER_02:I think it's the one we're all seeing. I haven't talked to a customer yet, I don't know if you have, Chris, that doesn't have some version of it, and it's usually called something different. Right. Everyone's got, you know, some fun branded version internally for whatever purpose. But I do think that the concept of an AI assistant is table stakes. Yeah. People expect it. We see it called different things, like knowledge management systems, an AI front end to a knowledge management system, things like that. But they're all fundamentally similar things that break apart, you know, masses of internal data. Let's point to that SharePoint archive and do cool stuff with it, things like that. Yeah.
SPEAKER_00:This episode is supported by Gigamon. Gigamon delivers network visibility and analytics to optimize performance and security across infrastructures. Enhance your network operations with Gigamon's comprehensive visibility solutions.
SPEAKER_03:And not to exclude you from the conversation, Chris, but I am going to keep with Jason for another question. You know, we've introduced more and more agents to Atom AI. Like you said, I think we're up to several dozen at this point, and more coming. What have we learned about implementing agents with Atom AI that would be good to share with listeners of this podcast, for context on how to move forward? Because it's not an easy process.
SPEAKER_02:Yeah, I think there's been a lot of discovery on how you chunk up a prompt, how you route a prompt, in talking to some of our internal IT teams. What does it actually look like to tune those individual parts, like our customers and like our projects with customers? You know, we're seeing where a small language model, a tiny language model, or a traditional ML model, highly tuned, fits. And when I say tuned, I mean, you know, the art of the system prompt, data sources. I don't necessarily mean fine-tuning or true training, just to be clear there. But I do think that with the decomposition of those things, where you tune one agent to be really good at that data and also give it its purpose, its goal, so it's very clear that that's the expert, say, for our internal capabilities as our sales team might like it or what have you, I think we're learning that less is more.
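To illustrate the routing and decomposition Jason describes, here is a deliberately simplified sketch that dispatches each prompt to a specialized sub-agent. The keyword rule and agent names are hypothetical; a production router would more likely use a small classifier or language model, but the shape of the decision is the same.

```python
# A minimal sketch of prompt routing: send each incoming prompt to the
# sub-agent tuned for that data source, rather than one do-everything agent.
# The routing rule and agent names are illustrative assumptions only.
ROUTES = {
    "proposal": "rfp_assistant",      # tuned on approved proposal language
    "rfp": "rfp_assistant",
    "lab": "capabilities_agent",      # tuned on lab / capabilities data
    "pricing": "pricing_agent",
}


def route(prompt: str, default: str = "general_assistant") -> str:
    """Pick the specialized agent whose keyword appears in the prompt."""
    lowered = prompt.lower()
    for keyword, agent in ROUTES.items():
        if keyword in lowered:
            return agent
    return default


print(route("Help me draft the RFP security section"))   # -> rfp_assistant
print(route("What GPUs are available in the lab?"))      # -> capabilities_agent
print(route("Summarize this meeting"))                    # -> general_assistant
```

The "less is more" point shows up here too: each destination agent only carries the system prompt and data it needs, which keeps tuning and troubleshooting tractable as the number of agents grows.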
SPEAKER_03:Well, I mean, Chris, certainly as more and more agents get introduced to the mix, that's a lot of inference, training, et cetera, that's going on. So that gets into tokenomics here. You know, budgets are being scrutinized closer than ever. Certainly a lot is being dedicated to AI, but I know maybe in the spring of last year, when I don't think tokenomics was quite a buzzword yet, we were talking about where to run workloads. Was it cloud, was it on-prem, was it hybrid? How has the whole tokenomics conversation shifted over the last, call it, six months? And where do you expect it to go this year?
SPEAKER_01:Yeah, you know, tokenomics probably six months ago was a conversation among a very small group of people, and I think that's because it's all become more and more relevant. Costs with AI have been a little bit of a moving target. And, you know, if you look back, Gartner came back with their CIO study, the survey they came out with in August. And very similar to last year, they said we're gonna go spend more money, 30, 40 percent more money on AI projects, some 60% in those areas. And you know what's gonna get sacrificed is legacy infrastructure. Right. Well, I've been around long enough, I think we've all been around long enough, to know that that's been a problem for all C-suites for a long time: what to do with legacy tech. So something's gotta give if you're gonna abandon legacy tech to go for AI. So I think cost is gonna be something that folks are looking at a lot more closely in '26. The idea of tokenomics is, you're gonna have an amount of infrastructure you're going to have to invest in. Whether you're buying an NVIDIA pod and hosting it somewhere, ultimately it's going to be running and using tokens as the way for it to run. Well, that's ultimately what that cost is gonna look like. So instead of looking at something that says this infrastructure is gonna cost me $10 million, and building a data center that can handle it is gonna cost me another $20 million, I may host it somewhere and see what it's gonna cost me to get the same tokenomics. So you could look at an on-prem version, maybe you look at a public cloud option, you look at a GPU cloud or neocloud option, or even a, you know, a colo; all of those are gonna have different types of TCO models. So we're working on those now and we'll have those customer ready here shortly. But I think that type of advice that we can give to customers is going to be very relevant, because right now we're sort of putting our finger in the wind when it comes to some of this budgeting and figuring out what the costs are. And I think one of the things Gartner talked about probably a year or so ago was that inferencing is going to far exceed the cost of what it takes to build pods. So nobody's ready for that yet. I think we're getting more educated on it as the months go by and people get more experienced with what the costs are.
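As a back-of-the-envelope illustration of that comparison, here is a short sketch that expresses a few hosting options as a blended cost per million tokens at an assumed monthly volume. Every figure and option name is a placeholder assumption for illustration, not real pricing or a WWT TCO model.

```python
# A minimal sketch of a tokenomics comparison: express each hosting option as
# fixed plus variable cost, then compare blended $ per 1M tokens at an assumed
# volume. All numbers below are placeholder assumptions, not real pricing.
MONTHLY_TOKENS_M = 20_000        # assumed inference volume: 20B tokens / month

options = {
    # option: (fixed monthly cost, variable $ per 1M tokens)
    "on_prem_pod":  (250_000, 0.00),   # amortized hardware + facilities, illustrative
    "gpu_cloud":    (0, 18.00),        # rented GPU capacity expressed per 1M tokens
    "api_provider": (0, 30.00),        # hosted model API list price, illustrative
}

for name, (fixed, per_million) in options.items():
    total = fixed + per_million * MONTHLY_TOKENS_M
    blended = total / MONTHLY_TOKENS_M
    print(f"{name:13s} total=${total:>10,.0f}/mo  blended=${blended:.2f} per 1M tokens")
```

The useful output is the crossover point: below some volume the pay-per-token options win, and above it the fixed-cost infrastructure does, which is exactly the placement decision the TCO models Chris mentions are meant to inform.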
SPEAKER_03:Have we seen, and either of you can take this, but Jason, I'll give you the first opportunity to weigh in, have we seen anybody get tokenomics right yet, where they're just like, yeah, we've got this really good, call it for lack of a better term, FinOps system in place, where we feel good about what we're spending and what we see coming down the line? Or is it still very much an up-and-down, roll-with-the-punches type of thing?
SPEAKER_02:I think it's definitely still more up and down. I've definitely seen some very strong analysis that we've done, as well as worked with customers on. But I think it's just moving so fast. Yeah. If you look at the 5x compute uplift with Vera Rubin that Jensen announced in the CES keynote versus, wait a minute, last year it was this. And you're saying now it's potentially that much cheaper with a newer architecture. Well, hang on a second, what model am I using? Oh, hang on a second, why am I using this giant model for this small task? There's a whole lot of variables moving around here that I do think don't allow that to be a perfect, easy picture. Yeah. I do think, though, with the advent of reasoning models and where this is headed, it's the right model for the right job at the right time. I think we heard that very loudly from Jensen in that CES keynote, something we've been talking about quite a bit with our agentic talk track at AI days, and there's deep linkage to the neoclouds, as Chris mentioned, and the idea of these infrastructure patterns, or architectural patterns, excuse me: making sure that that's done with purpose, with a close eye on how and what models are being used, what the token spend of those is, and what results you're getting. There's a linkage there, and we're just watching it be built in real time.
SPEAKER_03:Yeah. I mean, Chris, build on that a little bit. I mean, last year at this time, DeepSeek came out and it sent shock waves across the industry. I mean, are we just in for more and more shock waves as we go along, or are we gonna peter out a little bit?
SPEAKER_01:Well, you know, I think with respect to tokenomics, and having been in tech for a while, this reminds me a little bit of dial-up services and AOL services in the 90s. They didn't know how to really price them. They started off doing per minute, and then people would get giant bills, and then it was per hour, and then they did, you know, all you can eat. And I think in this case, and Jason made some interesting points about different use cases, I think we're gonna figure out in the coming years, we're gonna have a better understanding of what the cost impact is because there'll be some history behind it. And what that really means is that we're gonna know that if you want to go run something globally at the edge, it's gonna cost X. If you're gonna go do heavy research work on a GPU cloud, it's gonna cost Y. Today, that is still unpredictable. So is it a little chaotic? It's unpredictable; I don't necessarily say it's chaotic. We're still waiting for patterns to emerge in the pricing. But this is also, we're all old cloud people, and cloud was also this way 15 years ago. So we're gonna figure this out. And I do believe that companies right now that need predictability will go toward those use cases that have already been out there and published and that they know they're gonna be successful with.
SPEAKER_02:Yeah. And I think, you know, we gotta remember we're like in the first inning, the first hand of poker, the first lap of the race on the inferencing front.
SPEAKER_03:Yeah.
SPEAKER_02:And building inferencing into the edge is a very different conversation from training and what that has represented. And the vast majority of the rhetoric in the marketplace and everything until the last couple of months, hell, the last couple of weeks even, has really been more about the training infrastructure versus that edge inferencing or what have you. And I know there are latency requirements, there's a need to have it on your device, what if you don't have great connectivity? That is, again, just in those super early stages of being actualized.
SPEAKER_03:Yeah. It's funny because I keep hearing first inning. That's it, we're in a marathon of a first inning here.
SPEAKER_01:At some point we've got to expect the second inning. Well, a marathon assumes that you're actually gonna end the race, though. Like, this race is not gonna end. True.
SPEAKER_03:Yeah, no. Fair point. Well, Jason, another conversation that we had a lot of last year was buy versus build. And I feel like we haven't really had as much talk there. Are we over that, or what's the better way to look at it?
SPEAKER_02:Yes.
SPEAKER_03:Yes.
SPEAKER_02:Yes to all of the above. Yeah, almost jokingly, of course. But yeah, you know, with our average large-scale customer, we're definitely seeing, okay, we're experimenting in the cloud, and then, for particularly scaled use cases or customized use cases, we're also building our own AI factory. Oh, wait a minute, no, that factory is actually, of course, inclusive of the cloud. And then I have a specialized use case and I temporarily need to do X, or I want to have a place to do that. That's the neoclouds that Chris mentioned. So all three of those. And Chris mentioned the cloud, multi-cloud, and that is the same conversation here. The idea of using the right model at the right time, wherever it may be, whether that's an edge use case with inferencing or whatever that is, that is going to be a thing for the average organization.
SPEAKER_03:Yeah. Chris, anything to add on that buy versus build? I know that was kind of already baked into some of the talk track that you gave just a few minutes ago, but anything to keep going with what Jason's saying?
SPEAKER_01:I think I want to just add model as a service. You know, we've talked about infrastructure as a service and, you know, those types of things in the last year, but model as a service is something you're gonna hear more about in '26. We're already speaking with partners that are looking at those offerings now. So that'll just continue that build versus buy discussion. And to Jason's point, I think it's both.
SPEAKER_03:Let's move a little bit to architecture here, infrastructure. I mean, Jason, what needs to be optimized right now? I mean, is it the entire stack? We touched on legacy tech debt here a little bit. What do we need to do from an infrastructure standpoint to make sure that we're setting ourselves up for success in '26 and beyond?
SPEAKER_02:That's a big question, but I'm gonna probably go from what we've been speaking about around tokenomics and what have you. Workload placement is everything.
SPEAKER_03:Okay.
SPEAKER_02:Knowing what workloads need to be where, and what models are served from where, is a huge part of, you know, shall we say, getting it right. Thinking about that architecture in a modular way, thinking about that architecture across public and private, you know, thinking of it as a broad infrastructure architecture versus, well, I'm just gonna go build over here. It's not particularly scalable and enterprise supportable if we're not thinking about the entire umbrella. And the strategies that we've had with customers, of course, unpacking storage, compute, and networking around high-performance architectures, are very, very critical. I continue to hear workload placement, though, being a very significant primary enabler of organizations digesting those complex architectures and, of course, starting to show ROI metrics. Right. You know, the fidelity of those ROI metrics is what we're really talking about when we mention tokenomics and FinOps and what have you. And if we're in a particular ecosystem, you know, one OEM's components and subcomponents, there's certainly plenty of data. But the idea of looking across all of that is where it's much more difficult. There's a lot of players in there that are emerging.
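To make workload placement concrete, here is a toy sketch of a placement decision expressed as a few coarse rules. The criteria, weights, and venue names are illustrative assumptions, not a WWT reference architecture; real policies would also weigh cost, skills, and existing contracts.

```python
# A minimal sketch of workload placement as a deliberate decision, not a
# default. Criteria and venues are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Workload:
    latency_sensitive: bool     # e.g. edge inferencing vs batch training
    data_sovereign: bool        # data must stay in-country / on controlled infrastructure
    steady_state: bool          # predictable 24x7 load tends to favor owned capacity


def place(w: Workload) -> str:
    """Pick a venue from a few coarse rules; real policies weigh cost and skills too."""
    if w.latency_sensitive:
        return "edge / on-prem inference"
    if w.data_sovereign:
        return "private cloud or colo (sovereign)"
    if w.steady_state:
        return "owned AI factory (on-prem pod)"
    return "public or GPU cloud (elastic)"


print(place(Workload(latency_sensitive=False, data_sovereign=True, steady_state=False)))
```

Writing the rules down, even this crudely, is what turns placement from an accident of whichever team moved first into an architectural decision the whole organization can reason about.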
SPEAKER_03:Right. Why is that so difficult, Chris? Is it just the crowded market, or are there other factors in play here?
SPEAKER_01:Yeah, you know, we're moving from traditional infrastructure to high-performance compute, and this requires an entirely different set of skills to operationalize it, manage it, and really understand how to utilize it. So that's a big challenge. So any organization that's been doing this for a while and thinks they want to put it on-prem might struggle if they try to do this, because of the nature of this high-performance compute. And in some cases, you know, I think that high-performance compute for AI might just be an environment sitting over somewhere at a hosting provider in another data center for a lot of these big organizations, because it will be 10% of what they do, but they still have got to support all their traditional users and all the traditional infrastructure that they have. You know, I think the other side to that is that the complexity of some of these stacks is getting a little bit cleaner just because of the reference architectures that NVIDIA has; you know, we have 17 or 18 in the AI Proving Ground as an example from different vendors. I think those are proving to be a good guidepost for folks trying to look at how we're reducing complexity with those architectures. So overall, I think we're in a good spot. We're gonna get more efficient. All the vendors have a way to try to get more efficient. But you know, to Jason's point, they announced Vera Rubin last week. And then this morning I just noticed that Lilly and NVIDIA just announced an AI lab with Vera Rubin, I think the first of its kind. So more and more to come as these projects come out, but those users are gonna look at these next-level architectures and say, how do we support them? What do we do to operationalize them? And those are giant questions that they haven't had to answer yet, usually in their entire career. So a lot to come with that, and help we can actually provide, and that others are gonna have to look to partners to provide.
SPEAKER_03:Yeah. A lot of what we've been talking about is all within the WWT research report that came out earlier this year, AI and data priorities for 2026, which is a fantastic report. We encourage everybody listening to go check it out. Uh, one of the things that I liked at the end of it was just, hey, here's some trends that we're gonna be tracking over the course of 2026. A couple of the examples, robotics and automation. So obviously physical AI, digital twins, extended reality, computer vision. Jason, any of those kind of pique your interest that you think we could dive into? Oh, very much so.
SPEAKER_02:Physical AI, especially, I think, is where, you know, that's where all the magic is, so to speak. I think, you know, we watch pop culture, various movies from the past, like iRobot or the Terminator series with Skynet, or Her, or any of the others that we joke about at our AI days. You know, that physical representation of what AI could become is the thing that scares us a little bit, but also has crazy, opportunistic, Tomorrowland-like possibilities of a robot that can do all the things that we don't like to do. And funny enough, the idea of a self-driving car is an agentic architecture by definition. It's kind of a fun thing that's not said very often. But if you look at all the component parts, and you look at computer vision and digital twin and all these different technologies that are kind of related to the physical world, I think we're seeing an enormous amount of energy poured into that, because finally things that we've dreamed about are possible. I do think that it's gonna start more simplistic than we might all think. It's not like it's gonna be, you know, a Rosie the Robot that's doing everything for us. It might be a self-cleaning floor robot or something that can do certain things in the warehouse or what have you, especially in those early use cases. But one thing to consider is that the physicality of the robot, the robot doing things, is actually not really the issue. We've been able to do a lot of the physical parts for a long time. The robot being able to do that on its own, without a human operator doing literally everything, is really where AI is coming in and going, oh, hang on a second, we can actually now discover the world, have a world model, understand things. There's a lot more that it is capable of, and will be capable of, because of AI being injected into it.
SPEAKER_03:Yeah. Yeah, Chris, any of those examples that I gave, or even something beyond that, a trend or issue that you're ready to keep an eye on and keep track of over the next 12 months?
SPEAKER_01:Yeah, you know, I work a lot with the facilities groups and the idea of power, and I think this idea of access to power is the number one thing influencing everything in AI right now, especially as they look to go build, and with Vera Rubin having, you know, more requirements from a power standpoint. We're seeing a lot of folks with access to power wanting to build out facilities. And, you know, I think more and more about that idea of how we're gonna go and grow and build and support all this from an infrastructure standpoint. I mean, power, cooling, all of those things we're gonna have to keep our eye on because of efficiencies. If you look at a Gartner report that came out last summer, you know, we're gonna be out of power in the next year or two, and there's not gonna be any place to put it. Well, we know that there are a lot of people out there promising more power, more megawatts. So I'm really very interested in keeping my eye on this particular market over the next year, just to see who the players are, how it pans out, and any new sources of power that might come up as well. So I think those are interesting. And then the second piece to me would be the global aspect of this, because I think the United States has been far ahead in AI development, infrastructure development, and investment. China's been doing a ton we don't read about, but more and more I'm looking at the global growth and the impact that might have on our market as other countries start to spend more money, or we start getting more sovereign-type data centers in there. So I'm very interested in what happens across the globe as well. And we at World Wide are obviously working very closely with our partners around the globe on that.
SPEAKER_03:Yeah. What's the ripple effect of that international expansion idea? Is it just that, you know, we'll start to see more innovation bubble up, or is it gonna be more in the tone of, well, you know, some other country will introduce their new DeepSeek and it's like, oh, this changes the math on everything?
SPEAKER_01:There's an entire market in Singapore that manages some of the Chinese technology companies that we don't really even hear about much anymore, for that matter. So I think as we look at it, we're trying to better understand that. I think that those international and APAC businesses that we're working with today frequently will have Asian manufacturers that they're working with. And so I think that's part of it. I think the idea of sovereign data, and being able to keep the data in country, is going to be the biggest driver of data center development. As you may or may not know if you're listening to this, data has to remain in country if it's for that country; that's what sovereign data is. And so if you look at this, you know, we're working in Australia, we're working across the globe in EMEA for these types of sovereign data centers, and you'll see that growth as well. So I think it'll be a very interesting challenge and a bit of a race to see who is going to build this faster and more efficiently than most and get these customers signed on. Yeah.
SPEAKER_03:Yeah. We're coming up on the bottom of the episode here. I do want to get into a little bit of predictions. I mean, no better time to make predictions than early in the year. And, you know, there's a lot that we could potentially cover here. So I'm going to ask both of you, and Jason, I'll start with you. Give me one prediction that you think is a pretty slam dunk guarantee we'll see by the end of this year. And then give me a little bit more of a bold take on something that you don't necessarily expect but hope to see happen.
SPEAKER_02:I think the first one, or perhaps the most poignant for me, is, for '26, us having a virtual teammate that is an AI agent of some kind, and in the truest sense, where we actually start to use that virtual teammate and we're like, hang on a second, there are some things that I can offload to this role or offload to this agent. I think that '26 is the first year that really becomes real for a big chunk of at least the technical community, and then of course, you know, broader users from there. On the more, you know, maybe crazy front, I do think we're gonna see, especially towards mid and later '26, the first, you know, robotic "I want that" for some purpose in my personal or professional life. I feel like we're gonna start to see a little bit of that percolate in, but that's much more of a stretch. I think there's a longer tail on that. But experimentation with that in a tangible way as a technologist, I see that being something really interesting, and I'm very excited about it as well. I think that's just where all the possibilities lie, but not necessarily reality yet.
SPEAKER_03:Yeah. Yeah. Chris, something that you think will happen, slam dunk, and then something a little bit more bold, uh, pie in the sky type of uh activity.
SPEAKER_01:Yeah, you know, I'm gonna hit AI literacy, and I think AI literacy is one where, you know, we spent the last year at our AI days and with our clients being a bit of the public education system for AI, teaching them everything that they need to know. Well, now they're catching up. The articles are catching up, the different types of people that are writing about it, people are understanding how to use these models. So I think as that becomes more comfortable in those areas, that literacy, our clients are gonna adopt more, and they're gonna be able to adopt faster because they themselves will be better educated and more comfortable. Very similar to any of these early-stage efforts we had, whether it was the internet at the end of the 90s or cloud: the early adopters are gonna be out there, and more and more people get comfortable with the idea of data sitting in a cloud or whatever it may be. So I think that's part one. And we'll get more and more comfortable with that in '26. As far as kind of a pie in the sky, I think that, you know, I'm kind of like Jason. I think there are some really cool things that could happen with physical AI that we have yet to adopt. I have some great ideas in my own mind, but the idea is that many people are gonna find that ChatGPT can serve them in a number of different ways as it gets more personalized and learns more. The same with robots and home assistants and some of those things. You know, it's about adoption. Very much like that iPhone example I used: we're gonna figure out that the killer app is not your music or your pictures, it's Instagram. Well, what is that killer app gonna be for AI? So it'll come.
SPEAKER_03:Yeah, no, absolutely. Well, excellent. Great conversation. A perfect way to kind of, you know, get into the bulk of 2026. So I'm excited to see what we have. I'll definitely document your predictions and we'll revisit in 12 months and see how you did. To the two of you, I'm sure we'll have you on again here sometime soon. But for now, thanks for joining. Thanks for having me.
SPEAKER_01:Thank you, Brian. Thanks for having me.
SPEAKER_03:Okay, thanks to Chris and Jason for making time on the show. Before we wrap, here's one lesson that stands out. Enterprise AI doesn't fail because the models aren't powerful enough. It stalls when organizations try to scale AI without fixing the fundamentals. The companies making real progress are the ones that are disciplined about use cases, serious about data and security, and realistic about what it takes to operationalize AI at scale. This episode of the AI Proving Ground Podcast was co-produced by Nas Baker, Kerr Kuhn, Maggie Ryan, and Stephanie Hammond. Our audio and video engineer is John Knoblock. My name is Brian Phelps. Thanks for listening. We'll see you next time.