AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology

The Ferrari Problem in AI | Intel

World Wide Technology: Artificial Intelligence Experts Season 1 Episode 62


Enterprise AI is moving out of pilots — and the infrastructure gaps are getting harder to ignore.

In this episode of the AI Proving Ground Podcast, Intel’s Lynn Comp and WWT’s Mike Trojecki break down why treating enterprise AI infrastructure as a single hardware decision is a costly mistake. As agentic systems push AI into real operations, assumptions like “AI = GPUs” start to crack under pressure from power, cost, governance, and scale.

The takeaway from 2025 is clear: performance alone isn’t the advantage. Fit is.

We unpack how agentic AI is reshaping security models and centers of excellence, why disciplined architecture beats oversized builds, and what leaders need to plan for in 2026 to scale AI without locking into brittle, overbuilt systems.

Because driving a Ferrari to run errands looks impressive — until you see the bill.

Support for this episode provided by: Proofpoint

More about this week's guests: 

Lynn Comp has a wide range of experience spanning her ~30 years in the tech industry, from strategic planning and go-to-market of RISC SoCs for both communications infrastructure and mobile phones, to software pipelines laying the groundwork for rapid video-based services innovation, to pioneering the foundational libraries that paved the way for 'software defined' networking with telecommunications operators. Lynn has extensive experience in marketing, product management, product planning, and strategy development across software, hardware, cloud, and communications service providers (CoSPs). Lynn has a Bachelor of Science in electrical engineering from Virginia Tech and an MBA from the University of Phoenix.

Lynn's top pick: AI Meets the Classroom: Shaping the Future of Learning with Intel

Mike Trojecki brings more than 25 years of experience across technology and leadership. His career began in the U.S. Air Force, supporting missions for the White House and Air Force One, where he developed a foundation of precision and reliability. After transitioning to the private sector, he led emerging technology practices at firms including ePlus and Logicalis. At World Wide Technology, Mike now leads the AI Practice, focusing on high-performance architectures, data, computer vision, and AI data center design to help organizations scale AI with impact.

Mike's top pick: AI and Data Priorities for 2026

The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.

SPEAKER_01:

From World Wide Technology, this is the AI Proving Ground Podcast. The enterprise AI landscape is loud. You already knew that. New chips, new factories, new policies, constant headlines. From the outside looking in, it may look like momentum, but from the inside, through the lens of IT leaders, it can often feel like confusion: too many signals and simply not enough clarity about what actually matters. For enterprise leaders, this creates a real tension. Move too fast and you lock yourself into expensive assumptions, but move too slow and you'll fall behind a curve that's already accelerating. So today's conversation sits right in that gap. To unpack it, we're joined by Lynn Comp, a vice president at Intel who leads its AI Center of Excellence. Lynn sits at the intersection of hardware strategy, security, and enterprise deployment realities and offers a playbook for how to make sense of it all. And Mike Trojecki, who leads AI go-to-market strategy here at WWT. Mike has been working closely with organizations of all kinds trying to scale AI without losing control of it. We'll start with a simple but difficult question. Amid a marketplace full of announcements and ambition, what's the real signal enterprise leaders should anchor to? And what can they afford to ignore? So let's jump in. Lynn and Mike, welcome to the AI Proving Ground Podcast. I think this might be the most geographically diverse set of guests we've had. Lynn, I know you're out there in Oregon near Intel HQ. And Mike, I believe you're at our WWT New York City office. And I'm right smack dab in the middle in St. Louis at WWT global HQ. So to the two of you, thank you so much for joining.

SPEAKER_02:

Thanks for having us.

SPEAKER_00:

Yeah, this is great. And yeah, we've got every time zone covered. And I'm staring at lovely New York. It is freezing cold here right now, but for us South Carolinians, it's definitely cold.

SPEAKER_01:

I love it. Before we get into the meat of the conversation, Lynn, I do want to start with you with a bit of a clear-the-table type of question. 2025 was a consequential year for Intel: lots of investment, partnerships, manufacturing milestones, and so on, and certainly competitive pressures. For enterprise leaders listening today, what should they be thinking of as real signal, versus noise they can put to the side?

SPEAKER_02:

So 2025 has been super consequential, as you know. We started with different leadership, we started with different strategies. And what you see reflected at the end of 2025 is this recognition that AI is moving super fast, and we want to participate in all aspects of the AI hardware continuum. At the beginning of the year, you might have been thinking, oh, Intel's gonna exit networking. We're not exiting networking. We've had a huge emphasis over the last nine months on security. We want to be the secure foundation that goes across all of these capabilities. Whether or not we're actually running the models on our hardware, you still have to be able to secure and govern things, especially as we're moving from what was much more of a chat interface, a much more creative world, to something that's agentic. You're bringing in diverse models, you're bringing in diverse data sources. And so the real signal is this: AI is diverse and broad, it has efficiency challenges, it has governance challenges, it needs security that covers those vulnerabilities, and it needs all of the data moving in and out the way it needs to, in an efficient and low-power way. That's really the signal to pay attention to for Intel. It's much more about recognizing not where we've been, but where the industry itself has to go and what Intel's unique role in that has been. And that includes things like manufacturing. It includes things like supply chain, and making sure that there's security regardless of what economy you happen to be participating in or where you're deploying your AI solutions.

unknown:

Yeah.

SPEAKER_01:

Yeah, Mike, your read on all that.

SPEAKER_00:

Yeah. So it's interesting, right? Because you look at 2025 and, you know, Lynn mentioned leadership changes at Intel and some of the things that are happening, but you also look at how the government is getting involved and investing in companies like Intel. So it becomes more than just financial aid. You look at where this is all going, and it is about establishing the US as the world's leading chipmaker, right? So we see these things, and what Lynn is talking about, I think it all points to that. From a governance standpoint, a sovereignty standpoint, we're trying to reimagine the US as that leading manufacturer, right? We have to shift the balance of power back. So it's more about national industrial policy, I would say, aimed at securing our technology foundation.

SPEAKER_02:

Mm-hmm. Got it. Yeah, Mike, one question I have for you too, given some of the latest news that's come out: it seems like we're going back and forth a little bit on AI policy. Curious if you see that as something that is going to have a long-term effect, or is it more just pragmatic, what has to happen, because right now it is a global economy?

unknown:

Yeah.

SPEAKER_00:

Well, you have to separate that into two areas, right? You look at AI governance from a global standpoint. But even here in the US, with some of the things that are going on, you look at how do we regulate AI, or do we regulate AI? Do we do it at a federal level or a state level? The beauty of being the United States of America is that the states get to make their own decisions in a lot of cases. And when it comes to AI, we're starting to ask: can it be managed down at a state level, or does it need to be managed at a federal level? Or does it need to be managed at a global level? I mean, we have a ton of people on staff who deal with a lot of those things from a government standpoint, but that's the thing that is interesting, going back to what you just mentioned, Lynn.

SPEAKER_02:

Yeah, I mean, from our standpoint, we're participating broadly in the broad economy. That's why we've got 18A and our advanced process nodes, and the reason we're building them out is to make sure that we are the foundation that the world can manufacture on. And then, from the standpoint of national and global interests, there is a key element of being a domestic manufacturer that is strategic, and that is a multifaceted dynamic we have to manage through: the both-and of being in all those markets. So super exciting. I think the main signal is AI is moving fast, Intel's moving fast, we are being flexible about what we're offering to different customers. That includes being the most widely deployed head node or host node with GPU systems. I'm super excited about that because, again, it's that data foundation and that CPU processing foundation. If you look at things like agentic systems, you're gonna bring in much more diversity of models. And so we want to be the technology provider, and it's not just at the chip level. We want to be providing solutions at the rack level. We want to be providing solutions even in packaging. So really, the main principle is we're open for business, and we are going to be aggressively participating at every level of the AI hardware build-out.

SPEAKER_01:

Things are unfolding so rapidly all the time, in real time, really. And that pace of change can certainly lead to confusion in the market. Which leads to my next question: I think a lot of people still misunderstand AI infrastructure as just GPUs, when I know you and others at Intel have an interesting perspective here. What other misunderstandings might the market have, from your perspective, as it relates to AI, how to harness it, and how to move forward with it?

SPEAKER_02:

I think one of the main things that we've seen is that there's this default of AI equals specific hardware infrastructure, or AI equals a specific model. And the reality is, we've had a number of partners, and WWT and Intel have a couple of key partners right now, finding that it is completely overblown. It's driving a Ferrari to get groceries or take your kids to school if you're just doing document summarization or chatbots. And so the question is really matching the need to the capability you have available before you start over-engineering what's going into hardware. If we'd had a little bit more of that, we probably wouldn't be looking at some of the power and cooling challenges, some of the water challenges, that we're hearing about as these build-outs happen. But there's still this opportunity to start pragmatic, start with what you have deployed, be simple and judicious, and then add as you go. I've heard about 20,000-person call centers that are basically running on CPU servers with really light accelerators, such as what we talked about at the OCP event, which was the codename Crescent Island. So there's a lot of flexibility in this. And I think people are gonna have to go back to finding those efficiencies, because you can't just build out and then find that you've hit your cap on power, water, cooling, available data center space, et cetera. Mike, I'd be curious about your perspective here.

SPEAKER_00:

Yeah, and I think the true value here, for the enterprise anyway, is combining the strengths of all those different types of processors, whether it's CPUs, GPUs, or specialized accelerators, and looking at this world as an ecosystem. So when Intel is looking at where they fit into the market, there are so many places where people are looking at specific workloads, like you said, when it comes to some of the agentic pieces, especially as we take these agents and move towards very specialized agents. Maybe it's a predictive model in manufacturing, maybe it's document summarization, like you said. Those are the things where we don't have to fit everything into one architecture. It's going to be multi-architecture, down to the chip level.

SPEAKER_02:

Right, right. And that's the reason for the flexibility in our strategy. I mean, in September we partnered with Supermicro to talk about how our TDX Connect confidential AI capability works next to NVIDIA GPUs in those accelerated systems. So it's not an either-or. It's really: match the need to the capability requirements, and don't over-engineer it, I think, is the main point. And then one of the other things I would say is, as we're looking at predictive models for manufacturing and things that do require more security, we also want to be clear that we are best in class on product security assurance. We proactively found and reported 94% of our vulnerabilities and then remediated them, and some of our competitors were just over 50%. All of those things are really what enterprises have to think about when they're deploying, as opposed to thinking about, you know, how do I create a new flyer for my restaurant? Those differences are really, really important to think about as we move forward.

SPEAKER_00:

Well, and one of the things, and Lynn, I don't know if you would agree with this comment or not, but when it comes to where CPUs play, they play great at general-purpose, low-latency tasks where the bottleneck is data movement, not raw computation power.

SPEAKER_02:

Right. I completely agree with that. In fact, a conversation I had earlier this morning was talking through the fact that you can put agentic and AI systems in place, but if you can't get past the data security, the data silos, and even the ability to move the data efficiently, then you're starving your AI system. So you really have to start with the data repositories, those data systems, and make sure that the AI system's access to them is efficient.

SPEAKER_01:

Yeah, everything you both are articulating is a very practical message. Lynn, I love what you said about taking your kids to school in a Ferrari. My kids would absolutely be thrilled to go to school in a Ferrari. But if you're taking that practical, flexible approach, Lynn, what type of implications does that have for how you build your infrastructure overall? What considerations should we be thinking about to ensure that flexibility in 2026 and not get boxed into a corner where you are stuck with that Ferrari on the way to school?

SPEAKER_02:

You know, I think one of the main things I've seen, and I've seen it even with some of the largest vendors on the planet, is that they effectively default to a specific solution approach and then discover, oh, I can't power them on. In fact, the CEO of one of the leading lights has said, we have GPUs and we're gonna stick them in inventory because we can't power them. And so I think that's a really critical thing: don't just order and then figure it out. It's really understanding how we get the maximum out of the infrastructure we do have deployed, and then judiciously adding to that as we know more about the system. A lot of the 'do anything with AI' push that we've been having required you to just order the default solution because you didn't know what you were gonna do. Now I think enterprises are much clearer on what the solution space is, and so it's getting more and more refined. But also, when you can buy GPUs and can't power them, instead of having that situation, maybe the thing is to actually add an ounce of planning and prevention instead of trying to remediate it later. So I think it looks like really knowing what is going to be a mass deployment, what is gonna be a broad deployment, then how you can most efficiently leverage the infrastructure you have, and then really planning for your power capacity. What can you afford to include, and how much headroom does that give you? Mike, I don't know if you've seen something similar, because I know that WWT does a lot of really intensive hardware implementations, from very simple to very extreme.

SPEAKER_00:

Yeah. And before you even go down there, I typically take my Aston Martin to go get groceries and leave the Ferrari and the Lamborghini in the garage. Just so we're clear on that. Crystal clear. So one of the things, Lynn, that you just brought up, and I'm not skirting your question here, I just want to go back to it, is the energy piece of this. I look at this and say, that is one thing that I wake up and worry about. We have this enormous demand for GPUs, but potentially nowhere to put them. And sometimes we over-rotate on whether it has to be a GPU. Just look at computer vision or natural language processing: those things can be run on CPUs. The energy piece, the cost piece, the lower-latency piece, I just want to drive that point home. So I think that's an incredibly important thing for you to bring up, because the thing that frankly I worry about is, okay, where are we going to put this stuff? I can't go into a data center and just magically move things around to add more power or have a megawatt-type rack. Those things take time to actually get implemented.

SPEAKER_02:

Yeah, and you can't mix liquid-cooled and air-cooled. I mean, there are so many different things to think about there. One of my favorite examples of natural language processing in manufacturing: Intel managed to deploy sentiment analysis across our tool flows in manufacturing using natural language processing. It was all CPU-based. And all it did was help us understand what was going to happen in terms of failure mode analysis. Can we get ahead of that and do predictive maintenance instead of reactive maintenance? So there are some very, very practical use cases that really do make business more efficient, that make enterprises run well, that aren't just big LLM systems. And that NLP use case I just gave could be one element of an overall agentic system around failure mode analysis and predictive maintenance. So it's not just retire what you had deployed; it's really, how can you refactor that into a full workflow, I believe.

SPEAKER_01:

Yeah, I love what you're getting at there. I mean, the next wave of AI is certainly not always going to be large language models. You're talking about agentic systems, small language models, use case-specific models. Maybe at the risk of asking the dumb question in the room here, but Lynn, as we move into more specified use cases, is it always CPU, is it always GPU, or do they bounce back and forth from time to time? How does that work?

SPEAKER_02:

I think there's no question: if it's training, you're gonna do a GPU. For some of these other small language models, vertical-specific models, predictive analytics, computer vision, it really comes down to the use case. We have an example where you can do 40 cameras with a CPU; you can't do that with certain GPUs. That's more of a computer vision edge use case. I think it is ambiguous, which can be frustrating, because people would like a simple rule of thumb. But it's also important to really, really understand the use case before you start engineering the hardware.

SPEAKER_00:

Yeah. There's no hard and fast rule with any of this yet, right? And Brian, you mentioned agentic; as we get down into the agentic piece, you may have agents running on GPUs, you may have agents running on CPUs. It's going to depend on the task that is required. And going back to Lynn's comment about taking the Ferrari to get groceries, if you try to deploy everything on one single architecture, you're gonna misuse capital, right? You're gonna spend too much on energy, and you are going to misuse resources within your organization. So I look at this and think this agentic world is going to shake things up a little bit.

SPEAKER_01:

Yeah. And this seems like as good a time as any, Lynn, to bring in the concept of a center of excellence. I know that you lead Intel's AI Center of Excellence. And when you're talking about how it depends on the task required and where the workloads are going, is this where the crucial role of an AI center of excellence lives, so that it can start to understand an organization as a whole and parse it out? Let's dive into that a little bit.

SPEAKER_02:

You know, it started that way, which is really trying to parse it. Now, when you look at centers of excellence for AI, a lot of companies have them; Google has them, and so do others. And I believe that every year, maybe even every six months, the roles have shifted. For example, at the beginning of this year it was really much more about how you figure out what kind of asset runs what kind of workload for what kind of use case, giving as clear a rule of thumb as you possibly could. Going out of 2025, I think the center of excellence is really asking questions around agentic governance, around agentic sign-off. Like when you have agents running your infrastructure, but humans are still supposed to be the ones signing off on the operations. You can have all the arguments you want about humans make mistakes, AI makes mistakes, what's the big deal? But our legal frameworks and regulatory frameworks aren't caught up to that. And so I think there's an element of practices, of governance, of application in an AI center of excellence. And then there's always a technology element of where's the technology going, what are the optimizations for today. So it's a both-and.

SPEAKER_00:

Yeah, that's an important part, right? Because when you look at the governance piece, and you're talking about these agents and this human in the loop, you're really going to need to assign a digital credential to these agents. You've got to be able to provide traceability, you've got to be able to implement zero trust for those agents. So for me, when we look at the impact these agents are going to have in the world, you've got to run that through your organization, and you mentioned security earlier, not just from an architectural standpoint but from a security and governance standpoint as well. And that's one of the things a center of excellence should be responsible for: pulling all those different pieces together.

SPEAKER_02:

Yeah, it's interesting, because I have an IAPP AI Governance Professional certification. A lot of the curriculum started out much more around model provenance, data provenance, digital rights, the Digital Services Act, and things like that. It really hasn't touched ground yet, I believe, on signing off on agents, the provenance of agent behavior, and having that human in the loop and figuring that out. So I really see that as the next wave.

SPEAKER_00:

One of the things that we've been doing there is testing these agents. And Brian, I know we talk about the AI Proving Ground and the AI Proving Ground Podcast, but being able to test those agents in a controlled environment, being able to bring those into something like the AIPG, is critically important, right? And going back to the center of excellence, those are the things you need to keep in mind: what does this look like? How am I building this out pre-production, making sure I'm testing it in my environment, in a simulated environment, before I actually roll it out to scale?

SPEAKER_01:

Yeah, absolutely. Lynn, you're mentioning provenance here. And I know one of the things you had mentioned to me before is how that sovereignty conversation is becoming more important for our shared clients as we move into 2026. Does that change anything we've been talking about to date, or does it just reinforce what we've been talking about so far?

SPEAKER_02:

You know, I think it reinforces it, because sovereignty is very difficult when you're talking about a fully global, cloud-hosted business. And so I think what you're gonna end up seeing is that it's not just provenance; it's also where's your hardware, who owns your hardware, who's managing your hardware, and basically who has rights over the control systems for it. And we are just, I think, on the front edge of those conversations about sovereignty. Sitting in the US, we might have a different perspective on what that looks like than if you're sitting in Europe or in Asia. And so I don't know that we're even scratching the surface on what you're seeing internationally, but at least for us, yeah, provenance is gonna be, I think, a heightened area of concern. Governance is gonna come into play. It used to be run fast, break things, and deploy. We're gonna have to go back and really start looking at how we govern. What does human in the loop mean? What are they accountable for? And what's the traceability of the event logs and sign-offs as these systems are getting more and more rolled out and more automated for things that are mission critical?

SPEAKER_01:

Yeah, Mike, you're dealing with clients on the ground. Well, in this case, New York, maybe high up in a skyscraper. But is that how organizations are actually acting right now, or are they gonna have to make big changes?

SPEAKER_00:

It's not that they need to make big changes. They just need to get more organized around their centers of excellence, define what that means, define what governance means. I mean, we talked about this: what does it look like from a sovereign standpoint? These agents and AI are going to make mistakes, just like a human would. So how do we have intervention protocols in place for when that happens, right? Where's that big red button where they say, all right, we're gonna turn this off before it gets out of hand? So I look at that and say, our customers are doing an okay job of identifying, hey, we'd love AI to do X. But where they're not getting to the crux of the issue is: I want it to do X, but what happens when it doesn't do X? What happens when it makes a mistake? And there's data that runs outside of that organization, whether it's an enterprise, a country, a nation-state. There are going to be those situations. So that center of excellence should have that kind of intervention protocol, if you will.

SPEAKER_01:

No, absolutely. We're coming up on the bottom of the episode here, so we'll wrap up with these last couple questions. We're recording this episode at the end of 2025, but we'll release it very early in 2026. So it's a good time to ask: what did you learn this year that you're going to apply as a priority in 2026?

SPEAKER_02:

I think that for me, what I learned this year is that the hypothesis that AI is not one-size-fits-all was absolutely true. However, the diversity and the breadth and the sprawl of how AI is deployed surprised me. And so that was one of my big learnings: just how quickly we pivoted from agents are the thing to agents are doers. And we didn't necessarily keep up on some of the governance and security and the other considerations. So for me, the learning was that it felt like a huge snap and a huge explosion in diversity. Going into 2026, I expect we're gonna be in a digestion period from all of that explosion and sprawl and diversity. And then we're gonna come up with the next level of deployment questions and the new wave of what's next.

SPEAKER_01:

Yeah, Mike, similar learning slash action, or do you have a little bit of a different take?

SPEAKER_00:

No, it's similar. One thing I will add is from an adoption standpoint: people, organizations, governments adopting AI. In this whole world right now, people are looking at this and saying, hey, are we in an AI bubble? What I've learned is there is no bubble. We are underinvested in AI from an infrastructure standpoint. The demand for AI is outpacing the supply right now. So it's different than when we look back at the dot-com bubble. That was a lot of investment without customer demand. We're seeing the reverse right now. And looking at that, the biggest thing I learned was, man, we're really underinvested in AI. And if that's true, and Lynn talks about the sprawl, then for that sprawl to continue, we need to invest more in the infrastructure. And that's on-prem, in the cloud, and in hybrid situations. So that was the biggest thing I took away from 2025.

SPEAKER_01:

Yeah, well then, finish that thought. What does that mean, not just for you and for WWT or Intel, but for enterprise IT leaders and AI practitioners? What does that signal they're gonna have to do in 2026?

SPEAKER_00:

Yep. You're going to have to look at moving beyond production into scale. You've got to get these infrastructures scaled out if you're going to be competitive, right? As you look at this agentic world, that scale doesn't necessarily have to come from a large data center or a large model. That scale is probably going to come from smaller models and the use of agents, specialists that are good at one job, rather than one big agent brain or one big AI brain sitting out there to do everything. So that's my advice: look at that, make sure your infrastructure is ready to go. And that's not just from a technology standpoint; it's power, it's cooling, it's space, all of the above.

SPEAKER_01:

Yeah. Well, it's certainly an interesting future that we have in front of us. Lots of opportunity, but also lots of complexity and lots of questions still to be answered. I'd love to have the two of you on at this time next year just to see exactly what transpired over the next 12 months or so. But Lynn, Mike, thank you so much for taking time with us here on the podcast. We appreciate your time, and thanks for the partnership.

SPEAKER_00:

Thanks for that.

SPEAKER_01:

Okay, thanks to Mike and Lynn for joining the show. As we wrap up, one lesson stands out: the constraint isn't imagination, it's discipline. What this conversation makes clear is that AI didn't stall in 2025; it fragmented. More models, more agents, more architectures, and more ambition. And that sprawl is real value creation, but only if it's matched with intentional infrastructure choices, governance, and restraint. This episode of the AI Proving Ground Podcast was produced by Nas Baker and Kara Kuhn. Our audio and video engineer is John Knoplock. My name is Brian Pelt. Thanks for listening, and we'll see you next time.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

WWT Research & Insights (World Wide Technology)

WWT Partner Spotlight (World Wide Technology)

WWT Experts (World Wide Technology)

Meet the Chief (World Wide Technology)