AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology

The Network Is Becoming the Real Unit of AI Performance

World Wide Technology: Artificial Intelligence Experts Season 1 Episode 65


As AI systems grow larger and more distributed, enterprises are discovering that performance, security and time-to-value depend less on individual components and more on how well the system moves, synchronizes and protects data at scale. In this episode of the AI Proving Ground Podcast, WWT's Justin van Schaik, Cisco's Dave Jansen and NVIDIA's Taylor Allison talk about how a poorly designed network fabric extends time to value, increases risk and quietly erodes the economics of AI initiatives.

More about this week's guests:

Justin van Schaik is a Technical Solutions Architect at World Wide Technology, specializing in High Performance Networking, AI and Open Networking. A seasoned technologist, he helps organizations design and deploy advanced infrastructure to support next-gen workloads at scale.

David Jansen is focused on technology strategy for Cisco's Global Solutions Engineering team across all segments, verticals and technologies via an ongoing series of innovation initiatives. David spends a lot of time on strategic customer and partner engagements globally. With over 30 years of experience in the IT industry, David is an industry expert in Cloud, Software Defined Networking (SDN), virtualization, orchestration, large-scale WAN backbones, and AI Infrastructure.

Taylor Allison is responsible for product marketing related to the NVIDIA Ethernet switch portfolio, including the hardware platforms as well as network operating systems and telemetry tools. Taylor has a passion for product marketing and management in the data center infrastructure space, with expertise in networking, storage, HPC, and AI/ML. Prior to joining NVIDIA in 2021, Taylor was Lenovo's HPC/AI storage leader, responsible for high performance storage platforms, software, and solutions. Taylor earned his MS in Mathematics from the University of North Carolina.

The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions. 

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments. 

The Network Runs AI

SPEAKER_03

From World Wide Technology, this is the AI Proving Ground Podcast. When it comes to AI, GPUs may get the spotlight and models may get the buzz. But the real differentiator, the thing that turns isolated capability into scaled intelligence, is the network. Because in today's AI systems, a single GPU is powerful, but thousands of them working together, that's transformative. But only if they can communicate, synchronize, and move data without friction. That's why the network has quietly moved from background utility to strategic constraint. Power is scarce, GPUs are precious, but without the right connective tissue, all that capability stays fragmented. The work doesn't flow. The system never quite becomes whole. So in today's episode, we're talking with NVIDIA's Taylor Allison, who spends his time translating NVIDIA's AI architecture into something customers can actually operate, which gives him a clear line of sight into where performance depends less on the model and more on everything around it. Cisco's David Jansen, who's been working with organizations to build large-scale AI environments where power, security, and connectivity all collide. His perspective is grounded in what it takes to make thousands of GPUs behave like a single, reliable system. And WWT's Justin van Schaik, who has extensive hands-on experience in network architecture and design, with a focus on high-performance networking for AI fabrics. Together, they bring both sides of the same problem into focus: how AI capability is created and how it's sustained. So let's jump in. Okay, to the three of you gentlemen, thank you so much for joining. Justin, how are you today? I'm doing fantastic. Loving my Arizona weather today. I hear it's snowing elsewhere. It is, it is. Excellent. Mr. David Jansen, how are you? Thanks for being here with us.

SPEAKER_00

Hey, thank you. And thank you for having me. Doing awesome. And Justin, the snow is up here in Michigan and it's super, super cold. So hopefully we're gonna have a team event down in your neck of the woods.

SPEAKER_03

Perfect, perfect. And last but certainly not least, Taylor Allison, how are you? Thanks for being here.

SPEAKER_01

Doing great. Thanks so much. We got a little bit of flurries here in North Carolina, enough to close schools, which really doesn't say much at all, because it's North Carolina.

Power, Heat, and Hard Limits

SPEAKER_03

Absolutely. Well, for those of you out there listening, this is not a weather forecast. We are going to be talking AI tonight. I'm gonna start with you, David. At a recent panel that WWT hosted, we had Cisco President Jeetu Patel, and he said, quote, as it relates to AI, power is the constraint, GPU is the asset, but the network is a force multiplier, end quote. David, elaborate on that a little bit. Why is the network so important right now in the age of AI?

SPEAKER_00

Yeah, hey, thank you. So let me just start by acknowledging: yes, power is an issue, power is the issue. Real estate is a problem, thermals, cooling, weight, the whole data center facilities piece is a real challenge for our customers, right? So as everything becomes more dense, GPUs, CPUs, XPUs in general, networking needs to meet that growth in connectivity. This is where the network becomes the force multiplier. The network acts as a force multiplier, enabling thousands of GPUs to operate as a single unit, a single unified system.

SPEAKER_02

Yeah, absolutely. What I've often included in there is that the GPU is like a brain cell. An individual brain cell without an actual connection to another brain cell doesn't do that much. I mean, sorry, Taylor, it can do great math, but it's not gonna work, you know. But when you start getting several hundred thousand of them wired together, you can get some really impressive things going on. And we've all had bad tequila hangovers; that's a slow network slowing down the brain cells. The GPUs don't work well. So you have to make sure that you've built something that will handle all that speed.

SPEAKER_01

Well, and I'll just add, too, that your problem space and your solution space are also growing. The size of these models... I will say a single GPU can do a lot, but we're taking bigger and bigger bites of the apple. And so, yeah, I completely agree. You need that high-speed connectivity to get all those GPUs talking and synchronized, which is very important.

SPEAKER_00

Can I add one more comment to that, please? Yeah. So Justin, you just triggered a little thought. You mentioned a single brain cell. So if I can make an analogy, the network is sort of like the central nervous system for AI workloads, so you can connect and grow as you grow, meaning you can have different scalable units in modular designs. You can build things out, start with X and move to Y. Why is that important? There are different architectures that continue to evolve. You have the whole scale-up, the whole scale-out, and the whole scale-across as far as what's happening and connecting these factories together.

SPEAKER_02

Yeah, and that circles right back around to the power conversation. The traditional methods for optimizing data centers, optimizing density, power, everything else, fall way short of what we need now in the new AI space. So best practices have to be updated. We have to rethink assumptions that were always fine but are now not fine at all.

Data Movement Is the Bottleneck

SPEAKER_03

Yeah, David, I do want to get back to you here in a second, because I want to make sure we give enough time to the Secure AI Factory and the solution that we have here. But Taylor, just real quick, for those that may not be familiar: so much of this is about data flow and movement. Why is AI-optimized data movement different from what we had prior to AI? Can you articulate that a bit for us?

SPEAKER_01

Absolutely. So in AI you have two major categories of workloads. There's the training of the models: building the model and getting all the weights calculated through iterations of training runs. You also have inference, which is where you deploy the model, queries are directed at it, tokens are generated and responses come back to the end user. I say that because both of those fundamental categories of AI workloads are incredibly network dependent. It's really easy to see on the training side. You have this paradigm where each of the GPUs is doing a little piece of the puzzle, and when they finish, they all have to share what they've calculated with every other GPU. In that scenario, you're waiting for that last GPU to finally communicate and share its results with the rest of the fabric. So you're dependent on the average performance of your network, but also your worst-case performance, because you can only move on to the next training step once you've finished that last piece. But it's also really important for inference. I won't dive into the propeller-head details here, I don't want to spend 20 minutes talking your ear off, but you have these architectures that are becoming more dependent on storage, sometimes in the node, but often remote storage, where the network comes into play whenever you're communicating with it. So in both inference and training, the network is vital; that's really where you see the performance of your AI as a whole. And I'll just briefly add, going back to the previous comment about the force multiplier: you have this bill of materials, and I'll go beyond the data center and include the facilities, the power, the cooling, and of course the servers, storage and networking. You have all these pieces, and networking is a smaller piece of the pie from a spend perspective. But that's where the performance is coming from. That's where you either have a high-performance supercomputer or you don't. It's coming from the network. So that really is the key.
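To make Taylor's "waiting for that last GPU" point concrete, here is a toy Python sketch (our illustration with made-up timings, not NVIDIA code): because a synchronous training step ends with a collective such as all-reduce, step time is set by the maximum, not the average, of all GPUs' communication times.

```python
# Toy sketch (made-up timings): synchronous training steps end with a
# collective such as all-reduce, so each step finishes at the MAX of all
# GPUs' communication times, not the mean.
import random

NUM_GPUS = 1024
STEPS = 100

def comm_time_ms(spike_ms: float) -> float:
    """Per-GPU comm time: ~10 ms nominal, plus a rare congestion spike."""
    base = random.gauss(10.0, 0.5)
    return base + (spike_ms if random.random() < 0.01 else 0.0)

def avg_step_ms(spike_ms: float) -> float:
    total = 0.0
    for _ in range(STEPS):
        # Everyone waits for the last GPU to share its results.
        total += max(comm_time_ms(spike_ms) for _ in range(NUM_GPUS))
    return total / STEPS

print(f"no congestion spikes : {avg_step_ms(0.0):6.1f} ms/step")
print(f"1% chance of +50 ms  : {avg_step_ms(50.0):6.1f} ms/step")
```

With 1,024 GPUs, a spike that hits any single GPU only 1% of the time lands in nearly every step (1 - 0.99^1024 is effectively 1), which is why tail latency, not average latency, sets job completion time.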

SPEAKER_02

And let me throw in there as well: yes, you're right about the generative versus inference distinction. Generative has generally been the highest focus and has the most demanding set of requirements. However, the models are getting larger. Inference may not be able to reside on a single box anymore; it may be spread out. So then that becomes another complication. These are not a fixed set of complications; they're evolving as things get bigger. That's much of what we've done for the last 30 years in networking. We chase the bottleneck, we create a new bottleneck, we follow that one and nail it on the head. It's whack-a-mole, but on a large scale.

SPEAKER_03

Justin, to stay with you here for a second: where are those most common bottlenecks? That's obviously a major issue that's inhibiting progress with AI. Where are we seeing those bottlenecks right now?

SPEAKER_02

Well, I mean, in the networks that I design, there are no bottlenecks whatsoever. So, of course.

SPEAKER_03

But for those not designed by you, let's just say, for example, where are they?

SPEAKER_02

It depends a lot on the exact workload and how they want to implement it. Data gravity is very relevant. Yes, there's generative, there's inference, then there's agentic, then there's RAG, then there are the hybrid things, and then we have AI at the edge. How you've structured it all out will actually determine where that bottleneck will live. Oftentimes, let's just assume that communication within a given box is going to be unlimited, no problem there. But between two boxes: where's their storage? What is the latency for retrieving data from there? How much do they have to get? How chatty is it? Are there tweaks you can do in the storage space to make the storage work faster, which then makes the network the bottleneck? So there is no single point, but let's just say there's always going to be communication between the compute and the storage and the endpoint that's creating something.
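As one concrete illustration of Justin's compute-to-storage point, a back-of-envelope calculation (our assumed numbers: a hypothetical 70B-parameter model stored in FP16, on an otherwise idle link) shows how link speed alone changes how long a node waits to pull weights from remote storage:

```python
# Back-of-envelope sketch (assumed numbers, not measurements): time to pull
# model weights from remote storage at different link speeds.
PARAMS = 70e9                # assumed 70B-parameter model
BYTES_PER_PARAM = 2          # FP16/BF16 weights
weights_bytes = PARAMS * BYTES_PER_PARAM   # ~140 GB

for gbps in (100, 400, 800):
    seconds = weights_bytes * 8 / (gbps * 1e9)
    print(f"{gbps:3d} Gb/s: ~{seconds:4.1f} s to move {weights_bytes / 1e9:.0f} GB")
```

Roughly 11 seconds at 100 Gb/s versus under 3 seconds at 400 Gb/s, and that's before any protocol overhead or contention, which is exactly the "chattiness" Justin is pointing at.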

unknown

Yeah.

SPEAKER_00

If I can comment on that, please. To the point about the models, let's start there. There's a lot of variability across different models. There are vertically integrated, specific models, and they have different requirements for networking and load balancing to optimize for the best job completion time, right? So, Justin, to your point, centralized inferencing versus distributed inferencing is going to have a different set of capabilities and requirements for the network. As for bottlenecks: you have congestion. I know Justin doesn't design congestion, but in the event that it happens, you need a proper architecture that's going to yield optimal paths and performance, resulting in the best job completion time. Justin, you and I have had many conversations regarding all-to-all tail latency and the need for the network to accommodate these different types of requirements: non-blocking, lossless, low latency. Radix matters, SerDes matters, etc. So I'll pause there.

SPEAKER_02

Yeah, and I think some of the innovations coming down the line, like Ultra Ethernet, are going to address some of those well-known challenges and overhaul some things that should have been overhauled a while ago, like RDMA. Selective retransmits, things like that, are very important, but they haven't been done yet. So I think a lot of the problems we're seeing right now are going to be replaced by a whole different set of complications later on.
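For context on why selective retransmit is on Justin's list: traditional RoCE-style recovery resends everything from a lost packet onward (go-back-N), while selective retransmission resends only what was actually lost. A toy comparison with assumed window and packet sizes (our numbers, not an Ultra Ethernet implementation):

```python
# Toy comparison (illustrative numbers): the cost of one lost packet under
# go-back-N recovery versus selective retransmission.
WINDOW_PKTS = 1000   # assumed packets in flight
PKT_BYTES = 4096
LOSS_INDEX = 10      # the 11th in-flight packet is dropped

go_back_n = (WINDOW_PKTS - LOSS_INDEX) * PKT_BYTES  # resend loss onward
selective = 1 * PKT_BYTES                           # resend just the loss

print(f"go-back-N : {go_back_n / 1e6:.2f} MB retransmitted")
print(f"selective : {selective / 1e3:.2f} KB retransmitted")
```

One drop costing a kilobyte versus several megabytes of retransmission is the kind of overhaul he means.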

unknown

Yep.

SPEAKER_02

By the way, Dave, I wish I spoke like you. When you say it, it sounds really, really smart. I want you to keep on talking for me, okay?

SPEAKER_00

I appreciate you, bud.

Latency Kills Jobs

SPEAKER_03

David, is the idea that bottlenecks are always gonna be there? Or let's get into Cisco's Secure AI Factory. Is that smoothing things out? Tell me how the Cisco Secure AI Factory is helping with bottlenecks from the network standpoint.

The Network Stack Is Changing

SPEAKER_00

Well, I think, listen, you need to have proactive and reactive mechanisms, as well as visibility. Those three things. We can get into PFC, ECN, adaptive routing, dynamic load balancing. These are mechanisms that we built. We've taken the high-performing stuff, the high-performance computing, the InfiniBand stuff, and moved those things onto Ethernet. I would argue lossless Ethernet is not a new thing; we've been doing this for a long time, but we're enhancing it, to Justin's point, through the Ultra Ethernet Consortium from a standardization perspective. In addition to that, the question was specifically around the Secure AI Factory, so if I can take a moment to step back: if you look at the foundational building blocks of a factory, you have scalable units, or pods. What those look like depends on what you're building toward, your power, everything else. The foundational building block is a pod, a scalable unit, and I can replicate multiple of those to build an AI factory. Security needs to be infused into the fabric, into the application, into the ecosystem around that. It just can't be bolted on. So we have the Cisco Secure AI Factory, which is specific to Cisco and NVIDIA and the partnership that we have, the innovations we're building together and delivering to market for our customers across different vertical stacks. We have everything from neocloud and enterprise customers to hyperscalers that are part of this Secure AI Factory. And I can get into some of the how, but I don't want to lose sight of this: it's not just the network, compute, optics and the NVIDIA AI Enterprise software stack. It's security as well as observability, because you can't secure what you can't see, to where I can now have observability, understand network events and security events, and tie those pieces together from a network, application and security point of view. Yeah, Taylor, you're shaking your head there.
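To illustrate one of the mechanisms Dave names, dynamic load balancing: AI fabrics carry a few very large flows, and static ECMP hashing can pin several of them onto the same uplink while others sit idle. A toy sketch (made-up numbers, our illustration rather than Cisco's implementation):

```python
# Toy sketch of why static ECMP hashing struggles with a few large
# "elephant" flows, and what dynamic load balancing buys you.
import random
from collections import Counter

UPLINKS = 8
FLOWS = 16        # few, large flows: typical of AI collectives
FLOW_GBPS = 50
random.seed(7)

# Static ECMP: each flow pinned to one uplink by a hash, so collisions happen.
static = Counter(random.randrange(UPLINKS) for _ in range(FLOWS))
print("static ECMP Gb/s per uplink:",
      sorted(static[i] * FLOW_GBPS for i in range(UPLINKS)))

# Dynamic load balancing: each flow steered to the least-loaded uplink.
dynamic = [0] * UPLINKS
for _ in range(FLOWS):
    dynamic[dynamic.index(min(dynamic))] += FLOW_GBPS
print("dynamic LB  Gb/s per uplink:", sorted(dynamic))
```

The static run typically leaves some uplinks carrying triple the load of others; the dynamic run spreads the same flows evenly, which is what keeps tail latency down during collectives.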

SPEAKER_01

Yeah, I just want to jump in and say, Dave, you nailed it, I think, at the close of what you were saying. Whether you're an enterprise with data governance regulations and things like that, or a large CSP, a cloud service provider, with multi-tenant environments where it's inherent that you have this zero-trust kind of requirement, in both cases security can't be an afterthought. And from the NVIDIA perspective, in my perspective at least, we talk a lot about Spectrum-X Ethernet and the performance. That obviously is critical, but it's almost table stakes. You sort of have to go beyond that; it's a prerequisite for AI. To truly deploy this stuff and operationalize it, that's where the security comes in. That's where seamless operations and visibility, like you said, are crucial. So yeah, whether you're big or small, these things are all really important.

SPEAKER_02

Well, I was gonna say, in terms of the security, we have to rethink the entire approach. Yes, we have to evolve it, but it is a whole new approach, because it's not like you just slap a firewall in the middle of your generative AI build and say, there you go, we're covered. The old paradigm is that there's a secure place and there's a non-secure place, and within that secure place, everything can communicate freely. However, what we have here is prompt attacks. We have a generative AI that is going to look for more information to figure things out, and it doesn't know that it can't volunteer that information. So the approach has to be a lot more than just the infrastructure, where the firewall generates a bunch of traps and knowledge, and then there's a telemetry thing that reads it, and heuristic algorithms say, hey, look, there's a change in patterns. No, no, no. It's got to start at the workload itself, work at the model, work at all the different data points, and coordinate there. This is extraordinarily complex in a very different way than we were approaching it historically.

Inside the Secure AI Factory

SPEAKER_00

Well said, Justin and Taylor. If I can just add, please: if we look at what customers are building out, there are multiple networks at play here. You have the traditional back-end, sort of an air-gapped network; that's where training happens. You have the front-end network that we've been building for many, many years; this is how users come in and access things from an inferencing perspective. Then there's management, and then high-performance storage. Security needs to encompass all of that. And then if you look at the application layer, we now introduce this new thing called a language model. Whether it's a large language model or a small language model, we have to red-team it and put guardrails around it. Then there's the whole supply chain of the language models as well, right? I'm not going to create my own; I'm going to go to Hugging Face and get one. So that's where I need to validate my supply chain and see what type of models I'm bringing on-prem to fine-tune with my customer or use case data.

SPEAKER_02

Yeah, very well said. The simplistic way that I've explained it to customers is: you know that little voice in your head that tells you you shouldn't say that? Well, your AI doesn't have that little voice in its head telling it not to say these things. So assume that it's going to be like a little toddler, just blurting out what it heard recently, and it won't do any fact-checking unless you really get strict about it. Approach it like that, and you'll anticipate some of the mistakes before the mistakes get made.

SPEAKER_03

Okay, Justin, I'm gonna stick with you here. We just mentioned Spectrum-X Ethernet. Tell me more about how that's changing the name of the game as we're going into 2026.

SPEAKER_02

Well, first of all, Spectrum-X is awesome. What it's really done is push Ethernet to evolve. And now you have the fact that many customers don't want combinations of operating systems out there. There's NX-OS, there's EOS, there's Junos, there's Cumulus, there's SONiC, there's running FRR on a Linux box. There are a lot of different ways to connect it all up.

SPEAKER_03

Okay, yeah, and David, you're nodding your head over there. Please feel free to add to it.

Security Is Table Stakes

SPEAKER_00

Yeah, thank you. Justin, 100%. So when I look at Spectrum-X, and Taylor, please correct me if I'm incorrect: Spectrum-X is NVIDIA's Ethernet architecture, and Spectrum-4 is the actual Mellanox ASIC that's in the switch itself, right? So from an NVIDIA Spectrum-4 perspective, we offer operating system flexibility. Justin, I would say we're going to try to narrow that window down to deploying NX-OS and/or SONiC on the N9100 platform for large-scale deployments. That's part of the NCP reference architecture. It yields a lot of capabilities, such as adaptive routing and the handshaking, where you can run the NVIDIA stack with a Cisco operational model, NX-OS and/or SONiC, for day-zero, day-one, day-two operations, with common APIs tying back to the visibility piece from an end-to-end perspective. And if you choose not to do the curated stack, you can also, as I call it, bring your own controller. If you're a neocloud doing a SONiC deployment, you can bring your own controller, because they have development staff in-house to do that. Thoughts, comments or feedback on that?

SPEAKER_01

I just want to chime in. I think you've got it perfect, Dave. The one thing I'll say: we call ourselves NVIDIA these days. Mellanox is the legacy, but it is the legacy that we've built on, decades of innovation in hardware and software. And I just think it's so cool that NVIDIA has built this platform with Spectrum-X, and we've worked with another titan in Cisco to partner on this and come out with something that, like you said, is really powerful and really flexible. And I think especially with this partnership, the ability to penetrate new markets and really expand our footprint with Spectrum-X Ethernet is huge. Talking to the product managers, the engineers, the architects, I know there's a lot of good hard work and ingenuity from both companies going into building this platform.

SPEAKER_02

Yeah, and that collaboration is crucial for another reason: customer demand. How do I put it gently? A single monolithic, single-vendor approach tends to concern both CTOs and CFOs. That exposure leads them down a path of trying to diversify, all well and good. However, all the various components of a proper AI stack have to be very tightly integrated, so you can't just grab pieces out of the box and make them work. This collaboration between Cisco and NVIDIA is delivering, one, a tightly integrated stack that'll still work perfectly, and two, something that doesn't have the same logo on every single thing that's shipping.

SPEAKER_03

Yeah, certainly lots of incredible power here behind all of this. You brought up the customer, so let's talk about that. What does delivery look like? How do we operationalize all this on behalf of the clients that we all have together?

SPEAKER_02

So, well, that's why we have the AI Proving Ground. The difference between marketing and sales is proof. And again, when it is a single monolithic stack, it is very often certified completely internally. NVIDIA's done a great job with the SuperPODs; it's all guaranteed performance levels. Once you start changing those variables around, customers want to verify it. They want to put together all the different pieces and see how they perform in a whole new gestalt that was not what was originally designed by either one of the independent vendors. So for the composable ecosystem that we have, I kind of use the phrase "one beer short of a six-pack." The customers have the six-pack; they're missing the plastic thing that holds it together. What the AI Proving Ground does is bind it all together, in a way that lets you actually say these different components work in a functioning, validated, verified format. And you can take those numbers and use them for performance metrics, you can use them for predictive growth, you can use them for remediation. I could rattle off a long list of benefits, but those numbers are the critical things in both a business case and a technical case.

SPEAKER_03

Yeah. Well, Justin, you've already brought up one beer short of a six-pack, and you had a tequila reference earlier. Can't wait to hear what's coming from bourbon or vodka; pick your drink there. But David or Taylor, I'm curious, from the outside perspective: where do you see the value in something like the AI Proving Ground, and what can it do for a customer to help drive all this forward? David, we can go ahead and start with you.

SPEAKER_00

Taylor, do you want me to go first or would you like to go first?

Flexibility Beats Lock-In

SPEAKER_01

I can chime in. I'd just say, you guys know my title is in marketing, but I've spent some time in, I'll call it, the real world, getting my hands dirty with this stuff. And maybe this is a given to the audience listening here, but I just want to say it: folks would not believe how hard it is to truly implement these sorts of things. And I don't mean that as a dig at NVIDIA's technology or anybody's technology. It's just that putting different puzzle pieces together, from different companies, different stacks, different APIs, different everything... when the rubber hits the road, that is always where, I hope this is okay to say, the boys become the men, where the challenges are found and where the solutions are found. So something like the AI Proving Ground, I think, is vital. I talk to our customers, and they certainly know this, like I said. From my perspective it's almost self-evident, but it's such a powerful thing that it absolutely needs to be stated: testing interoperability, testing how these things come together, is crucial.

SPEAKER_00

Yeah, thank you, Taylor, and thank you, Justin. If I can just add a couple more comments, please. This is where we prove to customers that this stuff actually works from a practical perspective: the software stacks, all the drivers, all the complexity delivered as an outcome. I would say it's a differentiator. Cisco and NVIDIA have been investing in the AI Proving Ground with WWT. And if I can give an example of the how: we have HyperFabric AI. This is Cisco and NVIDIA, again, partnering to deliver an outcome-based solution. If I look at some of the value from customer engagements: if I have GPUs that are on a three-year depreciation cycle, and it takes me six months to get the network, the compute, the NVIDIA AI Enterprise software stack, all of those components stood up, I've lost one-sixth of my investment in that three-year amortization. So we're showing customers a true outcome as it relates to time to value, time to first token, whatever term you want to use, to get that investment up and running faster.
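Dave's one-sixth figure is straightforward arithmetic, and it's worth seeing in plain numbers (the cluster cost below is a made-up illustration; the ratio is what matters):

```python
# Plain-numbers version of the time-to-value math: idle months on a
# three-year depreciation schedule are capital you never get back.
CLUSTER_COST = 60_000_000    # assumed GPU cluster spend, $
DEPRECIATION_MONTHS = 36     # three-year cycle

for idle_months in (1, 3, 6):
    lost = CLUSTER_COST * idle_months / DEPRECIATION_MONTHS
    share = idle_months / DEPRECIATION_MONTHS
    print(f"{idle_months} idle month(s): ${lost:,.0f} ({share:.0%} of investment)")
```

Six idle months on a hypothetical $60M cluster is $10M of depreciation with zero tokens to show for it, which is why time to first token is an economic metric, not just a technical one.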

SPEAKER_03

No, absolutely. As we approach the bottom of this episode, let's talk a little bit about the future. How can organizations maintain their grasp on what's coming, as it relates to the conversation we've had here today? Taylor, knowing that the future is unpredictable, whether it's further advancements in agentic AI or something beyond what we're already talking about now, what's some of our advice for organizations to make sure they're staying up to date with all of this?

Multi-Vendor, No Regrets

SPEAKER_01

Yeah, I think trusted partners are vital: institutions, enterprises that have gotten their hands dirty with this stuff. Even as challenges evolve, as workflows evolve, whether it's generative, agentic, robotics, quantum, what have you, regardless of the challenge, you want to work with folks that have seen where we came from, started solving these problems, and have some sense of what the future holds. And I think, again, maybe this is a given, but Cisco and WWT are true veterans in this space. These are companies that have done this, they've gotten their hands dirty, they know these customers, they know their workloads. Tailoring approaches to specific industries and verticals is super important too, of course, but you have to have that background. I think all three of our companies here today have talked the talk and walked the walk. The future will always have surprises, but that's how you get prepared. There's nothing truly new under the sun; something may be a different scenario, but experience is really important to how you address a new challenge.

SPEAKER_02

Yeah, and I'd like to riff off of that for a second, Taylor. What's required, a lot of the time, is agility.

SPEAKER_01

Yeah.

SPEAKER_02

So the example that I'll draw on is cloud. Remember the last big thing? Cloud. Customers would put everything in the cloud, then they would migrate everything back out of the cloud into the data center, and then put it all back into the cloud again, and they realized they had to build a very agile, flexible infrastructure to be able to migrate the workloads wherever they had to be. While we don't necessarily have a full crystal ball for our customers, we can tell them with 100% certainty that any architectural standard will be torn down shortly. So don't design yourself into a corner where that's the only choice you have. Be prepared. I could list off a million things, but the concept is there: build for agility, build for an organization that's not going to be dragging its feet for eight to 12 years past that three-year depreciation cycle. Be prepared in advance. That also maps into things like Cisco Enterprise Agreements that have specific line items for AI refresh, instead of just a standard data center refresh that's on a different cycle. These things have to be anticipated on a procedural, operational, functional, financial, spiritual level.

SPEAKER_03

David, yeah, anything to build on top of that?

From Hype to Proof

SPEAKER_00

You know, this thing is moving incredibly fast. I've never seen a market move so fast. It takes continuous learning, and I don't want to speak for Justin and the others, but I'm assuming they do this as well. I think the best thing we can do is help our customers define use cases, and not worry so much about what's coming. The use cases will drive what's coming. Meaning, Justin, to your point, the refresh cycles are coming; that's not a new thing. But there's this accelerated networking lifecycle, I've never seen it move so fast, 400 gig to 800 gig to 1.6T. It has a direct impact on cabling and infrastructure architectures. The volume of AI traffic is going to drive customers to refresh, to change capacity alignment, etc. And then don't forget about the security aspect of it. I think customers right now, at least in the enterprise space, are asking: where do I start? Where do I start, how do I define these use cases and get rolling, and then how do we guide them toward what's coming, whether it's agentic, robots, the things that were mentioned earlier. So I'll just pause there for comments or feedback.

SPEAKER_02

I would say you start with the pain point. Start with the thing that's actually dysfunctional and see if there's an AI case or method to address that pain. It can be anything. Say we have a call center that needs a thousand people, it has a hundred people, and wait times are too long. Well, that's a perfect role for an agentic AI, with a perfect little chatbot to go back and forth, figure it out, and resolve 90% of the problems before they have to get to a human being. If that's not your pain point, then why invest in a full-on call center AI? You don't need it. Where else do you need to look? So it's individualized, right?

SPEAKER_00

Yeah, exactly. Start with a use case. Don't show up and say, "I have an eight-way server." You know what I mean? That's not gonna solve the business problem you're talking about. That's exactly what I'm referring to.

unknown

Thank you.

SPEAKER_03

Taylor, any closing thoughts before we wrap this episode, based on what Justin and David are mentioning here?

SPEAKER_01

Yeah, I think this is an exciting time. Like you guys said, change is so rapid, both from a networking perspective and holistically, talking about data centers, talking about AI factories. The leaps and bounds we're experiencing after the demise of Moore's law, and the new scaling laws that have come into play. It's just an incredible time to be in this space, to help solve problems and help our customers get what they need. So yeah, it's a pleasure to work with Cisco and WWT, and I look forward to seeing the fruits of this partnership.

unknown

Yeah.

SPEAKER_02

I have one more thought to add, because we could just keep going forever here. We're all engineers and scientists here, and at the end of the day, apply the scientific method. The first thing you do is define the problem. And by the way, the problem is not "I don't have an AI." The problem is something a little more complex than that. That's where you have to take that reasoning approach: define the problem, gather information, form a hypothesis, test the hypothesis, a little plug for the AI Proving Ground again, and then talk about the results. Move it forward that way. Approach it with a proven methodology, not with "I read about AI in an in-flight magazine and want to buy three of them."

SPEAKER_03

We've all had that one happen. I don't know who's reading about AI in in-flight magazines much anymore. But I just dated myself, didn't I? You're good. I mean, there's so much potential out there, and as much as there is potential, there's as much complexity. So, Justin, I think what you said right there hits it right on the head: be agile, be ready for what's coming next. A good way to end the episode. To the three of you, thank you so much for taking time out of your busy schedules to join us here and talk about something that's definitely important and going to be more important as we head into the future. So thank you again.

SPEAKER_00

Thanks for having me. It's always fun to be on a podcast with you, Taylor and Justin. So thank you. Yeah, likewise.

Time to First Token = ROI

SPEAKER_03

See you guys in the trenches. Okay, thanks to Justin, David, and Taylor for stopping by today. The key lesson here is simple and maybe a bit uncomfortable. Intelligence doesn't emerge from components; it emerges from systems that can move data, absorb change, and stay intact under pressure. The question that lingers is whether the foundations you're laying today will bend with what comes next or quietly become the bottleneck. This episode of the AI Proving Ground Podcast was co-produced by Nas Baker, Kerr Kuhn, Diane Devery, and Addison Ingler. Our audio and video engineer is John Knoblock. My name is Brian Phelps. Thanks for listening. We'll see you next time.
