AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology
AI deployment and adoption is complex — this podcast makes it actionable. Join top experts, IT leaders and innovators as we explore AI’s toughest challenges, uncover real-world case studies, and reveal practical insights that drive AI ROI. From strategy to execution, we break down what works (and what doesn’t) in enterprise AI. New episodes every week.
Why the AI Factory Is Becoming the Enterprise's Next Critical Infrastructure
As organizations struggle to move beyond AI pilots, a new architecture — Cisco's Secure AI Factory with NVIDIA — is emerging as a missing link between experimentation and real business outcomes. In this episode of the AI Proving Ground Podcast, WWT's Neil Anderson, Cisco's Kevin Wollenweber and NVIDIA's Chris Marriott discuss how the Secure AI Factory represents a shift from bolt-on protection to security built into the architecture itself.
More about this week's guests:
Neil Anderson has over 30 years of experience in AI, software development, wireless, cyber and networking technologies. At WWT, Neil is VP and CTO on our Global Solutions and Architectures team, with responsibility for over $16B in WWT's solutions portfolio. Neil advises hundreds of Fortune 1000 companies on their global architecture and technology strategy.
Kevin Wollenweber is Cisco's Senior Vice President and General Manager of Data Center and Internet Infrastructure. In this role, he leads product strategy to enhance Cisco's infrastructure solutions for the data center, high-performance routing, and mobile networks. His leadership is pivotal in driving growth and developing cutting-edge solutions to meet the dynamic needs of businesses worldwide.
Chris Marriott is the vice president of enterprise platforms at NVIDIA, where he has spent the last 14 years advancing enterprise solutions. With a background in engineering, including 10 years in ASIC development, Marriott combines technical expertise with strategic insight to address the evolving technology landscape. Outside of work, he enjoys playing ice hockey and exploring the outdoors with his family.
The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.
Learn more about WWT's AI Proving Ground.
The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.
Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.
From Demos to Durable AI
SPEAKER_00: From World Wide Technology, this is the AI Proving Ground Podcast. Right now, enterprise AI is having its second-act moment. The first act was "look what the model can do." The second act is a bit messier: can we actually use this every day inside the walls of a real company without breaking things? Because once AI moves past demos, it needs a new kind of infrastructure. It needs power, speed, governance, guardrails, and it needs to work with the systems you already run, from networks and storage to security and operating models. And this is the gap most organizations are stuck in: how do we scale this without risk, and without turning our data center into the proverbial science fair? So today's conversation is about what happens when you treat AI less like an app and more like a factory, where you feed in data and decisions and reliably produce outcomes. You'll hear from Chris Marriott at NVIDIA on why reference architectures and blueprints are becoming foundational; Kevin Wollenweber at Cisco on how the math changes when you bake security and operations into the design instead of trying to bolt them on later; and Neil Anderson here at WWT on how Fortune 100 leaders are thinking when they decide what to build, what to buy, and what not to trust. But the deeper story is this: the companies who win with AI won't just have better models, they'll have better systems. And by the end of this episode, you'll understand why the most important question in enterprise AI isn't "what can it do," it's "who or what is allowed to do it at scale when nobody's watching." So let's get to it. Okay, well, gentlemen, Chris, Kevin, Neil, welcome back to the AI Proving Ground Podcast. I think it's been a handful of months since we've had you on. How are you all? Doing great. Thanks for having us.
SPEAKER_03: Indeed. Happy to be here.
SPEAKER_00: Chris, I'm going to start with you here. The concept of the AI factory has been out there for a while now, certainly mega-charged by what NVIDIA is doing. So what do you see as the definition of an AI factory? I think we all have the general idea, you're putting in data and you're getting out intelligence, but what is it exactly, and what do leaders within a business need to know about it to move forward?
SPEAKER_03: Yeah, absolutely. AI factory is a common term that we've been throwing around a lot nowadays, obviously, with this build-out of all this infrastructure. We've started distinguishing two factors. We have AI infrastructure, which is the build-out of a lot of this large-scale compute and networking and everything else. But really, an AI factory is where we start to roll in the intelligence from business operations, where we can start to develop insights and make business-critical impacts. At NVIDIA, the foundation of what we do is effectively selling technology and libraries, and it's really up to our partners to integrate those technologies into platforms and into all the applications that make up the AI factory. So, to make these new workloads and this new infrastructure simple, we've architected, from the ground up, reference architectures on the hardware that our partners can consume and build into solutions for customers. And then at the application layer, since a lot of this orchestration, security, and AI ops is really new as we start to deploy into the enterprise, we've tried to make it easy with what's essentially a validated AI factory stack: we have partners come in who are integrating a lot of our libraries, and we validate on early infrastructure to make sure it all works. That gives our channel partners and OEMs in the ecosystem a really great palette to choose from when they're looking at partners to build out an entire AI factory on that infrastructure.
So that's really how we've tried to simplify things, and how we see the foundation that Cisco and WWT are now building upon to bring those solutions to customers.
SPEAKER_00: Absolutely. Kevin, that's the what, from a general perspective. And there are so many different flavors out there. Run us through what we're seeing with Cisco and the Secure AI Factory. What's unique about the flavor you have over there?
Security Built In
SPEAKER_01: Yeah, definitely. If you listen to what Chris was talking about, it's all of the building blocks put together in very simple, easy-to-consume blueprints for customers to go and deploy. What we started to see, as we wanted to roll these out with enterprises, is that the ability to build those building blocks to start delivering tokens as rapidly and efficiently as possible was important, but then tying on things like telemetry, operations, and security, and enabling customers to bring it into their IT ecosystem in a much smoother manner, is what we've been focused on. So it's taking all the great value NVIDIA brings, their understanding of AI and the expertise they've built in that space over the last decade of all these builds, then bringing in some of our enterprise focus around security and simplified operations, and delivering it all in a much easier-to-consume piece of work.
SPEAKER_00: Yeah. And Neil, give us the why here. Leaders aren't necessarily buying the assembly line; they want outcomes. So what is the Secure AI Factory accomplishing for businesses? What types of outcomes? Is it risk? Is it cost? What is it?
SPEAKER_02: Yeah. First, just to comment on Kevin's point: when I think of the AI factory, what immediately comes to mind for me is, let's get out of prototyping and get to scale. We need to start really producing results for businesses, and that takes a lot of tokens. That's why you need a really robust factory, to produce all the tokens these applications are going to need. And the business outcome, to me, is that by pre-assembling this architecture, you've taken a lot of the guesswork and the risk out for our customers. They can go to scale and think more about their applications and a bit less about the underlying architecture, especially with the great job Cisco does validating it. And then where the secure piece comes in: it's one thing for a hyperscaler or a neocloud provider to build out AI factories, but when we get to the enterprise space, the number one factor holding them back is, how do I do this without risk? How do I mitigate the security concerns I've got? I think the Secure AI Factory that Cisco has developed with NVIDIA is tremendously unique in the market, and it gets rid of a lot of those barrier points for many of our enterprise clients. So I think this is going to be really fantastic for our customers.
Outgrowing Pilots, Reducing Risk
SPEAKER_00: Yeah. Kevin, build on that a little bit. The Cisco AI readiness report that came out a few weeks, or maybe a couple of months, ago found that only something like 10 or 15% of organizations are ready to scale AI. Neil's bringing up how the AI factory is something that can help scale. How is the Secure AI Factory helping organizations scale AI and really get out of that pilot phase so many are stuck in?
SPEAKER_01: Well, first of all, go back even a few years and think about what's happened in this space. This market is moving so fast that even assumptions we would have had two or three years ago are probably not true today, or have changed over time. We thought we would see rapid adoption of these AI factories inside of every enterprise in the world, people stamping these out at maturity, at very large scale. What we've seen is that one of the big blockers was their ability to consume all these different technologies. Think about what the building blocks of an AI factory are. A lot of them are the same piece parts they would have had in a traditional data center at a high level: networking, compute, storage, even security and orchestration technologies. But these things have to work together as a single system. So I think Neil said it really well: it's removing that friction and giving them a low-risk way of deploying these technologies and getting to maturity and scale, so they can focus on the applications themselves, the AI workloads that are going to be driving enterprise value, versus how do I connect these parts together and get things up and running. And the addition of security: the reason we were so focused on that is that as enterprises start to bring in AI infrastructure and run their own workloads, their data is critical. Access to that infrastructure and that data is something they absolutely want to protect. So thinking about not only running the workloads, but protecting the workloads, protecting the models, and protecting that AI factory infrastructure is absolutely where we've been focused.
SPEAKER_00: Yeah, Neil, maybe dive a little deeper into the security aspect. Why is it valuable to have it built in from the start, as opposed to the bolt-on that maybe others are doing?
SPEAKER_02: Yeah, I think security architecture in general is changing. The idea of perimeter security is pretty much gone from most CISOs' minds; the world has changed. This idea of needing a highly distributed security architecture, built in from the ground up, is, to me, where security architecture overall is moving. With AI, it's essential: data is flying around super fast between storage and the GPUs from NVIDIA, and you've got to make sure you can insert policy into that. And not only that, you have to be able to protect the models. These models were developed by essentially taking all of the publicly available data, including, unfortunately, some of the bad stuff that's out there, and building it into the model. So products like Cisco AI Defense let you put guardrails around that and say, look, you don't have to just live with some of the bad stuff that's in the models. You can protect against it, but it's got to be integrated from the start. It's not something you can do as an afterthought, to me.
SPEAKER_00: Yeah. Chris, what are you seeing on the security front? Is that generally an afterthought for most organizations, and therefore this is added value? What are you seeing?
AI Without Lock-In
SPEAKER_03: No, absolutely. I think it's the same thing we're seeing here: customers want the foundations of it, because they've been deploying enterprise applications for so many years. In a lot of ways, as you bring AI into these applications, they're expecting security to just be foundationally there. And that's why I think it is critical to have it right from the start. We're even doing it now from the hardware level: we've built a robust roadmap of confidential-computing technologies into the platform, so that users can be confident and reassured that security is attested all the way up from the metal through the infrastructure. But I think having Cisco AI Defense, where it can guardrail, it can monitor prompt injection, it can do all of these things in real time, is critical for deployment into the enterprise.
SPEAKER_01: Yeah, and just to build on that: these are new workloads, new applications, new technologies coming to the network. So yes, some of the existing technologies can be leveraged to help us protect them and bring security into that ecosystem, but we're also going to have to invent things that are new. With the AI Defense piece Chris talked about, we're protecting in both directions. We're not just protecting people's access to things; we have to protect the model from divulging information that may or may not be applicable in certain scenarios, or to certain agents, or to certain people trying to access it. We need to be able to protect against nation-state attacks and people trying to do malicious things with that infrastructure. The threat surface has basically grown significantly. So you will see new tools, new technologies, and new approaches to how we protect things in this AI era.
SPEAKER_02: Yeah, and this is something we're investing in at Worldwide: a framework called Armor. Our stance is essentially that we don't sell an AI factory unless it's secure. That's why it's super important. We've been working really closely with NVIDIA and Cisco, and I'm sure we'll talk about this on our security episode, on how we meet that. "No AI without Armor" is sort of going to be our tagline. It's just super important for clients. Again, as we turn to enterprises, our stance is going to be: look, you don't want to just buy an AI factory, you have to buy a secure AI factory.
SPEAKER_00: Yeah. Neil, let's talk a little bit about the fact that AI is already happening in pockets all over the organization. It could be copilots, it could be coding assistants, whatever it might be. If someone were to buy an AI factory with Armor, do all of those workloads then just get funneled in there? How does it integrate with what organizations are already doing right now?
Networks Built for AI
SPEAKER_02: Yeah, a lot of our clients have gotten started in different places. Some of them have started in the cloud because it's just super easy to get started, and they may not yet have the infrastructure they need. But what most of our clients are telling us is that the risk of exposing intellectual property, or just cost, can be a factor, and they want to build this in their private data center. And so for those applications, if they were built on an NVIDIA stack to start with, and this is what I preach to our clients: you can do lots of things, but there are certain ways of building an AI application where you get locked in and it's not portable. What I love about the NVIDIA stack is that if you've built on it, whether on DGX Cloud or in a neocloud that's running it, you've now got a portable workload that you can migrate very easily to your private enterprise data center. Some have not done that; they're a little bit locked in, and they may have to re-architect the solution on top of an NVIDIA software stack. But once you've done it, it's pretty portable, which is what I love about the NVIDIA software part of the stack here.
SPEAKER_00: Yeah. Chris, maybe walk us through the blueprints a bit and why those are so valuable for getting organizations 70, 80, 90% of the way there. And from there, is it a snowflake? How do we work with organizations to make sure they get what they need?
SPEAKER_03: Yeah. The whole goal, and what Neil was alluding to, is that we built the stack so it can scale from on-prem to the cloud, essentially end to end, even out to the edge. Everything's supported through NVIDIA AI Enterprise. Then, layering on top of that, and we will continue doing this probably till the end of time, across so many different verticals: we look at business-critical areas where we can focus a whole bunch of technologies and build them into a workflow to make a business-critical impact in that vertical. That's what we've been calling NVIDIA Blueprints. We have deep-research blueprints, we have video search and summarization blueprints, and the idea is exactly that: give people a head start that gets them 75, 80, 90 percent of the way. They can take that open blueprint, develop on it, change it, alter it for their use case, and deploy it in the cloud, on-prem, anywhere NVIDIA GPUs and infrastructure are based. And then, where do the partners come in? When an enterprise customer needs to adapt a particular solution, that's where a lot of the GSIs and partners in the channel fill the expertise gap between where they are and where they want to go. I think that's super critical.
When Agents Change Access
SPEAKER_02: Yeah, a good example of that is the digital human prototype we developed. I look at the blueprints, Chris, and the NIMs underlying them as: hey, this makes it lower-code. I don't have to get down to the absolute bottom and build it from scratch myself. In the case of a digital human, I can simply say: I need a multilingual translator, I need speech-to-text and text-to-speech, and I need to be able to actually visualize a digital avatar of a human. We can take those pieces and pretty quickly get to an actual prototype we can show a customer: hey, this is what a digital concierge might look like for you, or this is a digital concessionaire demo that we have. That's the way I look at those: it just makes it much lower-code for us to get started and actually get to an outcome, versus having to start from the bottom of the stack and code everything ourselves. Yeah. Kevin, did you have a comment?
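The lower-code composition Neil describes, snapping prebuilt services together into a digital-human prototype instead of coding from scratch, can be sketched in a few lines. This is purely an illustrative toy: the stage functions below are hypothetical stand-ins, not actual NIM or blueprint APIs, and "audio" is just a labeled string.

```python
from typing import Callable

# Each pipeline stage is modeled as a function from payload to payload,
# a stand-in for calling a prebuilt microservice (ASR, translation, TTS).
Stage = Callable[[str], str]

def speech_to_text(audio: str) -> str:
    # Hypothetical ASR stage; strips the fake "audio:" wrapper.
    return audio.removeprefix("audio:")

def translate(text: str) -> str:
    # Hypothetical multilingual-translation stage; tags output as English.
    return f"[en] {text}"

def text_to_speech(text: str) -> str:
    # Hypothetical TTS stage; wraps text back up as fake audio.
    return f"audio:{text}"

def build_pipeline(*stages: Stage) -> Stage:
    """Compose stages left to right into one callable, so swapping a
    component means swapping one argument, not rewriting the app."""
    def run(payload: str) -> str:
        for stage in stages:
            payload = stage(payload)
        return payload
    return run

# A digital-concierge sketch: hear the user, translate, speak the reply.
concierge = build_pipeline(speech_to_text, translate, text_to_speech)
print(concierge("audio:hola"))  # -> "audio:[en] hola"
```

The point of the sketch is the design choice, not the stages themselves: because every component shares one interface, a prototype can be reassembled for a concierge, a concessionaire, or any other use case by rearranging the same building blocks.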
SPEAKER_01: Yeah, on top of that, the portability comment is important, because every AI workload and application is not the same. What a lot of people ask is: am I gonna build on-prem infra? Am I gonna run in cloud or neocloud? Am I gonna move things back and forth? I think the answer for most is yes. There may be things, if you think about the infrastructure you might need to fine-tune and do something periodically, that might not be something you want to invest in. So you may use other resources to drive those workloads, and then do large-scale inferencing with localized data on-prem. I think a lot of people are going to be doing combinations of all these things. So the ability to build once and then deploy wherever you need, at whatever scale is needed, is extremely powerful.
SPEAKER_03: Yeah. And just adding on to that, the one thing we're certain of is that things are going to change, probably every month, if not drastically every year. So that flexibility is important. It blew me away: the ChatGPT moment was only three years ago, and it feels like it was 10 years ago. So you can imagine wanting to port these applications to different locations and different infrastructure. That's really the goal behind our strategy there.
SPEAKER_00: Yeah, it's funny, Chris, that you mention that ChatGPT moment being only a few years removed. It is quite wild to think about it that way. Kevin, I'm wondering, from Cisco's point of view, how are you designing the Secure AI Factory to ensure it can adapt to the times and age gracefully, and isn't something we're just gonna rebuild in a couple years?
The New Threat Surface
SPEAKER_01: Yeah. So we have this concept of the AI factory that NVIDIA has basically pioneered and driven, and we've applied security, like we talked about, to build the Secure AI Factory. But one of the things we've been focused on in this partnership with NVIDIA is bringing in things that we think are going to be beneficial for some of these changes. As we look at this move toward the enterprise: we have tons of enterprise customers, and they've been building traditional data centers and fabrics forever. They have certain tools and technologies they've really become used to, and we're trying to figure out how to integrate these AI factories into that ecosystem as easily as possible. One of the things we launched at the last GTC was the ability to take NVIDIA's Spectrum-X Ethernet architecture, to take their silicon and put it into a Nexus form factor, giving customers some of that known quantity, some of the consistency with what they're already deploying in networking, but with all the value NVIDIA brings in their end-to-end Ethernet architecture for deploying AI services. You'll see things like that continue to come into this ecosystem, where we're not just building the existing factory but adapting and evolving it for the changes we see.
SPEAKER_00: Yeah. Neil, a moment ago Kevin mentioned that not all workloads are created the same. As we move into inference, as we move into the world of agentic AI, how is that going to change, if at all, how customers or organizations should operate their AI factories?
SPEAKER_02: Yeah, and again, security becomes super important there. The way I explain it to clients is that today we're living in a bit of a client-server world when it comes to AI. You fire up a browser, you talk with a chatbot, and that goes back to your nice, tidy cluster in your data center. But as we move into agentic and physical AI, that's gonna change. You're gonna have hundreds or thousands of agents talking to other agents and talking to humans. And how are you going to enforce policy? Which agent is entitled to talk to which other agent, and what data is that agent entitled to access or not? So again, security and policy become paramount for the entire AI architecture as agentic AI becomes more and more real. Not to mention, physical AI is gonna turn it on its head. You're gonna need compute out at the edge, GPUs out at the edge, where the data lives. So I think the way Cisco is approaching this, in partnership with NVIDIA, is gonna scale to those use cases. It's moving where AI is now moving: into agentic and physical AI. We're really excited about that. And Kevin, you mentioned bringing in the Spectrum-X technology. I think that's absolutely paramount for enterprise clients, because when I explain it to clients, there's a light-bulb moment. It's like, wait a minute, I get the innovation of Spectrum-X coming into my Cisco products, but I can still manage it like I've been managing my data center network? Is that what you're telling me? I think that's gonna be a huge advantage for clients.
They can get the innovation, but they don't necessarily have to spin up a brand-new technology support team. It's familiar to them, and they know they can rely on that architecture.
UNKNOWN: Yeah.
Why Ecosystems Win
SPEAKER_01: Well, I want to hit on one of the things you were talking about on the agentic side, because this is fascinating to me. Having been in the networking world for 30 years, we always run into these inflection points that push us outside the bounds of scale we're used to, and then we have to invent new technologies to solve those problems. The way you describe the agentic problem is exactly the thing I like to talk about a lot: we're going from this world of, you have a profile, we understand you, we understand your access rights, and we can build policies around that. But now you're potentially giving those access rights to agents, and maybe not all of those access rights. Certain agents have certain access; certain agents have more or less. So the ability to identify, build policy, then apply segmentation and policing, and manage that in an ecosystem: we're getting beyond the bounds of current technology. In the old world, people built those rules themselves, saying Neil can't access this and can't access that. That doesn't work in a world of agents spinning up and new policies having to be created on the fly. So we're having to start to leverage AI and intelligence to build the policies and manage the complexity of those engagement models. It's exciting, but it means we're gonna have to invent new technologies. We'll definitely build off existing technologies and concepts, but the level of scale we're talking about in this agentic workflow world is very different from what we're used to in traditional enterprise applications.
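Kevin's point, that agents should inherit only a subset of a user's access rights, is essentially delegation with scoped-down permissions. A minimal sketch, assuming nothing about any real Cisco or NVIDIA policy engine (the names and scope strings here are invented for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Principal:
    """A human or an agent, identified by name and a set of scopes."""
    name: str
    scopes: frozenset

def delegate(user: Principal, agent_name: str, requested: set) -> Principal:
    """Spin up an agent whose scopes are the intersection of what it
    requests and what the delegating user actually holds, so an agent
    can never end up with more access than the human behind it."""
    granted = user.scopes & frozenset(requested)
    return Principal(name=agent_name, scopes=granted)

def authorize(principal: Principal, resource: str) -> bool:
    """Policy check: a principal may touch only resources in its scopes."""
    return resource in principal.scopes

# Hypothetical example: Neil holds CRM and finance rights.
neil = Principal("neil", frozenset({"crm:read", "finance:read", "finance:write"}))

# The agent asks for broad access but is scoped down on creation.
report_agent = delegate(neil, "report-agent", {"finance:read", "hr:read"})

assert authorize(report_agent, "finance:read")        # granted
assert not authorize(report_agent, "hr:read")         # Neil doesn't hold it
assert not authorize(report_agent, "finance:write")   # agent never asked
```

The toy captures the scale problem Kevin describes: with thousands of agents, these `delegate` decisions happen continuously and automatically, which is why he argues the policies themselves will have to be generated and checked by AI rather than written by hand.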
SPEAKER_03: Yeah. And building on those concepts: when we start to think about so many agents in an enterprise interfacing with so many systems, it's also unlike traditional computing, where it was maybe just a virtualized container on a CPU node. Now we're talking GPU infrastructure, storage, networking, and compute, where those policies essentially have to scale. And I think the next challenge we've been working on is inference: to really scale up performance and bring down the cost per token, we're now looking at distributed inferencing, where you're doing prefill maybe on one processor and decode on another. Now you've got to take that KV cache across all these users, across all these agents, and make sure you're applying security policies as you spin these agents up and down. As you're saying, the world is gonna be exciting as we all invent new technologies to protect those workflows.
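The disaggregated-inference idea Chris mentions, prefill on one worker and decode on another with the KV cache handed off in between, can be illustrated with a toy model. None of this is real serving code; in production the cache is GPU tensor state moved over the network, while here it is just a list of entries, and the "generated" tokens are placeholders.

```python
# Toy disaggregated inference: a prefill worker processes the whole
# prompt once and emits a KV cache; a decode worker then generates
# tokens one at a time, reusing and extending that same cache.

def prefill(prompt_tokens):
    """Process the full prompt in one pass, returning the KV cache
    (one entry per prompt token). Could run on a prefill worker."""
    return [("kv", tok) for tok in prompt_tokens]

def decode_step(kv_cache):
    """Generate the next token using the cache, then append the new
    token's KV entry so later steps can attend to it."""
    next_token = f"gen{len(kv_cache)}"   # stand-in for real model output
    kv_cache.append(("kv", next_token))
    return next_token

def generate(prompt_tokens, n_new):
    kv_cache = prefill(prompt_tokens)    # hand-off point between workers
    out = []
    for _ in range(n_new):               # could run on a decode worker
        out.append(decode_step(kv_cache))
    return out, kv_cache

tokens, cache = generate(["the", "ai", "factory"], 2)
print(tokens)       # two generated placeholder tokens
print(len(cache))   # 3 prompt entries + 2 generated entries
```

The hand-off is also where Chris's security point lands: the cache that crosses the prefill/decode boundary encodes a specific user's or agent's context, so policy has to follow it, not just the request that produced it.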
SPEAKER_01: Well, then take it one step further: visualization, telemetry, and understanding what's happening. Agents aren't necessarily all gonna go rogue or go bad, though some may do that as well, but we have to be able to identify: are they performing as expected? Are they doing what they need to? Do they have the right access, or too much or too little? The ability to actually visualize what's happening and identify anomalies is also not something the human mind, or a traditional operator, is gonna be able to do manually. So we're gonna have to build new tools and technologies around this to help us with that.
Proving What Actually Works
SPEAKER_02Yeah, and to your point, Kevin, the security architecture has to change in my mind, right? You're now talking about needing a mesh of firewall technology to distribute policy to where the data is and where those agents are operating, but also much more sophisticated threat detection, leveraging AI itself: should that agent be doing that? Is that an anomaly? That agent hasn't done that before, and hasn't asked for that kind of information. We're definitely entering a new world of innovation here. But again, I like the approach Cisco and NVIDIA are taking together of thinking about the security from the start. We don't want to get way down the path of building out a huge AI factory and then all of a sudden it's, oh, how are we going to secure it? To me, that's a non-starter. We've got to think about these things from the start.
SPEAKER_00Yeah, Neil, one of the things you mentioned earlier spoke to the value of interoperability and partners working together. Can you describe for me what the value is of having that ecosystem of partners working together to drive toward a customer outcome as it relates to the AI factory?
SPEAKER_02Yeah, the way I think about this, and I think Kevin alluded to it, is that with the traditional data center, we lived in a world where you could make storage decisions independently from network decisions and independently from compute decisions. With AI, it's different. Some of the same technologies, yes, but there's never before been this need to tightly couple them as a system. You need to be making those decisions together: is my storage going to be able to feed the GPUs fast enough to get the utilization I want? Is the network going to have the bandwidth for that? Can I apply policy at that speed? It has to be one architectural decision, versus deciding all these things separately. That's the way I think about it, and only an ecosystem is going to be able to solve it. I don't think there's one vendor out there that can really solve that all the way from the network layer to the compute layer, the storage layer, and the software stack you're going to need to run. It's going to take a village to solve that architecture, so the ecosystem becomes paramount: NVIDIA working with Cisco, working with storage vendors, working with other use-case software stacks that might come into the picture. That's just going to be critical.
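Neil's point about sizing storage, network, and compute as one decision can be made concrete with a back-of-envelope feasibility check. The numbers below are illustrative placeholders, not vendor specs or a sizing guide:

```python
# Rough feasibility check: can the storage and network fabric keep a
# GPU cluster fed? All figures are assumed for illustration only.

num_gpus = 32
read_per_gpu_gbps = 2.0         # assumed sustained read each GPU needs (GB/s)
storage_throughput_gbps = 80.0  # assumed aggregate storage throughput (GB/s)
network_bw_gbps = 100.0         # assumed fabric bandwidth available (GB/s)

required = num_gpus * read_per_gpu_gbps
deliverable = min(storage_throughput_gbps, network_bw_gbps)

print(f"required {required} GB/s, deliverable {deliverable} GB/s")
print("OK" if required <= deliverable else "undersized")
```

The takeaway is the `min()`: whichever of storage or network is smaller sets the ceiling, which is why the two decisions can't be made independently once GPUs are in the picture.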
SPEAKER_00Yeah. Chris, I'm wondering: that's certainly a shift, as Neil's mentioning, but is it more of a pendulum that's going to swing back at some point, or is it going to become increasingly important that we have this interoperability and a much larger ecosystem, and therefore that integration piece is going to become more important?
Simplify, Then Secure
SPEAKER_03Yeah, I think it continues to become more important. As Neil's saying, we now have partners all the way from security and orchestration to data ops, the application layers, and everything else. Maybe at some point in the future we see some consolidation of that stack, but right now it's a startup mentality in each one of these layers. You have all these new companies coming in, and we're going to have tons of new innovation in the space. What we try to do, as we get word of new partners and they start to integrate new solutions, is have them come in and validate on the infrastructure to make sure, first of all, that they all work, so that once they come onto E300, for example, we're not running into performance or feature issues. Then foundationally, to broaden that ecosystem for government and heavily regulated industries, we've now rolled STIG hardening into our containers, with FIPS compliance where required as well. So we're really broadening that AI factory, not only for standard enterprises but now for government-regulated enterprises and everything else. But yeah, the ecosystem, our big village together, is going to be critical.
SPEAKER_01Yeah, and the innovation that comes out of it. It's not just that we're figuring out how to deploy the building blocks we have; it's my network engineers working with Chris and the NVIDIA engineering teams, actually inventing and defining future solutions, and then working with WWT. One of the really cool things about WWT is this AI Proving Ground, where we can not just talk about these technologies but go put them into the network, implement them, show people the technologies, and then determine what it's going to look like at maturity and scale, and how we scale it out physically inside our customers without them having to experiment on their own and potentially move in wrong directions. It takes all those different pieces. And I agree with the statement before: there isn't one company that has all these pieces fully figured out, so the ecosystem and the partnership aspect of what we're driving is critical.
unknownYeah.
SPEAKER_00Neil, Kevin brings up the AI Proving Ground, the namesake of this podcast, which is what we have here at WWT to help implement and deliver solutions such as the AI factory. What does delivery actually mean? Maybe give us an insight into the AI Proving Ground: where are we seeing bottlenecks, and where are we seeing successes as it relates to what we're talking about today?
Edge and Physical AI
SPEAKER_02Yeah, we think the Proving Ground is a pretty unique asset. We foresaw that there was going to be this need to evaluate how these technologies are going to work together as an ecosystem: are they plug and play, or do I really need to tailor an approach based on the AI workload or application I'm trying to run? And it's been phenomenally successful. We have organizations in there every day; it's hard to keep track of all the different clients. I actually created a logo slide a couple of weeks ago of everybody we've been working with. My biggest surprise about the AI Proving Ground: I thought the biggest organizations out there have probably got this problem solved on their own, and that it would be our smaller clients in the Proving Ground. I was completely wrong. It's the biggest organizations on the planet that are in the Proving Ground, trying to evaluate everything from should I pick this model or that model to build on, to can I use my existing storage partner with this cluster I'm trying to build, or do I have to buy something new and spin up a new technology? Can I deploy this on Ethernet technology, which I'm used to and might have people who can support, or do I have to use something like InfiniBand? These are the questions we're answering for clients every day in the Proving Ground. And then up the stack, too: hey, if I wanted to do this, can we build on something in the NVIDIA software libraries to make it come to life, almost like an art of the possible? That's also work that goes on in the Proving Ground every day.
It's fun, because I've never before seen a place where you've got network engineers, compute engineers, storage engineers, data scientists, and NVIDIA software experts all working together on a common problem and focused on the outcome. We know what we want to build, but let's validate all these different components of the ecosystem and make sure we can actually build that thing.
SPEAKER_00Yeah. Chris, the surprise Neil brings up, the fact that it's not just the small organizations that need to figure all this out, but the biggest of the big that need to get in there and get their hands dirty: what does that say about where the market's at, or where it's going to go?
SPEAKER_03Yeah, first of all, we see the critical nature of having this kind of center of excellence that WWT has built across all of these different platforms, to be able to test interoperability, performance, and model selection. I think it highlights the skill set and expertise you've built in the company to go deliver that to customers. And we see the same thing. On a much smaller scale, we have a kind of launchpad where we put some of our new technologies for customers to come in and test, and it's the Fortune 50s and Fortune 500s that want to come in and get early access to the equipment, because they don't know what to buy initially. So having a place where they can come in and evaluate that quickly comes back to the foundation of this whole AI factory motion: making it as easy as possible, with all of these new workloads and new infrastructure, to get from POC into a deployment stage. But I think it really does tell you that, small to big, all of these companies are looking to deploy AI, and they need partners to help them make those kinds of decisions.
SPEAKER_00Kevin, we're coming up on the bottom of the episode here, so just a couple more questions, and you all have been fantastic and gracious with your time, so thank you. But Kevin, maybe a bit of a roadmap question: what are we going to see coming down the line for Cisco Secure AI Factory? You've touched on it a little already, whether it's agentic, or you can get into edge. Pick your path. What are we going to see as it relates to the factory, and as it relates to operating these systems?
SPEAKER_01Yeah, I think the first thing for people to remember is that we're still in the early days of inference and running these workloads. We alluded to things like physical AI, and I like to think of that as when AI moves outside of the data center and you're going to have GPUs and data floating around the real world. So don't get discouraged by the fact that maybe you haven't deployed infrastructure on-prem, or haven't figured out the ROI of a certain application and driven it.
SPEAKER_00Yeah.
Move Now or Miss Out
SPEAKER_01But at the same time, I think you'll see solutions and reference architectures that we partner on with NVIDIA and deliver with WWT that enable customers to deploy in a much easier way and focus on the applications sitting on top. We've spent a lot of time talking about how we bolt these things together and how we build these AI factories. The reason we call them AI factories is that we want you to treat them as an architecture you can go and deploy, with the benefit of delivering tokens and running applications faster. So I think what you'll see is simplification of the technologies, more and more partners coming into the ecosystem, and innovations like bringing in Spectrum-X and bringing in consistent tools and technologies that our existing enterprise customers use for operations today, just to make it easier for them to consume.
SPEAKER_00Yeah, Chris, a lot of us here look to NVIDIA for what's to come. So even beyond what Kevin has mentioned, what do you see as the future of the AI factory? And Neil, know that I'm coming for you next: how are we going to be able to get to what Chris is describing?
SPEAKER_03Yeah, I think it builds on a lot of the foundations we've spoken about. Right now we're talking about landing business-impact applications on these factories across all these enterprises. But if you think into the future, as we build out all this physical AI, anywhere we're building equipment there will be robots, and those robots will need tokens to process all of the intelligence they're seeing and sensing. So the future definitely holds that, and we've talked to many, many customers who are already planning it. You're building a manufacturing site, and you're building a holding space for an AI factory in that manufacturing facility. We'll probably see that with ports, with traffic, with everything. You'll see AI factories start to pop up anywhere that intelligence needs to be processed. It's not just going to be in large, mainstream data centers; it's going to be distributed, it's going to be at the edge. That's definitely the theme we see coming.
SPEAKER_00Yeah, Neil, I may have set you up to fail there by talking a little too much about how we unlock the future of physical AI, robotics, and so forth. If you have an answer, great, but maybe back it up a little: what do organizations need to be doing this year so that they're able to accomplish what they need to by the end of the year?
The One Takeaway
SPEAKER_02Yeah, I'll start with this year. I think it's really key that you get started. I see some clients sitting on the sidelines a little, saying there are new generations coming out, I'm going to wait until this settles down. And my answer is always: it's not going to settle down. At some point you've got to get in the game and play the game. The reason that's important is that you're probably not going to get it right on the first try; it's going to take some learning. We had to go through that ourselves when we applied it to our own business. Our first RAG chatbot was not very successful: it had lots of hallucinations and things that were wrong with it, and when people wanted to add data sources, it wasn't easy to do. Now we've transitioned to an agentic architecture, and we've learned those lessons ourselves, which hopefully we can help our clients avoid. But it's going to take learning, and you're probably not going to get it on the first try. So you've got to get in the game and start learning to play the sport. You may not be an expert at that sport today; it's only going to come through practice and trying. And as I look out into the future, we look back and say, yes, it's been three years since ChatGPT was released. I think Kevin said it seems like ten years ago, but it was just three. We're in early days. It's hard to even imagine what this is going to look like in five years. It reminds me of when the internet was first created: no one really understood what this thing was going to be used for. We're going to invest billions of dollars in this thing, but why?
But it's pretty obvious now why, looking back on it. It actually evolved over several years, almost a decade, to get to where it's clear this is the transformation we're talking about. I look at mobile and smartphones the same way. When the first smartphone was released, no one quite understood what it was for. Are people really going to walk around with a computer in their pocket? Now it's pretty obvious: it's transformed the way we live our lives and the way we conduct business. It's just transformation. I think this is another one of those inflection points. We're doing okay today, and we're delivering outcomes to our clients every day, but as this moves on, I don't know that we can even imagine what it's going to look like, similar to those other inflection points. All I know is it's going to be very transformational, and you can't afford to sit on the sidelines on this one.
SPEAKER_00Yeah.
SPEAKER_01Right now we still think of AI as a thing: it's an application, it's running over there, it's in its own little protected space. The vision I have is that eventually it just becomes part of everything. It's not an application; it's a part of every application. We're accelerating applications, making things more efficient, bringing more intelligence into everything we do. And when you hit that, then to your point, a lot of the use cases that are going to drive massive returns aren't understood yet today. But as soon as all those technologies are there and mature, you're going to see things we didn't even think were possible two, three, five years ago.
SPEAKER_00It's an exciting future, no doubt. But point taken: get in on the action now; otherwise, you'll be at a deficit when you do decide to jump in. Well, Chris, Kevin, and Neil, thank you so much for taking the time today to talk about AI, the future of AI, the Secure AI Factory, and so forth. It was a phenomenal conversation. We'll have you all back on again soon. Thanks to Kevin, Neil, and Chris for joining. Before we go, here's a key takeaway. Enterprise AI isn't being held back by imagination; it's being held back by integration: the hard work of making new workloads behave like trusted, operable, secure parts of the business. That's what Cisco's Secure AI Factory with NVIDIA is really about. Not hype, not magic, but a practical path out of pilots and into production, where security isn't a patch, operations aren't an afterthought, and the technology can actually live inside the enterprise. This episode of the AI Proving Ground Podcast was co-produced by Nas Baker, Kara Kuhn, Diane Devery, and Addison Ingler. Our audio and video engineer is John Knoblock. My name is Brian Phelps. We'll see you next time.