AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology
AI deployment and adoption are complex — this podcast makes them actionable. Join top experts, IT leaders and innovators as we explore AI’s toughest challenges, uncover real-world case studies, and reveal practical insights that drive AI ROI. From strategy to execution, we break down what works (and what doesn’t) in enterprise AI. New episodes every week.
Is Security Now a Prerequisite for AI Adoption? Inside Cisco's Secure AI Factory with NVIDIA
As companies move from chatbots to agents, the hardest work isn't prompting — it's building an always-on, governable, cost-aware system that leaders can trust. In this episode of the AI Proving Ground Podcast, Cisco's President and Chief Product Officer Jeetu Patel, NVIDIA Vice President Craig Weinstein and WWT CTO Mike Taylor discuss how Cisco's Secure AI Factory with NVIDIA is designed to close the gap between experimentation and execution by treating AI as infrastructure.
More about this week's guests:
Mike Taylor is the Chief Technology Officer and Executive Vice President of Services. He oversees WWT's Global Engineering and IT organization and Services segment to position WWT as a single-source provider that accelerates digital transformation. Mike aligns WWT's unparalleled technical capabilities with its collective business acumen to both advise customers and execute alongside them as they seek to become more agile and innovative.
Jeetu Patel is Cisco's President and Chief Product Officer, leading global product vision and strategy. Under his leadership, Cisco has driven innovation across its broad portfolio of products, establishing the company as the critical infrastructure for the AI era. Patel joined Cisco in 2020 to lead the collaboration and security business. He quickly became recognized for his commitment to product design and user experience, leading the team through a period of rapid innovation to transform Webex and support customers through the global pandemic. In 2024 he was promoted to the role of Chief Product Officer, where he now leads several multibillion-dollar categories, including networking, computing, security, Splunk, and more.
Craig Weinstein is the Vice President of the Americas Partner Organization at NVIDIA. He has over 26 years of experience in sales, sales management and channel leadership. Previously, Craig was part of the Americas Partner Organization leadership team at Cisco Systems, where he built strong relationships with key decision makers, stakeholders, channel partners, customers and colleagues. Craig holds a communications degree from San Diego State University.
The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.
Learn more about WWT's AI Proving Ground.
The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.
Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.
Why Enterprise AI Stalls
SPEAKER_02From World Wide Technology, this is the AI Proving Ground Podcast. Right now, enterprise AI has a weird problem. Almost everyone can point to a pilot, but few feel like they've truly scaled anything, and ROI remains elusive for many. The demos are impressive, but inside the enterprise, the hard part is still the same: turning something that works into something that you can trust, govern, afford, and perhaps most importantly, leverage to deliver value. Meanwhile, the conversation continues to shift. As agentic capabilities mature, we've moved from AI that answers to AI that acts, and that's changing the stakes, while putting a strain on infrastructure, security, data readiness, and the real economics of inference at scale. So today, we'll try to set a baseline for where the industry actually is and what it will take to move from experimentation to enterprise capability. We've got three great guests who, if you aren't familiar with them already, you'll want to give a close listen. You'll hear from Jeetu Patel, President and Chief Product Officer at Cisco, on why this moment is a fork in the road for enterprises and what he sees as the core impediments holding AI back. Craig Weinstein, a vice president at NVIDIA, on the operational reality of production AI, what it means when your utility meter is running all day on GPUs, and why talent and architecture matter just as much as algorithms. And finally, Mike Taylor, CTO here at World Wide Technology, on what he's seeing across customers as anxiety turns into action, and why the fundamentals of data, governance, and integration are becoming the difference between progress and stalling out. And as we go, we'll start to introduce an idea that's beginning to organize these conversations at the executive level, the AI factory: a way to think about AI like a production system where performance, security, observability, and cost efficiency have to work together. So let's jump in.
SPEAKER_00This is an exciting time, but I also feel like it's going to be one where enterprises will be in one of two categories. They'll either be very dexterous with the use of AI and they'll get to be very good at using AI, or they will really struggle for relevance.
SPEAKER_02That was Jeetu Patel. What he's describing is a capability gap that will show up in speed, cost, and decision quality. But being dexterous with AI doesn't happen because you bought a model or ran a pilot. It happens when the enterprise gets clear on ownership, data, and operating model. Who does what, and which data under which controls? And that, according to Craig Weinstein, is where most organizations are right now, trying to get ready before they try to scale.
SPEAKER_04I think a lot of enterprise customers are getting their house in order. I think there's a lot of organizational thought on what they need to do to get ready. How do they bring clarity to roles and responsibilities? Do they have a true data strategy that can lead to an AI outcome? Are they getting the right help, you know, from advisors or third parties to think through the most impactful ways to start with AI? And then making sure that you have a couple of things in place. One is: do you have the talent?
SPEAKER_00The first phase was chatbots intelligently answering questions for us, where you know it felt like magic three years ago. Now everyone's used to it like it's a normal place. The second phase is agents being able to conduct tasks and jobs almost fully autonomously on our behalf.
When Urgency Hits
SPEAKER_02That shift from AI answering questions to AI taking actions raises the stakes. The moment software starts doing work, you need more than clever prompts. You need guardrails, accountability, and systems that can operate reliably under real-world conditions. It's a technical challenge, but it's also an organizational one: talent, governance, and the underlying data foundations. And while the hype cycle talks about the future, the more interesting story is what's changing right now inside enterprises. Here's Mike Taylor.
SPEAKER_03I think we're in the first inning, or between the first and the second inning. Like, I think it's that early. When I think about what's different, though, from a year ago, or even, shoot, six months ago, it's that the sort of, call it, excitement or anxiety in some cases has turned into action. We're seeing far more experimentation going to production. We're seeing far more budget and executive alignment across our customers to give these use cases a real shot at the light of day.
SPEAKER_04CEOs are being asked by analysts about their AI strategy and now holding them accountable to what are you doing? What's the pace of innovation? How is it changing the way you go to market? Are you building new products? So you need that CEO contribution and alignment.
What’s Working Now
SPEAKER_02Pressure is coming from the top and the outside at the same time. Boards, markets, customers, everyone wants to know whether AI is becoming a real advantage or just internal theater. That pressure is one reason we're seeing this conversation move from experimentation to execution. But execution means trade-offs: what to prioritize, what to standardize, and how to avoid random acts of AI that never compound.
SPEAKER_01So what's actually rising to the top as early winners?
SPEAKER_00When you go to the enterprise, there's a fair amount of experimentation going on across a multitude of different use cases. And then there are a couple that are surfacing up to be winners, where you're already starting to see a fair amount of traction.
SPEAKER_03Probably the best sort of agentic use case that we see out there today is really in support of developers.
SPEAKER_00You look at a use case like coding. I think in the next 18 to 24 months, it'll be really hard to find a competent engineer that doesn't use AI to code.
SPEAKER_02This is one of the few AI use cases that's already crossed the line from experimentation to standard practice, including here at World Wide Technology. Industry reports found that more than 90% of engineering organizations now use AI coding assistants in their daily workflows. And analysts expect that within a few years, three out of four enterprise software engineers will rely on them as a core part of how work gets done.
SPEAKER_00The other use case is customer support. 90% of the first line calls should be taken by an AI agent before they actually get passed on to a human for those 10% that can't be solved with an AI agent.
SPEAKER_04I think the most horizontal use case across every industry is customer experience. Yeah. You know, the way in which an enterprise organization interacts with their customers. And you can see that depending on, I mean, we're all customers of these brands. And the way you interact with those brands could be a physical interaction, an omnichannel interaction. And the good news is AI or machine learning has been used for quite some time. But there is commonality from industry to industry. And that's when I think you start to really scale out the business, which is taking one example of a use case and applying it to other industries. And then it's not just one issue, it could potentially be five or seven. You know, and that's where I think we would love to be with all of our partners, because then every partner that we have in our ecosystem can take advantage of that scale.
SPEAKER_03But we're also looking at the market saying, you know, there's a ton of blocking and tackling that goes on inside of organizations, regulatory pressure, care and feeding, maintenance, hygiene, things that have to happen to deliver not just stable and reliable applications, but unique experiences to your customers.
SPEAKER_02So across all the excitement, there's still a less glamorous truth that most enterprise progress is built on a flawed foundation. Because for most, AI operates inside a messy environment that includes legacy apps, fragmented data, and inconsistent processes. If you want durable impact, Mike Taylor says you need to start by getting the enterprise house in order.
SPEAKER_03That hygiene creates breakaway moments in technology, without a doubt. And I think, both in the examples we were talking through and tying it back to the AI side, the hygiene around your data, understanding where it is, provides a platform for people to move quickly.
SPEAKER_02So when we say AI readiness, we're not just talking about ambition. We're talking about discipline: data hygiene, lineage, access controls, and architecture decisions. That isn't paperwork. It's what determines whether a promising use case becomes a repeatable capability. And once you start scaling, these questions get sharper. Where does the workload run? How is it governed? And how do you keep it secure and cost effective as usage grows? Which brings us to what can actually slow AI down. Here's Jeetu Patel.
The Real Bottlenecks
SPEAKER_00And if we were to sit back and look at what could hold AI back, we think there are three major impediments that need to be tackled. The first one is there's a massive infrastructure constraint. There's just not enough power, compute, or network bandwidth in the world to go out and satiate the needs of AI. So that's number one. Number two, there's a trust deficit. If people don't trust these systems, they're not going to use them. So security, for the first time, is becoming a prerequisite for adoption rather than something that trades off against productivity, which is what historically was always the case: do you want to be secure, or do you want to be productive? That's no longer the case. And then the third one is a data gap. Even though most companies have the best intentions of making their data their moat, they don't have the tooling, they don't have the scaffolding in place to be able to use it effectively.
SPEAKER_02Those three impediments, capacity, trust, and data, show up in every serious enterprise AI conversation. They're also deeply connected. If infrastructure is constrained, costs spike. If trust is low, adoption stalls. And if data is fragmented, outcomes degrade. The result is a familiar pattern. Lots of pilots, lots of demos, but not enough production grade systems that can handle enterprise requirements. The real question becomes: what does it take to turn a winning use case into something you can scale safely across the business?
SPEAKER_00So you're starting to see momentum, but the challenge that large companies have is not in turning on experiments. Everyone's doing a lot of experiments and prototypes. The challenge that large companies have is when you find a winner, do you go all in and double down? Yeah.
SPEAKER_01And that's where I don't think we're quite there yet.
Scaling Gets Hard
SPEAKER_02That double-down moment is where enterprises often get stuck, because scaling isn't just additive. More users means more inference, more data movement, more exposure, more governance, and more cost sensitivity. And it forces a harder set of decisions about architecture: what stays on-prem, what goes to the cloud, what runs at the edge, and how you design for continuous operation. In other words, once AI is real, it starts behaving less like an app and more like an industrial system.
SPEAKER_00If you think about the currency of the world today, it is going to be the ability to generate tokens. Now, in enterprises, you will see some inferencing that happens in the private cloud, because in some cases it'll be because of sovereignty and safety and security requirements. In some cases, it'll be because the scale is so high that they might not want to pay those few basis points to a hyperscaler. But you're going to see that more in inferencing than you will see in training.
SPEAKER_02This, according to Craig Weinstein, is where AI stops behaving like software and starts behaving like infrastructure.
The Cost of Edge AI
SPEAKER_04You take an enterprise that's sitting on an enormous amount of data that's been, you know, harnessed for, let's say, decades, and depending on the heritage of the company, it could be a very long period of time, and organizing it in a way to provide relevance to an AI use case. And, you know, the question there is, how do you do it at speed and with agility? And then taking into account the business model. Like we're seeing right now, the majority of the data is actually coming from the edge. Right. We as human beings are interacting with AI and creating an enormous amount of data based upon what we're doing, and then it's coming back to the enterprise. The question is, how do they do that? And then create a data platform where you can ingest that, turn it, and apply it to the AI model, and then turn it around from an inference perspective and use it in a production-level AI environment. Enterprises are going to want to do that. The problem with it is inference is expensive. You're basically running your utility meter all day long. You're running GPUs at capacity, and I always use the analogy: it's almost like me turning on my gas meter at my house and letting it run all day long. I'm going to get a gas bill that's pretty darn expensive.
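Craig's utility-meter analogy can be made concrete with back-of-the-envelope arithmetic. The sketch below is a minimal cost model for always-on inference; the GPU count, per-GPU power draw, and electricity rate are illustrative assumptions, not figures from the episode or from any vendor.

```python
# Back-of-the-envelope model of "always-on" inference cost.
# All figures below are illustrative assumptions, not vendor numbers.

GPU_COUNT = 8            # assumed GPUs serving the model around the clock
WATTS_PER_GPU = 700      # assumed sustained draw per GPU, in watts
HOURS_PER_MONTH = 730    # roughly 24 * 365 / 12
USD_PER_KWH = 0.12       # assumed blended electricity rate

# Energy consumed in a month, in kilowatt-hours
kwh = GPU_COUNT * WATTS_PER_GPU * HOURS_PER_MONTH / 1000

# Electricity alone for the month (excludes hardware, cooling, networking, staff)
power_cost = kwh * USD_PER_KWH

print(f"{kwh:.0f} kWh/month -> ${power_cost:,.2f} in electricity")
```

Even this simplified sketch shows the shape of the problem: the cost is proportional to hours of operation, so an always-on system pays continuously whether or not every GPU cycle produces useful tokens.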
People, Security, Gaps
SPEAKER_02This is the operational reality that gets missed in the early hype. Production AI isn't a one and done. It's always on, it's compute intensive, and it turns usage into an ongoing cost curve. That's why architecture matters, where data lives, how it's processed, how models are served, and how you measure efficiency at scale. And it's also why enterprises rarely solve this with a single team or a single vendor. They solve it by aligning engineering, security, infrastructure, and partners around a common blueprint.
SPEAKER_04AI is still about talent, developers, data scientists, researchers, IT teams, and folks that can collaborate together. It's not just about one organization, it's about a collective set of organizations working together.
SPEAKER_03It's about two things from my perspective: scale and security, you know, and making sure that as we're moving quickly, how do we have our finger on the pulse of where all this data is going, how it's being transmitted?
SPEAKER_00Securing these AI models, which are inherently non-deterministic in nature, is pretty important because they're unpredictable. And enterprises are trying to build predictable applications on these models that can behave slightly out of character at times, because they might hallucinate, they might have toxicity, they might have self-harming behavior.
SPEAKER_01And once AI systems are always on, failure is no longer theoretical.
From Custom to Scalable
SPEAKER_02It's operational. Large companies are now disclosing real AI risks in SEC filings, and executive surveys show nearly all organizations with AI have already encountered mishaps, yet only a tiny fraction have mature governance in place. That's why production-grade reliability and oversight matter as much as performance. And that's why security here isn't just about perimeter defense. It's about governance, visibility, policy, and control across the entire AI lifecycle, from data to model to runtime behavior.
SPEAKER_03And NVIDIA and Cisco have both invested significantly in continuing to develop features and capabilities within their platforms that integrators and consulting organizations like us can help make sense of in terms of the use cases that our customers are working on. What it does, though, by the intentionality of integration on their part, is we don't have to do as much of that anymore. We can focus on the business and technology outcomes that ride on top of those platforms. And that's exciting for us, you know, to be able to fast-forward, if you will, through a few of those things, because they're doing that work for us.
Inside the AI Factory
SPEAKER_02So that's where the market is heading, away from bespoke one-off builds and toward repeatable, validated architectures that enterprises can trust so that the foundation is reliable before you start innovating on top of it. The practical aim is to reduce friction in standing up the core stack, compute, networking, security, and observability, so teams can spend their scarce time on outcomes, which use cases matter, how to operationalize them, and how to measure performance and cost. With that baseline, we can finally start talking about the concept that's starting to organize this entire discussion, the AI factory.
SPEAKER_00I love the term because it actually describes best what it does, which is: it is a token generation factory. It allows organizations to generate tokens so that they can have better financial results, better competitive differentiation, better productivity for their employees, better cost efficiency, so on and so forth. It made a lot of sense when we talked to NVIDIA to say, hey, you've got an AI factory. What if we provided the networking, but also the security associated with it? And now we also have observability, which can tell you not just how the GPU is performing and how the model is performing, but also the tokenomics. Like, how efficient is that model? Is the token generation per kilowatt efficient, or does it actually need work?
SPEAKER_04Our goal is to get the maximum amount of compute with the lowest token economics. Right. So we can produce intelligence at a low cost.
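The "tokenomics" metric Jeetu and Craig describe, tokens generated per unit of energy, can be sketched as a simple ratio. The function and the throughput and power figures below are hypothetical placeholders for illustration, not measurements from any real deployment.

```python
# Sketch of a tokens-per-kilowatt-hour efficiency metric ("tokenomics").
# The function and all figures are hypothetical, for illustration only.

def tokens_per_kwh(tokens_per_second: float, watts: float) -> float:
    """Tokens generated per kilowatt-hour of energy consumed."""
    tokens_per_hour = tokens_per_second * 3600
    kilowatts = watts / 1000
    return tokens_per_hour / kilowatts

# Compare two hypothetical serving configurations of the same model,
# e.g. before and after batching or quantization improvements.
baseline = tokens_per_kwh(tokens_per_second=1_000, watts=5_600)
optimized = tokens_per_kwh(tokens_per_second=1_800, watts=5_600)

print(f"baseline:  {baseline:,.0f} tokens/kWh")
print(f"optimized: {optimized:,.0f} tokens/kWh ({optimized / baseline:.1f}x)")
```

The point of a ratio like this is that it makes "intelligence at a low cost" measurable: raising throughput at the same power draw, or holding throughput while cutting power, both show up directly as more tokens per kilowatt-hour.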
SPEAKER_03At the end of the day, Cisco and NVIDIA are two platform companies. And what our customers want most is a defined and, call it, pre-certified architecture that they can go in and use, one that checks all the boxes from performance and innovation to security and reliability and everything in between.
Secure AI at Scale
SPEAKER_04Cisco's built an amazing platform with security at the forefront of that architecture. You know, you want to make sure, when you think about security, AI systems are always on. And now we're talking about agentic AI and agents accessing information on their own and providing answers back. So it's not just about compute. It's about compute, networking, security, and then making sure that you can wrap a security envelope around that with policy. We want incredibly powerful systems at the least amount of cost to an enterprise customer. If you can do that, you are building an industrial revolution based upon knowledge.
SPEAKER_00There's going to be a class of roles that are going to be, you know, existential to the success of this entire initiative.
SPEAKER_04There is no better company in the world to build and deploy AI factories than WWT. We showed up at WWT a decade ago and started teaching them about accelerated computing, this thing called the GPU, and why it was important. Fast forward to today: those teachings and learnings are now being used in the enterprise at massive scale. And they're winning some wonderful customers and doing great things.
SPEAKER_03And this is truly creating time for human beings to elevate their thinking. I just see it as the first time in a long time, at least in a technology that I've participated in, that does that. Where it goes from there, I think human creativity seems to have no bounds, and I'm excited about the opportunity for all of us to think more, to learn more.
What Leaders Do Next
SPEAKER_02Okay, major thank-yous to Jeetu, Craig, and Mike for taking time to share their insights. Here's a quick takeaway. As AI moves from answers to actions, your data has to be governable, your infrastructure has to be ready for always-on inference, your security posture has to assume non-determinism, and your leaders have to treat this less like a software feature and more like a production capability. That's what the AI factory idea is really pointing to: an operating foundation where compute, networking, security, and observability are designed to work together so the business can move faster without losing control. In our next few episodes, we'll get a lot more specific, breaking down the core components of Cisco's Secure AI Factory with NVIDIA. We'll look at what changes when the network becomes part of the AI performance envelope, what compute strategy really means when inference is your new utility bill, how security has to evolve when AI systems are always on and agents can act autonomously, and why observability across infrastructure, models, and the cost of generating intelligence becomes the difference between scaling with confidence and scaling blindly. That series is coming up next. For now, thanks to our crew here at the AI Proving Ground Podcast. We'll see you next time.