AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology

AI Pilots Are Dead. Now What? Dell Explains.

World Wide Technology: Artificial Intelligence Experts Season 1 Episode 54

Rather than chasing GPUs and one-off pilots, Dell's Allen Clingerman and World Wide Technology's Matt Halcomb argue that the winning AI "factories" will be built on use-case clarity, data sovereignty and flexible, end-to-end infrastructure that can keep pace with a fast-moving market.

The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.

SPEAKER_00:

From Worldwide Technology, this is the AI Proving Ground Podcast. If you're an IT leader right now, chances are you've been given some version of the same mandate. Go do the AI. Stand up something, show progress, prove value. And so across the industry, we've seen a familiar pattern. Massive GPU orders, hurried reference architectures, pilot projects spun up in pockets of the business. But then the hard questions start to surface. Data bottlenecks, power and cooling limits, security and sovereignty worries. And the big one, if you're pushing all these tokens out of the shiny new AI factory, where exactly is the business value coming back in? So today, we're talking with Dell Technologies Alan Klingerman, a chief technical strategist and partner AI officer. He helped bring Dell's AI factory with NVIDIA to market, not just as a stack of servers, storage, and network, but as a blueprint for bringing AI to where your data actually lives and for protecting the intellectual property that makes your business unique. And joining him is Matt Halcom from Worldwide Technology. Matt runs WWT's AI Proving Ground, the namesake for this podcast, and an industry-leading test track for enterprise AI. Matt and Alan will unpack what an AI factory really is beyond the buzzword, why so many AI proofs of concept fail, how to think about data platforms, sovereignty, and locality, and what it takes to move from hurried experimentation to purposeful, scalable AI that your board, your teams, and your balance sheet can all get behind. It's an insightful, important conversation, so let's get to it.

unknown:

Mr.

SPEAKER_00:

Halcom, welcome back to the show. How are you? Thanks. Thanks for having me. Been doing well. Thank you for asking. Excellent. And Alan, welcome. Uh welcome to the AI Approving Ground Podcast. How have you been?

SPEAKER_05:

Oh, been doing great. And Finn, I'm so excited to be here, like in person. In person. No longer just on a Zoom screen. That's right. And uh in town for Supercompute. I know. So it couldn't be any better time to have this conversation.

SPEAKER_00:

Absolutely. A busy time. I know the two of you are extremely busy over there at Supercomputing25. So I appreciate you taking out the time here. Um, Alan, I will start with you. You know, certainly every organization across the world looking to push through AI strategy. And as many are experiencing in real time, it's not easy. Data bottlenecks, security concerns, uh GPU usage. I mean, the list goes on and on. It can be a little bit of a pain. Let's level set right now because we're going to be talking about AI factory. Um, what is it through Dell's lens that AI Factory is and how is it helping accelerate some of that AI strategy?

SPEAKER_05:

It's a great question. And and uh it is interesting, right? I think the industry overall has struggled with what this definition is. And I I would say, you know, Jensen always right uh starts to lead the charge with with ever for everybody to think about what the AI factory means. And I think it is at the highest level of energy, power, cooling coming in and tokens coming out. But at the end of the day, like, okay, that's the mechanics of it. Yeah, but the the organizations are struggling, like, how do I actually get business value out of the tokens that the factory creates? Yeah. So hence that's where we kind of think about like the higher level of what you just defined of like, how do we help a customer develop their strategy across all areas of what a what a factory might be? Yeah. Because the factory is well beyond just the infrastructure that many people think of. That's what gets all the highlights, especially here, supercompute. Everybody's looking at all the incredible compute platforms, right? To do all this processing of data back to kind of what you sold. Um, but but I think being able to process that and get value out of the tokens is an extremely part, you know, the extreme important part of the Dell AI factory, yeah, and and our approach to making that happen.

SPEAKER_00:

Yeah. Well, Matt, I mean, that's a very key term, getting value out of the tokens. Um, what what do what do we advise organizations on on how to capture that value?

SPEAKER_01:

First, um, get organized. We saw early on um in this whole fun race, this new modernization of technology and ways to run their business. Everyone's in a hurry to do something. And at World, we try to get people to kind of slow down. Um, we adopted many of the crazy. No, it's not at all. We adopted a lot of the great technologies that came from Dell, right? Dell was one of the first AI environments that we built with the reference architecture, right? So our partners are doing the best they could do at the time in which everyone was getting hit with this challenge. Um, and what's been great to see is having the ability to actually show to a customer that it's more than just hardware, as Alan spoke about, right? It's more than just getting a bill of materials in the door and hoping that I have enough power to stand it up. It's understanding, as Alan put it, what am what am I driving towards? What outcomes am I going to need that's gonna change my business as opposed to how I gain my information through my standard e-business suite products or things that I have already today? Um, how do I gain better insights into it? How do I make things take it to another level, right? And I think that's what the transition we've seen from just like a reference architecture into an AI factory, and then helping customers understand that there's there's so much more to what they have to take on. I I think a lot of times, and we've heard these stories, 80% of AI proof of concepts fail inside of a customer's environment because I don't think they really understand all that has to go into it. Like early on, we started training our engineers at the an AI center of excellence and an AI journey has to include people, process, silicone, software, data, security, facilities, locality. And that's what an AI factory is going to get customers to understand. Uh and that's why we're we're we're so blessed to have partners like Dell that have done a lot of this work, and then we can bring that technology um out of our AI proving ground to the customer, letting them see firsthand, like seeing is believing, right? You can tell somebody, you can show them a PowerPoint, but until they truly get in there and start to see value out of what you're showing them, the light bulbs start to then go on at that point in time.

SPEAKER_05:

Right. And I'll even, if I could pull on that fabric a little bit of what you said there, I like that because I think one of the big challenges that organizations are having is they're getting pressure. It's kind of what you called out earlier, especially public companies, to do something with AI. Yeah. So they just went in and heavily made big investments, stood up compute without having a defined strategy. Yeah. So, like one of the most important pieces is like understanding what are you trying to accomplish. If you're going to get business value out of the tokens that come out of the factory, I have to define what am I going to use AI for? Because it's, I always jokingly say it's one of the most common questions I get from customers and partners. And I know you get it all the time, Matt, of like, what can AI do for me? And I'm like, turn that that question needs to go the other direction. It's how do I put AI to work for my organization?

SPEAKER_04:

Yeah.

SPEAKER_05:

And then you start thinking differently of like, oh, okay. Now I want to think about like our simple strategy across the organization. Like uh, you know, John Rose, who's our CTO, talks about it in three buckets, and it's clearly what we've been driving to. And you've heard me talk about this before simplify, standardize, and automate every single process. So, like, think about what differentiates you in the market as a company and an organization, double down on that, and then think exactly as you called out people and process, simplify, standardize, and automate everything you can.

SPEAKER_00:

Yeah. Alan, I mean, we we talked right before we started recording about how everybody has a little bit of a different flavor, so to speak, of this AI factory concept. Um, you know, from your perspective, what sets Dell uh apart? What what's unique about your AI factory?

SPEAKER_05:

Yeah, I I I think the biggest thing is what we just talked about is like the outcome. Yeah, we looked at it from the very beginning. In fact, it's kind of interesting. I still remember the first GTC, because we were the first ones to bring an AI factory to market, right? And the first one we introduced with the Dell Valid data design that you just called out was the uh, you know, Dell AI factory with NVIDIA. And we think about it, you might have seen the visual, or some people out there might have seen we have it kind of drawn out as five chevrons that define exactly what you and Matt just kind of highlighted. Yeah, we think about it as it's bringing AI to your data, to the locality, because 83% of enterprise data, according to Art Gartner, lives on-prem. So it's how do I take that GPU-enabled compute to achieve those outcomes to generate those tokens, yeah, bring it on-prem to where the data already exists. Right. Because I would argue if the organization hasn't moved their data to the public cloud in the last decade and a half, there's probably some good reasons why they haven't done that.

SPEAKER_01:

Well, they tried though, Alan, right? They tried at the application layer with a cloud-first kind of initiative without putting thought behind it. It was just an idea. And and and I mean, I think that's the key thing, Brand, is ideas are easy. Um, execution's hard. So without having the proper thought process behind it, being able to execute to that is where the customer struggles, right? Executives can have great, wonderful ideas. They sit around, they listen to um the pundits talk around the importance of it, but they don't truly understand the level of effort that goes into it and all that it includes. Um, so they just go many times I uh it's funny to say it, but they get told the organization gets told by an executive to go build the AI.

SPEAKER_05:

Yeah, 100%. And so, like the data being the foundation. So that's like our chevron on the left. You think about it as left to right, you might have seen AI factory visuals like that before, even without the Dell one. Yeah. Uh, and then we kind of think of the underpinning that infrastructure of we're gonna generate tokens is very important. Yeah, and we feel we're highly differentiated as the only end-to-end provider of server storage and networking. I'm gonna go back to the old verb of what everybody loved to call an uh in IT single throat to choke.

SPEAKER_00:

There you go.

SPEAKER_05:

I used to work for IBM back in the day, so then I kind of tell you we've actually become that.

SPEAKER_00:

Right.

SPEAKER_05:

So that infrastructure end-to-end all the way out to the endpoint because while we love to talk about all the incredible work we're doing in the data center and building the factory there, to us, the factories go everywhere, right? Because it's not just infrastructure that can generate tokens. We think about it all the way to the client devices, yeah, right? AI PCs with small language models are are going to be in the future. It's going to be very interesting to watch that continue to evolve and how that happens. So we see that as a component of our infrastructure. And then we think about the open ecosystems, because without I have all this hardware, but without software, I can't do anything with it. So we have an incredible set of partners with us in that open ecosystem, which by the way, WWT is part of that open ecosystem because we can't make this real without incredible partners like yourself and you know, assets like the proving ground, why we continue to make investments there together, right? And building that out.

SPEAKER_00:

Yeah, Matt, I mean, go dive in a little bit to that end-to-end part. Um, give me a little bit of a balance of why that is valued as opposed to more of a modular approach.

SPEAKER_01:

Well, again, when we early on, I guess like we talked around the eight pillars, so to say, yeah, the term, but we'll use it anyway. Um, we've also made sure that our our organizations understand it's from data center desktop to cloud to edge and everywhere in between, right? So you have to be able to incorporate all those components. They all have to play a factor in your AI factory. Yeah. Dell has that ability to bring not just different OEMs and ODM technology solutions that make up the AI factory, but they also bring the components from a scalable. You you've heard me use the term frame main before. How do we design the right technologies for a customer? And that doesn't always mean a single one device with one accelerator type only and just hope it does what it can do the best way. Dell brings a suite of products that we can build scalable technology, build different sized clusters or different types of clusters that can handle the different workloads. In this world of AI, one thing that is that has state a standard is it's constantly changing. Just even from a generative AI where we were two years ago, and how we were using the largest model we could find, a 200 billion, 300 billion parameter model and the largest amount of data. I know, largest data sets ever, um, into now using pre-trained models, utilizing vector databases, graph databases for more specific data, agentic now as a part of all this stuff, right? A more efficient way. We talk even around VLLM as a portion of how to build a more efficient, right? And Dell has the ability to allow a customer to take the concept of an AI factory, but then define it in a manner and bring it in that that scales as their needs grow. And I think that's the most important portion that it does cover the data center, it does cover cloud adjacent, it does cover the edge, it does cover the client. Um, and you can bring that all together um with a suite of products, right? That that that that work together that can tested. Yeah. Um and I think that's what's key with that.

SPEAKER_00:

Well, Alan, I mean, so we often say AI is not usually the problem. It's it's a readiness um challenge, or maybe it's a data challenge or something like that, which could be bucketed into readiness.

SPEAKER_05:

Part of the readiness challenge.

SPEAKER_00:

Yeah, what what do you see? You know, you work with a lot of um, you know, customer uh architectures here and just readiness in general. What are you seeing as some of the initial obstacles that uh organizations are facing? And how does Dell's AI factory help operationalize AI for them?

SPEAKER_05:

And and I'll even kind of separate it into two different buckets. I would say one is the strategy and be able to understand back to the tokens. If I'm gonna get business value, what are the use cases that I define with the right set of metrics for the business to achieve, you know, with a set of KPIs that are measurable for me to achieve those results. Okay. And building that strategy, customers are still very much struggling on. Yeah. And that's where, like collectively, all of us need to work with you know our customers and help them build their strategy if they don't have one. Uh so what because even Dell, here's a good example, like internally, benchmarked. We had over a thousand use cases we were pursuing. Right. Because there's a lot of people just wanting to, you know, look at artificial intelligence and how do I, you know, automate processes, etc. But maybe they were never going to achieve or the tokenization cost, we're never gonna achieve an ROI for the company. So we shouldn't have even pursued them. Yeah. So even that case, we're like, hey, let's prioritize them. Let's look at back to what differentiates us as an organization and then simplify, standardize, and automate. But in order to do that, to your point on data, data is the most important aspect of this. Back to when I say bring AI to your data, that's exactly why. And and I'm gonna go back to I was also at Oracle for a while. I helped them bring uh Exadata and Exalytics to market way back in the day. And you know, it was database consolidation and high performance data analytics, real-time setting against a business application stack. And most organizations still technical debt exist in their data engineering and architecture because they typically don't even have somebody dedicated full time to that role. So they never even got to, if I go up the stack from traditional BI to AI, they never even solve the problem of predictive analytics, right? To be able to predict the future, not just what you know what happened yesterday and why did it happen, but actually predict, like if I make these changes, what's going to be the outcome? And a lot of these use cases that we define as they come up, it is a set of you know different taxonomy and technologies, whether it's predictive analytics or HPC, like back to the supercompute reference here, um, or artificial intelligence, they actually all work together in these domains. So sometimes these outcomes might be one of those buckets, but everybody calls it AI.

SPEAKER_04:

Yeah.

SPEAKER_05:

But the the like we actually have something called the Dell AI data platform that we have to help customers think about that, to like think about just all the work that has to be done to bring all my data in one place to then start to build what you know most people had never done, which is build the old school was a lake, you know, data lake with Hadoop. That now we think about it in context of a data lake house and being able to high-speed query data no matter where it's at. Yep. Back to what you said earlier, like uh multimodality. Could the data could be streaming at the edge, combined with a data set of data set that lives in the core data center? Um, and we have several engines that we're building to help customers like modernize and build what we're calling data products because that should be those data products based on the outcomes. That's how I get business value to the tokens.

SPEAKER_00:

Yeah. Yeah. Matt, I want to I want to dive into with you to the use case part. If you think of the factory um concept here, is it better or is it more advised to go in with a a focused set of uh use cases and then start to build that flywheel? I think I know what your answer is gonna be. Or is it more like a wood chip where you're just throwing stuff in and seeing it?

SPEAKER_01:

Just guess at it. Just guess be be be in a hurry to do something. And we'll cut that. That never works ever. No, no, you're spot on, right? No, you have to define um what it is you want to do and what kind of outcomes you want to drive towards, right? Because if you don't know where your finish lies in, how do you know when you ever get there?

SPEAKER_04:

Right.

SPEAKER_01:

Right? This is going to be a journey. It's just the starting point of it. It's a it's a new world, it's a new adoption, it's a no, it's a way that organizations will run their their business um more intelligently to even use a term in it, right? Um, but it's gonna expand out, right? It's gonna have to create a new way in which they they create different processes, the way they train their people in a different manner, right, to drive to the outcomes. It's not just solely, I'm gonna put this tool in my organization and everything's magically gonna happen. You still have to have defined outcomes. You have to take a look as you're going down your path, just like we do with agile development. Um, start going down the path towards what you think is right. And there's gonna be challenges that come along the way. Maybe you don't have the proper data set, right? Maybe you don't, maybe you haven't got the proper algorithms built or tuned to prompt engineering done on your AI component to get you the outcome. So you're gonna have to make some adjustments and be agile along the way. And that's why the factory helps you kind of start thinking that through. Be in a hurry to do something never usually doesn't lead to good outcomes. It usually leads to a lot of frustration. Um, and that's where customers are at that they as Alan put it, we ran into that multiple times. There were large organizations that got really excited around AI and they would come to us and say, I have a thousand use cases. And we're like, Great, how would you ever get through that?

SPEAKER_02:

Right.

SPEAKER_01:

Let's go take the thousand, get it down to ten, and let's prioritize of those ten which ones we think are have the biggest return on investment, and let's then understand the level of effort that takes to get that done. Let's get you some success going, get some positive yet, right? So if a customer gets in and the first project they do doesn't end well, and then the second one doesn't end well, then the frustration takes over. And why would I want to do number three? Because I haven't succeeded in the first two. So start showing some level of success, put that in your pocket. People like to see positive thumbs up that I drove to a completion and then leverage what you learned along that way. It's it's similar but different, right? Is the way I explain it to people. And if you can take in the concept and understand how you did those steps, apply those steps. But maybe you just have to make a couple adjustments along the way. And then that's where we like to get to get customers to really think it through. Tell us what it is that the outcome needs to be, and then we can drive towards that based upon how we've done other work with other customers, right? The other it's not like the only one global financial is trying to solve one problem, they're all trying to solve the same problem.

SPEAKER_05:

And most of the time it's the same problem, exactly. Right because it's it has the biggest potential business impact downstream to the company. I was with a very interesting company last night in the trading uh space, right? Just think about high frequency trading and the impacts like HPC brought. Now it's how can I apply AI against the logic that they did with high frequency trading? Uh and I I liked one thing you said there. And go, I'm gonna go back to our example of we had thousands, we boiled it down to four that we found had the highest potential business impact to us downstream internal is Dell, because hey, I I get it all the time of you're a global 50.

SPEAKER_04:

Yeah.

SPEAKER_05:

What is Dell doing with AI? You're trying to sell me AI. Yeah. What are you guys doing with it? And one of the biggest first impacts that we had was with a tool set that we called sales chat. It's kind of in the name what it is sales tool, right? Collapsing, by the way, kind of interesting back to the big data problem. 10,000 SharePoints, actually over that, across Dell Technologies. Think about all the disparate information that's sitting out there. Oh, by the way, that's a lot of time, many copies of it.

SPEAKER_04:

Yeah.

SPEAKER_05:

So you can actually now start to regimen and get your data in control, as well as now some policy controls that the marketing team, for example, 10,000 copies of the AI factory deck, some that are all over in various states, right? At different timelines where we might have changed our message and they want to make sure the most recent things getting out to the market. Oh, now I have those all those levers to pull as I start to rationalize it. But it goes back to what you said there. I love your like the fly, the speed of the flywheel. This is the thing that I think a lot of companies are struggling with. You have to move fast. Like they're trying to move fast without logic. Once you have the plan and you understand what's going to have the most potential business impact, you got to move fast. It's like C I C D times 100. Right. Like DevOps is just the start of it. And you can't wait for perfect. Because back to like, because this is what I think also what Matt said is a lot of people then thought we were practicing that it had to be perfect. And it's never going to get perfect. It's the nature of AI. It's got to be good enough. Uh so it was like just that. So we rolled out, we got it to what we felt was a good state. We threw it out to all 40,000 sellers globally, right? And guess what? We learned a lot.

SPEAKER_04:

Yeah.

SPEAKER_05:

Two things we learned, and it goes back to what you said of like that flywheel and having to adjust. Number one, there were a huge amount of use cases in the sales area we had never thought of. So what do you think we had to do again? What are the top ones that can provide the biggest value? And then let's funnel and go back through that and add into that existing tool. Just augmenting it with my existing data. Oh, and then by the way, there might be data sets that we hadn't even thought of that were going to be required to achieve those outcomes. Right. Do we even have that data? Or can we get that data from external sources? So, like that's what you have to think of. Like every week we go back and through our kind of P tuning process of the engine. So every time the seller gets a recommendation back or response, they can give thumbs up or thumbs down. Yeah. Was it good or not? And then if it wasn't good, why was it not good? So then we can go back and change the weights and gradients when we do the P tuning to have a better, smarter, more responsive, you know, highly effic uh with a high efficacy model for all the sellers when they get in on Monday morning. So every week we're doing it.

SPEAKER_00:

Right. Well, Matt, you mentioned it has to be good enough. Is good enough a moving target for organizations like you know, good enough for company B? Is it different good enough than company A?

SPEAKER_01:

They're they're all similar, but getting to an outcome like again, it's predictive analytics. Is it gonna be a hundred percent correct all the time? Probably not. Yeah. But you gotta get to where you're getting the answers right more than you're getting them wrong, right? And that's the starting point. That's gaining putting something in your pocket as a takeaway that we've had some level of success. And then it just gets into the fine-tuning component of it that the more users start to ask and leverage it, like Alan said, with the like we did here at World War, right? Give us a thumbs up, give us a thumbs down if it gave you, because if we don't know when you get into generative AI, right? It's the most subservient solution that anybody will ever put inside their organization. Because right, wrong, or indifferent, it's gonna give you an answer. Um, and it's gonna take trial and error to get to a more positive outcome, a more correct, closest to 100% as we possibly can get.

SPEAKER_02:

Yeah.

SPEAKER_01:

Um, but users still have to understand that they can't be lazy with the outcome they're getting. They still have to be inquisitive and make sure they understand where's the data coming from. Is that data, does that really make sense? Is they can't fully again. I still see it today where people 100% trust any sort of generative AI solution they look at because it told me it's right, so it must be right. Okay, no, please trust. Trust, but verify how many times.

SPEAKER_05:

And exactly that's probably twice.

SPEAKER_01:

So getting good enough to start with is important because you start gaining executives when they give the initiative to go do the AI, the people are scrambling. They want to appease their leadership. So getting something in front of them more sooner than later is important. And then getting input again, because you're gonna continue adopting. Look, as Alan spoke about it, when we talk around this, it it's constant ways in which we're designing and executing on this AI. When we did AI, when we first built our Atom AI, it was different than what it is today, right? And it's gonna be different next week than it is last week. Sure. Um, it's continually changing in the way we're going about it.

SPEAKER_05:

Like originally, when when retrieval, I don't need to think about it as like a one-time no, it's not like this is the problem. Everybody's like, oh, it's a common off-the-shelf software application. I deployed SAP and thumbs up, I'm done.

SPEAKER_01:

You did the business suite install and you call it done, and now we just create Tableau reports. No, it doesn't work like that.

SPEAKER_00:

Yeah, well, I mean that flexibility, obviously crucial, Alan. How how do you how did how does Dell think about that flexibility as it designs its AI factory, both what it's already done, but then you know what you will do moving forward?

SPEAKER_05:

I think it's in a couple of design tenants, and I like that you you said this because it it is the way we're thinking about it, is very open ecosystems that are silicon diverse. Oh, and by the because obviously NVIDIA has the mic right now. Yeah, obviously they make incredible technologies. We're tightly integrated into their hardware and software stacks, but you know, hey, there's innovation in the marketplace, right? And and just look at in the proving grounds, right? We have Intel, AMD, and NVIDIA represented across the board. Yeah, and then we think about it as like we built uh something called the Dell Automation Platform recently, and think about it as like an orchestration layer for me to deploy outcomes. And sounds like maybe there's something that we could do with AI this year, right? Sure. So now we're thinking about like the different layers of hey, I've got this great infrastructure, which we've talked about, and you know, we're gonna manage that. But then what's my runtime setting above that? So initially we think about things like Red Hat, Ubuntu, right, with all the canonical OSS stack, uh, and certainly all the NVIDIA stack end to end, right? Move it all the way up through mission control. And then you go above that of like the tool chains that the data scientists are doing on their day-to-day jobs to make these things real. And and we took all the tool chains that are commonly used, elastic, since you brought up uh vector databases earlier, right? World's largest vector database. Like, imagine those tool sets built into it that the customer could have uh an almost like an erector set. Like, hey, here's a trusted, validated blueprint for me to quickly achieve these outcomes. Like, right, oh, I want these three tools because it's what my data science teams know is how we've benchmarked uh you know our own internal efforts.

SPEAKER_04:

Yeah.

SPEAKER_05:

So, and we built that by the way, on what we did at Dell as well as what NVIDIA is using internally. Because again, Dell and NVIDIA get these questions all the time of like, what are you guys using for tools? Maybe we should follow the same blueprint. So it was the obvious you know place for us to start.

SPEAKER_00:

Matt, keep going with that uh flexibility angle. Why why is that so important? And is that is it going to be important moving forward?

SPEAKER_01:

Oh my gosh, yes. Incredibly important. Again, NVIDIA makes a wonderful product, right? But they make multiple variations even of that product. Not everything gets fixed with the single latest and greatest today, the Blackwell B300, right? Right. Um, although great for computational work that you're doing, we're doing things like digital twin. We're doing doing visual models, right?

SPEAKER_05:

Um, I love that you said that because like everybody still thinks I hate that it's in the name. You and I've talked about this a thousand times. Like it's called a GPU. Well, that incended graphics processing unit. Most of these things have no graphics processing in them anymore. So that's why you need to even think about the workloads, especially things like computer vision, yeah, to be able to leverage the right asset, right? Or accelerate.

SPEAKER_01:

It's still the same problem we had. What was once old is still new now. Um, and having the right technology stacked to solve the use case problems that you have is incredibly even more important today than ever because there is no longer headroom in a data center because they've run out of power. Certain technology requires a different way in which you cool it. So you have to be agile in which and uh and thoughtful around how you design the original. So that again, that goes back into my frame main approach, is that when we put the team together in the IPG, I threw that silly term at them, but it had a lot of meaning behind it. Meaning I wanted them to go think about building agile, high-performer computing that scaled in a manner that matched with the end users' workloads and as they changed, right? As we see that we're even adopting this technology out there that we're bringing not just a GPU into the play, but we're bringing large memory systems into play. So there's a the concept that we talked around last time. Supercomputing was here, this whole consortium called Gen Z folded and it rolled up underneath a CXL consortium last time this happened.

SPEAKER_05:

Finally, everybody's working together again.

SPEAKER_01:

Yes, and they're back on this. And so we're we're following what's happening with the ultra ethernet consortium, we're following what's happening with the the CXL consortium, the OCP3 type consortium that's out there. There's changes still coming, big changes in the way we design and architect this technology. And if a customer thinks it's simple and that a single GPU type in a single box is gonna solve their problems into the future, then we need to go have a conversation quickly with them before they start purchasing and making investment in that. Because what they're gonna end up doing is building a rigid solution that won't scale as their needs scale. We're we're going about this. Go go look at again, look back six months ago when this thing called a reasoning model came out, right? And all the noise that that even created. And that's a completely different approach and technology needed to support that sort of solution, but it's all still driving towards again, it's gonna continue to morph and move. We we're building vector databases at different times, putting vector databases in memory, using graph databases as well. Data is all sorts of data. That's why the data lake house is important. Our data lakes back in the day were unstructured data. Then you had a data warehouse that were structured data. Now we're building technologies with analytics around it. Every single storage organization I'm working with is bringing an analytics portion with them. It's a whole new role.

SPEAKER_05:

Almost everybody still thinks analytics is AI. And we get this all the time, right? It's like the magic. Like, oh, I didn't know that I could predict the future with that.

SPEAKER_01:

But again, and it it's all coming and it's still evolving. So they have to stay agile, they have to go about this with an open mind. Yeah. Right. And again, the AI factor is a great concept to be thinking about it, but it needs to be a thought process around how they begin to understand what's their challenges today. What successes can we get them short term? What are some longer terms? Because we we and explain that to them that we're gonna get to this stage and we're gonna we talk around and laugh about the never-ending POCs, but this isn't there is no finish line. This is a journey, and we're literally on the first mile of the marathon, if if even that, right? We're we're just we may be even stretching right now before we've even started running the race.

SPEAKER_02:

Yeah.

SPEAKER_01:

Um, but again, they have to be, they have to go at it with a purpose, right? They're gonna have to put effort into it. They're gonna have some failures along the way, but learn from their failures will help drive future projects, right? That didn't work well. We made a misstep there, but let's stay agile in our approach, go into it, and be open to work with multiple groups across our organization. It's not just one business unit that's gonna solve the problem.

SPEAKER_05:

We got to do it horizontally across the organization, right? Back to the use cases bubbling to the top, like that sales use case I mentioned of what we're doing with sales chat. Oh, could there be another one for marketing? 100% where most of our dollars are spent, right? How do I lower customer acquisition costs? Can I be more targeted in my ad dollars and my ad spend? In fact, you know that most of the CMOs have the highest dollars to go spend in AI. Of like, how do you actually pursue that? And I I want to double-click uh what you said there, because that is you know what we're trying to accomplish with the Dell automation platform and ultimately with something called the Dell Private Cloud. We think about in the context of what you just said there tightly integrated, right? Highly opinionated architectures is where we're starting to see almost a closure coming back in the industry, back to his joking term there. But I think everybody caught it of exactly what it is. And then on the back to my heritage, and on the right-hand side, we're trying to say, hey, we want to have a more open architecture approach, right? That has some tested, validated, you know, uh blueprints around it for customers to easily compose the architecture to achieve the outcomes from the factory.

SPEAKER_00:

Matt, you'd mentioned a moment ago um just kind of the shift from training to inference, um, just as an example of how the industry is moving. Um, Alan, I'm wondering how do how do those changes or even changes that we don't know necessarily what they might be in the future, how do how do you see that changing the role of the AI factory, if at all? Or what does the AI factory need to do to keep up with those changes?

SPEAKER_05:

No, I think it's a a set of different set of bespoke architectures across the factory, because the factory, again, not one thing. That was my number one question when we launched it, right? Like Alan, where's the skew? Where's the easy button? Yeah, like doesn't exist. Uh funny, but not funny, right? Yeah, because you and I address this all the time. Uh, but I think that's one of the important aspects of it. Like, how do I get out of the box, right? To think about this as I move. Think about training customers. Let's talk about customers. There's really about 20 or less customers that are doing foundational model training in the world. We all know who they are. We see them in the headlines, right? We see the billion-dollar POs that are floating around from those organizations because they're really thinking about it as like a return on future. Like we remember, we talked about the investments and people are looking at ROI.

SPEAKER_02:

Yeah.

SPEAKER_05:

Return on future is I'm making a big bet because I'm I'm trying to go faster and deeper than anybody else in my industry to completely disrupt it. Like, even that, I would call like a lot of people, you know, might have followed some of the things that Elon did. And right, certainly, you know, I think a lot of people thought he was trying to build a car company. That was never his end goal. Yeah. He was creating all these culmination of companies to be his RD to ultimately start to solve the energy problem that I think people now are starting to understand where he was actually heading.

SPEAKER_04:

Yeah.

SPEAKER_05:

These were just his research and development farms. So that's an example of return on future. Because by the way, what do you see? AI, supercomputers, AI factories across every one of his companies, which by the way, now they're trying to think about consolidation. What does that look like for training? And how does my factory change over time? But I think there will be a probably a very bespoke set of you know architectures that we see for those customers. What will happen, just like technology every other time? You look over the next 10 years, what's gonna happen? It's gonna get commoditized, it's gonna come down in cost. So, what do we expect to see? More and more customers, because that barrier to entry will lower right than what it is today, cheaper to get into. More customers will be interested in doing foundational model training for return on future to disrupt their industry. So I think that's one. And then, like right now, our big focus is really helping enterprise customers get things at inference back to your you gotta show results to the leadership, right? Of like, what are we doing, or back to the board. Right. Hey, like what are we doing with AI? What's our strategy? Sure. That's where we've got to help everybody start moving forward faster.

SPEAKER_00:

Right. Matt, um, so if that's if that's kind of the Dell's position here, or at least Alan's position here, here at WWT within our AI proving ground, we see a variety of of AI factories. So what do you think the future holds for the AI factory? Where do you think it should go so that it best supports clients and their needs?

SPEAKER_01:

Stay agile, right? Stay dynamic in its nature as it were again, and again, it's not a single OEM or ODIM, right? It's it's a community uh that of technologies that have been working together. And this is where we we why we built the AI proving ground, right? It's the center hub of the wheel when it comes to the to proving and designing and looking into it. So we're working with Dell on technology that people haven't even heard about yet. And there are other partners that we're doing that with as well, that we're helping them develop technology today in our AI proving ground that we have to be very quiet about. Um but it's continually morphing forward. So having an open ecosystem like Alan spoke about is important. We know that not there's no customer out there that has a single, they can't have a single OEM or ODM because it's a combination, but finding the best to breed that work together, that are innovating together, I think is key as well. Um, hand in glove, because sometimes you can't have one without the other. Uh and it's helping a customer understand that those technologies, and what we haven't even hit on here is security layer, right? There's a security wrapper all around this. And then how do you secure that, right? How do you make sure that the answers you're getting don't have hallucination in them? Um, how do you keep prompt injecting to happening so somebody doesn't go teach your child that one plus one equals three when they go to school and they they learn the right things? Because if we were told that in school, we'd all believe that one plus one equals three. Um, so it's it's having that constant relationship, the constant working with the different partners that we have. Um and then working with them with the different how there's there's no secret that there's Intel and AMD NVIDIA working with multiple solutions that that compete with Adell, but it's still kind of the same concept that's happening. And then we help these, each of our other partners and their development and validating and testing it, and then bringing that to a customer that helps add value where they were confused, right? Sometimes customers go out there, they don't understand that there are solutions readily available for them that they do have choices in different areas along the stack that align to some of the technologies that they've already grown accustomed to and put investment in and learning and having in their data center. It's just how do you now take that and transition it into the new movement into your AI kind of journey to call it? And I think it's important that when we work with these guys, right, in the AI proven grounds, that we're constantly on the front foot with Dell. We're working with their different BUs and different ideas where it's an open, it's it's funny conversations that we have, an open banter around. Have you thought about that? But have you thought about that? And a lot of whiteboards get get used up rather quickly, but it's it's a fun conversation of what we're seeing in the industry. Where do we see gaps in which somebody hasn't come up with super competing today? There's tons of niche solutions out there. Yeah, it'll be fun to make bets on who what partners are gonna go gobble up again. We talk around that innovation happens two ways, right? Innovation happens from within or innovation happens through acquisition. MA. So it's fun to watch that play that game, get introduced. We meet with Dell Capital. I literally just met with them last week. So, what technologies are you guys investing in right now that you think are gonna be the next big thing? Then we go back, we take a look at those technologies on the side, we build those relationships, and then when Dell makes them a part of their factory, ready, we're already there understanding it, ready to position it with the customer and move forward.

SPEAKER_00:

Pivot here, but I do want to touch on data sovereignty, which is something that has increased or I feel has increasingly come up with um with clients that we speak to. And Alan, I know that's a a bit of a differentiator too for Dell, is how how you all view data. Tell me why that's important right now and it's it's continuing to gain momentum.

SPEAKER_05:

Uh I I think there's two things. If you go look at the open models that are setting out there that we're all that many people are still used to subscribing to, whether it's Copilot or Chat GPT or other tools, Jimmy, it doesn't matter which one. Um, that you know, those achieve some type of productivity, right? Early return on employee, uh, you know, productivity gains. But what is the one thing they weren't trained on? Your organization's data. Sure. And uh without mentioning customers, I'll just say like the number one thing that CEOs are are concerned about is intellectual property. If I'm going to achieve an outcome, like think about what I said of sales chat and what we brought, what did if I'm going to enable my sellers to be able to spend more time with their customers and partners to help you know all of our customers achieve better outcomes, guess what? I better make sure if that's powered by all my IP, all Dell IP that we bring to market, it better be protected, it better be sovereign, it better be on premise and governed, right? And have some regulatory process set around it. So it's that number one thing that every C level uh executive and board member is asking for those public companies of like, what are you doing to protect your IP? That's more important than ever right now. And then since I I took a global role recently, it's been very interesting to see how geo you know political climate has changed, you know, that to another level uh of people wanting to use models that are in region, right? Trained in like in Europe, leaning in very heavily with Mistral or uh one of our partners and one of the reasons we brought them on board up in uh you know Canada with Cohere because it's based in Toronto. So trained on regional. So it's it's not just the data of the IP, but also that model stack of what does that look like? Yeah. And I think it goes down, I'd like to what you said there a little bit about security, because many people think about security at the high level of AI, just securing AI. But uh, you know, one thing is just the output. Think about the output. I need to protect my employees, or as I start to, you know, we're stepping into external use of AI very closely because where can we safely experiment? I can say more safely experiment internal with my employees, right? Get that right and then move it external. So now I've got a flywheel of like what does that look like protecting my IP before I ever expose it to our external customers? But but being able to ensure that that model responds in the appropriate manner, right, is extremely important. And was it trained on not just my data, right, but the correct amount of data so that it doesn't hallucinate and make wrong decisions, too. Um, so I I think all of that comes into play when you think about sovereignty around a uh around data. Yeah.

SPEAKER_01:

But Brian, add locality, right? Data data sovereignty, data locality, you have to bring them both with the customers that we're dealing with, right? Um, and that's why it kind of plays into the geopolitical thing, like I said. It 100% does. And that's why it also plays into bringing AI to your data because you cannot, by law, in some countries, move data from that location. So you have to keep that in mind. You have to bring the AI to your data. You can't have one massive repository. First off, it's a bad idea because then you start moving data around, you have no idea what your single source of truth is at that point in time. And now what do you do? Um, so data sovereignty, data locality have to be said in the same sentence, almost in my mind, when we have those conversations with the customer because of who we're dealing with.

SPEAKER_05:

I like what she said there too, because we haven't really even talked about it today, but data at the edge, right? Like all the data is expected from a growth perspective, right? 86% of the world's data, according to Gardner, is supposed to be created at the edge, right? Uh over the next five years. So if it's going to be that high growth rate, we know it's all unstructured data set, normally streaming data in OT and other edge-based locations. How do we keep that sovereign? That becomes even more interesting, right? Oh, by the way, that's where I could take those streaming data sources and ingest them into the Dell AI data platform, right? To build the data products out, you know, as as uh getting use cases and outcomes from the fact.

SPEAKER_01:

Well with that point, no, again, that's it brought a funny story that we talked in another interview with the with a publication in which I brought um on Fridays. I like to watch you know non-thought kind of TV. And I I I brought an analogy of a TV show called Gold Rush um to the edge. We've talked around petabytes of zettabytes of data at the edge, right? Well, we're not going to ingest and save zettabytes of data. Um and I the analogy of the data.

SPEAKER_05:

The value of that data tends to decline over time. Exactly.

SPEAKER_01:

But we're gonna we need to bring in and hold on to the right data, right? And I use that analogy of gold rush. Like when you go look at somebody who's mining gold, just like your data, um, there's gonna be overburden, there's gonna be noise, and there's a lot of noise in the OT data. And the secret isn't how you design that technology stack that allows you to capture the data that you need and then throw away the noise, right? To build them more efficient. We gotta stop trying to go build 600 KW solutions because we don't have unlimited power in this planet. Um, so what we have to do is build more efficient solutions, right? Not just because it provides better return on investment, but from a climate standpoint, yes, but from a climate standpoint, we have to be more efficient in what we're doing and more effective in what we're doing and not just hitting it with the biggest hammer we can find.

SPEAKER_04:

Yeah.

SPEAKER_01:

And I think that's the key component as well that happens with our partners and working with them is how do we go out there and build a better machine, right? How do we take what we have today and make it better and redefine it and be agile enough to do that? And that's where the AI factory leads to that.

SPEAKER_00:

Yeah. We could spend another hour or two or five or whatever it might be on power, cooling, efficiency, and so on and so forth. So we'll put a pin in that one and save it for a future episode. Um, at the risk of asking a little bit of a silly question here, you're talking about locality of data, you're talking about data produced at the edge, which is just exploding. Does that signal that we need to have, you know, if you think about a factory more as a physical presence, does we have to have multiple AI factories around the world here? Or how does how do those two forces compete?

SPEAKER_05:

It's interesting. So I'm gonna go back to uh what we said uh bringing AI to your data. Yeah. A lot of those edge use cases back to you into the streaming uh, you know, OT example we were just giving, it is many times bringing GPU-enabled compute for massive parallelism right there, you know, at the edge to your point. So remember, the AI factory is not a SKU, it's not a single button. Yeah, the factory might be a combination of what I have in my data center, you know, with a large language model. Oh, with some computer vision use cases living at the edge. I might take, I love what you said there of like the relevant data from that edge use case from maybe some of that computer vision stream and the output that we've seen in manufacturing, might go upstream to the product managers to make better decisions on what's happening and how they would manufacture, you know, build a better product that could be manufactured more easily, for example.

SPEAKER_00:

Okay, got it. Uh, well, we're at the bottom of the episode here. Any just clear-cut signals that uh our listening audience, whether they're you know practitioners or executives, like what are some of the priorities? We're at the end of the year, we're going into 2026. What are the priorities that we need to get straight now so that we're winning in 2026 as it relates to AI factory? Alan, we can start with you and then Matt, you can close us out.

SPEAKER_05:

Sure. I I I'm just gonna go back to what we talked about this whole episode. Yeah, you've got to define your use cases. You got to think about what your strategy is overall. How how are you differentiated in the marketplace? Because that's what's gonna define your strategy overall. And then start to think about how do you scale, right? Back to think about that one that provides the highest potential. And I I I mentioned a little bit about return on future, but you know, return on employee is a huge one. This whole idea of sales chat. Think about that of being able to give 70 plus percent. This is already what we're tracking, 70 plus percent of time back to sellers to go meet with customers and partners. Yeah. What is that gonna do overall? Now, if they can spend more time, they're gonna find more opportunity, they're gonna create more value for their customers and partners, and we're gonna see more, you know, revenue as a company overall because they're gonna be significantly more efficient. I mean, large companies get you know overburdened. So I think number one, defining that strategy. Yeah. Number two, then we think about the factory components. We've talked about what that all looks, you might look like starting with data, like in our case, the Dell AI data platform and some of the infrastructure components to build an efficient architecture, to all of Matt's point, right? To make sure that we've got something highly scalable. The customer can start with, like we actually, and part of that Dell automation platform, we're gonna build out things at the foundation level for a customer to think about pouring the foundation of your house. And then they have add-on modules that they can just continue to scale and grow. So they don't have to go make the you know$100 million plus investment to get it done. And then I think the third thing is we've got to get it out there. We got to put it in inference, and then we got to start to scale it like quickly. Yeah, like hey, I gotta iterate, I'm gonna do that. Now I'm gonna get it to scale, and then we're gonna put it back through that flywheel that Matt and I have been talking about. It's never, never, you know, ending development.

SPEAKER_00:

Yeah, purposeful build, scale, Matt. Not necessarily priorities, but how how finish us on how we get there to get from purposeful AI to building it to scaling it.

SPEAKER_01:

So first, don't be in a hurry to do the wrong thing, right? Um, think about what you're wanting to do, what outcomes you're wanting to drive to, understand that there are options. So don't be in a hurry to make a mistake. Start making informed decisions. You don't know what you don't know. Ask questions. What is the art of the possible? What technologies are available? Maybe who's done what that's similar to what I'm doing today that will help get me further than they ever got. Because now I can make adjustments along my journey that maybe had created a roadblock for them that if I can see that roadblock, I can go around it. Um, so it again, it's sometimes slowing down so you can go faster. Yeah. I use that term a lot in the AI proving ground when I talk to our customers. I tell them up front, don't get frustrated if I slow you down before I accelerate you. Because if we don't have a purpose in what we're doing, then we may just be going in circles and getting nowhere.

SPEAKER_04:

Yeah.

SPEAKER_01:

So to me, the number one thing is understanding where you want to go, understanding that the skill sets and the people that you'll need is across multiple business units inside your organization. Um, set a goal and be agile in how you're gonna get to where you're at. It's gonna be okay. There's gonna be challenges along the way, but if you bring everyone together, I'm guaranteeing you have intelligent people inside their organization that will help get them around the different Roblox they run into if they include them. It can't be a single business, it can't be a single person. This is truly a group effort. And when I I call it the the world's largest uh uh team team sport in IT. And and when I explain that, like I literally tell customers that worldwide comes to them like a relay team, right? We have to have different members of the relay team that help get the the baton is the challenge. Sure. And we have the ability to go from a consulting standpoint to an education around technology standpoint to proving the standpoint to building the integrated solution that then gets put in their data center that builds their first ever AI factory. And then from there, we can build upon that, right? And take the same approach to it, now doing it in a different manner and heading to a different direction, but being agile enough to scale and grow in the direction in which their use cases do.

SPEAKER_00:

Yeah. Well, excellent stuff from the both of you, and thank you again for taking the time out of what I know is a busy schedule during this week, but really any week during during the year. So uh thank you to the two of you again. We'll have you on soon. Absolutely. Pleasure joining you. Thank you both. Awesome. Okay, okay. Thanks to Alan and Matt for their time in today's episode. For enterprise IT leaders, a key closing insight. AI factories aren't a single product you buy, they're a living system you design, govern, and keep evolving. When you get that right, strategy, data, architecture, and people all working together, the tokens coming out of the factory stop being science project and start becoming real business value. This episode of the AI Proving Ground Podcast was co-produced by Nas Baker, Kara Kuhn, and Diane Devry. Our audio and video engineer is John Knoblock. My name is Brian Felt. Thanks for listening, and we'll see you next time.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

WWT Research & Insights Artwork

WWT Research & Insights

World Wide Technology
WWT Partner Spotlight Artwork

WWT Partner Spotlight

World Wide Technology
WWT Experts Artwork

WWT Experts

World Wide Technology
Meet the Chief Artwork

Meet the Chief

World Wide Technology