AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology
AI deployment and adoption are complex — this podcast makes them actionable. Join top experts, IT leaders and innovators as we explore AI’s toughest challenges, uncover real-world case studies, and reveal practical insights that drive AI ROI. From strategy to execution, we break down what works (and what doesn’t) in enterprise AI. New episodes every week.
Before You Scale AI, Fix Your Data
AI is working. Your data probably isn’t.
As enterprise AI moves into production, a new constraint shows up fast. Not models. Not compute. Data.
In this episode, recorded live at NVIDIA GTC, NetApp’s Tore Sundelin and WWT’s Derek Elbert get into what’s actually slowing teams down. The shift from clean, structured data to messy, high-value, unstructured data that’s harder to find, govern and use in real time.
This is where things start to break. Data spread across systems. Inconsistent policies. No clear way to trust what’s being used.
And once AI depends on live enterprise data, those gaps don’t stay hidden for long.
Because at this stage, AI doesn’t fail at the model. It fails at the data.
Support for this episode provided by: Riverbed
More about this week's guests:
Derek Elbert is an AI Practice leader at World Wide Technology focused on hybrid cloud AI and high-performance architecture. He specializes in networking and storage within modern AI stacks, helping organizations design and scale infrastructure to support data-intensive, production-grade AI workloads.
Tore Sundelin is a product leader at NetApp focused on enterprise AI and data platforms. With 20+ years at Microsoft, Google and NetApp, he has built AI-powered products at global scale, including early machine learning capabilities in Microsoft Office. He specializes in turning complex data, governance and platform challenges into production-ready AI solutions.
The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.
Learn more about WWT's AI Proving Ground.
The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.
Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.
AI Breaks Without Data
SPEAKER_01The window to fix your data foundation before agentic AI scales across the enterprise is quickly closing. So if you're trying to make AI usable, governable, and safe in production, your bottleneck isn't likely to be the model. It's probably your data pipeline, your governance posture, or getting the mounds of unstructured data ready for agentic AI at scale. This is the AI Proving Ground Podcast from World Wide Technology. And on today's show, we talk with NetApp's senior director of AI product, Tore Sundelin, and WWT's lead for hybrid cloud AI and high performance architecture, Derek Elbert, about the shift happening inside enterprise AI: why storage, data classification, policy enforcement, and AI-ready data pipelines are becoming mission critical as organizations push toward production. We spoke with Tore and Derek while at NVIDIA GTC just a few weeks ago, and the start of our conversation gets straight to the question at the center of this shift: how do you make enterprise data usable, secure, and ready for AI at scale? So let's jump in. Well, Derek, Tore, thanks for joining us here in the makeshift studio we have here at NVIDIA GTC. How are you doing today? I'm doing great. I love the studio. Absolutely. Tore, how you been?
SPEAKER_02It's great, man. The GTC conference, the atmosphere, it's just so electric and energizing.
SPEAKER_01Yeah.
SPEAKER_02I look forward to it every year.
SPEAKER_01Absolutely. I mean, they bring it every single year. It's never short on news, announcements, insights. But I mean, just curious from the top, what feels different this year from years past?
SPEAKER_02That's a good question. I'll say, look, AI leaders like Jensen and NVIDIA have been talking about agentic AI and the agentic wave for a number of years. What feels materially different to me this year is just how much of a democratized reality it seems to be becoming. Like how much real momentum and actual transformation and enablement is happening now versus, you know, some great demos and sound bites.
SPEAKER_01Yeah.
SPEAKER_02That seems to me materially different this year.
SPEAKER_01Yeah, I mean, things are moving quickly based on what he says and a little bit of your own observation. I mean, how is that changing what we're seeing from an enterprise AI and IT standpoint?
SPEAKER_02So I think that during the keynote, I listened to the whole thing, right? And you could see how important data has become, right? The whole first, what, 30 to 45 minutes of the keynote was all about how CUDA has evolved, and it's now become cuVS and cuDF, around unstructured and structured data, and how that plays into the actual NVIDIA ecosystem. And it's something that you didn't necessarily always hear, all those integrations from a CUDA-specific standpoint.
Data Is Now the Bottleneck
SPEAKER_01Yeah. I mean, Tore, data feels like it's been that bottleneck, you know, so to speak, for a while now. What is it that enterprises are underestimating or not thinking about as it relates to data strategy that's standing in the way?
SPEAKER_02Look, if you had asked me that exact same question a year ago, I would have ticked off a pretty long list.
SPEAKER_01Yeah.
SPEAKER_02A year ago, the tenor of the conversations that we were having with our customers and partners, a number of them knew the promise of AI was there. They were doing proofs of concept, sure. They were seeing, you know, other folks prominently seeing an actual transformation of their business. And so they were really focused on things like models, like generative, like development platforms and toolkits. And I think at that point in time they were vastly underestimating the criticality of data, right? Getting the right fresh data, getting the right access patterns for data, governing and monitoring properly. That has shifted dramatically, honestly. In the conversations I've had in the last three to four months with the vast majority of enterprises, it's no longer, yeah, I hear you, data's really important, we're kind of figuring out all this tech stuff, right, the tech stack first, and we'll worry about data. Now the tech stack is real. We've got people out building applications, connecting to data, and now the data management piece is really front and center. So honestly, I feel like most of the enterprise leaders that I talk to are no longer underestimating just how critical and challenging and important data management is. Yeah. So I'm gonna piggyback right off of that, right? Because I've been involved in the NetApp ecosystem. I'm also obviously involved in the WWT ecosystem with my role. NetApp has been focused on data management for a really long time, right? It's been a staple in the portfolio. And I don't know how many people actually listened from an enterprise standpoint, right? But the fact remains that we have always preached, through our whole AI journey, starting some two, three, even probably longer, 10 years ago: data is the most important part, right? If you want good AI, you gotta have good data.
And what's funny is that the enterprises that have actually taken the time, right? Whether they knew they were doing the right thing or not, they took the time to activate and be proactive on their data: how they're going to organize it, what the strategy is around it, you know, do they have access to it, all those types of things. They are the ones that are ahead. Everyone else wants to deploy the GPUs right away, but what's happening is they've used all the good data that they have, right? They've scraped the top, and now they're like, okay, we have all this proprietary data. What are we gonna do with it? How are we gonna access it? How are we gonna govern it? And now they're slowing down because of data. And I feel like especially this year, right, it'll be put at the forefront that if your data is not ready, then you're not going to move into production with actual AI applications. Yeah. That's the state we're at, especially with companies that have been around for 30, 40, 50, 100 years, right?
SPEAKER_01Well, I mean, Derek, stick with you. You're talking about the companies that you see that are doing it right. Maybe articulate a little bit about what they are doing right. And is the right thing going to shift in context of what we're hearing here at GTC?
SPEAKER_02I don't think it's necessarily going to shift, but the ones that were proactive, right? So, like I said, NetApp's been around and has had a data management story for a really long time. It's been part of the portfolio; it was data fabric. Now, you know, data management is all over the portfolio. And others have similar things, right? And what's happened is, you know, the title of CDO used to be very prevalent, right? And now I'm seeing some actual consolidation of that role at the C level, right? But if you look at how we go to market from a services standpoint, right, or a consultative standpoint, we've always had some type of data strategy methodology, or even a consultative engagement where we would come in and help a company fix their data and help develop that strategy. And there's a few that have taken that initiative, right? Whether they knew what they were doing or not, they probably just felt like they were doing the right thing. Maybe they had data integrity problems before and they were fixing them. It's those companies that took that leap to modernize their data that are actually ahead.
SPEAKER_01Yeah.
SPEAKER_02It's the ones that were like, nah, you know, our data is all over the place. It's in silos over here, it's in silos over there. We don't need to do anything with it, right? It just is what it is, right? It's where it is, and that's where it's going to stay. We'll build compute next to it and move forward. Well, they're the ones that are behind now, right? Or they're having to slow down in order to catch up, right? Essentially.
Your Data Is a Mess
SPEAKER_01I mean, Tore, where do you see the market right now? I mean, is it more towards the we'll-put-compute-next-to-it, we'll-deal-with-it-next-year-or-whatever? Or are you starting to see organizations more aggressively move to modernize and put themselves in position to capitalize on this very real opportunity that's at hand right now?
SPEAKER_02Yeah. So look, once you start talking about generative and agentic, unquestionably unstructured data, and in many cases pretty large corpuses of unstructured data, are absolutely critical to get highly relevant answers, right? And highly relevant insights, and to take action. On maturity: when we meet with our customers and we talk to them about their data estate, their data corpus, many of them have pretty good solutions, honestly, that they're pretty happy with around structured data.
SPEAKER_00Okay.
SPEAKER_02Yeah, around where they store it, how they protect it, how they give access to it. But for the vast majority of the companies we talk to, when you talk about agentic and generative workflows in production using critical enterprise data, it's a very different story. And there are a number of industries that have regulatory requirements around where data can live. There are a number of companies that have policies around where data can live. And frankly, there are now data sets that are of sufficient size that when you want to run AI models against them, data gravity matters.
SPEAKER_01Yeah.
SPEAKER_02And so increasingly, when customers talk to us about their unstructured data corpus, it's around: how do I respect data sovereignty? How do I respect data gravity? How do I bring the right models and the right compute to where I want my data to live, and where it's appropriate for it to live?
SPEAKER_00Yeah.
SPEAKER_02And that has been a big shift. And there are a lot of companies that are realizing that and are really looking for good solutions and solution providers and partners to address that. So when we think about where we're at, right, in the agentic world, the problem with data has always been that it's actually a human problem, right? And now we're talking about a scenario where it is systems that are going to access the data. They're not going to inspect that data. They're going to access the data that you give them access to. So when you start talking about role-based access controls for these systems or for these agents, you have to make sure that it is very clear what data they can access, with which classifiers, because they're not going to necessarily inspect it, right? There's always been a human in that loop around the data. And at times I think it's caused some friction, because certain organizational pieces within a company like to, you know, hold on to their data or conceal their data from the rest of the organization. And you're not going to be able to conceal it once you have an agent out there that is acting on behalf of a system.
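The access gate Derek describes, where an agent may only read data whose classifiers its role explicitly permits, can be sketched in a few lines. This is a minimal illustration of the idea; the class and field names below are hypothetical, not any vendor's API:

```python
# Sketch: classifier-aware access control for AI agents.
# Agents never inspect content themselves, so the gate is the
# metadata (classifiers) attached to each document at ingest.
from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    classifiers: set  # labels attached upstream, e.g. {"pii", "hipaa"}


@dataclass
class AgentRole:
    name: str
    allowed_classifiers: set  # classifiers this role may read


def can_access(role: AgentRole, doc: Document) -> bool:
    # Allow only if every classifier on the document is explicitly
    # permitted for this role (subset check).
    return doc.classifiers <= role.allowed_classifiers


support_bot = AgentRole("support-bot", {"public", "internal"})
doc_ok = Document("faq-001", {"public"})
doc_phi = Document("chart-113", {"internal", "hipaa"})

print(can_access(support_bot, doc_ok))   # True
print(can_access(support_bot, doc_phi))  # False -- hipaa not permitted
```

The key property is deny-by-default: a document with an unrecognized classifier is unreadable until someone deliberately grants that classifier to a role.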
Agents Raise the Stakes Fast
SPEAKER_01Yeah. Well, I mean, with open claw or nemoclaw, the wave of agents is coming. Right. Or if it's not already here. And not to sound alarmist or anything, but are we dealing with like an expiring game clock here, where organizations need to get their data in order so as to not only capitalize but not put themselves at risk?
Your Pipeline Isn’t Ready
SPEAKER_02You know, I might not frame it quite as dire, sure, as that. Yeah, but it's a really apt point. In terms of critical data, look, data is unquestionably a company's most valuable asset, yeah, particularly in the age of AI. And when you talk about democratized AI, and you talk about agentic and multi-agent systems and semi-autonomous and autonomous agents, having a really clear idea and a clear set of policies and practices around what data you have, who can access it, for what purpose, what portions of that data, and in what form they can access it, is going to be absolutely essential. So that's kind of one dimension of the problem. Another really important dimension of the problem is, look, companies and individuals are absolutely under, whether it's internally or externally administered, tremendous pressure or desire to become more productive. And so as a company, your choice can be: I'm gonna put into place these really good, robust governance policies, monitoring policies. Or you're gonna have employees, right or wrong, creating shadow pockets of data. People call it shadow AI. Yeah. So, you know, you talk about an expiring game clock. It is unquestionably an imperative for enterprises to transform their business with AI, and for enterprises to do that safely and appropriately, unquestionably, governance and understanding and good policy and administration are essential. And it becomes exponentially more important as soon as you have agents and networks of agents working autonomously against that data. Because they're not gonna inspect it, right? They're gonna act based off the access that they have been given, and they're not gonna necessarily know what the data is: if it should be shown, is there PII, is there HIPAA, like what are the classifiers? That is up to the person that is cleansing that data to do beforehand, right? Or the person that's giving them the access.
So I mean, I think it becomes very important, and it could be very risky for companies that don't do it right. Not only is it going to slow them down, but it's also going to potentially put them at risk of exposing data that they don't want to expose.
unknownYeah.
SPEAKER_01I mean, so Derek, it's no longer necessarily about where the data lives. Now it needs to be usable by AI. So what's the priority? Is it, you know, capacity? Is it data freshness? Is it more the policy? Or, I mean, are you gonna have to prioritize all of it?
SPEAKER_02So where I think it needs to happen is there has to be a firm data pipeline, right? And a lot of what needs to happen is in that curating, ingesting stage, right? So you need to be able to properly impose policies. I think policy is key as you are ingesting that data and curating that data, right? There's a whole data cleansing pattern that has to happen as part of that curation, and it has to start in the pipeline, right? Now, the good thing is that NVIDIA sees it as a value, right? Storage, data, is no longer a commodity; it is something of value, and NVIDIA sees that. So they've started coming out with conceptual ideas, like an AI data platform, where that makes sense, and it fits right in that data pipeline.
SPEAKER_01Yeah.
SPEAKER_02Now, how companies act on that, I think, is still open. I think there are still some organizational issues that we need to overcome, right? Such as owners of the data, and who's going to allow the changes that need to happen, right, in that pipeline. Those are human problems. Again, we're talking about human problems. And if they can get over them, then I think it makes it easier to fix things. But there's a whole if, right? If they can overcome those organizational dysfunctions, at times, with the people that are in place that own that data.
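The ingest-time pattern Derek outlines, cleanse, classify, then enforce policy before data ever lands in the AI-ready store, might look roughly like this. The stage names and the single SSN pattern below are simplified placeholders for illustration, not a production cleansing pattern:

```python
# Sketch: policy applied inside the ingest pipeline, not after the fact.
# Each record flows cleanse -> classify -> policy-check; records that
# carry a blocked classifier are quarantined instead of stored.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN shape, toy rule
}


def cleanse(text: str) -> str:
    # Minimal cleansing step: normalize whitespace.
    return " ".join(text.split())


def classify(text: str) -> set:
    # Attach a label for every PII pattern the record matches.
    return {label for label, pat in PII_PATTERNS.items() if pat.search(text)}


def ingest(record: str, blocked: set = frozenset({"ssn"})):
    text = cleanse(record)
    labels = classify(text)
    if labels & blocked:
        return None, labels  # quarantined: policy enforced in the pipeline
    return text, labels


print(ingest("order  #42 shipped"))           # ('order #42 shipped', set())
print(ingest("customer ssn 123-45-6789")[0])  # None -- blocked at ingest
```

The point is the ordering: classification happens while the data is moving through the pipeline, so nothing unlabeled or disallowed ever reaches the store agents will read from.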
SPEAKER_00This episode is supported by Riverbed. The Riverbed platform provides open full stack observability, enabling customers to optimize their digital experiences by using AI to prevent, identify, and resolve IT issues.
If You Can’t Trust It, Don’t Use It
SPEAKER_01Tore, you guys look at this problem over at NetApp. I mean, what's the architectural response that helps satisfy some of these issues that are, you know, urgent or, you know, coming down the line?
SPEAKER_02Yeah, I mean, I think Derek hit on some really important points. There are a few elements that have to be part of a company's data governance strategy, particularly once you start talking about unstructured data, where these challenges get that much more difficult. One, as Derek was talking about, is you need a pipeline that's able to actually inspect data that's flowing through it in near real time. You know, in the data path, as well as the access path. Be able to inspect the data and be able to appropriately classify the data: this data contains this type of personally identifiable information, this data is a patient medical record, this data is a, you know, person's favorite TV show, right? There's a whole spectrum of sensitivity of data. And so, in an unstructured world where you don't have the benefit of known schemas and structured query and process controls in place, the ability to inspect that data as it's flowing through, classify the data, and enrich the metadata about that data becomes absolutely critical in governance for generative AI and the agentic era.
SPEAKER_00Yeah.
SPEAKER_02So that's one component. And then the second is, once you have that enrichment and understanding of the data, which happens in that data path as it's created and changed, because we're talking about huge corpuses of data that are changing at, you know, incredible speeds, it's then the ability to define policies based on that understanding of that data and be able to apply those policies dynamically as that data is flowing through. It's monitored, it's classified, it's enriched, it's prepared for AI. And then there's the access path, of trying to access that data by a particular application or agent or RAG workflow. And so really both of those components are very important. Understanding the data in near real time and classifying it, and then good policy-based governance of that data, are both essential.
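The dynamic, classification-driven policy application Tore describes could be sketched as a small rule table evaluated at the access path: each rule is a predicate over the classifier metadata enriched in the data path, and the first match decides the action. The rule names and actions here are hypothetical, not a real policy language:

```python
# Sketch: policy decisions driven by metadata enriched in the data path.
# Each rule pairs a predicate over the document's classifier labels
# with an action; the first matching rule wins.
RULES = [
    ("block-phi", lambda labels: "hipaa" in labels, "deny"),
    ("mask-pii",  lambda labels: "pii" in labels,   "redact"),
    ("default",   lambda labels: True,              "allow"),
]


def decide(labels: set) -> str:
    for _name, matches, action in RULES:
        if matches(labels):
            return action
    return "deny"  # deny if no rule fires at all


print(decide({"hipaa", "pii"}))  # deny   -- PHI rule fires first
print(decide({"pii"}))           # redact -- serve a masked version
print(decide(set()))             # allow
```

Because the rules read only labels, re-classifying a changed document in the data path immediately changes what the access path will serve, which is the dynamic behavior being described.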
SPEAKER_01Yeah. And is that starting to get into AI data engine, AIDE from NetApp?
Security Can’t Be an Afterthought
SPEAKER_02Yeah. And I think Derek framed it quite well. When you think about an AI data platform, an AI data pipeline in, you know, the generative age, the agentic age, there are really a couple of big constituent problems. One is just understanding the data you have. Yeah. And, you know, when you talk about these companies that have been around for a while, of significant size, the size of their global data estate and the distributed nature of their data estate, and, as you said, the siloed nature of the data estate, in some cases for very good reason, being able to look across that estate, understand what you have, and structure and index or catalog the data estate is the very first problem. Understanding the data you have, what lives in it, and which data you want to and can bring to bear for a particular problem. And the second, once you have that mapping and understanding, is, just like we were talking about, governing that data: being able to classify and understand it as it flows through, having policy-based access control that monitors and controls access to it. And then finally, it's making that data ready for production workloads that happen at high scale and speed. So it's things like generating vector embeddings and optimizing vector indices and creating endpoints for them. Those are really the three building blocks of a complete AI data pipeline. And those are the three building blocks of the AI data engine that we've been building with NVIDIA. Yeah. The good thing is that if you look at NetApp's portfolio, right, and I'm very intimate with NetApp's portfolio, NetApp has the most secure portfolio from a data storage perspective out there, right? So when you start talking about where the AI data engine fits and the components of it, there are definitely SaaS products out there that are acting on some of the pieces, right? Around metadata caching and semantic search. Yeah. But that's just one piece.
So that is one arm of the AI data engine. Then you move across to actually classifying that data, right? Inspecting what we expect, right, to come out, and putting those PII guardrails in place around that data for policy. And then the last piece of it is actually applying security best practices to that data, including guardrails, right? Because as you start working with data and you're doing it in a generative sense, the foundational components of security for your data still apply, right? But now you have to start thinking about, okay, what about the input, right? What about the input tokens? What about the output tokens? What about the LLM itself, right? What about the conversation with that LLM as we start inferencing with it? All of those things have to have guardrails put around them, and it all plays right into the NetApp portfolio being as secure as it is.
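As a toy illustration of the third building block Tore names, preparing data for production by generating vector embeddings and serving nearest-neighbour lookups, here is a dependency-free sketch. A real pipeline would use a trained embedding model and an optimized vector index; bag-of-words counts and a linear scan stand in here just to show the shape of the flow:

```python
# Sketch: embed documents, index them, answer similarity queries.
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words term counts.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# "Index" step: precompute a vector per document.
index = {doc: embed(doc) for doc in [
    "quarterly revenue report",
    "employee onboarding guide",
    "gpu cluster sizing notes",
]}


def search(query: str) -> str:
    # Linear scan; a production index would use ANN structures instead.
    q = embed(query)
    return max(index, key=lambda doc: cosine(q, index[doc]))


print(search("revenue quarter"))  # quarterly revenue report
```

The design point is that the expensive work (embedding, indexing) happens once at preparation time, so the access path only pays for a lookup.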
SPEAKER_01I mean, Derek, you're on the ground talking to clients of all kinds, you know, every day. That type of progression that you just described, I mean, is that a mindset shift that organizations are ready for right now, or is there a little bit of education that needs to happen to get them there?
SPEAKER_02There's definitely some education, right? But the concept of security and cyber resiliency and data protection, it's become very top of mind, right? There have been enough people that have been attacked over the last few years that people are really starting to pay attention. So when you start having an actual security conversation, and then you can say, oh, hey, by the way, NetApp has the most secure storage platform on the market, it really plays into that data conversation, right? Because naturally you're gonna move from NetApp into a data conversation, right? So if we have to use security to get in, and then we evolve that into an AI conversation around generative AI, great. I'm all about it. But I think it's going to take a mixed view from these enterprises, right? Because you're gonna have those very traditional storage individuals that aren't going to fully understand all of the fundamental components of AI. And that's where these extra layers that we're talking about come in, around data engineering, more data-science-like work, right? Around pre-processing, embedding, chunking, all of those things. You need to have those resources within your organization in order to take real advantage of something like AIDE from NetApp.
Storage Is Back in Charge
SPEAKER_01I mean, certainly no shortage of news here at GTC. A little bit of a, you know, future question here. Is what we're talking about here addressing the needs of the current state, or is it starting to address where we think the market's going? And then, how do you see storage evolving into a more and more critical conversation as it relates to AI adoption and success?
SPEAKER_02It's a great question. Lots of layers to that.
SPEAKER_01Just tell us about the future, you know?
SPEAKER_02All right, here we go. If you looked at my stock portfolio, you wouldn't be asking me that prognostication question. So yeah, it's a really important question. It's a multi-layered question. Look, at the end of the day, the fundamental building blocks that we talk about, data mapping and understanding, data governance and governance policy, data transformation and preparation, they're absolutely required for the now widely and commonly deployed types of architectures you see today, like RAG, for example. They are also absolutely essential for agents, uh-huh, and they are absolutely essential for multi-agent swarms, right? And so those building blocks do not change. And so having a data platform that can act on that data in an efficient way, a secure way, and really put those tools into the hands of the practitioners, the storage admins, the data stewards, the data scientists, the app developers, is essential, whether it's what was announced last year at GTC, what's announced this year at GTC, or what's gonna be announced five years from now at GTC. That's a constant through line. And so the way we're trying to build and structure this AI data platform is that those building blocks are in place, and they evolve and expose new capabilities, new data modalities, new architectures, new scale points as the AI technology evolves. Right. No, no, go ahead. Okay. So when I think about this, right, I'm gonna go backwards to go forwards, if that makes sense. Yeah, so the AI data platform from NVIDIA was actually announced a year ago, right? And now you start to see NVIDIA turning the wheel around how CUDA is going to be applied to the data, right? So getting very, very prescriptive to some aspects around the now, right? The now is agentic. As we look forward, I'm not gonna go too far out, right? But we are right on the cusp of physical AI.
And as you start talking about actually enabling physical AI, all of those individual, whether they're robots or individual physical components, right, where AI is going to be active, they all are going to have data in place, right? So we're right there, and I think this fits right in the middle. Presently, NVIDIA is trying to get people to understand data is of value, it is not a commodity, yeah. And what you store your data on and how you do it is very, very important as we start moving into where we are today, with agentic, and even more so, probably, as we move into that physical space. And I'm not gonna go much past that, because we all know that we come to this conference and Jensen gets on stage and does his keynote, and he's an innovator, and there will be something new that I can't foresee, because I don't think in that mindset. I'm not the innovator, I'm the realist usually, and I'm the one that people don't like to bring into a call, because I'm like, no, that's not going to work, we need to re-architect it to make it work, right? And so I slow things down to speed it up, or to get it right. So I'm not gonna go much past where we're at and where we could be. Yeah. You have one other dimension? Absolutely, yeah. That question, specifically about storage systems. So we've been talking about the data platform, the data layer. Storage systems are absolutely essential in AI. You know, up to this point, in that first generative wave, things like feeds and speeds for model training and model fine-tuning were very critical.
SPEAKER_00Yep.
SPEAKER_02As soon as you move to inferencing, and you move to inferencing of live production data, of live enterprise data, though, the corpus is changing. Being able to load and unload context windows, being able to monitor and govern as that data is created and flows through the data path, is absolutely essential. And the scale at which you have to perform those operations, and the latency at which you have to perform those operations, become more and more challenging. And so these storage architectures and systems underneath this data layer, having the right types of compute, having the right types of memory caching systems, in conjunction with the right silicon and, you know, mathematical models from folks like NVIDIA, as well as the right context-building models and caching models and intelligent caching and cache-invalidation models, those storage systems and those data systems and data layers have to evolve hand in hand to really achieve that potential and promise at scale, particularly once you start talking about multi-agent systems and these huge distributed corpuses of data.
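The intelligent caching and cache invalidation Tore mentions can be illustrated with a minimal version-aware LRU cache: when the live corpus changes, a document's version bumps, which makes any cached context for the old version stale. This is a sketch of the concept only, with hypothetical names, not a storage-system implementation:

```python
# Sketch: version-aware LRU cache for inference context.
from collections import OrderedDict


class ContextCache:
    def __init__(self, capacity: int = 2):
        self.capacity = capacity
        self.store = OrderedDict()  # doc_id -> (version, context)

    def get(self, doc_id: str, version: int):
        entry = self.store.get(doc_id)
        if entry and entry[0] == version:
            self.store.move_to_end(doc_id)  # refresh LRU order on hit
            return entry[1]
        return None  # miss, or stale: the corpus changed underneath

    def put(self, doc_id: str, version: int, context: str):
        self.store[doc_id] = (version, context)
        self.store.move_to_end(doc_id)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used


cache = ContextCache()
cache.put("policy.pdf", 1, "v1 summary")
print(cache.get("policy.pdf", 1))  # 'v1 summary'
print(cache.get("policy.pdf", 2))  # None -- version bump invalidates
```

The version check is the invalidation hook: the data path bumps the version as it monitors changes, and the access path never serves context built from a superseded copy of the data.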
Fix This Starting Monday
SPEAKER_01Yeah. Uh, we're running up on time here, so I'll go ahead and close on this question, Derek. I mean, based on what Tore just said here, you talked about how you're kind of the guy that's like, all right, let's talk about how we're gonna make it happen. Yeah. What are some of the basic steps that leaders out there can take right now to make it actionable when they get home on, you know, Thursday, Friday, or Monday, whenever they get back?
SPEAKER_02So I think part of that is listening to the people that want to have a data strategy conversation. That's always been a tough conversation to have, because people don't want to actually inspect their data, but it has to happen. And if you haven't already taken that step, you should start taking it right now, right? The longer you wait, the further behind you're gonna be. So if I would say do one thing, that's it. Then there are the ones that are already there, right? They've already taken those steps. I think for them it's about finding the solution, conceptually, around the AI data platform that's going to work, and work for your data. Because we are now in the age where we're moving GPU nodes closer to the storage; we're taking them out of the generalized compute and putting them in line with the storage, so that you can do a lot of those pre-processing tasks on those GPUs and save the GPUs in the compute stack for the actual AI applications, right? And that's very important. So how customers actually start acting on that is going to be key for the ones that have already gone through the process to get their data ready.
SPEAKER_01Yeah. So, Tore, any closing thoughts?
SPEAKER_02Yeah, look, I think Derek hit on several key points there. The only additional context I'd add is, exactly as you said, think about your data strategy, and think about it in terms of those core building blocks. Do I understand the data I have? Do I have a policy for governing and controlling that data? And do I have the systems in place to transform and optimize that data for production workloads? You need to think in terms of all three of those dimensions. One of the big things that we're trying to do with AIDE is this: solutions for each of those exist today, and there are customers at different stages of their AI maturity and lifecycle and journey, but on average they're having to deploy 12 different tools and platforms to take that lifecycle from start all the way through to serving and operationalization. One of the big opportunities I think we have with AI Data Engine, and with WWT and your customer base and your AI practices, is that now we can go to customers and say, let's think in terms of all three of those dimensions. Let's talk about a data platform that's a seamless extension of your storage, that understands and monitors and classifies and prepares that data as it flows through the pipeline and serves it at scale. Here's a data platform that can provide all of those key building blocks in conjunction, to underlie the actual workloads that you run in production. So, really, the key innovations that we're building, which I'm really excited to co-develop and tune for your customers and your verticals, come down to this: here's a unified platform, it has these building blocks, and if you're serious about using your enterprise data in production workloads, you need each of these building blocks.
And you can go and deploy the patchwork quilt today, or you can look at a unified platform that scales, that's secure, that lives at the storage and data layers, and that works seamlessly with your tools and your app development. It's really about having secure, AI-ready data that respects data gravity and data sovereignty, with a unified pipeline. And I'm really excited. One last thing I want to talk about is this: when a data scientist or an AI practitioner, whoever's close to the AI application, comes to the storage team and says, hey, I need a data set, they want a data set. A data set is not just a volume that you give them out of the storage array. So in order for us to speak their language on the storage side, we have to speak in terms of data sets. Keep that in mind, right? All of the key components inside AI Data Engine are going to be necessary in order to give the data science team the actual data set they asked for, not just a storage volume.
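The distinction drawn here, a data set versus a bare storage volume, can be made concrete with a minimal sketch. The field names below are hypothetical illustrations, not a NetApp or AI Data Engine API:

```python
from dataclasses import dataclass, field


@dataclass
class DataSet:
    """Toy model of the point above: a data set handed to a data science
    team is a storage volume *plus* the metadata that makes it usable,
    such as schema, lineage, and governance tags."""

    name: str
    volume_path: str                                    # where the bytes live
    schema: dict = field(default_factory=dict)          # column -> type
    lineage: list = field(default_factory=list)         # upstream sources
    governance_tags: set = field(default_factory=set)   # e.g. {"pii"}

    def is_servable(self) -> bool:
        # A bare volume with no schema and no governance metadata
        # is not yet a data set a data scientist can safely use.
        return bool(self.schema) and bool(self.governance_tags)
```

A raw volume handoff would populate only `volume_path`; the rest is the work the speakers argue the data platform has to do.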
SPEAKER_01Right. What they need. Yeah. What they need to succeed. Exactly. Yeah, yeah.
SPEAKER_02Well, they don't understand storage, right? Yeah, or should they have to, exactly.
SPEAKER_01Yeah, you want it to be an invisible enabler. Yeah. Well, like you mentioned, solutions are out there to take advantage of right now. Certainly there's work to do to get to where we want to be, but that'll probably always be the case. Tore, Derek, thanks so much for taking the time. I know GTC is a very, very busy time for all of us, you two in particular. So thanks for taking the time.
SPEAKER_02Thank you, Brad. I appreciate it. It was good. Yeah, and really enjoying the partnership.
SPEAKER_01Absolutely. I mean, I'm very excited. For sure, for sure. Thanks again. Cool. Thanks, Brad. Okay, thanks to Tore and Derek for their time. The takeaway here: production AI depends on data readiness, not ambition. If your data is not ready, your AI strategy is not ready. This episode of the AI Proving Ground Podcast was co-produced by Nas Baker, Kara Kuhn, Sarah Kiadini, and Addison Ingler. Our audio and video engineer is John Knoblock. My name is Brian Felt. Thanks for listening. See you next time.