
AI Proving Ground Podcast
AI deployment and adoption is complex — this podcast makes it actionable. Join top experts, IT leaders and innovators as we explore AI’s toughest challenges, uncover real-world case studies, and reveal practical insights that drive AI ROI. From strategy to execution, we break down what works (and what doesn’t) in enterprise AI. New episodes every week.
AI Proving Ground Podcast
Cloud‑Native AI: A Blueprint for Faster, Safer Innovation
Cloud‑native AI is rewriting the enterprise playbook; in this episode, guest host Robb Boyd convenes data strategist Ina Poecher and cloud architect Zaid Elkhateeb to unpack why 2025 belongs to firms that treat hyperscale cloud as their default R&D lab. Reporting from WWT’s AI Proving Ground, they reveal how on‑demand GPUs, modular services and policy‑as‑code governance collapse GenAI timelines from quarters to days —without riling security auditors. Tune in for a four‑step blueprint — clean data, outcome‑driven architecture, elastic cloud services and continuous guardrails — that turns experiments into production‑grade advantage.
More about this week's guests:
Ina Poecher is a Technical Solutions Architect at World Wide Technology (WWT), and collaborates with customers and internal teams to design and validate innovative technology solutions. Working within WWT’s Advanced Technology Center, she leverages extensive experience across IT infrastructure, cloud, networking, and automation to develop and test complex architectures that drive business outcomes and support strategic initiatives.
Zaid Elkhateeb is a Technical Solutions Architect at World Wide Technology (WWT), specializing in Google Cloud Platform. He brings deep expertise in cloud engineering, data architecture, and AI/ML services, joining WWT's GS&A team from another leading Google partner. Zaid helps enterprise clients design and implement scalable cloud solutions that power innovation and drive real business outcomes.
The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.
Learn more about WWT's AI Proving Ground.
The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.
Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.
Every week, another organization discovers their million-dollar AI initiative was built on a lie the lie that you can skip the boring stuff and jump straight to the transformative part, that data quality is someone else's problem or security can be bolted on later, or that cloud infrastructure just somehow works. You know where this is going, probably. By some estimates, more than 80% of AI projects fail twice the rate of failure for information technology projects that don't involve AI. Organizations are hemorrhaging money trying to build breakthrough AI solutions, while their cloud costs spiral out of control, their data sits in silos across disconnected systems and their security policies haven't caught up to their ambitions. Its systems and their security policies haven't caught up to their ambitions.
Speaker 1:Today, on the AI Proving Ground podcast, I'm throwing it over to my colleague, rob Boyd, who recently talked with Ina Posher, a data scientist who bridges the gap between what data scientists want to build and what actually works in production, and Zaid El-Khatib, a technical solutions architect at WWT, who helps organizations escape the AI corners they've backed themselves into. You're about to discover why most AI failures happen before the first model runs and the roadmap that separates AI winners from casualties. So, without further ado, let's throw it over to Rob.
Speaker 3:What have you learned about this gap between what data scientists which, as I understand, that's really your focus here what data scientists want to build versus what actually works when it comes to real production. I would say the biggest gap that exists is that there's so many high-level ideas and so many things that I could solve so many use cases that have high value and would have high return, and then the data is just not there or there's not enough data there, and when the data is not there as that foundation, then there's not much you can do for your use case and you're just stuck trying to create something out of nothing. So that's the biggest gap on delivering a solution and really making it worthwhile to invest in. Is your data there and is it ready to go?
Speaker 2:And sometimes I guess that results as a mismatch maybe between what the organization is where they're setting their expectations versus you know what can actually be done. So it comes into. You know, pick and we'll talk about this further but picking your use case especially early on and kind of being narrow about that and understanding what all feeds into it, it sounds like.
Speaker 3:Yeah, and prioritizing it based on the data you have available, the data that you can easily access and the tools you have available to solve it. So it's not just what will provide the most value to the company, it's a return on investment. So there has to be you have to invest something and the return comes from what's available and we can't just invest, invest, invest and not return anything.
Speaker 2:All right. Well, obviously we're going to talk about that one a little bit further in detail about how we actually pull that off. But, zaid, let's get you in there here. You had told me you specialize in Google Cloud and specifically, you kind of help account teams and customers. However, it happens, they get themselves back into a corner, perhaps with cloud, with AI, with some combination of the two, and you specialize in helping bring them out. Are you seeing any kind of patterns in terms of common mistakes that are made? Any patterns.
Speaker 4:So, yeah, definitely, I think what we see a lot of is that you know, because, admittedly, right in our field there are politics that are involved, and you know, I don't mean government in the state of the world or anything like that, but I mean in how you know how a solution is given to a customer or a product is.
Speaker 4:And so a lot of the times I'll see a customer who, for one reason or another, has been talking a certain product or a certain tool for a very long time and they've purchased it and they're trying to use it and get the most value out of it, but they only really qualified the product itself to see how sound it is, you know, in a bubble, and they don't often look at what the outcome they're trying to achieve is, and so they'll purchase a product where they'll start using a technology and it doesn't necessarily fit the outcome that they are looking for, or it's not the only thing that's going to get them to the outcome that they're looking for. And so sometimes I'll come in, because cloud is so wide and there's so many technologies and there's so many tools that are hosted on the cloud cloud native, et cetera so I'll come in and further qualify the tool that they're using and see if we can mess that with another tool, or see if they're not using it correctly and see how we can get them out of it.
Speaker 2:Yeah, and if you see that happening on a repetitive basis, are there common elements that are just not taken into consideration when approaching these things? Is that it, or does it come from many different directions?
Speaker 4:You know it comes from a lot of different directions.
Speaker 4:I brought up politics a little bit because a lot of the times, right, you have C-level executives who are being told really know, told really good things by people that they're familiar with in a company already, and so, honestly, I see a lot of I would say a lot of issue come from those kinds of conversations. But then a lot of the times it's that you know, perhaps they are have a generative AI initiative that is a little too lofty and so they're trying to hit that you know initiative. Or they're trying to hit that is a little too lofty and so they're trying to hit that you know initiative, or they're trying to hit that goal a little too quickly. Or you know, maybe their you know outcome and the tool that they're using to get to that outcome is, is solid and it's qualified really well, but you know their data isn't quite ready for it yet and so we have to backtrack and so they didn't look at the entire roadmap. They, they didn't look at the entire roadmap, they were just looking at the end of the destination.
Speaker 2:Well, there's nothing more fun than finding those out when you're mid-deployment. Oh yeah, and then it comes up. Those are not good conversations, yeah.
Speaker 4:Then timelines, money everything just starts expanding. Yeah, that's when the customers start bleeding money, For sure.
Speaker 2:Yeah, Well, and we're here to help hopefully prevent the bleeding. So there's two priorities that we're kind of blending together into this conversation and want to acknowledge that, because we kind of bounce back and forth between some of them, but I actually think this works well together. In the research paper there's priority two, which is about data security and governance, and then there's priority three, which is harnessing cloud infrastructure for AI. We've had elements of these things talked about on the show and other ways before, but I'm curious we talk about. It's interesting to me always. I always love seeing where security gets mentioned. Is it before or after certain elements, because I feel like it's always put in as kind of oh yeah, don't forget security. Of course, now we have trouble with regulatory and compliance and things with that, and I'm probably just as guilty as anyone else at doing that, as we'll see here in a minute. But, ina, I'm curious about why we can't just skip ahead to the AI part. Why would it be prioritized? Why would we need to think about security?
Speaker 3:There are a lot of reasons.
Speaker 4:There's a lot of opportunity to give your data away without even knowing it.
Speaker 3:There's a lot of opportunity to give your data away without even knowing it and leaking important information out into the ether or having available to be taken. So incorporating security from the get-go allows for your product or your model, your whatever to be developed in a secure fashion so that at every given point in time you are doing that check and balance and making sure that what you're developing is sound and will be very robust once it gets to deployment. When you throw it on like a little extra layer at the end, it's not baked all the way through. There are opportunities for there to be vulnerabilities throughout your data collection where somebody could have injected malicious data. There's opportunities within your pipelines where somebody could be changing values or not even maliciously, but something be going wrong when you're not catching it changing values or not even maliciously, but something be going wrong when you're not catching it.
Speaker 3:And if you had security baked in some checks there, then you would have caught that earlier on. Or if you're letting your, if you're not securing your model weights files and somebody is able to get them and they're able to just basically steal your model for lack of a better term Then all of a sudden all that time and effort that you put in is just it's gone, um, not gone, but uh, available to somebody else without you knowing it. And if you work hand in hand, um then then you're avoiding that and you're building it from a strong foundation rather than just putting like a little whipped cream on top of your sundae security as well is not just something that you, it's not a conversation you have at the beginning, and it's certainly not a conversation you have at just the end.
Speaker 4:It's a conversation you have throughout the entire process. Every work stream, every you know if you're doing a deployment, every sprint and everything like that needs to have a security conversation and needs to have a security expert involved in it, because everything and I'm thinking about the cloud right now every single thing you're going to be doing needs to be secure. If you're working in a data set, you need to let's get down to the specifics of even role level security in your data set. These little things really do matter, and so it needs to be a conversation throughout the entire process.
Speaker 3:I also know that data scientists don't have that inherently built in. Speaking from my own perspective, you don't really learn a lot about cybersecurity in your data science curriculum. It's not something that's placed as high value, but as the ecosystem that we're currently operating within has moved forward at such a rapid pace, you need to incorporate that security and it's kind of being left behind right now. So we're playing the game of catch up to make sure that we have cross-functional teams, that we're including those security experts and teaching them the world of AI, because maybe not always all experts just like we're not experts on cybersecurity and trying to find some common ground in terms of verbiage can be really difficult and is where some time can be spent to really help the end goal.
Speaker 2:The two things I think of is, when I think of Zaid I think you'd mentioned when we talked earlier about how ideal cloud can be for prototyping, ai understanding the caveats that are going to come along with that or need to be aware of. But also, in my experience, at least initially, a lot of organizations may have over assumed that security is kind of taken care of by using cloud. It's almost like well, they take care of the infrastructure, so now they're also taking care of the security, don't they? And obviously there's a shared responsibility model We'll talk a little bit more about and the understanding of where that begins and ends, because I assume the cloud is probably a good place to do security, but it doesn't mean you don't have to think about it and bake it in, as you're saying. Do you see organizations that are baking it in, as you said at the beginning and all the way through? I assume there's got to be some that are at least attempting to do that. Oh yeah, oh yeah definitely.
Speaker 4:I mean, I think this is more about maturity, right. And so a lot of customers, right. When cloud really started to find its feet or its ground, a lot of people were just migrating their workloads from whatever on-prem data center they had to the cloud. Some people did it well, some people didn't do it well.
Speaker 4:People who didn't do it well are trying to catch up, but there are definitely a lot of people who are doing it well and a lot of the times, what I've seen in my experience is that they are from industries that have historically always done it well, right. Like, if we're talking about global fire health care, there's really strict compliance regulations that they have to follow at every, at every stage of an application development cycle. Or just literally like the nurses and the doctors who are badging into the building, right. So these are industries who, historically, have done security very, very well, right. So these are industries who historically, have done security very, very well, and so you see, a lot of times, those industries are also excelling at their security in cloud because they're the ones putting it first versus you know, a retail company who maybe historically hasn't been the best at security, because they have a need to you know, consider cybersecurity for their small business that is now rapidly growing.
Speaker 3:So specifically for AI, cybersecurity for AI is just something, especially worldwide, that we are just pushing at the forefront because it's so necessary for those industries that don't have compliance that they have to make or adhere to.
Speaker 2:It was always struggled with the fact that we want security to be more proactive, but by its very nature it is reactive. It is very difficult to get money and get time and attention paid to something that hasn't happened yet, and so there is a chicken and egg challenge here, I think, because I don't think anybody would disagree that it's important, but it is probably harder to establish up front. You're like I just want to prototype, make sure it works first, then we'll worry about securing it. And you're like well, you're probably going to pass up some opportunities to really do it correctly if you don't do that early on. Oh yeah, definitely, definitely. Well, I want to get into action steps.
Speaker 2:I took the liberty of kind of combining these into four areas and somewhat of a practical roadmap. And it's foundation architecture, strategy and then security. Notice how I put security at the end. We'll just roll with it and pretend that he knows better, he knows. I want to go through these real quick, just so I can bring up on screen, because number one is establish your data foundation. Two is build with the right architecture. Three is align strategy with execution. And then four implement security and governance. I love the actionable nature of these actions, of these action steps, that's not redundant Ina. So, number one establish your data foundation. What does it mean, especially from your perspective as a data scientist, about having good data?
Speaker 3:When it comes to use cases, making sure that you have your use case well-defined and that there's data there. It's like step one of everything. And then, once you've made it that far, making sure that we have access to all that data, making sure it's readily available when we go to pull from it and that it is actually full of rich information and not just full of empty values.
Speaker 2:That's kind of the whole point of building your own models right, which is it needs to be in something that is going to be expressly and uniquely yours, because that's going to become part of your extending your IP. I guess, so to speak, step two build with the right architecture. Zaid, I'm curious about when it comes about securing access to high-performance architecture. What does that mean? When you talk about securing access to that architecture for an organization, there's the ability to actually use it, secure your ability to use it.
Speaker 4:and then there's actually securing the infrastructure itself, whether that's on-prem or in the cloud.
Speaker 4:So, you know, being able to actually leverage the infrastructure itself is one conversation because, you know, famously, especially for a lot of large organizations like, for example, nvidia GPUs are really hard to come by.
Speaker 4:And then also there's, like CapEx versus OpEx, in terms of actually purchasing the infrastructure that you're going to run models or do training on. So securing, you know, in terms of getting the infrastructure, is different for every company and it's different depending on your use case. But I would say that it's a significant problem that people are facing, especially if they want to build their own data center. But then it's also a significant issue for people who are running in the cloud. Because if you are running in the cloud and you're doing, you know, high performance compute, you're probably it's probably going to be pretty expensive. So when I hear securing access to high performance architecture, there's the issue of actually acquiring the infrastructure or the ability right to compute necessary. But then there's also the skill issue right, the skill and the education that's required when it comes to building, with the skill issue right, the skill and the education that's required.
Speaker 2:When it comes to building with the right architecture. I know what lessons could be drawn with that in mind in terms of understanding architectural differences between something that's maybe proof of concept versus scalable production, ready Right.
Speaker 3:Yeah, I mean when Zed and I were working together on building out the demo that we built out. It was really a proof of conference Whoa.
Speaker 4:It was a proof of conference at a conference. It was a proof of concept for a conference.
Speaker 3:I swear we were going to have gone, but the idea was like okay, we have a really short amount of time, I need to make sure I have access to some GPUs that go fast, because it was going to be using a large language model and typically when you need to use those, you need a little bit more support. I needed access to some sort of foundational model, so some sort of LLM that we could use for our use case, spin it up in a short amount of time, and all it had to do was run on any computer where we pulled it up, uh, at any given point in time in front of an audience which, uh, is is demo nightmare, um, but it is what it was. It we weren't selling it to anybody. We were doing it to show what the art of the possible really was and what you can do with what exists out there right now.
Speaker 3:Specifically, we were using um gcp and they'd was was supporting on the architecture side and was able to show hey, we don't, we don't need to to spin up a whole new environment within the air proving ground. I can click a couple buttonings and within 10 minutes we can have an environment for you. You know to go put some data in and start developing, which for me is fantastic and definitely not always the case. Sometimes it takes a lot longer to get the environment for myself to work within spun up and ready to go with all the hardware different libraries and then models that I would like to build upon available to me or and have enough storage for the data in a somewhat secure location.
Speaker 4:Part of what we are trying to paint with the art as possible there is that you know, the time to actual valuable insight is definitely sped up by AI. Right, there's generative AI is kind of is kind of integrated into every piece of what we did, for example, you know, was using Google collab. Google Colab to actually write the scripts that she was working with, and there's like Coda system there, and so that's something cool to help you finish what you're writing. But then you know, I don't want to diminish the fact that we got to results, meaningful results, very quickly, but then to kind of answer the original question, it was just a POC, right? There's so many other things that we need to consider. I mean, like we were using sample datasets that we, I think that we used like ChagPT or Gemini to create for us.
Speaker 3:Yeah.
Speaker 4:But what?
Speaker 3:the value there was was to show hey, even with fake data, we can draw insights. If you have a use case that's similar to this, this is something that can be solved in a similar manner to which we did, and the proof of concept really sparks ideas in people and then they start thinking about well, I have a similar problem, but it's slightly tweaked. Can you address this in a similar fashion? Yes, we can. Well, I have this way. Way, I would like to do it the opposite way around. Could I? Could I invert it? Yep, models can also handle that.
Speaker 3:So it's the architecture and the poc. The value there is doing it quick, because when you're talking with customers, usually you only have so much momentum for so long, and so when you can spin it up and you can spin it up fast then you can show what the value is. They keep their excitement, momentum. Everybody's happy. If you need to spin up an entire lab space and take about a week and a half, two weeks, to do so because you want more robust architecture for a long-term engagement, you might lose that momentum. And then great, you have this super robust environment but nobody, no customer, to fill it.
Speaker 2:So it's a give and a take, and when you're poc things out and doing quickly, like zay and I do um on a weekly basis, then then the short and dirty way uh is is very nice saying there's a it's important to set expectations correctly about this does not equate to yes, if your team's not doing it at this speed in production, there's something wrong with them. You're not saying that at all. You're saying this is a proof of concept, or even more work was done to make it for a lab, but the lab is not production.
Speaker 2:Correct and it shouldn't be relied on as much.
Speaker 3:Yes.
Speaker 2:Because understand our point we're making and the points we're not making, to make sure that you don't conflate the two.
Speaker 4:Yeah, yeah, and I think that speaks to just how far AI has come right, the fact that we were able to spin up a demo like that so quickly, honestly and I mean, this was our second job, right, we still had all of our main priorities that we're working on on a daily basis, and then we kind of hodgepodge this together in like a week and a half or two weeks, and so that's just. It's awesome that we were able to do that, and I think that's also speaks to the flexibility that cloud gives you, because we were able to run it on the cloud If we tried to build this in the ATC or anything like that, which is an amazing, you know, an amazing thing that we were doing, and so spinning up a really quick instance in cloud was really useful.
Speaker 2:Let's double down on the paper, and then we talked about this a little bit. Talks about the importance of real world use cases, and then step three, where we talk about aligned strategy with execution. How do you help customers focus on actual or pick use cases that matter to them? What are some things you may help them keep in mind?
Speaker 4:So I think every customer is different, and I think part of the great thing about my role is that, you know, I honestly don't get as deep as Ina does into you know, she gets super deep into data science.
Speaker 4:I get to be needy about a lot of different things, so I get to work with a lot of different industries, which is awesome, and so what I've noticed across the industries is that I mean, that's why we have industry committees, right Is that they're all. All their use cases are very different, and so I think what helps is that when you first speak to somebody, if you're speaking to whatever stakeholder, it is, focus on the outcome. Focus on the outcome rather than the tool, rather than the process. Honestly, before I even talk about their current environment and what struggles they're facing, in that minute I talk about what their ideal future state is going to look like, and when I get a really good understanding of what outcome they want to achieve, that tells me about their current environment. That tells me about the pain points that they're seeing, without even having to have that conversation. Of course, we'll dive into those conversations, but focusing on the outcome that they want to see and what their ideal future state looks like really gives me a good ground to understand where we should go.
Speaker 2:How often do you guys run into a situation where you then have to? Then you know, okay, I see what you're trying to achieve. I think we need to narrow it down to ensure.
Speaker 4:Everybody wants to boil the ocean. Everybody wants to boil the ocean. As a trusted partner, I get to see people level set and be honest, like maybe this tool that you're using isn't great, or maybe you need to come down not just one step or two steps, but like 10 steps, so that we can start having a real conversation, talking about a real use case.
Speaker 2:Do you get a lot of pushback often on those things, or do they listen to what they're paying you to do?
Speaker 3:People always want more than you can deliver. That's my two cents from the data science perspective.
Speaker 2:Do you like to see them maybe run full circle on something simple that can show results before they start maybe tackling more complex? Things Like even with the way Worldwide rolled out Atom I don't know what it's called today things Like even with the way Worldwide rolled out ATOM I don't know what it's called today. It may still be somewhat similar, but I remember it was very constrained in terms of when it was being rolled out, as to who it affected, what data it was, resourcing and things like this, and they waited until they got comfort level with that before expanding access as well as expanding the training models, I believe.
Speaker 3:Yeah. So I'm going to talk about data science and how we approach it model-wise. But we really like to iterate and you can grow when you're in this iteration phase. So, rather than going full circle, think more spiral, where you're spiraling up in a spring. I guess you achieve that first use case or you get to a point where you're comfortable with it and then you can build upon it.
Speaker 3:So with Atom, we started just with the basic LLM, evaluated it yeah, basic LLM and then gave it access to information on worldwidetechnologycom or wwwcom, the platform. Okay, great, everybody can have access to that. All right, what was the next level? Next level was okay, make sure that the model is responding intelligently. It's not hallucinating so much. Okay, improve upon those pieces, all right.
Speaker 3:Next step more data sources. Let's feed in more data sources. Now we have five that are available to us Awesome. Next step above that well, let's limit who can access what. I'm not necessarily an account manager. I don't need access to the same type of data that they do. And then, finally, now we've gotten to the point where we're deploying agents, so different models that have different responsibilities and are fine-tuned to a specific task.
Speaker 3:So when you start small and you're able to build that solid foundation that you can then reuse and build upon. Then you have the ability to do this exponential growth, which is what we've seen with Atom. Exponential growth, which is what we've seen with Adam. It was a very slow build at the beginning and I think that was something a lot of people who weren't scientists at this point in time that's the feedback list. We've been working on WWT, gpt for so long, so long, so long, so long, so long. And just recently, have you seen that exponential growth of like oh, we all have access, oh, there's new data services, oh, now there's agents, and all of that in a short number of months? And you see the time that it takes to add on those extra features and build upon what you have go down significantly, because you started small and then iterated upon that initial solution.
Speaker 2:So step four is around implement security and governance From a cloud perspective. From a cloud perspective, zayd. What specific skills do organizations need for governance when it comes to cloud? What expectations should they have about what cloud's doing for them versus where they need to take more responsibility when it comes to security and governance?
Speaker 4:Governance isn't just a specific tool. There are tools in cloud that help you with governance so that you are following right to right permissions access and you're labeling and tagging your data correctly, et cetera. But governance, to me, I think, starts first and foremost with education.
Speaker 2:Yeah, and I think and that's a big issue because, as you guys both had mentioned earlier you know, one of the challenges we're dealing with is that we're all we're learning what AI can do for us. At the same time we're trying to learn how to operate it, and usually historically, those things have been really separated. We didn't have technology roll out so fast that we're using it, while it's still changing as rapid as it is.
Speaker 3:My two cents on governance is get in early, understand what is important to your organization, whether it's within the cloud, legal data, science, network space. How are you running your governance? And then have every solution adhere to that, Because you can build models and products and solutions in any type of way, shape or form, but when given those guidelines, you are much more likely to get a product that adheres to it rather than at the end being like oh, by the way, this needs to adhere to this one policy that we didn't tell you about ahead of time and that might just be the exact key piece of information that the model is using to make its predictions.
Speaker 4:Once you're educated and you've established what your framework's going to be and how you're going to implement governance, then let's talk about tools, because there's a lot of tools inside. You know, of course I'm speaking from the cloud perspective, but there's a lot of really cool tools to implement governance, whether it's at the data level for labeling and tagging, like Dataplex in GCP or like something like Assured Workloads and again, I speak from the GCP perspective. But if you have a compliance standard you need to hit or you need to meet excuse me you can implement that across your entire cloud environment in Google Cloud by using something called Assured Workloads, and so when you try to use a tool or you try to access data or something like that that you shouldn't be able to, it literally flags you and stops you. So if you have a solid foundation of how you want to implement governance and what standards you need to meet, there's a likelihood that the tool is there to help you follow that.
Speaker 2:Do you think? Is it logical to assume maybe this is too easy to answer but the notion that I think we're going to have more regulations and oversight that we can't predict now, and so if you say, well, I'm safe doing these things now because no one's watching or cares or something, and so planning it out and making sure that you're addressing this not based on what the law or anyone else says you should do, but what you know should be done, might save you a few steps in the future and your ability to pivot when those things inevitably change, perhaps, oh yeah, 100%.
Speaker 3:There's also a bunch of voluntary frameworks out there right now, or even like heads up like OWASP and NIST. They put out a top 10 vulnerabilities of LLMs. So if you look at what the current vulnerabilities are and what the industry trends are, you can see where those guidelines are then pointing. At this point, within the United States, there is no AI regulation. That's very clear. So you have to look a little bit deeper into what the industry is recommending and that is not easy because there's a lot of voices. But paying attention to those like OWASP and NIST that have been very strong in the security realm for a long time and are now looking to include AI as part of that. And I think Cloud's a little further along. So tell me if I'm wrong, but it's still following. It's still behind traditional security.
Speaker 4:Yeah, yeah, there are trends and publications that come out all the time with you know, recommendations about how you can continue to secure your workloads and, specifically, how you should be prepping for AI.
Speaker 2:Yeah, and then we generally see a lot of those get adopted, because that becomes the easier route, especially because a lot of forward thinkers are engaging with that and they're going. This one doesn't make sense, even though they're all voluntary. If we engage enough, then we come out with something that is reasonable and allows us all to still hit our objectives without being too onerous. Yeah, yeah, you're right, they're probably going to come from those pre-existing voluntary frameworks. Okay, so final questions. What's the one thing that you wish that they knew more about in terms of building AI-ready foundations early on? Who wants to take that first?
Speaker 4:Tina does I do I do.
Speaker 3:80% of your problem is a data problem. Every AI problem 80% of its data. So just expect that, because the first thing I'm going to tell you is that you need to handle your data, and being surprised every single time when you have a new use case and you're like but my data was fine, last night, we fixed it. It's like we fixed it for that use case. Every single AI problem is 80% a data problem.
Speaker 1:Okay, that was a great conversation and I want to thank Rob, ina and Zaid for their insight on this important issue facing any organization looking to scale their AI initiatives. A few things stood out to me about the conversation. First, your AI initiative will rise or fall on the quality and accessibility of your data. Prioritize projects for which the needed data already exists and is well governed. Second, bake security and governance into every step, never as an afterthought. Integrating security controls, role-based access and compliance frameworks, early prevents data leakage, model theft and costly rework. And third, start small, iterate fast and align architecture to outcomes, not politics or hype. Success comes from a clear outcome, the right high-performance infrastructure and a spiral, iterative roadmap that expands only after small wins are reached.
Speaker 1:Bottom line. The companies dominating with AI in 2025 won't be the ones who move first. They'll be the ones who moved right, who understood that boring data governance and infrastructure work is what makes breakthrough AI possible. That's it for this episode of the AI Proving Ground podcast. A special thanks again to Rob Boyd for hosting, brian Flavin and Amy Riddle for their content support. If you're interested in more episodes of the AI Proving Ground podcast, please consider subscribing on your favorite podcast channel or check us out on WWTcom and we'll see you next week.