
AI Proving Ground Podcast
AI deployment and adoption is complex — this podcast makes it actionable. Join top experts, IT leaders and innovators as we explore AI’s toughest challenges, uncover real-world case studies, and reveal practical insights that drive AI ROI. From strategy to execution, we break down what works (and what doesn’t) in enterprise AI. New episodes every week.
AI Proving Ground Podcast
The Brutal Truth About AI Data Readiness: How to Slow Down to Move Fast
Charged with moving fast on AI? You might be setting yourself back. In this episode of the AI Proving Ground Podcast, WWT AI expert and data scientist Ina Poecher and Chief Technology Advisor Bill Stanley break down why the organizations that win with AI won't be the ones spending the most — they'll be the ones that get the fundamentals right. From "garbage in, garbage out" to building modular, reusable solutions, they share hard-earned lessons on how to align your teams, clean your data and choose the right first use cases to create a compounding "flywheel" of success. If you're feeling the pressure to move fast, this conversation will show you why the smartest move might be to slow down.
Support for this episode provided by: Netscout
More about this week's guests:
Ina Poecher is a Technical Solutions Architect at World Wide Technology (WWT) and collaborates with customers and internal teams to design and validate innovative technology solutions. Working within WWT's Advanced Technology Center, she leverages extensive experience across IT infrastructure, cloud, networking and automation to develop and test complex architectures that drive business outcomes and support strategic initiatives.
Ina's top pick: AI Agents: Scaling Your Digital Workforce
William Stanley is a Chief Technology Advisor with nearly 30 years in IT, specializing in data strategy. With an MBA and BS in Computer Science, he aligns data, technology and business goals to drive outcomes. A trusted advisor and innovative leader, his expertise spans IT strategy, architecture and analytics. A lifelong educator, he also teaches a graduate big data course and brings deep, cross-industry experience to every engagement.
Bill's top pick: The Data Traps That Are Killing AI Initiatives
The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.
Learn more about WWT's AI Proving Ground.
The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.
Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.
In the rush to win the AI race, a lot of companies are sprinting toward the future only to trip over their own data. With generative AI, bad inputs don't just cause bad outputs, they multiply them. So this week we're talking with Ina Poecher and Bill Stanley, two veteran guests of this show. Ina and Bill have been in the trenches helping organizations slow down just enough to get AI right: things like aligning teams, cleaning data and starting with the right first use case. It's a conversation about resisting the urge to go fast at all costs, and how building the right foundation today can create a flywheel effect for tomorrow. From Worldwide Technology,
Speaker 1:This is the AI Proving Ground Podcast: everything AI, all in one place. Today's episode will be a good reminder that in the race to adopt AI, speed means nothing without a solid foundation, and the fastest way forward often starts with slowing down. Let's get to it. Ina and Bill, thank you for joining the AI Proving Ground Podcast. Two repeat guests, although, Ina, you've been on the show a number of times, so welcome back.
Speaker 2:I've gotten so many messages recently being like you're on it again, so this is.
Speaker 1:Superstar. Yeah, podcast, superstar, wow.
Speaker 2:Might be a new career path for me.
Speaker 1:There you go. And Bill, second timer? Third? Third timer, wow. Well, it's good to have you back for the third time.
Speaker 3:Two in person, one was remote. I definitely prefer in person.
Speaker 1:There you go, the hat trick. I'm going to start with you. In prep for this call, since we're talking about how to bring data to bear on AI outputs here, I read an interesting article about who's going to win the AI race in general, and the author made the argument that it's not just about who spends the most, although certainly the hyperscalers that are investing a ton of money are making incredible advancements. They said the winners will be more defined by who gets the economics right, and I think what they're trying to say is it's not just about building the flashy model, it's about who's got the right process to go from data to AI output. Make that case for me. Double down on that case.
Speaker 2:Yes, yeah, I can totally argue that case. I fully agree with that statement: those who successfully get their data under control, from its raw form all the way to being ingested by a model, are going to be the ones that succeed in this AI race, or the ones who see the most value come out of it. They will be the ones able to say, hey, I have quantifiable ROI, because they go through that upfront process of getting their data in order. AI in general is known for the whole statement of garbage in, garbage out. Gen AI is a microphone; it makes it 10 times worse. Any garbage you put in, you just produce 10 times more garbage, because the way that generative AI works is that it takes that training data and then makes new data based off of it. So those that have their data in order are going to be the ones that succeed and will see monetary output and actually get that ROI that everybody's always asking for, which is a hard number to provide.
Speaker 1:Yeah, and Bill, do you think that's where most of our clients are, or most organizations are right now, understanding that value proposition of, you know, garbage in, garbage out, we need to have our data estate in order? Or are we still in a spot where people are thinking they can just move rapid-fire without paying attention?
Speaker 3:Definitely the latter. I think a lot of people still don't want to take the time to think long-term about data strategy and getting the data in order up front. And to add to what Ina said, another component of why certain people will be very successful is when they align organizationally around AI, right, the data scientist and the business. Say it's a simple chatbot: you know, we'll just chunk up some data and a document and throw it in there. Well, if the business doesn't communicate what their intention is and how they're going to prompt it, like if they're writing multi-part prompts that are two and three different questions in one, then that's going to affect how you chunk it up and prepare the data. But then the other thing we've been working on is providing a rubric for people to score their data: how ready is their data, how ready are their documents? And that's an area that you get a lot more into.
Speaker 2:Yeah, I feel like, specifically with the customers that we've been working with together, it's almost like you have to convince them: I understand you want to move fast and I understand you want to have AI in your environment yesterday, but convincing them that they are actually setting themselves back by taking that step forward too soon is a really hard conversation to have, and one that people really have trouble internalizing. So a lot of times we have to be the bearer of bad news. But what they get when they actually set their data up correctly is long-term success: success with multiple use cases rather than just the one. And when you have that organizational alignment and everybody understands the priority of all these use cases, all of a sudden, if your data is in order and you're aligned strategically, then you can just tick off the use cases one by one and start to see that time to ROI go down.
Speaker 1:Yeah, I mean, certainly a lot of pressure out in the market. One of the things we hear from clients most, especially in the C-suite, is: I'm getting an immense amount of pressure from my CEO, from my board, to move fast, you know, to your point. But they are also the ones that recognize the need to ask, well, how are we going to do this? How are we going to slow down? What type of answers do you typically give that would resonate with the CEO or the board about saying, maybe let's not move at such breakneck speed here, but let's put the right pieces in place? How do you answer that for them?
Speaker 2:The best way I've found is to talk about what we did at Worldwide, which is: align on your use cases, start with your first use case, set everything up correctly, get all the right policies in order, have your data set up in a strategic manner, and do it in an abstract way so that it's repeatable, and then show how we've done it, starting with one model.
Speaker 2:We learned our lessons with that one model, and we got the opportunity to do so. Then, when we went through the second use case (I think RFP Assistant was our second use case), the time to actually implement it in the environment where our team was able to use it was so much faster than the first model we attempted to put out there. Then, after that, it was a compliance engineer, and again the time was cut in half. So being able to speak from our experience, I think, is the best way to show it, because our experience, either with what we've done internally or with other customers, is what convinces people, because we're able to show that the time went down.
Speaker 1:So what I'm hearing you say is you're going to make mistakes. You're going to make errors. It's going to be a tough uphill battle on your first use case, but it'll become easier on your second, easier on your third, and then you start to think about a flywheel. Bill, if that's correct, is that something that is applicable to all other organizations right now?
Speaker 3:I think it's applicable to everything, even taking it back to basic analytics and then machine learning. It's having a methodology, and we took our methodology and turned it inward when we did these projects, and I think that's critical to success. You have the outcome in mind ahead of time: what are we trying to answer? What is the business value we're going to deliver? And then, from there, do we have the data sources? Identify the data sources; is the data quality there? All of those things. You have to have a methodology, whether it's ours or your own. I think that's critical to success.
Speaker 1:Yeah. Well, Bill, you mentioned, I think you said it was like a data scorecard, that we've started to communicate or share with some clients. When would a client, when would an organization, realize that they're in the right data posture to move on? Is it like a red light, yellow light, green light, or how do they know when they're ready?
Speaker 3:I like that. I like that framing, though I don't know if I would frame it exactly that way. I think there's always some data that's good enough for now; you know that data source is fairly consistent. But applying a rubric to really look at the data and bump that up against the questions you're trying to answer, that's going to tell you whether you're ready or not with that particular data source for that particular outcome.
Speaker 2:Yeah, I think it really helps with the use case prioritization right at the top, when we're trying to figure out what we want to go for first and setting ourselves up for success. Let's start with something that is closer to being ready than something that is a red light, if we're going with that analogy.
Speaker 2:But, as we always say, you should start implementing and playing with AI at any stage that you're at. You should try. It's just a question of whether or not I should productionize this specific model that was trained on data that was not ready.
Speaker 3:That's a great point, because the tools are so available. There are so many things you can do in a development environment to explore. You can stand things up quickly, especially if you leverage the cloud. Those tools are fantastic; short time to value.
Speaker 1:Well, and also, you know, keeping with that red light, yellow light, green light analogy, some use cases may get a green light earlier, the more low-hanging fruit, to start to build up that momentum and those foundations of working with AI, while other, more transformational, borderline pie-in-the-sky use cases might have a red light for some time, until things are ready. So should you just look to get that low-hanging fruit first and get a couple of those green lights? Is that what's going to start to get that flywheel going?
Speaker 2:I think so, because you're constantly iterating and even those small pieces have value. I'm going to again tie it to a Worldwide use case: we have an RFP assistant helping to respond to RFPs.
Speaker 1:Requests for proposals?
Speaker 2:Requests for proposals, thank you. The tool itself is built up of basically three parts: you have a summarization piece, you have a comparison piece, and then you have a generation piece. So the first part is: what is this RFP about? Tell me about it, give me a summary. That's one tool, that's one model, and if you build out a summarization tool, you can apply it to anything. It's not just RFPs; you can apply it to any type of document, to my emails, to whatever people are sending us these days, and it can summarize that for you. The comparison piece, same thing: you can compare documents of any type. It does not have to be an RFP. So you build out these small pieces, and then the generation piece as well. You build out these small tools going towards the goal of a larger tool that's more transformational, but you can use those smaller ones over and over again for different use cases, if you build abstractly, with the strategy in mind, so that they can be reused.
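For readers who want to see what that modular approach can look like in practice, here is a minimal Python sketch of summarization, comparison and generation as reusable building blocks that a larger tool composes. The function names, prompts and the llm_complete placeholder are illustrative assumptions, not WWT's actual implementation.

```python
# Hypothetical sketch of reusable LLM building blocks (not WWT's actual code).
# `llm_complete` is a placeholder for whatever model client you use; wire it
# to your own endpoint (a hosted API, a local model, etc.).

def llm_complete(prompt: str) -> str:
    """Placeholder for a call to your LLM of choice."""
    raise NotImplementedError("Connect this to your model endpoint.")

def summarize(document: str) -> str:
    """Summarize any document: an RFP, an email, a compliance report."""
    return llm_complete(f"Summarize the following document:\n\n{document}")

def compare(doc_a: str, doc_b: str) -> str:
    """Compare two documents of any type and list the key differences."""
    return llm_complete(
        "Compare these two documents and list the key differences:\n\n"
        f"Document A:\n{doc_a}\n\nDocument B:\n{doc_b}"
    )

def generate(instructions: str, context: str) -> str:
    """Draft new text (for example, an RFP response outline) from instructions plus context."""
    return llm_complete(f"{instructions}\n\nUse this context:\n\n{context}")

def rfp_assistant(rfp_text: str, past_response: str) -> str:
    """A use-case-specific tool is just a composition of the generic blocks."""
    summary = summarize(rfp_text)
    differences = compare(rfp_text, past_response)
    return generate("Draft a response outline for this RFP.", summary + "\n\n" + differences)
```

Because each block is use-case-agnostic, a second or third project reuses the same functions with different inputs instead of rebuilding them, which is where the shrinking time to value described above comes from.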
Speaker 3:Indeed. I would think of Atom AI that way as well, because we continue to add other data sources to it. That just makes it more powerful, but again, it was built with the strategy in mind that we are going to add more data sources and incorporate other things.
Speaker 1:Yeah, Atom AI being our own internal, and now externally available, chatbot that can search our website, also the web and various other areas of our business. So, you know, if you're listening right now, certainly go out and test out Atom AI. Ask it a question about what we're doing here: where is my data? What do I need to be doing?
Speaker 1:But I am curious. You're talking about getting that kind of lower-hanging fruit. Does this apply to the buy versus build question?
Speaker 3:How are we going to deliver value to the business? Then think about what we are going to do: what is that solution going to look like, what is the architecture going to look like, can I just go buy that or do I have to build it? I think you absolutely come to that, but you've got to start with the outcome in mind first. And then, how far does that tool get me? Does it get me 80% of the way there? Is 80% enough? The 80-20 rule applies to so many things.
Speaker 2:Yeah, I second what Bill says. You can plug a bought tool into your ecosystem, even if you're building a lot of everything else, just making sure it fits into that overall strategy that you have and into the frameworks that you have set up, and that it follows the same rules. You don't want to let a tool that you're buying just go off willy-nilly on its own, with its own logic and rules, et cetera, and not have to adhere to the same policies you put in place when you set up what you want your AI journey to look like.
Speaker 1:What should organizations be prioritizing to get to the point that they're getting more of those green lights from a data perspective? Where do they need to prioritize: cleaning the data, organizing the data, or just all-around good data hygiene? Is there a sequential order here? I'm sure it's going to depend on where they're at, but if you were talking to the industry in general, what do organizations need to do now to get their data ready and get to those green lights?
Speaker 2:Yeah, I mean.
Speaker 3:I would say that they have to understand where their data is. The data rubric thing I brought up, and I'm bringing up again, is something we've been working on: understanding what is good, what data is ready. And specifically we're talking about generative AI, so I want to make sure we understand that, because there are different ways to prep data for machine learning and other things like that. But for generative AI, scoring and understanding what is quality, what is red light, green light, yellow, and what you have to do to make it green: you can do that during a project, but then you can bring that knowledge back as you're creating more content. Apply the rubric while you're creating those documents and other artifacts that will eventually make their way into generative AI. Or you could do it the other way and first go look at everything and try to get everything right.
Speaker 1:Well, tell me more about the rubric. What are the points on there that it's evaluating or looking to get context on?
Speaker 2:Yeah. So it starts with identifying what your data sources are. Where are they? Who has access to them? Do we all have access? And then going through the different types of data. Again, for generative AI, data is represented differently, and generative AI can take in so many different forms of data.
Speaker 2:It can take in speech, it can take in plain text, it can take in an image, it can take in a video, et cetera. And all those different forms of data at some point need to be translated into something that a machine can understand, and that's the difference: every one of those data sources needs to be transformed. So if you have a bunch of data and you want to put it into a knowledge assistant like Atom AI, you need to first start with: is my data ready to be consumed by a model? If it's just text, you only need to provide a transformer that turns it into a mathematical, or vectorized, representation of that English text. If it's a video, maybe you should transcribe it first; then it's in English text, and then you can transform that into a mathematical representation. Images are different again: if you don't have a model that is able to see, for lack of a better term, you're going to want some subtext describing what that image is, so the model can understand and bring in that context. A lot of it is understanding where the data is at and how I need to change it to make it easy for a model to understand.
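As a rough illustration of that translate-everything step, the sketch below routes each data type through the transformation Ina describes: embedding plain text directly, transcribing video to text first, and captioning images so a non-vision model gets descriptive subtext. The embed, transcribe and caption helpers are placeholders for an embedding model, a speech-to-text service and a captioning model; none of this is tied to a specific product.

```python
# Hypothetical sketch of preparing mixed data types for a generative AI
# knowledge assistant. The helper functions are placeholders, not a real API.
from dataclasses import dataclass

@dataclass
class PreparedRecord:
    source: str          # where the data came from
    text: str            # English-text form of the content
    vector: list[float]  # mathematical (vectorized) representation

def embed(text: str) -> list[float]:
    """Placeholder: turn English text into a vector for retrieval."""
    raise NotImplementedError

def transcribe(video_path: str) -> str:
    """Placeholder: speech-to-text for audio or video files."""
    raise NotImplementedError

def caption(image_path: str) -> str:
    """Placeholder: describe an image in text for models that cannot 'see'."""
    raise NotImplementedError

def prepare(source: str, kind: str) -> PreparedRecord:
    """Reduce any supported data type to text, then vectorize it."""
    if kind == "text":
        with open(source, encoding="utf-8") as f:
            text = f.read()
    elif kind == "video":
        text = transcribe(source)   # video becomes English text first
    elif kind == "image":
        text = caption(source)      # images get descriptive subtext
    else:
        raise ValueError(f"Unsupported data type: {kind}")
    return PreparedRecord(source=source, text=text, vector=embed(text))
```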
Speaker 2:So, once you've found it, a lot of this rubric is about where you are in that process: is the data actually useful for a model in its current state, and if not, what do we need to change? For that rubric, we go through that quite a bit for different PDFs. Is it formatted correctly? Or is there a random table thrown in there with no header, no columns, nothing, just numbers? Or is there an Excel sheet that's just decimal points? That doesn't help your model. That's not going to do anything unless there's a description or some headers or some context. And that's really what we go through with that rubric and with customers: let's look at some samples. Is your data ready for what you're hoping to achieve? If not, how can we...
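The conversation doesn't publish the actual rubric, but a document-readiness check in that spirit could be as simple as a handful of yes/no criteria rolled up into the red/yellow/green language used here; the criteria and thresholds below are assumptions for illustration only, not the WWT scorecard.

```python
# Hypothetical document-readiness rubric (illustrative only).
CRITERIA = [
    "Source and owner of the document are known",
    "Text extracts cleanly (no scanned-image-only pages)",
    "Tables have headers and column names, not bare numbers",
    "Images and charts have captions or descriptive subtext",
    "Metadata such as title, date and business unit is present",
]

def score_document(checks: dict[str, bool]) -> str:
    """Map the share of passing criteria to a red/yellow/green readiness score."""
    passed = sum(1 for c in CRITERIA if checks.get(c, False))
    ratio = passed / len(CRITERIA)
    if ratio >= 0.8:
        return "green"   # ready enough to ingest for this use case
    if ratio >= 0.5:
        return "yellow"  # usable after targeted cleanup
    return "red"         # fix the data before ingesting or training

# Example: a PDF that passes everything except the table check.
example = {c: True for c in CRITERIA}
example["Tables have headers and column names, not bare numbers"] = False
print(score_document(example))  # prints "green" (4 of 5 criteria pass)
```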
Speaker 1:Is their data ever ready?
Speaker 3:I've never seen anyone's data perfect. Are we looking for perfect? No. I've never seen anyone's data fully ready, either. It's a journey, right, not a final destination, and that's what I was getting at before. I think you start with what's in front of you, what you're trying to answer, and then figure out: what does my data look like? How do I need to clean up my data for that exercise, and then the next set of things? But at the same time, we should be building some long-term strategy around: okay, as I'm creating new data, can I apply these rules while I'm doing it, so that the cleanup's not as much as I go through it?
Speaker 1:Yeah, who's doing that work? Is that a data engineer? Is that a data scientist?
Speaker 3:Data stewards. I mean, that's who I think is the most appropriate, right. You can have data engineers do it, but...
Speaker 1:Just real quick, is that on the IT side or the business side? Is it a mixture of both?
Speaker 3:Business side. You know, that's a whole different topic, but I think over the last decade we've really seen shadow IT pop up, and what that is, is we have these technical business analysts, data stewards. They really know the data. They're building things in self-serve BI tools, so they're technical, but they're embedded in the business unit.
Speaker 3:So you have to leverage those data stewards. They know the data so intimately: they know the metadata, the descriptions, what that data element is, how it should be described, how it should be used. We need to leverage that knowledge, so why not just enable them to do that? And especially when we talk about semi-structured data, documents and things like that, they're creating them. So let's give them the guardrails, or rather, help them establish the guardrails around those. Because, like I said earlier about the business working with the data scientist and data engineer, make sure they understand how we're going to use those documents technically, what's going to happen with that document, why they need to put certain metadata in there as they're creating things. They're embedded in the business, they're technical folks, and now more than ever they have to be working with the data scientists, specifically when it comes to generative AI. The old-school data scientist in the back room doing machine learning, that type of organization doesn't work in generative AI.
Speaker 1:Yeah, I'm sure you have an opinion here. You are a data scientist, so why do you appreciate a data steward coming to you and working hand in hand?
Speaker 2:Because then I don't have to go track them down while creating my data dictionary and trying to pull these answers out of folks. A data steward will know the answer, and I don't have to go ask 20 people and still end up at the data steward. But yeah, as a data scientist, it's a new skill now: you need to be able to communicate with people. You cannot just, as Bill said, go hide in a room and write code and build these models.
Speaker 2:That might have worked a lot better when we were doing a lot more statistical models and math was really the basis of all things. It still is, but the data sources are so important now, and they're all written in natural language, natural text. So you have the people who know what the context of that text is, and you have people like me who are supposed to build a solution around it. And if I don't know the context surrounding that text, then I'm left to make assumptions, which, as we mentioned, really leads to problematic responses.
Speaker 1:Yeah, so in other words, the data stewards are going to be talking to you one way or the other. Correct. They might as well be talking to you early, right off the bat? Yes. I mean, does every organization have these data stewards, or are they kind of hidden and you don't really know where they exist? This doesn't sound like a real, official title.
Speaker 3:Exactly. It's not an official title most of the time, but you can find that person. They're the person that knows the data. They're usually building BI solutions, self-service BI solutions, within the business. They were the folks that, a decade ago, were doing it all in Excel, creating spreadmarts and things we don't want to talk about. But yeah, they know the data. You can find them, but there's usually not a title of data steward, and that leads more into the governance conversation. That's another area where we need to tap those folks and formalize those roles and responsibilities into a governance program. It's all tied together with the things you need to do within an AI center of excellence, which is going to incorporate your governance and everything else.
Speaker 1:And does that governance, and just good data hygiene, is that what helps unlock use cases two and three, so they can plug into that same system because you have a standard? Is that why it's so important to do all that work up front, because it's going to help you, to your point earlier, down the line be able to extract more and more use cases? It can become a bit of "if you build it, they will come," but you can't just start with that mindset.
Speaker 3:Yes, and it helps with that flywheel, right. We often talk about the flywheel effect: getting the first use case, building muscle memory. There are all sorts of things that apply to that. It's the technology, it's the way we work together as a team, it's the way we approach the work. That flywheel has many facets to it.
This episode is supported by Netscout. Netscout provides network performance management and cybersecurity solutions to ensure service delivery. Maintain operational excellence with Netscout's comprehensive monitoring tools.
Speaker 1:What other stumbling blocks do organizations typically encounter, other than just my data is not completely clean and ready to go? Are there other areas within that realm that they typically stumble on that we see as a challenge or a roadblock?
Speaker 2:I feel like a lot of people want to achieve everything in one go, and that's just not how AI works at this point in time if you are not very mature. You need to understand that it has to be broken down, that you need to iterate; if you break it down, it's a lot more achievable and you will see the value. The stumbling block comes when they go for that major transformation and then don't see the ROI instantly. It takes too long to develop, and they're just dumping money in and not seeing anything in return. That's the stumbling block where people get impatient, when the solution could have been a smaller chunk.
Speaker 3:I agree. I think another piece of that is the people part: having the resources that know the technology and can do it. Again, we talked about the tools. It's so easy; I can go to SageMaker and spin something up real quick, but is it production-ready? Did they communicate with the business folks? Do you have the resources to support it, and support it long-term? Or do you need some strategic staffing, or a partner to come in and do the first project, educate your folks, and help you figure out where the gaps in your staff are and where you need to hire long-term? Those things are all, I think, a big stumbling block, because people are running so fast, they want to get to it, and they don't think about the long term.
Speaker 1:Talent seems like such a giant gap and such a hard one to overcome. You're seeing headlines, and not to say this is what's happening in every AI hire instance, but you're seeing hundreds of millions of dollars being handed out as either bonuses or salary to these AI professionals, and that doesn't seem scalable at all for a majority of the clients that we would interact with. So is it more of an upskilling, upskilling your current workforce? And if that is the case, where would you say organizations should invest in training? Is it just data hygiene? I mean, there's no better time to learn from AI than right now. So what can we be doing to upskill ourselves? Where should we be looking?
Speaker 2:Yeah, I mean upskilling; I'd say upskilling and hiring. To your question of how to best approach that upskilling: build out projects that have relatively low consequence to begin with, or projects that have high consequence but give yourself the time to do so. Again, at Worldwide we built out Atom AI, and we used that as an opportunity to upskill a huge number of our data scientists. A lot of folks got to touch it and work on it, and because of that they gained the skill and now they're very comfortable with those types of projects when they go to different customers. I think for the general workforce a lot of it is exposure: using the tools and understanding, oh, I gave it a silly prompt, and because I gave it a silly prompt, it gave me a silly response; understanding how your actions affect those models, and then providing education in the data space as well.
Speaker 3:I agree. Going all the way back for us, when we started, we had our AI driver's license. I think it's critical for people to understand what AI is, because a lot of end users don't know. Oh, I can copy my email and take it out to ChatGPT and drop it in there, and it's great. But you need that kind of general education about it. And then absolutely the prompt engineering when you go to use the chatbot. I see a lot more people now saying, oh yeah, we have one of those, but we don't really use it, or it's not delivering quality.
Speaker 3:They didn't consider training their folks on prompt engineering. You know, that's just a big loss.
Speaker 2:And those prompts are what help you learn from these models, as you said. ChatGPT is quite smart; it can teach you a lot of things if you prompt it correctly. Odds are it will help you upskill yourself to use AI effectively, or to learn how to best enable the models that you have in your environment.
Speaker 1:Yeah, that was a little bit of a tangent and a sidetrack there. But bringing it back to data: we understand the importance of organizing and cleaning data so that you can ingest it, but what other spending priorities exist right now, whether it's cloud versus on-prem or bringing security platforms into the mix? How do we think about mixing that into our strategy? Is that early on? Is it at a certain point? Is there a one-size-fits-all way there?
Speaker 2:No, I do know the security folks will tell you, bring us in at the beginning.
Speaker 1:Yeah, and why is that?
Speaker 2:Because AI is moving very quickly and security is not keeping up in the traditional sense, oh my gosh. The build versus buy conversation is also typically toward the beginning, but you need to have the right people in the room. So I guess I take back my answer of "it depends." These conversations should be had at the beginning, because they're all part of that long-term strategy and of aligning on when is the best time to bring these folks in. So the conversations should be had at the beginning; when those folks actually come in might be a little more debatable, depending on what your goal is in the end.
Speaker 1:I'll challenge you a little bit on that, because that's everything I've been hearing too. But then it occurs to me: if everything has to be considered at the beginning, then nothing can be considered at the beginning. So what do you actually need to take care of first to advance? Is it the data? Is it making sure that you're secure first? Is it that you're thinking with a business use case in mind? Maybe there isn't a right answer, but what do you think is an absolute must-consider one, two and three?
Speaker 2:Know what you're trying to do. Once you know what you're trying to do, let's understand our data. And then security. Actually, security is not a step three; security just exists throughout. You should think about everything with security in mind, as we do with traditional security mechanisms: we think about every workplace decision with, is this going to expose the company to some sort of liability? So security just goes on forever; it's always in parallel. But have a goal.
Speaker 1:Know what you want to do, meaning what business problem you're actually wanting to tackle. Bill?
Speaker 3:For sure. I would even maybe take a little bit of a different lens to it. For me, number one would probably be education, but tied to security, because we're all responsible for security and we hear that over and over. If you really embrace that baseline security education, because not everybody knows this: like I said before, don't take that content and copy it out into the wild. People should understand that. Also that you can't just trust everything it gives you. I think that's a big piece of it too, and maybe that's a little off topic, but people have to understand it's not going to take somebody's job, because you still have to validate what it's putting out. No matter how good it is, there always needs to be some human review of that content. You definitely don't want things just going out into the wild without review.
Speaker 1:I want to tackle this or that business problem, but like there are so many considerations to take into account, security, data cloud, uh, tab, you know, boil the ocean that type of thing like are they? Is that? A similar theme with our clients is just the complete kind of analysis, analysis, paralysis. They don't know where to start for some.
Speaker 3:For some. But, you know, I think other folks are just running so fast, like, we have to have that now, we're falling behind. So I feel there's a lot of emotion behind it: we've got to get going, we've got to get started, we've got to get something stood up very, very quickly. So I don't run into a lot of folks that are stuck in that analysis part; they're almost going too fast.
Speaker 2:I would say that they don't know how to approach it. They're moving so fast, but they don't know what they want to do, how they want to do it, what to consider. They're almost moving so fast that everything's a blur and they can't decide what to do. And I think those conversations, where they are moving fast but want to do everything, are where we really run into it. That's when bringing in a partner, where you have a data strategy person, a data engineer, a data scientist and a security expert, and you can have a lot of these conversations with just a couple of people in the room, becomes super helpful. Sometimes even having that conversation with, say, five experts can help you narrow down what you're looking for, and you're like, well, I didn't actually want to do that, I'd rather do this. It can help sort through some of the blur that occurs when you move that fast.
Speaker 1:Without getting into specific client details, can you give me some examples of what a good starting project would be with AI for, let's say, a company that is mildly or moderately ready with their data estate? What types of projects? Is it chatbots? Are there certain business processes that we think are ripe for the taking here? What are a couple of good real-world examples that you've seen?
Speaker 2:A lot of people like the chatbots. I don't think chatbots are going to solve every problem; I find LLMs that automate processes, without needing a chatbot interface, to be of a lot of value. I maybe don't have one great example of a business use case that everybody deals with, but look at your processes. What are you spending a lot of your time doing? What are you doing very, very often, where you're just stuck in the rut of clicking the buttons and copying and pasting things over? What are you doing very often that is very mundane? That would be a great use case, because there's probably a lot of history, you have a lot to build off of, and if you're doing it that often, you will see the impact of not having to do it very quickly. That's a good small chunk to take.
Speaker 1:Or even just a good use case that we've seen developed, where you'd say, that was a good example of where you should start. Something that's not super aspirational or transformational yet, but you can see how it can start to build its way toward something like that.
Speaker 3:Automation of processes. You know, I can't think of anything that's very specific; I'm struggling a little bit. Can you think of any?
Speaker 2:I mean, I know so many people are implementing tools like Copilot or Glean. Those are very entry-level ones, where people are using tools that they can buy and using them as their knowledge base assistant. It's very plug and play; it does not take a lot of lift, hopefully, on their end to get more information out to their users. The data doesn't have to be perfect, because their users can provide feedback. But that's where I've seen it: knowledge assistants are where a lot of people are at, and I know that is a chatbot type of experience. But that's a place to start, where you see people using it and you can gain feedback right away, even feedback in the sense of, this is not giving me correct information; maybe that's an area where we should go tackle our data first, and then iterate there.
Speaker 1:Yeah. Well, Bill, you mentioned automation of processes, but you also don't want to be automating anything that's not right. That's potentially even worse than not automating at all. So what should we be looking out for to make sure that, when we are in a position to automate something, we're not making a wrong turn and we're automating the right things?
Speaker 3:It goes back to the analysis of the process, right, starting with the outcome: what is that process designed to deliver? Because sometimes the process is broken anyway. It's a business process we've done for 30 years; we've always done it that way. Take the time to analyze that process before you automate it and figure out if there are steps in there that you don't need to do, that are no longer needed but are just history. "We've always done it that way." I hear that a lot, so that's a great opportunity. Do we need to do it that way? Can we improve it? Do we even need all of the parts of that process?
Speaker 2:...and then you can implement these use cases faster and faster. If you need a summarization tool that you've already built out, just pull from that code base. If it's a general summarization model, you should be able to pull in any data source, and look at that: you didn't have to build that part of the new tool. So they're building blocks, and you can reuse and repurpose different pieces, which is how you can iterate faster. But starting slow and making sure everything's set up correctly really helps. Otherwise you'll spend a lot of time refactoring code and refactoring your process, and not understanding why your model is giving you the wrong answer or where it's breaking, and it's really hard to understand that if everything's broken.
Speaker 1:Right. Well, I would imagine, too, that the start-slow-to-move-fast model or mindset would also help you add and find the right talent along the way, as opposed to just going out shotgunning and saying, I need to hire AI talent. If you're purposeful about it, you'll be able to hire the right people, and that will help you then manage this on an ongoing basis, because that is also another big gap that exists out there, as I understand it. Once these solutions are created, or if we're delivering something to a client, there needs to be somebody on the other side that can take it and run with it for the foreseeable, if not indefinite, future. Where are we at right now in terms of organizations having the talent or capacity to handle these solutions once they're handed to them?
Speaker 2:We like to work with them. We like to kind of drag them along as we build out the solution, so there's somebody there from day one who understands what that process was. That way they're instantly upskilled: if they didn't have the skills before, they go through that process, they gain those skills, they gain that understanding, and then we train other folks in addition. But there is at least one person who has seen the entire process from A to Z and can then help educate their folks as well.
Speaker 3:I would add to the things you were saying earlier: it's all about iteration and reducing time to value. As far as the people part of that, when we work with a customer and come in and do a project like she's describing, that knowledge transfer is just part of it. We don't come in with the mindset that we're embedding ourselves forever, that we're going to come and stay forever with you. No, we're here to help you deliver that project and transfer the knowledge, and you're ready when we leave; you're not sitting there going, oh my gosh, what do we do? The knowledge transfer is part of it from beginning to end, so the customer participates in that project, that process, and hopefully eventually they pick up the flywheel on their own. Maybe we stay for the foundational piece, another project and maybe even another one, but our participation is pulled back a little bit. They're taking on more of that responsibility, and it's essentially teaching them to fish.
Speaker 1:Yeah. Looking ahead a little bit, where do you think the industry is going in terms of using data to fuel smart, purposeful AI projects? And I'm just going to throw out a couple of buzzwords to get you started: is it more synthetic data? Is it leveraging data to get into agentic models? Is it AGI? Where is the puck moving right now?
Speaker 2:Something really interesting that I've found and heard a lot about recently is being able to transfer context. ChatGPT knows me; we're good friends, we work together quite well. But when I go over to something like Claude, or even my Atom AI, it doesn't know me as well as my ChatGPT does. The same goes for more private instances, or if I have a project and I've added a lot of context surrounding it to one specific model but want to be able to reference that for other things as well. Context is the data the model builds its responses off of, what it looks for and what I care about, and being able to transfer that context matters. I was recently talking with somebody and they were joking around saying, I just need to carry a USB stick of my context around, and then I can plug it into any model. And while that sounds almost archaic now, saying a USB stick, hilariously, maybe not to you.
Speaker 3:Oh my gosh. Ouch, right in the heart. I feel like you'd be able to do that with a USB stick right here.
Speaker 2:We don't know. But being able to do that speaks to the whole data part: the data that the model uses to represent you and what you care about. That's a part I've been hearing more and more about recently that I find fascinating.
Speaker 1:I think the question would be, how can data that's useful to you follow you around everywhere you go and help you make decisions? Can we build that right now? Is there something a little more sophisticated than a USB stick? I feel like security might get nervous when we say bring back USB. But is there a way to have that data follow you wherever you go? Are we building that now?
Speaker 2:The best way I could think about it is asking, again for me it's ChatGPT, asking it to please print everything it knows about me into one document and then providing that to the next model that I'm going to. But even that won't be holistic, because, if we're being real, the output of ChatGPT most likely will not include everything it knows about me. It just won't, unless we find a way to dump that really successfully.
Speaker 3:I would like to see it eventually end up in a data mesh, where our context is just more metadata that's incorporated into the mesh, and then all of the data sources or agents that I interact with have my context there and can just grab it.
Speaker 2:Ooh, maybe the context is just on my computer, and then the agents, following something like MCP, the Model Context Protocol, are all following a similar setup. It's just easily able to plug into my computer where all my context is stored, and then the agent just has access to it.
Speaker 3:Or it's on a server somewhere within the organization on the network as a component of Mesh.
Speaker 2:Yeah, I was going to say, it's better than my computer. Because, Bill, the USB stick...
Speaker 3:Yeah, we're going to carry one from now on. Every time we have a meeting, I'm going to set it on the desk right next to you, and half of us will be like, what is that?
Speaker 1:Yeah, what is that, exactly? Well, Bill, you mentioned mesh, and I know I've heard you talk about mesh versus fabric, so attach mesh and fabric to what you were just talking about with Ina.
Speaker 3:Yeah, so data mesh is really a concept about how we leverage all of our metadata about all of our different data sources. The mesh connects everything. So, in what she was just describing, our context, the metadata about me, could just be loaded into the mesh as another data source. We'd all have our context there, which is a really brilliant idea, because you think about what everyone's trying to do with your personal data: everywhere you go, they're collecting little bits of information about you. But if we could leverage that in the business sense, what are our strengths, what are the things we know, here's my LinkedIn profile, here are all the things I've touched and know about, here are the types of questions I ask in certain engines, that really could be a wonderful collection of data.
Speaker 2:Yeah, you've just got to implement security with that too, so not just anyone can access all of my context.
Speaker 1:Back to the question of what we need to do first.
Speaker 3:I'm going to create a synthetic Ina.
Speaker 1:Ooh, even better. Well, there's lots to get to on what to consider first, but it does sound like it comes down to treating all of the areas you need to get to first, whether it's data, security, on-prem or off-prem, with fundamental respect: get back to the fundamentals so that you're doing everything right at each step of the way. There's lots more we could talk about, but I know we're coming up on time. Ina and Bill, thank you so much for taking the time, and I appreciate your participation today.
Speaker 2:Yeah, thanks for having us.
Speaker 1:Yeah, always a pleasure. We'll have you back soon, Ina, as you're well aware. And Bill, you've been on here now three times, so that's awesome. Excellent, thank you for having me. We'll see you next time. Can't wait.
Speaker 1:Okay, as we wrap this episode, a few takeaways from Ina and Bill's experience. First, data discipline is your launchpad. Without clean, well-structured, context-rich data, every AI initiative is at risk; use rubrics and governance to know when you're truly green-light ready. Second, small wins build big momentum. Modular, reusable components create faster ROI, reduce rework and fuel the flywheel effect that we talked about. And third, upskill, align and secure early. Cross-functional collaboration, embedded knowledge transfer and security by design ensure you can sustain and scale AI capabilities long after the first deployment. The bottom line: the path to AI success starts well before you write a single line of code, and there are a lot of priorities to take into consideration before you can go fast or far. If you liked this episode of the AI Proving Ground Podcast, we would sure love to see you give a rating or a review, and if you're not already, don't forget to subscribe on your favorite podcast platform. This episode was co-produced by Naz Baker and Cara Kuhn. Our audio and video engineer is John Knobloch. My name is Brian Phelps. We'll see you next time.