AI Proving Ground Podcast

3 Easy Steps to Kickstart AI the Right Way

World Wide Technology

AI leaders these days have little time for proofs of concept. They need ROI. If you're leading AI transformation in a large organization, this episode of the AI Proving Ground Podcast is your blueprint. AI consultants Kathleen Nowicke and Yoni Malchi share how top enterprises prioritize use cases, align across business units and unlock lasting ROI, and detail a three-step process to get AI right from the start.

Support for this episode provided by: VAST Data

More about this week's guests:

Kathleen Nowicke is an experienced consultant in data-driven digital transformation, with a broader background in Enterprise Architecture and a passion for healthcare. She is skilled in leading delivery of complex services engagements, driving business development, and building high-performing teams. She previously consulted with Boston Consulting Group and holds a PhD in Biomedical Engineering from Johns Hopkins University.

Kathleen's top pick: Building for Success: A CTO's Guide to Generative AI

Yoni Malchi is a Managing Director in Consulting Services at WWT focused on AI and Cloud Strategy. He leads AI engagements with key customers, bridging the gap between business and technology teams. Yoni also leads WWT's AI R&D efforts, which research cutting-edge AI techniques, tools, and platforms to provide differentiated recommendations to clients.

Yoni's top pick: A Guide for CEOs to Accelerate AI Excitement and Adoption




The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.

Speaker 1:

When it comes to kick-starting your enterprise AI strategy, leaders are often flooded with ideas and a deep sense of urgency, but without a clear framework, even well-resourced teams can end up spinning their wheels, because the reality is only 1% of organizations consider themselves AI mature. The rest are stuck in what today's guests call shadow AI: a mess of disconnected proofs of concept, siloed data and untracked value. In this episode of the AI Proving Ground Podcast, we're talking with Kathleen Nowicke and Yoni Malchi, two well-versed AI consultants who've helped Fortune 100s escape that so-called POC purgatory, and they'll walk us through a deceptively simple three-step process for doing AI right: not flashy, but foundational, so you're building flywheels and not just going through fire drills. If you're tired of AI conversations that start with "we need a chatbot" and end with "what did that even accomplish?", stick around, because this episode may change the way you approach enterprise AI from the ground up.

Speaker 1:

This is the AI Proving Ground Podcast from World Wide Technology: everything AI, all in one place. Let's jump in. Kathleen, Yoni, thanks for joining us on today's episode of the AI Proving Ground Podcast.

Speaker 2:

Thanks, Brian.

Speaker 1:

Well, yeah, no, absolutely. Jinx. Let's start with this: I read a recent report from McKinsey that talks about how only 1% of organizations consider themselves AI mature, and I'm wondering, from the perspective of developing, identifying and prioritizing use cases, what is that 1% doing that the 99% are simply getting wrong?

Speaker 2:

So I'll take this, Kathleen, just to get started. I think what you hit on first is that there's some upfront work that has to get done to start on the right foot, and that's going to be the majority of what we talk about today in this podcast: what needs to be done upstream of actually building and deploying AI models and use cases. But there's a whole host of other things as well that we're not going to talk about today, things like having the right platform in place, being able to reuse components from use case to use case, having your data set up in a way that you can continuously build and grow your AI capabilities. All these things set off what we call the flywheel effect, where we can pump out use cases and generate value in organizations, and that's really what I think the 1% of organizations are doing well. It's a combination of getting started correctly, on the right foot, but then also setting themselves up for success so they have this flywheel effect. Anything else you wanted to add, Kathleen?

Speaker 3:

Yeah, I love that flywheel. It's like you build up the momentum, you know, prove the value, generate the value and then keep it going. And to what you're sharing, what we really want to cover today is how you get the flywheel going and prove the value of it organizationally to jumpstart the AI journey. I think for companies, large organizations that are starting their AI journey, it can feel like you're standing at the base of a giant mountain and you're trying to figure out where to put your foot down first. You could hike in some random direction and, even worse, if everyone starts hiking in different directions, you have, you know, sheer chaos, duplicative efforts. It's very expensive, right? I think we call it shadow AI, right, Yoni?

Speaker 2:

Yeah, shadow AI is the term. It's kind of a play on words from shadow IT.

Speaker 3:

Now we're moving over to the AI world. Yeah, but the value is not tracked well, and I think, you know, in large organizations, if you can't track the value, you might as well not have even existed.

Speaker 1:

An if-a-tree-falls-in-a-forest type of situation, yeah. Yoni, I like that you mentioned what we're not going to focus on today, because I do want to ask: it's probably very easy for organizations to jump into those components from the onset, given the need and the rush from executives or boards to move fast with AI. You're probably thinking to yourself, you have to build a system or you have to get your whole data estate in order. How easy is it to fall into that trap of wanting to do all that first, before you really start applying some hypothetical thinking to all this?

Speaker 2:

Yeah, it's very easy, and that's actually where we're going to go here, the reason why. I mean, it's kind of ironic, right? Before the explosion of ChatGPT onto the scene at the end of 2022 and beginning of 2023, we were working in a world of just doing machine learning models, and machine learning models take a lot of effort to get up and running. You've got to get all the data in the right place and you've got to be able to train these models on your company's information, and so it was a lot of work to get going. Fast forward to today and we have these LLMs, and they're pre-trained on the entire internet. So running a very complex AI model like these LLMs is actually, ironically, very easy to do. It can be extremely tempting to just get started and start building, because you will see some value. You will see these models respond back to you in human-like terms and answer questions that could be valuable to your organization. The hard part, actually bringing your organization's data to those LLMs and doing it right, is pretty challenging, but getting it just good enough that you can see answers about your organization isn't that hard. So while these models are extremely complex, the ability to spin them up is pretty easy, and that actually leads to this POC chaos.

Speaker 2:

And if I just look back at the time from 2023 to 2024, most of the organizations we talked to had POCs all over the place, and in fact, at Worldwide, we were no different. We had shadow AI everywhere. It was hard to track the value, and people were doing things in their own silos. You run into a lot of issues here, where the use cases being built really only matter for a very few people in the organization.

Speaker 2:

Or, even if a use case could matter for a lot of different people, it's siloed, so it never actually expands out to all the other areas. Or the use case just becomes half-baked, because there's no greater authority in the organization pushing you to build it to the end and make it really fantastic, and it just dwindles and dies on the vine. So there's a lot that happens if you just go forward and start building and give in to that temptation. And so I think what we want to talk about today is how you avoid that. I wanted to pass it back to Kathleen here, because I know, Kathleen, we've talked about this before: how do we stop that from happening and get ourselves aligned on the right foot, and do it in a way that's not overbearing and onerous? We have to slow down to speed up, but we don't want to take an entire year to do it. What are your thoughts on that?

Speaker 3:

Right. And that's where we talked about leaning on our consulting toolkit to really get alignment, to achieve this crucial paradox of slowing down to speed up, not for the sake of bureaucracy, right, but to really build a strong foundation, get alignment and, bringing it back to our mountain metaphor, plan your path to the top in a way that's going to help more organizations, Brian, get to that high level of maturity that you mentioned. And so this is where we want to tee up this three-step process.

Speaker 3:

There are lots of consulting frameworks in here; forgive me, Yoni and I are, you know, consulting geeks. But step one: you want to exhaustively and comprehensively identify potential use cases using what we call a driver tree framework, and we'll go into details here.

Speaker 3:

Once you have an exhaustive set of use cases identified that are tied to organizational value, then you want to organize and prioritize those, leveraging two frameworks or approaches that come from the consulting world: a value complexity matrix, fueled by hypothesis-based thinking, and a data-driven approach to validate the placement on that value complexity matrix. Again, we'll go into some details here. And then the third step is really to select the best use cases, leveraging what we call the 80-20 rule. That's how you get the flywheel started, right? Those three steps help you achieve the flywheel. In a world where AI is only an API call away, it can be very tempting to skip all of this and just get to action, but we really encourage you to take this three-step approach to get set up for success, so you don't hit the failures and common pitfalls that Yoni was just chatting through.

Speaker 2:

Yeah, and I don't want people to think, Kathleen, that what we're talking about today is only that part upstream of the actual build, everything that you need to do before you get there. I think we also want to send the message that, while you do this before you actually start building, even after you do start building, you probably want to continue some of these frameworks throughout and continue to iterate, right?

Speaker 3:

That's right, that's right.

Speaker 1:

Yeah, well, let's dive into the identification here. Who should be at the table? You know, the executives probably have the vision, but there's probably a certain subset of employees who are continually testing out new AI tools or understand the art of the possible. So who are you bringing to that table to have that discussion, or at least to start to identify and bring in the right people?

Speaker 3:

That's a great question. You really want to have representation across the various business units. The name of the game here is not creating siloed use cases, and you really want to start at the use case level. We are not talking about tools right now, AI tools, off-the-shelf things to buy, right? We are talking about business challenges and business opportunities. So, to make sure we don't fall into the common trap of a solution looking for a problem, we like to start with this driver tree concept, and again, you're bringing lots of business units to the table.

Speaker 3:

Bringing executives together and getting a really good workshop going is typically how we suggest doing this.

Speaker 3:

But the starting points of this driver tree, the common drivers that you want to bring to the table when you're first starting, are going to be, you know, revenue growth, cost reduction, risk reduction, maybe employee productivity, customer satisfaction, and there are potentially secondary ones related to innovation enablement, or maybe compliance-related value, or operational speed and agility.

Speaker 3:

But I do think oftentimes, when we're talking about organizations in corporate America, we've got revenue growth and cost reduction, or potentially risk reduction, as the top drivers. And from there you want to break those out, and this is where it's so critical to have lots of business unit representation at the table: brainstorm, okay, what would be a revenue growth concept in your business unit, chief marketing officer? Their use case is probably going to be pretty different from a use case that's generated in HR or in a manufacturing business unit, right? That's why it's so important to have a lot of representation at this use case generation, because the ideas are going to be very different. But if you start with this driver tree that's rooted in the business outcomes, starting with the top of the summit, the top of the mountain, in your mind, then we're all charting toward the same final destination. That's really key.
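
For readers who think in code, here is a minimal sketch of how a driver tree from a workshop like this might be captured so it can feed the later prioritization steps. The driver names and use-case ideas below are illustrative placeholders, not WWT's actual tree.

```python
# Illustrative driver tree: top-level business drivers branch into MECE
# sub-drivers, and candidate AI use cases hang off the leaves.
# All names are hypothetical examples for structure only.

driver_tree = {
    "Revenue growth": {
        "Sales effectiveness": [
            "Internal knowledge chatbot for meeting prep",
            "Account research summaries",
        ],
        "Proposal win rate": [
            "RFP first-draft assistant",
        ],
    },
    "Cost reduction": {
        "Proposal team effort": [
            "RFP requirement extraction",
        ],
        "Equipment maintenance": [
            "Predictive maintenance scheduling",
        ],
    },
}

def list_use_cases(tree):
    """Flatten the tree into (driver, sub_driver, use_case) rows for the next step."""
    for driver, sub_drivers in tree.items():
        for sub_driver, use_cases in sub_drivers.items():
            for use_case in use_cases:
                yield driver, sub_driver, use_case

for row in list_use_cases(driver_tree):
    print(" -> ".join(row))
```

The point of the structure is simply that every idea stays traceable back to a top-level business outcome.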

Speaker 1:

Yeah, yeah. Is this a situation where you're thinking no idea is a bad idea, or do you want to put some focus on this driver tree?

Speaker 3:

No idea is a bad idea, but you do want it to fall into the framework, right? You need to be able to tie the ideas to those original business outcomes. If you see this on paper, it starts with these big buckets, and they branch out into what we call MECE buckets, mutually exclusive, collectively exhaustive, and then it may branch out again and again into component parts. It's just an upfront brainstorming session to get all the ideas on the table. We'll talk in the next step about how to actually organize and prioritize them, but at this point there's no bad idea on the table.

Speaker 2:

Yeah, I mean, maybe it would help if I brought it to life with an example here, right?

Speaker 3:

Yeah, because Yoni led the internal ideation on use cases, right?

Speaker 2:

Yeah, so at Worldwide, we went through this ourselves back in early 2023. Again, we had shadow AI everywhere, and we needed to take a step back and get it organized. The first step is exactly what Kathleen was talking about: we had to bring the right people to the table. We had the heads of each business unit as key stakeholders, and they brought their director-level people to the table as well. We had people from technology, and all the way up to our CEO was very deeply involved in this. That set the stage for us creating this driver tree around revenue growth and profitability, or how we reduce costs, right? Those were our main buckets, and we wanted to take use cases that were focused more on one side or the other. So at Worldwide, our first two use cases that we went after, based on the driver tree, were Atom AI, our internal knowledge management chatbot that knows everything about Worldwide, and the RFP assistant. It was interesting, because Atom, being a general knowledge system and a question-and-answer chatbot, can go in a thousand directions, and there are multiple use cases for Atom in its own right. But going down the revenue path, we were looking at things like marketing, sales, how we deliver our services capabilities. Where we landed was that we wanted to focus on sales efficiency and sales effectiveness, and so our first couple of use cases with Atom were focused on making salespeople more effective at their jobs by being able to pull together vignettes, "what have we done in this industry" type questions, or "can you tell me an example of when we did this type of project in the healthcare industry, focused on increasing patient throughput," or something along those lines. That allows them to prepare for meetings really quickly.

Speaker 2:

Then, on the flip side, we at Worldwide respond to a lot of RFPs. That's a big part of our job. We have a whole proposal team, and it takes them a long time to get to the first draft of the response to an RFP, to actually understand what the RFP is asking, and then to ultimately respond and get to our final answer. That, to us, felt like a cost reduction type of driver, where we could reduce the time and effort it takes for this team to respond to RFPs and thus save on costs and time. On the Atom side, with the sales organization, it was fantastic. We were seeing numbers like 25 to 30% gains in how quickly they can come up with a sales pitch and a way for them to be more effective in how they're talking to their clients on a day-to-day basis, and we have a lot of metrics around that.

Speaker 2:

On the RFP side, it was pretty interesting. Like I said, we went in thinking it was going to be a cost-cutting activity, but now that we can respond to RFPs in a more efficient manner, we realized there were so many RFPs where we'd just said, you know what, we don't have time, we can't get to this one, we don't have enough people right now. Now we can respond to RFPs way quicker and way easier. There are some that could be a moonshot for us, but we're still going to respond with our best effort, and it might only take us a day or two rather than two weeks, and that's led to some top-line growth for us as well. We're responding to and winning more RFPs than we ever have. Our time to first draft of an RFP is 80% faster.

Speaker 3:

That just translates into cost savings, but also revenue growth, so that was a nice surprise for us from a driver tree perspective. The second step of our toolkit here, to get started and build the flywheel momentum in an aligned, smart way, is to organize and prioritize your use cases, right? You've just generated a laundry list of them from different business units, great. But now which ones do you want to get started with? You need to organize and prioritize. So we leverage this framework called the value complexity matrix. That sounds complicated, but it's not. Just imagine a simple graph where your y-axis, the one going up and down, is value, and your x-axis, going across, is complexity. Okay. You take your use cases, and of course it's hard to precisely quantify the value and complexity of each use case when you've just created this laundry list.

Speaker 3:

But you make some assumptions, leveraging hypothesis-based thinking; okay, there we go, more consulting terms; to place each of your use cases on your two-by-two. And you say, okay, Yoni, your RFP assistant, we think we can cut down first-draft generation by 80%, right? How do you quantify that into value, i.e. cost savings? Because the value, in this case, if we're focused on revenue and cost, is a dollar-driven number. So you make assumptions, you develop a little back-of-the-envelope model, and you have value. The other axis is complexity. Do we have the data readily available to ingest, to train, to fine-tune the models? Do we have the computational power we need? Yoni, what are the other pieces?

Speaker 2:

Yeah, those are the big complexity pieces from a technical perspective. But there's also regulatory complexity, compliance complexity. Sometimes, even within organizations, there's political complexity. Some use case may seem amazing from a value perspective, but maybe there are three groups fighting for the ability to do that use case, and it's just not worth doing right now; it's going to cause too much of a stir. So there are a lot of different things you have to consider when you're thinking about complexity.

Speaker 3:

Yeah, and then you come up with your first hypothesis-based placement of your use cases on this value complexity matrix. It's not perfect, and you know that, but it's a starting point to help you select your best use cases. Now, a critical part of hypothesis-based thinking, and I say this as a recovering scientist, is that before you make your final selections, maybe you want to go gather some additional data, from the proposal team, for example, on how long it takes to do X, Y, Z. Maybe you want to actually go and gather data to refine your hypothesis about each use case's placement on the value versus complexity matrix.
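
As a rough illustration of that hypothesis-first placement, here is a back-of-the-envelope sketch in Python. The dollar figures, hours and complexity scores are made-up assumptions to show the mechanics, not numbers from the episode.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    est_annual_value_usd: float  # hypothesis first; refined later with real data
    complexity: float            # 1 (easy) to 5 (hard): data readiness, compute, regulatory, political

# Hypothetical back-of-the-envelope value for an RFP assistant:
# 200 RFPs/year * 40 hours saved per first draft * $100/hour fully loaded cost
rfp_assistant = UseCase("RFP first-draft assistant", 200 * 40 * 100, complexity=2.5)

candidates = [
    rfp_assistant,
    UseCase("Internal knowledge chatbot", 1_200_000, complexity=2.8),
    UseCase("Predictive maintenance", 3_000_000, complexity=4.5),
]

# A crude first ordering: value per unit of complexity. The point is not the math,
# it's giving subject matter experts something concrete to react to and refine.
for uc in sorted(candidates, key=lambda u: u.est_annual_value_usd / u.complexity, reverse=True):
    print(f"{uc.name}: ~${uc.est_annual_value_usd:,.0f}/yr at complexity {uc.complexity}")
```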

Speaker 2:

And the one thing I'll just add to this: just like it was tempting to just get started on building AI, the same temptation shows up here.

Speaker 2:

It's very tempting, and I say this as a recovering scientist as well: it's hard for some people to make a guess, to make a hypothesis in the absence of information, and so a knee-jerk reaction may be:

Speaker 2:

I need to just go gather information first before I make a hypothesis on what the value is or what the complexity is. But that could lead to analysis paralysis, and it also makes it very difficult for people to react. So generating the hypothesis first, in the absence of information, is a good thing. You could be completely wrong, but now you have it up on a page, you show it to the subject matter expert, the person who's in charge of running this use case, and now they know exactly what they need to speak to and how they need to refine it. So it's important to be able to put your foot out there and make a hypothesis to get started, and then you can get to the data to refine it. It will also direct you to the right data you need to refine it in the right direction.

Speaker 3:

This episode is supported by VAST Data. VAST delivers a unified data platform purpose-built for AI and advanced analytics, eliminating silos, accelerating insights and scaling to meet the demands of modern enterprise workloads.

Speaker 1:

Well, Yoni, bring us back to the internal example here. You talked about the two use cases we did move forward with, Atom AI and the RFP Assistant, which are going phenomenally for us. But I'm interested in that two-by-two chart. How did they compare to other use cases? Because we weren't just dealing with those two; as I understand it, we had dozens upon dozens of use cases, if not more, that we were also considering. So how did we make those weighted decisions?

Speaker 2:

Yeah, I mean, it's exactly how Kathleen laid it out. We had 82 use cases after all was said and done.

Speaker 2:

It took about four weeks in war rooms with different lines of business, different subject matter experts and technology, trying to make sure we were all aligned on what the value is. We were putting hypotheses out there, we were refining them with data, we were shifting things. I think at one point the RFP assistant was probably ranked number 64 on the list, but as we continued to refine it, we realized a bunch of things about some of the use cases above it and about the RFP assistant itself. Atom was always top of the list. It was the thing you need to do to get started, and we knew it was going to branch off and be able to be used for a number of different use cases in its own right. But the RFP assistant started pretty low and bubbled up after all this refining. So that's how we went about it.

Speaker 2:

Just to fast-forward to today, we still have this value complexity matrix. It's evolved immensely over time. We're constantly taking in use cases, removing some that we don't think are good anymore, some we thought we needed to build custom but now have off-the-shelf products available, so no reason to reinvent the wheel. So there's a lot that happens with this, but having gotten started on the right foot, it's now a place for everyone to react to and align against, and it's the place we go when we're trying to think about what we're doing next and where we're getting our money's worth.

Speaker 3:

Yeah, it's a great backlog now, right? And this starts to tease toward, if Yoni and I are invited back, a part two podcast around how you sustain the momentum and keep the flywheel going, how you continue cranking through that backlog of use cases.

Speaker 1:

Yeah, that's another conversation. I will definitely, or we will definitely, invite you back for a part two here. But I'm curious how you balance the need to go for some more moonshot projects that might be harder to accomplish versus the low-hanging fruit that might build momentum toward those moonshot projects. So are you looking for just the quick, easy wins, or are you looking for something that you can sink your teeth into?

Speaker 3:

You just teed up step three of our process perfectly there, so thank you for that. We recommend taking the quote-unquote 80-20 approach to actually selecting your use cases, and this is not about blindly selecting everything that's in that bright green quadrant. If you break your matrix up into four squares, essentially, then your high value, low complexity is going to be your low-hanging fruit, obviously, and you clearly want to get started with some of those. That's where you're driving value without as much effort. But maybe your high value, high complexity will include some of your moonshot, as you say, use cases, and you probably want to go for some of those too. And so this is where you go from a math algorithm to a conversation, leveraging the 80-20 approach and strategically picking the right use cases to go after, once you've organized them in a way that a leadership team can wrap their heads around and make a decision about what to move forward with and how to get started.
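
To make the quadrant logic concrete, here is a small sketch of how that 80-20 selection might look once the use cases carry value and complexity estimates. The thresholds and backlog entries are illustrative assumptions, and in practice the final cut is a leadership conversation, not an algorithm.

```python
def quadrant(value_usd, complexity, value_cut=500_000, complexity_cut=3.0):
    """Bucket a use case into one of the four squares of the value-complexity matrix."""
    if value_usd >= value_cut and complexity < complexity_cut:
        return "low-hanging fruit"   # high value, low complexity
    if value_usd >= value_cut:
        return "moonshot"            # high value, high complexity
    if complexity < complexity_cut:
        return "quick but marginal"  # low value, low complexity
    return "deprioritize"            # low value, high complexity

# Illustrative backlog: (name, estimated annual value in USD, complexity score 1-5)
backlog = [
    ("RFP first-draft assistant", 800_000, 2.5),
    ("Internal knowledge chatbot", 1_200_000, 2.8),
    ("Predictive maintenance", 3_000_000, 4.5),
    ("Meeting-notes summarizer", 150_000, 1.5),
]

# 80-20 style selection: start with the low-hanging fruit, then add a moonshot or two
# if the organization's risk appetite and leadership patience allow it.
fruit = [name for name, v, c in backlog if quadrant(v, c) == "low-hanging fruit"]
moonshots = sorted((item for item in backlog if quadrant(item[1], item[2]) == "moonshot"),
                   key=lambda item: item[1], reverse=True)
selected = fruit + [name for name, _, _ in moonshots[:1]]

print("Start with:", selected)
```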

Speaker 2:

Yeah, and I'll just use yet another consulting phrase here: it depends, right? It depends on the appetite of your company and the culture of your company.

Speaker 2:

There are some companies that are very risk-averse and some that are risk-tolerant, and so it depends on the leadership and how much momentum you actually need to build to create that flywheel.

Speaker 2:

There are some organizations that just need a little bit of momentum and they can get going, so you can sustain taking on a really complex use case, because the leaders and the board have some patience. And there are others where it's like, no, you need to keep getting those quick wins and proving yourself. But again, having this 80-20 mindset, where you're spending 80% of your time on 20% of the use cases and only focusing on those that matter to your organization, and now with some categorization into low-hanging fruit or moonshots, however you want to label the four quadrants, gives you direction. It gives you the ability to make meaningful presentations where everyone knows what you're talking about, because, bringing it back to the original analogy of the mountain, we've circled everyone back to the same spot on the mountain and we're all marching up together.

Speaker 3:

Yeah, and, importantly, you're marching up because you've rooted all of your use cases in one of our common driver tree starting points of revenue generation or cost savings or productivity. And I do think that, as we have launched these, they're not POCs anymore, they're production capabilities, with the RFP assistant, Atom and more, as Yoni was discussing, we take a lot of effort to track the value and continue reporting on the value created from these capabilities, because that's important to demonstrating that AI is transforming how we're working and to continue earning the right to pursue additional use cases.

Speaker 2:

Yeah, and you know, you can't change what you don't measure. Sometimes the use cases feel good, they feel like they're going right, but then you measure things and it's actually just okay. Or sometimes things feel rocky and they're not really moving along, but when you take a step back and look at how things have changed, you're actually making a pretty meaningful, impactful difference. I think the other thing to mention about the use-case-driven approach is that we often find with all of our clients that use cases beget use cases. Sometimes you didn't even think of these use cases when you were getting started, but because you did one use case that was related, you uncovered something else you would never have thought of before, and that's happened, I would say, 99% of the time on client engagements. It's happening within Worldwide as well.

Speaker 1:

Yeah. Well, as the two of you lay this out, it feels and seems rather simple, and it is pretty straightforward. But I'm curious, as we head out into the real world, where things are never as easy as they seem, where are we finding stumbling blocks with clients? Is it with bringing the right people to the table? Is it aligning everybody behind one or two use cases, because that feels like several landmines in and of itself? Or is it just openness to revisiting things? Yoni, take your example of the RFP assistant, where we thought it was going to save money, and it ended up saving money but also driving growth. So where do we see stumbling blocks? Or, better put, what should listeners try to avoid here, so that they can make it as simple as you've laid out?

Speaker 2:

Yeah, I think I'll just speak freely here about some things. Everything we talked about today, just getting started on the right foot, taking a little time to slow down in order to speed up, that's a big stumbling block. That's why we're talking here today. And so having a structured approach to defining, prioritizing and then ultimately selecting your use cases, like we're recommending here, is a big one, and that's one we've seen over and over and over again. But some other things that come to mind are getting the right leadership to the table.

Speaker 2:

AI is a big buzzword. There's a lot of hype right now, and it's easy to say you're an AI-first company or something along those lines. But if you are a leader and you're not actually driving those meetings or asking the tough questions, then I think you're doing a disservice to your organization. I mean, we still, to this day, have weekly meetings all the way up to our CEO, where he is driving us hard on our internal use cases and how they're translating into value. And I'm not saying every company needs to have their CEO driving that, though it would be great. But leadership does need to actually take action and be engaged. That's a big stumbling block that we're seeing as well.

Speaker 3:

Yeah, and I also think that if you look at a traditional starting-point engagement that we take with clients, Brian, we start with the workshop to bring the stakeholders to the table, to do the use case generation and ideation from the driver tree, right? We do the prioritization matrix. Not everybody is comfortable taking that hypothesis-based approach and doing a value versus complexity scoring. That can be challenging, and so as consultants we can oftentimes help overcome that, get a placement, get alignment on the right use cases and then actually get the POCs started. That's a typical starting point where we partner with clients, because that whole process can be a little bit overwhelming when you've got shadow AI shotgunning in the background. So oftentimes this will come from a top-down approach to get alignment, which I think is a really smart thing to do before the shadow AI takes off.

Speaker 1:

Well, Yoni, I loved what you said earlier: use cases tend to beget more use cases. What else can the value complexity matrix tell us? Can it tell a company where to invest in talent to help support some of these things? Can it help identify business models? What can we learn other than just understanding what to go after first?

Speaker 2:

Well, I think you hit on the talent thing for sure, because what you're going after is going to necessitate the right people, right? So I think it's going to allow you to set your hiring pipeline in the right direction and decide where you're going to invest in your people. The same goes for getting the right technology people in the room with those use cases to understand what it will take from a technical perspective to get them going, and for the data, the chief data officer and his or her organization, and what data needs to be in place. We're actually working with a client right now where we are doing this use case identification and prioritization, but we've been in there for some time helping with their data platform and their data governance capabilities, migrating them from an on-premises platform to a cloud-based platform. Which data we migrate first, which data domain we do data governance on first and second, that can now be use case driven, right, because we have this use case identification matrix.

Speaker 2:

It's got all these different use cases on there, and that's going to guide how we govern our data and where we start that process first, because it's a big effort to get everyone aligned on how we're going to run a data governance program, and a large, large effort to figure out how you're going to prioritize your waves of migrating data into the cloud. But now you've got this guiding light of use cases, driven by, going all the way back to the beginning, the driver tree, and then you have the value and the complexity of each of them. So now you have direction on how you migrate data, direction on how you govern data, direction on how you hire and then, ultimately, on what you put into production to get value out. Anything I missed there, Kathleen?

Speaker 3:

Well, yeah, no, that's right, and I was also just thinking a little bit about Brian's question, and I want to take it back to the first step, which is the driver tree ideation. It's a practical tool, not only because it links your use cases to business value, but because it forces you to think about things in a really comprehensive way. And so I do feel like we often get unlocks in those discussions, where business unit leaders may not be thinking about a use case until we put the framework in front of them, and they say, okay, gosh. I'm thinking about a mining client we've been working with; the revenue growth side is harder in mining because the price of the materials is very much controlled by the market.

Speaker 3:

Right, but costs, okay, let's break out what the five major cost components of our organization are, and then let's follow the line on that. If maintenance of our equipment is one of our biggest cost drivers, what could we be doing to decrease the cost of maintaining our equipment? When you ask questions like that, it leads to ideas that you maybe wouldn't pull out of a hat if you just asked somebody, hey, what ideas do you have for Gen AI in your business unit? It forces the ideation in a way that covers the entire comprehensive set of buckets linked to the top value drivers. So I think the driver tree approach really lends itself to that.

Speaker 2:

And I think the hidden value, the hidden gem, you get out of doing this is that it's a very human process. You're bringing people together to ideate on things they probably wouldn't in their day-to-day, and just going through the motions of prioritizing things, of thinking about what you're truly trying to get at from a value perspective, brings people together and often unlocks ideas they would never have thought of, or gets them excited and feeling ownership around getting these use cases off the ground. Whereas if you're just told to go make this an AI use case and make it happen, it doesn't have that ownership feeling. It doesn't really have people rallying around it.

Speaker 2:

So there's this intangible thing that happens when you actually do this work that, again, is very human. So that's my pitch to also not turn this into, you know, Consultant GPT and make it automated. You definitely need people rallying people. Yeah, you're sending me chills, Yoni. I know, sorry.

Speaker 1:

Fantastic. Well, we're running short on time, so thank you to the two of you for taking time out of your busy schedules and your busy days analyzing, identifying and plotting those use cases, not only for us here at WWT but on behalf of all of our clients. So thanks again for joining this episode, and we'll definitely invite you back for a part two.

Speaker 3:

Thanks, Thanks for having us.

Speaker 1:

Okay, as we wrap up, a few clear takeaways emerge from today's conversation that apply whether you're just beginning your AI journey or looking to scale with more structure. First, AI success starts well before any model is deployed. The most effective organizations invest in upstream alignment, identifying use cases tied directly to business value, not just experimentation. Second, prioritization is critical. A structured framework like the value complexity matrix helps leaders make trade-offs based on ROI and feasibility, not just enthusiasm and executive pressure. And third, broad engagement matters. Cross-functional input ensures you're not just solving in silos and that your initiatives have both traction and longevity.

Speaker 1:

The bottom line is AI becomes most effective when it's treated as a business initiative and not just a technical one. And what sets high-performing organizations apart isn't just their tools, it's their process. If you liked this episode of the AI Proving Ground Podcast, please consider sharing it with friends and giving us a rating, and don't forget to subscribe on your favorite podcast platform or watch us on WWT.com. This episode of the AI Proving Ground Podcast was co-produced by Naz Baker, Cara Kuhn, Mallory Schaffran and Stephanie Hammond. Our audio and video engineer is John Knobloch. My name is Brian Felt. We'll see you next time.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

WWT Research & Insights

World Wide Technology

WWT Partner Spotlight

World Wide Technology

WWT Experts

World Wide Technology

Meet the Chief

World Wide Technology