The Catalyst by Softchoice

The AI Failure Episode: Why Asking the Wrong Question Is to Blame

Softchoice Season 7 Episode 4

When mid-market IT leaders call about AI, most ask the wrong question. Instead of “What problem are we solving?” they jump straight to “Which LLM should we use?” — a mindset that helps explain why 95% of AI pilots fail to deliver measurable business returns.

In this episode, two of Softchoice’s leading AI experts — Sean Larkin, AI Principal Architect, and Ron Espinosa, Director, Google Cloud Category — break down why technology-first thinking derails AI initiatives before they start. From market validation to crawl-walk-run design, they reveal how organizations can escape the hype cycle and build AI solutions that actually work.

Key takeaways:

  • Why “What LLM should we use?” is the wrong first question
  • How market validation prevents multimillion-dollar failures
  • Why data hygiene is still the most overlooked risk
  • What crawl-walk-run actually means in AI deployments
  • How executive alignment eliminates costly blind spots between IT and business teams

Guests

  • Sean Larkin, AI Principal Architect, Softchoice
  • Ron Espinosa, Director, Google Cloud Category, Softchoice

This episode is packed with field stories, hard truths, and practical frameworks mid-market organizations can apply immediately.


---

Take the first step in your AI journey

Book your Executive Alignment Session today: https://www.softchoice.com/solutions/ai-and-data/executive-alignment-session

The Catalyst by Softchoice is the podcast dedicated to exploring the intersection of humans and technology.

We were sitting in a meeting over a year and a half ago, and the customer's team that was charged with doing an AI project had been told by the CEO, after he had a weekend away with some buddies, other CEOs, that they needed to do something in AI. They needed to do it in eight weeks, and he didn't much care if it worked or what it did. He just needed to be able to say, we did something in AI.

If you are wondering why 95 percent of AI projects fail to deliver any business value, that story right there, that's exhibit A. From Softchoice, a worldwide technology company, this is The Catalyst. I'm Heather Haskin. This season we're doing things a bit differently. We're making audio documentaries, real stories from the front lines of IT, exploring the challenges of small teams chasing big dreams. Today's episode: what happens when countless organizations decide they need to do something in AI, but nobody stops to ask why? We're calling it the AI Failure Episode.

Act one: the wrong question. Sean Larkin spends his days on the phone with mid-market executives who all have the same problem. Hi, my name is Sean Larkin. I'm an AI principal architect, and when I'm asked, I usually describe it as a really fun role. My role here is to help demystify AI and make it more consumable for technology executives. Demystify, that's the nice way of saying it. The less nice way: most companies calling Sean don't actually know what they're asking for.

I'd say in the last year and a half or so, we've seen a lot of companies convey a fear of missing out, the FOMO, coupled with asking what they can do instead of asking what they should do with AI: what problems can we solve with AI, what new things can we create with AI? It sounds like a semantic difference, capability versus strategy, but that gap is where billions of dollars are currently disappearing. A tiny portion of them ask us to build their own large language model without really knowing that an effort like that normally takes millions of dollars, something like $26 million and six months of work, just to get the first version out. And then each subsequent point release costs about the same. So they're asking the wrong question. At this point, that small set, they don't really know what they don't know yet.

So here's where things get uncomfortable. Last year, MIT released a study that sent shockwaves through the industry. The headline finding: 95% of AI pilots deliver zero business return. Zero. Now, Sean will tell you, and he's right, that those numbers require some nuance. Not every quote-unquote failure is a catastrophic collapse. Some projects get shelved, some get re-scoped, some technically work but don't move the needle. But here's what Sean sees in that number. For me, what it points to, when we're in a boardroom setting, is that initial 18-month period since November of 2022, since people generally discovered ChatGPT and that Cambrian explosion in AI. We're seeing the result of that experimentation, and it shows up as shadow AI, which is a parallel to the shadow IT that we've known for decades. And then there's a greater potential for security problems and data loss when people start to use shadow AI. So it becomes a concern organizationally: am I leaking data out? Just like shadow IT before it, employees downloading tools, spinning up experiments, bypassing procurement and security entirely.
Because if the CEO says do something in AI in eight weeks, well, you do something. Sean also sees a pattern he's watched repeat itself, technology cycle after technology cycle. About 50% of AI POCs never make it past that stage. Sixty to 80% of projects in general, not specific to AI, and this includes startups and things that big companies and little companies do, fail to meet market expectations even when competently executed by experienced people with a prior track record. And there's so much evidence for this. You look at the Google graveyard, the Amazon ambulance, the Microsoft morgue. These are real things, by the way, websites cataloging the dozens, sometimes hundreds of products these tech giants have launched and then quietly killed: Google Wave, the Amazon Fire Phone, the Microsoft Zune, the list goes on. And if companies with unlimited resources and world-class engineering talent can't avoid this pattern, what chance does a mid-market company with three overworked IT people have?

Sean has a favorite example, an object lesson in what happens when you skip validation entirely. So we had talked about the AI Pin story, and it's actually one I've been following closely. When it was first announced, I thought, this is fantastic, what a great idea. They thought it was going to take off. The AI Pin is, or was, a wearable device that you clipped to your shirt. The idea was that it would use AI to replace your phone: take a photo, answer questions, make calls, all through voice commands and gestures. The core behavior that they wanted you to adopt didn't really land. So this whole notion of having a pin on your clothing, the feedback that I read was around, well, what if it falls off? What if I forget it and it goes in the laundry? It's too small, right? Am I really going to hold my hand out, and can other people see over my shoulder the password that I'm typing into something? The reviews were brutal, right? The product tried to do too much, and it didn't really do any of it.

Tech YouTuber Marques Brownlee, who has millions of subscribers, didn't hold back. This thing is bad at almost everything it does, basically all the time. What's the traffic to the Empire State Building from here? Finding directions: use the voice command feature of AI Pin to ask for traffic information to the Empire State Building, and it will provide you with the details you need. I did. The problem: nobody wanted it. Not at that price, not with a subscription, not when their phone already did most of those things perfectly well. They missed the crucial initial step. Early validation is what happens at the crawl phase. Low cost, low risk. Nobody gets fired if it goes wrong, and a no answer is okay, right? Because you're testing the market validity of a product, of a solution, of an idea before you set out to spend $230 million to do it. A bit of work upfront, validate the problem before you build the solution. It's not sexy advice, but then again, neither is burning through that kind of money.

Ron Espinosa will tell you this isn't actually new. Ron Espinosa, Director of the Google Cloud Category, which basically means I direct strategy for our Google Cloud business, with a concentrated focus on data and AI. Ron spends his days in boardrooms and on Teams calls, helping companies figure out what they should do with AI. Not what they can do, what they should do. He's watched this movie before. I think we have to take a step back from AI and look at technology projects in general and how many fail.
The spotlight is really on AI because it's the big shiny object right now, with executives and boards really looking into it. And people are obviously spending money on it, so it's become a hot button. But in fact, many technology projects fail because they don't have a why behind them. There's no business outcome driving them. It's just something that someone decided they wanted to do. Like, say, a CEO coming back from a weekend retreat with his buddies. And so I wonder how many times something like that happened and fed into the 95% of failures, where there was no real rigor, no delineation of what the outcome was supposed to be, no allocation of budget. Just go figure this out and make something happen. Make magic happen.

That phrase, make magic happen. It sounds inspiring, doesn't it? But what it actually means is: I don't know what I want, I don't know how to get there, and I don't know how to measure success, but I need to be able to tell the board we're doing AI. And that right there, that's the crisis. Ron's been through this before, not with AI specifically, but with every other technology wave that's swept through the industry over the past two decades. Without dating myself too much, or at the risk of dating myself completely, I've been through the wave of interactive web and mobile evolution, SMS marketing, digital marketing. We've seen this over and over again. What we're seeing now, I think, is that AI offers an incredible amount of horsepower, so the failures are spectacular. You can burn up a lot of money doing a lot of things that don't really yield anything.

Mobile apps? If your app flopped, you wasted maybe a few months of developer time, maybe a hundred grand. AI? You can burn through millions before you realize you're solving the wrong problem, or solving a problem nobody actually has, or solving a problem that doesn't move the business needle even when you succeed. The stakes are higher, the technology is more complex, the hype is more intense. There are two different sides of the spectrum: understanding what the technology is, what it can do, and what it takes to do it, versus the magic, if you will, the Amazon effect of just one click and I can buy it. That mentality, I think, has really exacerbated the problem statement, if you will. Executives see Copilot answer a question in two seconds and think, why can't my IT team solve our inventory management problem that easily? Because those are not the same thing. Not even close.

Act two: the alignment gap. So let's say you're an IT leader who's figured out the first part. You understand that you need a why, you need business outcomes, you need to ask should we before you ask can we. Congratulations, you're already ahead of 95% of the market. But here's where things get really interesting. Even when you know you need alignment, getting everyone on the same page about what to prioritize is where most initiatives go to die. Ron sees this constantly, and he's developed a method for surfacing the disconnect before it kills the project. What's very interesting in how we do these executive alignment sessions is that we obviously pull executives into the room, but we also pull in the line-of-business owners and sometimes the foot soldiers. We pull them in and we say, hey, tell us about this. We were working on a CRM. This wasn't even necessarily an AI situation, but it was a first step toward getting into AI for a kitchen provider.
And they've got some big clients, or distribution chains, like Walmart that they have to comply with. And the executives thought that the system they had, although it was being phased out, was going to be instantly successful if they just went to Microsoft 365. They'd deploy it and nothing would be amiss. Famous last words, right? Because what the executives didn't know, what they couldn't know because they weren't in the trenches every day, was what the actual users had been doing for the past 30 years. And some of the folks that came in said, well, this is going to take a long time, because over the decades of doing this, we took the AS/400 and we really customized it. What that really meant was they came up with a ton of workarounds, and those workarounds had to be unwound. Then you see the employees, the foot soldiers, very sheepishly go, well, if you're asking, here's what actually happens. But it's incredibly rewarding when those aha moments come out, some eye-opening moments. And I have to tell you, I'm smirking a little bit because they're quite comical when they happen. Except it's not really comical when you are the one about to drop a million dollars on a system that won't work because nobody bothered to ask the people who actually use it how they use it.

This is the alignment gap, and it's not specific to AI, but AI makes it worse. Softchoice has developed a framework for addressing this. It's called an executive alignment session. The approach that we take to addressing this division between IT and line of business comes in an offering we call an executive alignment session, abbreviated EAS. As we started doing this, we started to brand it as EAS, and then we realized, hey, that spells easy. So hit the easy button. Put quite simply, it's a structure and a framework to build a repeatable process by which to prioritize AI use cases. Full stop. Get everyone in the same room. Figure out what you're actually trying to accomplish. Prioritize ruthlessly. Then, and this is the key part, teach them how to do it themselves going forward. Because if you don't build that muscle internally, you're going to be right back where you started in six months when the next shiny AI tool hits the market.

But Ron has noticed something about these sessions. They're exhausting. Not physically, emotionally. So it is rewarding when you see a customer come back and say, I was able to get the CIO, the CISO, and the chief marketing officer in a room. It's incredible how happy they are, actually, and quite amazed that they were able to get the mindshare of these folks, when in fact the executives get in the room and they're like, this is great. Instead of trying to shove something down my throat, you're actually asking me why I need to do it. And then, I gotta tell you, then the work kicks in. And it's exhausting, because if it's done correctly, we're pushing and poking and causing people to stretch muscles they may not have stretched in decades of their career. But just like any good workout, you typically feel pretty good when it's done.

This is the work that nobody wants to do. It's not coding, it's not deploying models, it's not even particularly technical. It's just hard. It's politically uncomfortable. It requires admitting that the way we've always done things might not be the way we should keep doing things. And in a lot of organizations, that's the conversation nobody wants to have.
But when it works, when you actually get that alignment, the results can be dramatic. Softchoice uses a three-part framework for thinking about ROI that helps organizations focus on what really matters. For me, this is the best part of the story, I think for our customers too. Softchoice has this acronym, ROI, which I discovered early on when I joined Softchoice and thought was fantastic. Our ROI stands for Reduce, Optimize, Innovate. And it's wonderful because it introduces the notion of bottom-line, middle-line, top-line discussions. Top line is all the revenue a company makes through sales, not through production. Middle line is all of the inventory you need to buy in order to manufacture the things you sell into the top line. The bottom line is all the opex, all the operational expense you use to turn the middle-line stuff into the top-line stuff. And so here's the secret sauce to AI. What we find is that when you apply efficiency and optimization to the middle line, you can get a return, usually 10% at the very minimum, right? Whereas on the top line, you get a multiplier, a 5x to 10x return on your investments.

That's the gap between companies that use AI to make their existing processes slightly more efficient and companies that use AI to do things they couldn't do before: new products, new services, new revenue streams. But you don't get there by accident. You don't get there by starting with which LLM should we use. You get there by starting with the business outcome and working backwards. These questions start to go wrong because they're not business questions. We're asking how many angels fit on the head of a pin, why Grok 4 is better in some arcane way than GPT-5. Conversations go wrong, in general, in my view, when we put technology ahead of business topics. Business should inform the technology, not the other way around, right? And the conversation path works better top down: from why is the company interested in doing something, to what is it that you can do, and eventually to how do we do it. So the technology conversation comes in third, right? That's something we try to impress upon our prospects and customers and the people we work with: start with why you want to do something in your business. What are the expected outcomes, the impact metrics, the executive sponsorship, the change agent that really wants to move something forward in the company? If you figure those four things out ahead of time, then you're in really good shape.

Business outcome first, technology second. Write that down. Put it on a sticky note. Tattoo it on your hand if you have to.

Act three: the path forward. Okay, so you're an IT leader. You're under pressure. Your CEO heard about AI at a conference and now wants results by quarter end. You know you're asking the wrong questions. You know you need alignment before you start building. But where do you actually begin? Softchoice has developed an approach that Sean and his colleagues use with customers. It's built on a principle that Ron says he borrowed from coaching youth sports. I think coined is probably too strong of a term. I was involved in the discussions, and I'm a big sports guy. I coach baseball, I've been a USA Football master trainer, so I'm around youth sports a lot. And one of my favorite things to say is that you have to start at the beginning. It sounds ridiculous, right? Of course you start at the beginning.
Where else would you start? But so many people with AI want to jump to the end and start there. I think it was either Stephen Covey or Franklin Covey who said, you know, begin with the end in mind. Yes, we always want to begin with the end in mind, but let's first understand truly what the end state is, the desired end state, and then let's go way, way back to the beginning and get started. And I think it was Josh Bennett who actually said, oh, you mean like we should crawl, then walk, then run? Like, yes, that's precisely it. We should crawl before we walk, before we run. Crawl, walk, run.

So how does the crawl-walk-run approach hit right at that 95% failure rate? The short answer is the AI Pin example, right? It's an excellent one. Investing a small amount upfront before initial product release, before spending $230 million, would have, in all likelihood, avoided failing to meet market expectations and closing shop in under 12 months. That's the net of it. In brief, you have to start with validating that the market exists, as we said before, before you set out to jump into the deep end of the pool. That's the core difference between a crawl-walk-run approach and that 95% failure rate: you have to start there. It sounds obvious when you say it that way, doesn't it? So obvious it barely seems worth stating. And yet. I've been fortunate enough to be the father of four children, and every one of them crawled. Then every one of them tried to run, and they quickly found out they needed to walk before they could run. And I think that's what we're seeing with AI in the marketplace right now. Remember that AI Pin? $230 million. That's what happens when you try to run before you walk.

But Ron says the framework isn't just a linear path. It's also a diagnostic tool. It could be a left-to-right consecutive path that we want to follow. It could also be the overarching guiding principle. So when a customer is in the midst of an AI deployment, we might say, well, tell us again about how you crawled. Let's roll this back. Why did you get started on this journey? When the proverbial baby that's about to crawl was being conceived, what was it conceived to do? What did you imagine would happen? Because if you skipped straight to run, then you're probably already failing. If you didn't do the foundational things, you don't necessarily know how to put one foot in front of the other. When Ron asks that question in boardrooms, you'd be surprised how often the answer is just silence.

Okay, but practically speaking, if you're just standing at the starting line, how do you crawl? Sean's answer is going to frustrate a lot of people, because it's not about AI at all. One of the key recommendations we always make is start with data, and I think that's prevalent throughout the industry; you'll hear that from most people. Not with ChatGPT versus Claude, not with should we use Google or Microsoft or Amazon, not with do we need our own large language model. Just: what data do you have, and is it any good? Because here's the thing nobody wants to hear: most mid-market companies' data is not good. It's not well organized, it's not well governed, and it's sitting in 17 different systems that don't talk to each other, accumulated over 15 years of mergers and acquisitions and temporary solutions that became permanent. And no AI in the world can fix that.
A lot of times we have to start with cleaning up the data and organizing it correctly. There are applications that need to be modernized. There are tons of things that have to happen in order for a true production AI experience to be realized. Sean has one more piece of advice, and this one runs counter to everything most IT organizations have been trained to do. The bottom line, though, is that there's a huge difference between testing a hypothesis with a rapid prototype in the crawl phase versus jumping straight into a POC in production. And the reason I bring up POCs, it's not a term I use very often. I tend to avoid it, quite frankly. I caution people against POCs because, by definition, a POC is a proof of concept, and fundamentally you're proving a concept that, in many cases, has been proven time and time again. You can prove any concept given enough time and money, and who wants to go that far with time and money? Who wants to throw bags of money on a fire, as Josh says? Or my friend in Texas who says you need enough money to burn a wet mule, right? It makes me think of the AI Pin. This is why you should avoid POCs. I saw one recently that was a scalable POC, and there's no such thing, because when you're in these crawl and walk steps before production, you are purposely designing not to scale.

You are purposely designing not to scale. Let that sink in for a second. Everything in IT over the past decade has been about scale. Build once, deploy everywhere. Design for scale from day one. Don't paint yourself into a corner. And Sean is saying, forget all that. In the crawl phase, you are not trying to scale. You are trying to learn whether the thing is worth building at all. It's the opposite of how most IT organizations think, but it's also how you avoid burning $230 million on something nobody wants.

Ron has seen what happens when companies actually follow this approach. I want to make it clear that the executive alignment session isn't something that happens once. We have to revisit, and we have to go back and make sure that we're checking: is the AI doing what you wanted it to do? Is the data visualization doing what you wanted it to do? Well, yes, but what we found is that if we could reduce teachers' workload, it would be a great benefit to us. Could we use AI to grade essays, for instance? Well, yes. Yes we can. And so we set about building that. And so you start to see tangible results within a workforce, for instance, where the emotional toll drops tremendously. We also had a customer where we took a 45-minute manual document review process down to 45 seconds. That allowed them to process many more transactions. I don't know exactly how many, but they were doing 10,000 a month at the time, and they can do it without increasing their staff. That was a first-level win: 45 minutes to 45 seconds, a 60x improvement. They got there because they started with the problem.

So here's what we know: 95% of AI projects deliver zero business return. That number is shocking, but it's also fixable. The companies in that 5% that succeed, they're not smarter, they don't have better technology, and they're not working with some secret AI that the rest of us don't have access to. They're just asking better questions. Figure out what you're trying to accomplish. Then, and only then, start thinking about which AI tools might help you get there. It's not magic, it's not sexy.
It's just work. But it's the work that separates the 5% from the 95%. If you are an IT leader listening to this and thinking, okay, I get it, we need alignment, we need to ask better questions, we need to start with business outcomes, but I don't know how to get my executives in the same room, I don't know how to prioritize when everything feels urgent, I don't know where to begin: that's exactly what Softchoice's executive alignment session is designed to help you figure out. It's not a sales pitch. It's a facilitated workshop that brings your business and technical leaders together to unify around a shared vision, prioritize the AI opportunities that will actually move the needle for your business, and create a roadmap you can confidently execute. Because here's the thing: you don't have to figure this out alone, and you definitely don't have to be part of that 95%. If you want to learn more about how an executive alignment session could help your organization ask better questions and get better results, visit softchoice.com/s. That's softchoice.com/s.

The Catalyst was reported and produced by Tobin Dalrimple and the team at Pilgrim. Content editing by Ryan Clark, with support from Philippe Dimas, Joseph Byer, and the marketing team at Softchoice, a worldwide technology company. Special thanks to Sean Larkin and Ron Espinosa for sharing their expertise and insight. If you enjoyed this show, be sure to check out our affiliate podcast, AI Proving Grounds. Hear from IT leaders and AI innovators on how to navigate the future of enterprise AI, cybersecurity, and digital transformation. New episodes weekly on wwt.com or your favorite streaming platform. If you found this episode valuable, please subscribe and leave us a review, or even better, share it with someone you fear is asking the wrong questions about AI. I'm Heather Haskin. Thanks for listening.