Arguing Agile
We're arguing about agile so that you don't have to!
We seek to prepare you to deal with real-life business agility challenges by demonstrating both sides of the real arguments you will encounter in your work and career.
Arguing Agile is hosted by seasoned professionals who draw on experience from their careers, share stories, and offer advice to fellow professionals. We do these things while maintaining a position unbiased by any financial interest.
AA250 - CEOs Admit There's No AI ROI (But Keep Buying It Anyway)
AI had little to no impact on productivity in the past 3 years.
That's according to a National Bureau of Economic Research survey of 6,000 CEOs and executives across the US, UK, Germany, and Australia, released in February 2026.
Instead of throwing fruit at us, watch or listen as Enterprise Business Agility Consultant Om Patel and Product Manager Brian Orlando discuss why it won't be the loudest Silicon Valley CEO helping us make the most out of these new tools and technology, but boring old process improvement and org design!
Yes, the current research-backed consensus (we review several such articles and studies) is that AI adoption doesn't lead to productivity gains, but executives are going to keep buying it anyway - they've got "positive vibes." 🤡
So that leaves us to figure it out... and figure it out is exactly what we do, by discussing:
- Why Executives Won't Stop The Adoption
- When AI Becomes the Unquestionable HiPPO
- Are We in the J-Curve, or a New Baseline
- Deming & Goldratt 101 (Automating a Broken Process)
That's right! Tired of hype and want to understand WHY there are no productivity increases? This is the episode for you!
#ArguingAgile #AIParadox #Leadership
REFERENCES
Fortune Article (Feb 2026), Financial Times S&P 500 Analysis (Sept 2025), METR Study (Jul 2025), Workday Study (Nov 2025), The Goal by Eliyahu Goldratt, Out of the Crisis by W. Edwards Deming
LINKS
YouTube: https://www.youtube.com/@arguingagile
Spotify: https://open.spotify.com/show/362QvYORmtZRKAeTAE57v3
Apple: https://podcasts.apple.com/us/podcast/agile-podcast/id1568557596
INTRO MUSIC
Toronto Is My Beat
By Whitewolf (Source: https://ccmixter.org/files/whitewolf225/60181)
CC BY 4.0 DEED (https://creativecommons.org/licenses/by/4.0/deed.en)
You know, Om, experienced developers, people who know AI tools and have known how to code for at least a few years: at least according to one of the studies we're about to review, they were 19% slower when they were measured using AI tools versus when they weren't. And that's not even the most interesting part of that study. The more interesting part: those same developers self-reported that they were actually 20% faster when they were using the AI tools. That's not a measurement, that's what they were self-reporting; they felt faster using the tools, even though the study says they were actually slower when measured. That's not a productivity tool at that point; that's working with two stiff drinks in you. Yeah, definitely. Perception is not reality here, it seems. Also, electricity made factories less productive for 20 years before they figured out that they had to retool the factory floor, so we've got to be careful not to confuse a bad rollout with a bad technology. Yes, but... you see, I'm going to say "yes and," but it's actually a "yes but," because I see what you're doing. I see what you're doing: you're disagreeing with me. Right? Thousands of CEOs, thousands of CEOs admitted, in another Fortune survey that we're going to talk about today, that AI had zero measurable impact. So the Solow paradox is back. It's back in full force, and it brought all its receipts. Arguing Agile! If this is your first time, welcome to the podcast. I'm your host, product manager Brian Orlando, and this is my co-host, enterprise business agility consultant and the Fresh Prince of Process: welcome back, Om Patel. Is that a promotion from principality? I don't know, let's continue. By the end of this episode you'll be able to call out the difference between AI adoption theater and actual AI value in your company. By the end of this episode you'll have hard data from peer-reviewed studies and CEO surveys to push
back when leadership says, "Listen, I don't need adjustment, just use AI, okay? Everyone's using AI by the end of this. Just do what I said. Just disagree and commit." Oh, it might have been a podcast. I knew it: there might have been a podcast previously on that topic. Alright, and the last thing you'll walk away with is a framework to figure out if AI, or really any effort being hoisted on you, is actually helping your team, or just making everyone busier, or making them feel more busy, or giving them more busy work. Or all of the above. Or all of the above, dang. So let's start with the Solow paradox. Solow, as opposed to "so high"? Oh, I thought you were going to say Han Solo; I thought we were going to make a Star Wars reference. I almost did, but darn, I didn't want to lose our Trekkie friends. Alright, that's true. The Solow paradox: it's back, and it brought receipts. That's our first category. In 1987, economist Robert Solow said, "You can see the computer age everywhere except in the productivity statistics." So now, 40-ish years later, Fortune published an article where thousands of CEOs admit that AI had no measurable impact on productivity or employment. So the question here is: is history repeating itself, or are we just not learning lessons the first time? That's what I'm asking. That's a great question. Both are great questions. This is a brand new study that just came out, fresh off the press. Do you still say "press" in the digital world?
It's not paywalled, the Fortune article, that is, but the Financial Times article it came from, September 2025, is paywalled; link in the show notes. Alright, here: this is from February 17th, 2026, and the article title on Fortune is "Thousands of CEOs just admitted AI has no impact on employment or productivity, and it has economists resurrecting a paradox from 40 years ago." It's got a picture of Nobel laureate and economist Robert Solow. An economist, by the way: everyone in the Fortune article is basically an unemployed person, just wanted to point that out. Oh boy, in case you thought scrum masters did nothing all day, let me introduce you to economists. Economists and weather forecasters. So it says: following the advent of the transistors, microprocessors, integrated circuits, and memory chips of the 60s, economists and companies expected these newfangled technologies to disrupt workplaces and result in a surge of productivity. Instead, productivity growth slowed, dropping from 2.9% between 1948 and 1973 to 1.1% after 1973. That's a well-documented slump, but I understand there are some leaps in here, where it's "ooh, transistors, microprocessors, integrated circuits, memory chips, and then a slump." There was a lot more going on in the world. Yeah. And "newfangled": I like that they added the word newfangled. Interesting concept. I don't know exactly how strongly I'm going to make my case off of this, because I don't think it's great, but I want to flip over to the Financial Times article that they cited in this section. Okay. And I do need to talk about this: the Financial Times used AI tools to identify mentions of the technology in SEC filings and earnings transcripts. They went over the companies of the S&P 500, and they found that 374 of the S&P 500 mentioned AI
on their earnings calls in the past 12 months. Now, this article came out in September 2025, I think, so the research covers roughly mid-2024 to late 2025, about a year or so. That was a good time period to be looking; I mean, Claude 3.5 Sonnet was out by then. Anyway: 87% of the calls were logged as wholly positive about the technology, with no concerns expressed. "Companies tend to see AI as a risk because they're not used to having systems or processes which they can't rely on 100%," said an AI governance expert and author in this article. "I find it staggering." The MIT research is in that paragraph you're highlighting; we have that study as well, by the way. It's the "95% of gen AI pilots fail" one. It says: "When we spoke to executives, they would often say the internal tool was very successful, but when we spoke to employees, we found zero usage." Telling, isn't it? The FT also sought to categorize the expected positive benefits of the technology: the anticipated benefits, such as increased productivity, were mostly vaguely stated and harder to categorize than the risks. Companies anticipated being able to optimize workflows through automation and hoped to achieve market differentiation through their use of AI. I mean, that sentence means nothing, right there. Some hoped to be able to use the technology to improve the personalization of their products. The filings do reveal that the companies able to give clear AI upsides include those that serve the rising AI-driven data center boom: energy companies, First Solar. Yeah, of course, of course; without that, nothing goes. So we read the Financial Times article that had the study that the Fortune article was based off of. There you go, that's what Fortune does: they just go look at other people's articles, and that's a great business
model, just rewrite them. Yeah, I can help you with that. Oh boy. Category number one is that the Solow paradox is back. The claim here: you come out with this great technology, and then there's going to be a slump for a while before there's a real rebound. That's what the first category is going to say. And the defense of these points, I'm going to rattle off a couple of them, because you've probably heard them a lot, especially from the AI folks: "oh, your ROI will show up eventually"; "these tools are still new"; "our businesses are still trying to figure out the best place to put them"; "every major technology has a lag period" (we already kind of talked about that as a reason); and then, of course, "companies just need better implementation." Yeah, you'll hear these a lot, but they're also pretty thin, really, because this guy Solow called it some time ago. "We're still early"? Are we, though? We're not that early; the study goes back a year and a bit, and we've accelerated a lot since then. "The ROI will show up eventually" is just a hope and a prayer. I think the evidence in here is kind of weak on metrics. I could have sworn there were some, but I read a lot of research papers today, so maybe I'm confusing financial analysts' takeaways with researchers' takeaways, which are very different. There weren't any analyst quotes, which is odd. Who can blame you for that?
Yeah, but the companies were very clear about the AI risks; it was super easy to categorize the risks. But the companies were very murky when it came to stating the benefits. Yeah, the benefits: they didn't give any numbers, even though 87% of them were wholly positive on AI. I mean, I'm also positive on AI, so it's difficult not to defend this one: hey, the tools are getting better; hey, it is early. Also, boy, there have been a lot of firings and layoffs attributed to AI, so it's not that early. It's not that early, and also, one of the data points we picked up from that paper was that 95% of AI initiatives fail. That stat reminds me of a story where casinos implemented AI robots instead of dealers, and the thinking was, of course, they don't take breaks, they don't have emotions, etc. However, most of the customers didn't like that; they rejected it, they wanted the human interaction. So eventually the casinos relented, pulled out those robots, and put in human croupiers again. So there are many reasons why these things fail. Obviously we're not going to go into those just now, or even in this podcast. But coming out and saying "the benefits are murky": there are probably two reasons. One is they really don't know, and the other is they're being cagey, because they don't want to be caught off guard and be wrong. Because if you're putting these stories into your forward-looking financial reports, basically, and you're wrong on any of them, it has bad repercussions in the markets, right? For your stock. So there's that too. Yeah, and it's also probably because, yeah, data centers: that's big money.
You know, all the AI providers are going in front of the government asking for billions of dollars to supplement their incomes, because they're not profitable, because it costs so much money to run AI. They have not figured out a way to run AI at a profitable level; the revenue doesn't cover the costs, basically. That's what I'm trying to say. You know, they have to be positive. There's a whole other podcast here about corporate happy-speak. There's a book I read a long time ago about corporate happy-speak, where no one is allowed to question the narrative or bring up evidence contrary to the positive opinion. And there's a little bit of that happening here with the 87%: 374 companies mentioning it, 87% of earnings calls wholly positive about AI, but no actual metrics about success cases. I think that's pretty much in their lane, to be honest. In these earnings calls they are always positive, and then they just have a little footnote saying, you know, past performance is no guarantee of future results. I think what's actually happening here is they are experimenting with it, they have not figured out where it goes and how to use it, and they've got FOMO; they just don't want to miss out on this phase. If it were being written off in a way where, "hey, we're spending this much money on R&D to figure out how to rework our work processes," that would be one thing. But as we know, oops, there's no money for reworking work processes, there's no money for anyone to help the development team figure out how to unblock blockers organizationally. There's no money for that. That's right, that's absolutely right. Oh boy. For 20 years, I guess: Siri, set a timer. We'll find out where
this goes. Like Solow, except the pace of change is much, much more furious now, so it's not 20 years anymore; it might be more like 4 or 5 years, you know. Yeah. This is the story that opened the podcast: when electricity was invented and they started bringing it to cities, companies could trade out their steam engines for electrical motors, but it wouldn't really increase their productivity. But if you fast-forward 20 years, now the whole factory has been redesigned around the electric power saw, or other electric machines, and then obviously, if you haven't done that, you're not going to compete. So where do we go with this category? I think we set a timer for 20 years and fall back. Alright folks, set a timer for 20 years. That means you need to put it on your calendar and come visit us again in 20 years. That's right. I think we call this category a draw; we haven't had a draw on the podcast in a while. Alright, so what do you think about this category? Do you think it should be a draw, or do you think one side has it? Let us know in the comments: has your company seen real, measurable productivity gains from AI, or do you just have happy vibes? Vibe in the comments. Yeah, vibe us in the comments; I like that. Sorry, I just realized there's no takeaway in this category, so we're going to go back to the takeaways. Before your next leadership meeting or AI initiative pitch, if you're lucky enough to have budget or a say in the budget meetings, ask: what specific workflow are we redesigning, and how will we measure it before and after? That's my question here. And my bullet points in the takeaways: "we're going to be more productive," which is usually just a hope and a prayer, that's usually what they'll say. Yeah. And "everyone's got to be using these tools so we won't be left behind," which is just FOMO. I mean, that's not
a plan, right? And then: you can't measure a process without a baseline. You can't measure a process without evidence. Yeah, if it's not in statistical control, you can't measure it, that's right. So there's a lot of Theory of Constraints happening in this podcast; I'm not going to name it, but it's happening under the hood, just know that it's here. There's a lot of Deming happening here. There are a lot of things that are not new happening in this podcast, and all the AI people will be like, "you don't know what you're talking about; you've been replaced by AI and sentenced to exile." On Gilligan's Island, I almost forgot. Send us to Hawaii! I got nothing else. And then, hopefully, along this process, with these questions, you can vet out the "everyone else is doing it." You can vet the FOMO, because if we're all just saying "we've got to have these AI tools because..." and FOMO is the only answer... oh yeah, good luck rationalizing the expense for all of this stuff if that's your only answer. Well, listen, the reason I want to call this category a draw is because I think it's just like every other feature where the executive walks in and says "we've got to have this right now," and you just do it to not start a fight. Right, yeah. I'll take my $100; I'll take my free $100 Claude Max subscription, thanks boss, now get out of my office. Oh boy. So again, that is the macro picture, and we're done with this one. The CEO can't prove it's working? Doesn't matter, because they're in charge. Just disagree and commit, go home, and keep that resume updated too, by the way. Real early in the podcast for that advice, I know, but in this scenario what happens is failure tends to be blamed downstream, and the CEO's going to be just fine, thank you. Alright, what about the people actually using the tools every day? It turns out the data doesn't prove these massive productivity gains are real. They may not be real, but at face value they're there, because we hear these
things, right? Are you saying perception is reality? No, I'm saying perception is perception. We're going to deep-dive into whether it's reality or not, and hopefully separate the wheat from the chaff. Alright, this is a really cool category; I think you're going to like this one. So here's something to douse the AI hype cycle you're talking about, Om, the AI hype perception cycle. A randomized controlled trial found that experienced open source developers with AI tool experience were 19% slower when using AI tools. But if you ask them to self-report on their speed, they will tell you that they were actually 20% faster. That's a nearly 40-point perception gap between how fast people feel they are and reality when you measure it. What happens when an entire team thinks they're winning while they're actually losing? Well, we're going to talk about Campbell's Law to start with, which is: hey, our old friend velocity is back on the podcast, because that's basically what we're talking about now. So clearly perception is not reality, and it's amazing to discover that the teams are saying they're 20% faster when in reality they're 19% slower. That roughly 40-point variance: what gives? So there was a METR study, July 2025, called "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity." Let's see: Model Evaluation and Threat Research, METR. You like that acronym? That's a great acronym, right? I know, it's almost as if they came up with the acronym first and then backed their way into it. Or, never mind, that's the secret to all acronyms. That's right. By Joel Becker, Nate Rush, Beth Barnes, David Rein. Rein? Ryan? Probably Rein.
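The 19%-slower versus 20%-faster numbers imply that roughly 40-point perception gap. Here's a minimal sketch of that arithmetic; the percentages are the study's reported figures, but the helper function and the 10-hour task are our own illustration:

```python
def speed_change_pct(baseline_hours: float, actual_hours: float) -> float:
    """Percent speed change vs. baseline: positive = faster, negative = slower."""
    return (baseline_hours - actual_hours) / baseline_hours * 100

# Hypothetical 10-hour task, plugging in the METR study's reported figures:
baseline = 10.0
measured_with_ai = 11.9   # observed outcome: 19% slower
felt_with_ai = 8.0        # self-reported feeling: 20% faster

measured = speed_change_pct(baseline, measured_with_ai)   # -19.0
perceived = speed_change_pct(baseline, felt_with_ai)      # +20.0
perception_gap = perceived - measured                     # ~39 percentage points

print(f"measured: {measured:+.0f}%, perceived: {perceived:+.0f}%, "
      f"gap: {perception_gap:.0f} points")
```

The gap is measured in percentage points (a self-report of +20 against a measurement of -19), which is why the hosts round it to "nearly 40."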
Anyway, so this was a study, a randomized controlled trial of AI tools, understanding how AI tools at the February-through-June-2025 frontier affected the productivity of experienced open source developers. 16 developers with moderate AI experience completed 246 tasks in mature projects on which they had an average of five years of prior experience. So, first of all, the sample size is pretty small; just want to throw that out there before anyone pillories me. They're using Cursor, and they're using Claude, 3.5 and 3.7 Sonnet, I'm assuming. Before starting the tasks, the developers forecast that AI would speed them up by 24%; after the study, they estimated that their actual time was sped up by 20%; and the researchers who measured their productivity found that they were actually 19% slower. Amazing. So that first number, 24%, is just a SWAG; it's a forecast, yeah, which is fine. It turns out their forecast wasn't that far off from their own perceptions afterward: they said they would be 24% faster and they felt 20% faster, so they were not far off from their own perceptions. But the reality was on the dark side of the moon, in another universe. I was going to say: their perceptions were completely wrong, not even close to being right. One more thing I want to show you, which I think is really funny, because the slowdown also contradicts predictions from, quote, "experts" in economics. Hang on, hang on: I believe the people watching this can see the chart. So there's a chart; zero is obviously zero, right, right on the money with your prediction, and the bottom of the chart shows the occupations, or I guess the fields. The economics experts' forecast was just short of 40% faster. They said, oh, you'll be just
short of 40% faster, so maybe a full third faster with AI tools. Those are economics experts, which again, remember, is just code for unemployed people. And then the machine learning experts were very close to the same number. You know what machine learning experts are: they're employed, just people with math degrees who don't know how to code. They learned to program in Python but don't know how to catch bad variables, so when you run their applications and you put in an integer where a decimal should go, or vice versa, everything just blows up. That's a machine learning expert, by the way. I've got my QA engineer hat on and I'm making fun of everybody today. And then the developer forecasts during the study (these are all developers, by the way): you see that one is a full 10% down from the machine learning folks, if not a little more, in the 20-ish range; I think we said it was 24, right? And then the developer estimates after the study were down to 20. And then you see the red on the far right at the bottom: the observed result. Yes, the reality. Which is hilarious, because the economists over here, all the people making your junk food through the drive-thru and serving it to your CEOs out there, because all they read is Forbes and HBR and whatever, those are the people writing for the audience you work for, by the way. That stat means absolutely nothing to me, because these people have never, like you said, never done the work. Well, listen, it's like farm-to-table, except it's from the dumpster straight into the eyes and ears of your CEO. That's where the economics experts' forecast goes: into the media that's crafted for CEOs to pay attention to, and they're reading that, and that's where they're getting the FOMO from. But you see, those folks are up, and I
would argue, 60% off of reality, like, connected to absolutely nothing. That was the point, yeah, connected to nothing. This sort of reminds me of some of the Gartner studies that are out there: you know, if it's in a 2x2, it must be true. I mean, I love a 2x2; everyone loves a 2x2. Let's just skim the rest of this. Direct impact: 246 tasks, they took 2 hours each on average, and the open source repos they were working on had 23,000 stars on average. Big, mature code bases is what they're trying to tell you. And they're using Cursor, and they're using Claude Sonnet 3.7; they're using good models. It's not like back in the day with Haiku 3 or something, where I'd ask it to do something and it would just guzzle a whole bottle of glue while telling me confidently that it absolutely understood what I was asking. Surprise. You know, I said that once, and a data scientist got really offended when I said Haiku 3 is such a bad model; she got very angry with me. Haiku 3 is such a bad model when you're actually trying to do any kind of sorting where you need it to be halfway good, or any kind of summarization where you need it to be halfway good. I understand it's fast, that's the point, but "oh, it's not that great of a model, Brian, don't use it for that." Yeah, but what am I going to use it for otherwise? I'd rather wait and get an answer that I don't have to completely rewrite from scratch because it's terrible. And yeah, she got very offended that I was calling the model and all of its output garbage and not suitable for anything I was doing, probably because she was touting to leadership how good it was. So she was shocked that somebody was contradicting her, throwing her under the bus, yeah. Five factors contribute to the slowdown effect; we find mixed, unclear, or no evidence for ten factors;
and we find evidence against six factors. Alright, so they're giving you the breakdown: "We group these factors into four categories: (A) direct productivity loss, (B) experimental artifact, (C) factors raising human performance, and (D) factors limiting AI performance." Interesting, okay. That's all in the appendix, yeah. I don't know if we're going to go through all of this, because there's a lot in here about exactly how they went about finding these results, although I do appreciate the graphics in this paper, because there's a lot of data in here. This is not anything small; this is huge. I'm glad you went into the "evidence against" section, because the first "against" I'm going to throw out, and I think every single person in the world has heard this one, Om, is "these tools are only getting better; this is the worst these tools are ever going to be; they'll get better." Right, that's one side of it. The other side they argue is "you'll get better; you're not prompting hard enough." I do enjoy a good argument, I do enjoy that. And then, if we're accusing people, I also enjoy "the young people get it, and the senior people just don't; they don't want to adapt, they don't want to learn a new tool, so let's just hire a bunch of young people." IBM, by the way, is hiring a whole bunch of young people, I heard. I heard they announced that to bump their stock up. I'll believe it when I see the young people flocking to Big Blue as their first employer; I look forward to that. IBM might be making a comeback. Yeah, I'll put a watch on that. You said a 20-year timer; let's see how that one goes. All the young people can learn COBOL and program on mainframes while IBM makes a comeback. Oh, and to be clear: these were experienced developers who knew how to use AI, on a mature code base, something that would mirror reality. They weren't
building hello-world to-do lists on Lovable in an hour and then shipping them, which is what I see a lot of people doing. They weren't doing that; this is deep, solid work. Yeah. The writing's on the wall: it is definitely shown, at least through these studies, that perception is not reality, and that using AI tools can actually slow you down. Which is interesting, because that's the finding: using the AI tools slowed these developers down when working on super-mature products. But the study also confirms that the developers self-reported that they felt faster. That's an interesting takeaway that I don't hear anybody talking about: people feel like they're doing stuff fast, but when you actually study it side by side, you find that it has a cap somewhere. This study was only 16 developers; it would be interesting if they expanded it and ran it with different-size projects, different-size libraries, different repos, different-size products, and maybe different experience levels. That would be a cool variable to see too, the experience level, especially because we keep hearing "oh, the juniors will learn it faster." Okay, if that's the case, that study will prove it or disprove it. That'd be an interesting one. So I think what we learned from this study is that you can't rely on self-reported AI productivity numbers, which are just kind of forecasts or estimates. We also learned that economists shouldn't give forecasts on AI, on anything; we learned that, pretty sure. We need to measure the output; I think that's what I'm saying, we need to measure output. Right, and don't trust economists. And then, if I have some takeaways here: let's pick one team. It would be great to do a double-blind controlled
experiment or whatever with hundreds and hundreds of developers, but who has time for that? Yeah, definitely not me. No, I don't even have time to make a sandwich today. So at your work, we're going to pick one team. We're going to measure their cycle time, we're going to measure their defect rate, we're going to measure their throughput for two sprints; we're going to create a baseline, Om. That's what I'm saying: basic Theory of Constraints stuff is what we're doing, going back to basics. Then we're going to add AI for two sprints after that; or maybe we'll give them a sprint or two to learn the tools first, and then measure with AI, right? Okay. But we did a controlled experiment: we got a baseline, we waited some amount of time, we gave them AI, and then we measured again and compared against the baseline. Real numbers, keeping other things constant, like the repo size, complexity, experience level, all of those, because we're just trying to isolate this one variable: AI. Yes. So that would be a great experiment, and it doesn't take long; in a month or so you can be done, and you can have real numbers. Yes, you can have real numbers. Or you can just snap your fingers like the CEO and make up the numbers; that's what I learned on this podcast. Or you could be an economist and just say the answer is blowin' in the wind, my friend. Or you could be a data center. Oh, I saw that; that Bob Dylan reference almost went right over my head. So: measure numbers against a baseline to compare, and then no feelings need to be ruffled, because you've got real, solid numbers. And if you can't run that experiment for some reason, if you can't get CEO or executive buy-in to run that experiment... well, why would you not be able to do a simple experiment like this? It's probably because your data scientists forgot to check the int that they were feeding into a decimal field. Oh yeah, yeah, yeah, and
then you lost all your data. yeah, they're not good programmers, that's what happens. the economists are data scientists, but without the math, and without the programming, and without stable employment. ok well, no economists were harmed in the making of this podcast, or data scientists. sadly, sorry, I meant that about economists, not data scientists; they're lovely, sensitive people. what do you think? do you feel more or less productive with AI in your workflow? do you have numbers? have you measured? let us know in the comments. so, we've talked about a gap: developers believe themselves, perceive themselves, to be faster, but in actuality they're not. now the data shows a different gap there, the gap between what executives believe and what the workers experience, except it's not really a gap so much as it is the Grand Canyon. Om, that's right, yes, so we're visiting the Grand Canyon today. let's see, Workday. oh, so, a Workday study, which I'm gonna put on the screen right now. gosh, when is this thing? I just gotta say, November 2025, ok, from the end of 2025. anyone saying that AI is the new HiPPO, let's talk about this here in a second, because the executive summary that I'm showing on the screen, which is actually a pretty fair executive summary, says productivity gains alone are not translating into better outcomes for most organizations, and only 14% of employees consistently achieve net positive outcomes from AI use. the next bullet point says, as a result, roughly 37% of the time saved through AI is being offset by rework. 37% of the time saved is being offset by rework; so for every 10 hours saved, nearly 4 are being lost to rework. wow, remember rework?
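The net-savings arithmetic behind that Workday figure is worth making concrete. Here's a minimal sketch; the 37% rework share is the study's number as discussed above, but the hours are purely illustrative:

```python
# Net AI time savings after the rework penalty (Workday study, Nov 2025).
# The 37% rework share comes from the study; the hours are illustrative.
REWORK_SHARE = 0.37

def net_hours_saved(gross_hours_saved: float) -> float:
    """Hours actually gained once rework eats its share of the savings."""
    return gross_hours_saved * (1 - REWORK_SHARE)

gross = 10.0                 # an employee reports saving 10 hours/week
lost = gross * REWORK_SHARE  # ~3.7 hours go back into fixing AI output
net = net_hours_saved(gross) # ~6.3 hours of real savings

print(f"gross saved: {gross:.1f}h, lost to rework: {lost:.1f}h, net: {net:.1f}h")
```

Same shape as the study's point: leaders who only track gross time saved overstate the benefit by more than a third.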
we don't talk about rework in software development anymore; developers just eat it, and we have to do 996. that's right. yeah, I guess that's a win; we should make AI do 996 now. so, nearly 40%. on the sidebar you can see three things that leaders should know to get more out of AI. if you ever hear "nearly 40% of AI's promise of productivity is silently lost to rework," well, we just read that one. "the most enthusiastic users often carry the highest burden, spending disproportionate time verifying and correcting output": the early adoption tax. the old Haiku 3 thing. yeah, oh man, maybe it was because she spent so much time fixing Haiku 3's garbage. it could be that. yeah, she felt like she was burned. no, no, remember we did the podcast on the IKEA effect? actually I think she might have had the IKEA effect, where she had spent so much time fixing Haiku 3 that she actually thought the results from Haiku 3 were good. yeah, she wanted to believe it. that could be true, yeah, the IKEA effect, I didn't think about that. it says: "this hidden loss highlights a critical blind spot in how organizations assess AI performance. most leaders focus on gross efficiency, how much time AI saves, but this metric alone obscures the real picture; when time lost to rework is taken into account, the net value of AI is much lower than expected." anyway, interesting; that's just the executive summary. there are other numbers in here: it said two-thirds of leaders, 66%, cite skills training as the top investment priority, which is funny to me because of everything we normally talk about on the podcast, like how corporate America doesn't want to train anybody; they want to hire somebody with 15 years of AI experience rather than train anybody. yeah, and among employees who use AI the most, only 37% report increased access to training. so, interesting: 66% of leaders say training up on AI skills is our top priority, and 37% of
employees say they actually got any training. yeah, crazy. wow, this is a pretty good study from Workday. it is pretty good; if I have to ding Workday for anything, it's that I have to actually go to the Workday site to get to this study. that's probably the worst thing about it. so the baseline metrics here show that usage volume is high: nearly 9 in 10 employees, 87%, are using AI at least a few times a week, with nearly half, 46%, using it daily. productivity is on the rise: three quarters, 77%, of employees report they're more productive due to AI. and time's being saved: the vast majority, 85%, of employees personally saved between one and seven hours on their tasks. the people in this study who are doing the best with it are trained at twice the rate, yeah, and they're mainly in IT and marketing. interesting. and the worst was mid-career professionals distributed across HR and operations. interesting. so look at the gap between leaders and the baseline number of who was reporting access to training, which is funny, because if 37% were reporting access to training, and the majority of that 37% were in that one category, what does that leave everyone else?
leaves them out on a limb. that's the disconnect here, because I'm going back and checking the sources that I originally pulled together for this podcast, and there's some editing here. the Workday study: 85% of respondents said that AI saved them between 1 and 7 hours a week, but about 37% of that time savings is lost to what they call rework: correcting errors, rewriting content, verifying output. right. only 14% of respondents say they consistently get positive outcomes from AI. Workday surveyed 3,200 employees who said they use AI, half in leadership positions, at companies in North America, Europe, and Asia with at least 100 million in revenue and 150 employees. ok, so that lets you know they did their homework there. so there's a big productivity paradox of AI: the most frequent users are the ones investing the most time in reviewing and correcting what it produces. so that lines up, right. leadership knows that we gotta get on it; leadership says training in AI is the most important thing; leadership also does not prioritize training in AI. and then the people that are using it, you know, the people left over who didn't get fired is what I'm saying, they're spending a lot of time doing rework on what the AI tool produces. so a lot of time, a lot of frustration here, a disconnect between workers and executives. you saw that people had between one and seven hours saved per week using AI, but those were a very specific block of people. yes, in the upper right, and mainly IT people, whereas the HR folks and people in operations didn't really see much gain, and they hadn't really integrated it into their daily process. so we did see that in the Workday study. and the Workday study didn't have a disconnect with executives as far as the amount of hours that people were gaining, but what the Workday study did have is the disconnect between
the amount of training that executives are actually putting into their workers versus the amount of training they say they desire. that's what it is, yes. yeah, they say they desire 66 percent of that, and the actuality was like 37. yeah, that's right, those are the numbers. so there is a disconnect: all the executives want everyone to be adopting AI, but in order to have more time savings from the tools, you have to use the tools more, and you have to specifically request training, you know, you have to make a case for your own training because you're using the tools more, rather than just being frustrated that all your co-workers are gone because they got fired in downsizing. yeah, yeah. so this became a very murky category very quickly: executives want the one thing, they're making these decisions to get the thing that they want, but then they're not putting money into driving in that direction. there seems to be a disconnect; I guess that's the real story in this category. we came into this promising the Grand Canyon, so I mean, it's not surprising really. it's a pretty grand canyon. it's quite grand, that's what I'm saying. certainly a canyon. it's definitely a hole, yeah. congratulations; canyon, I guess, is the new chasm. really, it's beating up a lot of different categories, so I'm just gonna kind of move on. because if the executives here were incented on training their workers better, if they were measuring the training, you wouldn't have that disconnect. there's a lot of things in play here that are just disconnected. there is a Grand Canyon here; it wasn't the one that I expected going into it, but this category is still a canyon. it's just, you know, downsizing and still expecting the
co-workers to take up the slack because you're supposedly x times more productive. you know, each person we give the $100 Claude Max subscription to is gonna be three times more productive, so we need a team a third of the size. I mean, you hear people talk about this with regard to product managers, like, well, we can have one developer and three product managers, or, you know, we can fire 12 or 15 of our developers from all the teams and we'll still be as productive. that is a dream; it's not substantiated by anything, according to, again, these studies. yeah, true. so, takeaways here for people that run teams and have budgets: tech leads, engineering leaders, development leads, company leadership, and whatnot. if you're thinking about layoffs, or if you're thinking about pushing your team to be more productive with AI tools, and you're claiming that you just know they're going to be more productive if you give them these tools, you might be right. ok, you might be right, so let's step through the process together. alright, I'll address leaders. step one: what are the specific tasks that are taking less time? go ahead and ask them; or, what are the specific tasks that you think should be taking less time? presumably they would be the predictable, repeatable ones, but nonetheless, name them. it's a good start. number two would be to build a baseline, so that you know how long the tasks take before you introduce AI tools. this is very similar to another takeaway we already said on the podcast, but since I'm talking to leaders here, I gotta slow down and talk through the Theory of Constraints and the Lean Startup; or, no, actually I'm talking about The Goal right now, Goldratt. yeah, yeah, you love a Goldratt. and then step three is: how much time do those tasks take now? measurements, not estimates. which is kind of the same as, I guess, number one; number one is
defining what the tasks are; it could be combined, define and measure, but yeah, that's fine. and then number four is: if you can't answer all three of these, you're not being data-driven, you're being vibe-driven, or HiPPO-driven, like, hope and a prayer. just want to point that out. and if you don't know what HiPPO is, ask your teen, or ask your scrum... nevermind, don't ask, you haven't had one of those in a long time. what do you think, have you heard of a company whose leadership claims AI savings but doesn't provide any proof? let us know in the comments. also let us know if your leadership has provided concrete evidence, because we can all learn from that, especially other leaders. yes, let me know, I'll wait. alright, already, that's great. so in this category, executives are living in a different reality, but not so far off the reality I expected when I did the planning for the podcast. that's good, maybe a win. it's good to have a win on this side of the podcast, you know, occasionally. in other news, water is wet. but we're about to get dark on the podcast, because some companies don't even pretend that AI is about productivity; they will just tell you it's about headcount. and I appreciate that, I will tell you right now. so, it's about productivity for the bottom line, that's what it is, is it. let's talk about what everyone who's out there selling AI tools on the internet is not saying out loud. ok: AI is not replacing anyone. AI is not replacing anyone; it's giving executives air cover to cut headcount, because everyone else is cutting headcount. so AI is the new layoff permission slip, and FOMO, that's what's really writing the permission slips. did you like my intro slide, where I had M&A and flattening the org and right-sizing and realignment and offshoring and everything you've heard all along the
way? AI is just the next thing here. so, we showed the Financial Times article earlier in the podcast, the Financial Times S&P 500 analysis from September 2025; it says, when it comes to AI adoption, many companies are not guided by strategy, but by FOMO. that was one of the findings of that article. and then there's the MIT Media Lab quote that was in the Financial Times article, about 95% of generative AI pilots failing. yeah, so there's those two things; I'm not going to go back and show that report on the screen because we already showed it. but you advanced my teleprompter, so it's useless. do you like my spray-paint font? it's a skeleton; I put that in there because you enjoyed it last time. that is nice, that is good, I like it. shucking AI tools, that's what I'm talking about. let's have a classic here: let's go straight into the against. hey, AI genuinely automates repetitive tasks. companies have a fiduciary duty to optimize for their shareholders. they do have a fiduciary duty, that's true; at any expense, probably. yeah, it genuinely automates repetitive tasks, and that's hard to argue against, because at face value it's true, it automates things, right. yeah, the issue is the quality of the end result, which is what we've already talked about on the podcast: four hours lost. what you think is repetitive actually was never repetitive in the first place; you didn't map out your business process well enough. you thought it was repetitive, but come to find out, you've got to massage every output coming out of the AI, and now, come to find out, that was never a task suitable for AI in the first place. but again, since you didn't train your workers and the executives are expecting this magic tool, now you've got friction, now you've got blame, and you're firing
all your senior workers because you think the younger ones get it quicker, so all of these factors come into play here together. oh, I thought you were firing them because you're ageist. sorry. it's just one of those things: fiduciary duty gets bandied about for pretty much everything. it does. it came out when we went digital too, right? but yeah, this is air cover. a lot of companies are doing this; you'll see it in the media. it's like everything short of running a good business that gains profitability and grows over time. yeah, yeah, and I know I'm bringing ethics into this. well, you know, speaking of ethics, the other one here is "workers need to upskill or get left behind," which also gets used to accuse older workers: oh wow, you weren't upskilling, you were stagnant, you weren't learning anymore, you weren't hungry. this is a one-two punch though, right? yeah: you weren't trying hard enough, you didn't prompt hard enough, and also we're not giving you training. that's right. so it's like, ok, how am I gonna get training? well, I'm already doing a 996. what percent of those employees out there actually got it, when the promises were 60-something percent, 66?
no, no, no, that was just leadership's numbers, the priorities they were expressing; only 37% actually got it, right. so I'm hanging on because I'm hopeful, but the reality is different; we already know that. and with this "companies have a fiduciary duty" line, I already suspect you have the opposite viewpoint: you know, money that I put into a person, I can't tell what comes out the other side, whether that's gonna be a 2% gain or a 200% gain; there's no predictable way to tell, so your answer is, don't put any money into people, across the board. maybe we'll take that whole point and bring it into another podcast, just to talk about that phenomenon; it's worthy of it. I want to leave that one alone for a second so it doesn't dominate the rest of the podcast. there's two types of people that I've encountered in my career: there's the "I wanna leave things better than the way I found them" type of person, and there's the "I've got mine, so you figure it out" type of person. yeah, and that is it; people will fall into one of those categories. there's not too many of the former category, unfortunately. see our podcast on how corporate turns good people bad, the neuroscience of power and corruption, Arguing Agile 243; it wasn't that long ago. yeah, on the other side of this one: all of the arguments that you'll hear, all the platitudes about, you know, shareholder optimization and whatever, cool story bro, but also, 95% of AI pilots failed. so you're not automating anything; you're cutting headcount, launching the pilot or introducing these tools, and then what?
well, then you just put out a press release that says, you know, our profitability went up. yes, and then you join the dots on a shape that doesn't exist: our profitability went up because we're implementing AI, because we're adopting AI. which is, wait, what? yeah, here as a sub-bullet, and again it probably could be its own podcast, I've included the FOMO-driven layoff, because that's what it is, just a FOMO-driven layoff. like, it's AI, that's the trend right now, but it really could be anything. I mean, if all executives learned to play Pokemon tomorrow, now you've got an upswing in Pokemon, right; it doesn't matter what the trend is, the great Pokemon trend. the people that you're laying off by making sweeping, generalized decisions, the institutional knowledge that you're losing when those people are gone. like Amazon, for example: they're cutting 30,000 workers, and they turn around a couple months later and cut 30,000 more workers. so Amazon's culture and institutional knowledge across the board, you're telling me that's not affected? it's hugely diluted, surely. I mean, people should be able to see that. and for everyone who thinks they don't use Amazon products, so you're not affected: there were several high-profile outages that I'm not gonna directly tie to that first round of layoffs, but the team that kept the DNS updated and whatnot, they're all gone. I guess we don't need DNS. oh boy. that's what I'm saying, I'm making some declarations on the podcast: we don't need DNS, a DNS-free world. anyway, moving on. fiduciary duty, it means copying what other CEOs at Davos are doing. listen, if your company is about to announce their AI-driven workforce optimization, or one of the other half a dozen things I had on the first slide, you have some questions
that you should ask, gently, so you don't get yourself put on a list. that's what I'm talking about, not getting put on a list. and some of those questions are: which specific AI capabilities are replacing which specific human-run repeatable tasks? then sit down and wait. I got a meme for you while you wait. like, hopefully the people making these decisions have evidence, hopefully they've thought about these things. well, they typically don't, unfortunately, but yeah, ask these questions, definitely ask these questions. the second one is: what happens to the work that these people do that AI can't do? because you don't really find out what somebody does at a workplace until they're gone, right. do the executives have a plan? I would love to prove that there's some sort of malicious conspiracy that goes into it, but the reality, usually, in the workplace, is that people just don't think that deeply into it. they're not the people affected by the second- and third-order effects of the decision, so they don't really think too deeply about it. and it's not that they're trying to make sure you're having a bad time at work; they don't think about you at all. does that make it worse or better?
I think it makes it worse, because they're compensated for making those decisions. ok, well, the last question to get you fired, that's what these are called, is: has AI proven that it can do this work, so that we avoid the 37% rework penalty when we go all in on this thing? or are we doing this because our competitors announced that they're doing cuts and their stock jumped up, so we need to do cuts too? because they're our competitors, and everything our competitors do, we do. like your mom says: if your friends jumped off a cliff, would you jump off a cliff too? CEOs need moms, I think that's where I'm going with this. so again, you can ask these if you want; you're gonna get uncomfortable silences, you're gonna get put on lists. is it still too early to say what to do? you can keep that resume updated, folks. so, has your company used AI as cover for layoffs? did the layoffs have nothing to do with automation, or, I guess, proven automation in the way that we've talked about rolling it out? let us know in the comments. contrary to the topics we've been discussing, we are not anti-AI, just thought I'd throw that out there. not in the least. the opinions expressed by Brian and Om on the Arguing Agile podcast are, in fact, the opinions of the podcast, because who else is there, yeah, right. and the Solow paradox, that growth slows down after major technological breakthroughs; we talked about that, although I'm still not convinced. and then I'm gonna give way a little bit and say: maybe it isn't about the technology being bad; maybe we really are just not deploying it in the correct manner, or in the most optimal manner. what I'm saying is, maybe all the AI people, Sam Jippity Altman looking for a
man in finance, maybe he's right; we just need to put more money into OpenAI, that's the issue. so here's the part where we actually agree: AI is not the problem. the problem is that companies are not introspecting, looking at themselves in the mirror and then thinking critically about what they see in that mirror. they're just bolting AI onto broken workflows and processes and wondering why nothing has changed, or worse, blaming people for the fact that nothing has changed, saying you didn't prompt hard enough, you didn't redesign the work hard enough. you just added a chatbot to the chaos. so, hey, what do we do to get this right? that's this section, which is really the Deming section of the podcast. right, it's Deming under any other name. speaking of Deming: see Arguing Agile 199, W. Edwards Deming's Profound Knowledge for Transforming Organizations, or Arguing Agile 181, Deming's 14 Points, the management revolution we needed 40 years ago. we did that podcast in 2024, yeah, September 11th, 2024. ah, a Deming quote, and I'm not gonna talk like Deming: he says we are being destroyed by best efforts, meaning, hey, we're all trying our hardest over here, get off my back; but trying harder to do what you understand as your job, when the system is broken, often results in more damage. so don't just do something, stand there and think. yeah, and if a bad system will beat a good person every time, what can you do?
it's a leading question, isn't it, really. I mean, the only logical conclusion there is: change the system. the funny thing is that the MIT Media Lab study that says 95% of gen AI pilots failed in the workplace, the tagline to that one that I didn't say up until now, says that they failed due to a lack of long-term memory, customization, and workflow integration. so if you adopt this new tool and don't actually weave it into your work processes, or even really understand your work processes, 95% is the failure rate they're giving you right there; that's them, not me making the number up. it's not a good bet, is it? yeah, go yell at MIT Media Lab. let's see, arguments; not good arguments, but there are arguments. you know: hey, you give everyone the tools and count on the smart people to figure it out. oh yeah, and "AI adoption is the metric"; I almost didn't put that one in there because I thought nobody's gonna believe that's a thing, but like, Satya Nadella and all those guys, you know what I mean, this is a thing; that's the thing that people are measuring most, right: who's using it, why aren't you using it. velocity, that's it, it's AI adoption velocity. yeah. and a point in here that really doesn't need to be restated: all of our competitors are pulling ahead using AI tools, so we have the FOMO to use AI tools; you know, all of our competitors are jumping off of bridges, so we have to jump off too. and better training: we already talked about better training; everyone knows they need better training, and nobody's getting it, right: 37% of employees, and the majority only in IT and specific areas. in this category, it's Deming 101, it's also Goldratt, it's Theory of Constraints; this is Deming 101 stuff about process, all of the Toyota excellence stuff about understanding the process and optimizing your bottlenecks, that's this category. right, so then, in the end, are we saying, go back
to basics? if you're going to introduce AI into your workflow and you really don't understand your workflows, you need to get somebody in house to help you understand the workflows, to figure out where to inject AI to help you accelerate. because what will happen is you'll accelerate processes and you'll have another bottleneck, and who's to say that the next bottleneck you discover won't be worse than the first bottleneck of not using AI in the first place. right, so, I mean, if you were a leader in the organization, wouldn't you agree that we need to stay on the bottlenecks as we discover them, and talk about which ones are critical and which ones are not, to decide when we've got, quote, enough improvement? yeah, this is that whole principle of value chain analysis and value streams, identifying where the bottlenecks are, optimizing for the whole; that's what all this is about, really. if you don't understand your processes and you just go full steam ahead and automate them using AI, it would be harder, I posit, to peel it back later on. yeah, because these things become institutionalized; your knowledge isn't there to begin with, but once it's automated, your knowledge is even more insulated, right? yeah, and so fewer people are aware of what it's doing; they simply believe in what it's doing, but it's doing the wrong thing. it comes back to Deming: you've automated a process that you never should have had in the first place. right.
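The bottleneck point can be shown with a toy model. This is a minimal Theory of Constraints sketch, not anything from the studies; the stage names and capacities are made up for illustration:

```python
# Toy Theory-of-Constraints model: a workflow is a chain of stages, and
# end-to-end throughput is capped by the slowest stage (the bottleneck).
# Stage names and capacities (items/week) are illustrative, not from any study.
workflow = {"analysis": 12, "coding": 8, "code review": 5, "testing": 9}

def throughput(stages: dict[str, int]) -> int:
    """End-to-end throughput equals the minimum stage capacity."""
    return min(stages.values())

def bottleneck(stages: dict[str, int]) -> str:
    """The stage with the smallest capacity constrains the whole flow."""
    return min(stages, key=stages.get)

print(bottleneck(workflow), throughput(workflow))  # code review caps us at 5/week

# Dropping AI on a non-bottleneck stage changes nothing end to end:
workflow["coding"] = 16        # AI doubles coding capacity...
print(throughput(workflow))    # ...but throughput is still 5/week

# Relieving the actual constraint is what moves the number, and the
# bottleneck then shifts to the next-slowest stage:
workflow["code review"] = 10
print(bottleneck(workflow), throughput(workflow))  # now testing caps us at 9/week
```

That last step is the episode's warning in miniature: speed up one stage without a process map, and the constraint just moves somewhere you weren't looking.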
right, so understand your workflow inside out first, and then look at optimizing it. right, that's what he said, in a nutshell. alright, the Monday morning framework you can use to stop adopting and start extracting value, start realizing value, let's say that. I'm gonna give you the framework right now. here we go. congratulations, I'm your AI adoption coach; you can pay me $650 for this free class when this podcast is over. put the money in my hand, thank you very much, through the internet; we take crypto, yeah. that is true, that is true. one: pick one workflow that your team does repeatedly, with a clear, measurable output. so you have one workflow, you're sure that's what you do, you can measure it, you've got a baseline, you know how long it takes; pick that one. two: map it end to end, every step, every handoff; again, this is The Goal right here, and also value stream mapping, so you know exactly where things are, where the bottlenecks are. three: identify a specific bottleneck; just pick one, again, not everything, not the whole end-to-end cycle, not an entire person's job, pick one thing. four: run an experiment with AI on that one step, and measure before and after on that one step. and then five: if it works, you run a two-week expansion, or you take it to other bottlenecks, upstream or downstream, or whatever. but the idea that you're just going to wipe out someone's job and drop AI into it? wishful thinking at best. yeah, as evidenced by the 95% failure rate; it's not gonna happen. well, the nice thing about these points, pick a workflow, map it end to end, make sure you can measure it, pick a bottleneck, run your experiment, if you improve it, great, expand it, do another one, or maybe move on to something else, right,
whatever's most critical. if you can do that, and you can show that you can repeatedly do that, congratulations, you're a product manager now. you might as well just change your headline on LinkedIn: you're an AI product manager, no less. good call out, good call out, that's big money right there. yeah, it's 50% more or something, I just made that number up. boy, this baby can fit so much AI in it. so I think that's the takeaway. let's wrap up; we've solved all the problems, everything is solved. so, with that being said, what do you think: has your company tried to redesign workflows, have you dealt with adopting AI from that perspective, or do you just hand out Copilot to everyone and say, get in there, kid, we believe in you? let us know in the comments. and if you found this episode helpful, do us a favor and like and subscribe, and share with someone in your company who needs to hear what we're talking about before the next AI budget meeting. that's what I'm talking about. oh, I should have brought my clock. so, we talked about a lot on this podcast: we talked about adoption theater versus real AI value, we talked about actually getting data to push back on the "hey, just use AI, just do what I said, just disagree and commit" mandate, and then we talked about testing whether AI is actually helping your team, or if it's just wishful thinking, positive vibes, that's right. and we talked a bit about The Goal. please do like and subscribe, and let us know about any other topics you'd like us to talk about
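For the show notes, a minimal sketch of what the before/after measurement in the Monday morning framework could look like. The numbers are invented sample data, and counting rework time separately is our own assumption, borrowed from the Workday study's framing:

```python
# Sketch of the framework's measurement step: baseline one workflow step,
# run the AI experiment on that step, and compare net numbers.
# All durations below are made-up sample data, not from any study.
from statistics import mean

# Measured task durations (hours) for the chosen step, pre-AI baseline.
baseline_hours = [6.0, 5.5, 7.0, 6.5, 6.0]

# Same step, same team, after two sprints with the AI tool, plus the
# rework time spent verifying and correcting the tool's output.
ai_hours = [4.0, 3.5, 5.0, 4.5, 4.0]
rework_hours = [1.0, 1.5, 0.5, 1.0, 1.5]

def net_change(before: list[float], after: list[float], rework: list[float]) -> float:
    """Percent change in mean task time once rework is added back in."""
    net_after = mean(after) + mean(rework)
    return (net_after - mean(before)) / mean(before) * 100

print(f"{net_change(baseline_hours, ai_hours, rework_hours):+.1f}% vs baseline")
```

The point of the rework column is the study's point: measure net time, not gross time saved, before deciding the experiment worked.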