
What's Up with Tech?
Tech Transformation with Evan Kirstel: A podcast exploring the latest trends and innovations in the tech industry and how businesses can leverage them for growth, diving into the world of B2B, discussing strategies and trends, and sharing insights from industry leaders!
With over three decades in telecom and IT, I've mastered the art of transforming social media into a dynamic platform for audience engagement, community building, and establishing thought leadership. My approach isn't about personal brand promotion but about delivering educational and informative content to cultivate a sustainable, long-term business presence. I am the leading content creator in areas like Enterprise AI, UCaaS, CPaaS, CCaaS, Cloud, Telecom, 5G and more!
What's Up with Tech?
The Enterprise AI Revolution: From Science Project to Mission Critical
Interested in being a guest? Email us at admin@evankirstel.com
From cutting-edge experimentation to business-critical infrastructure, the AI landscape has undergone a dramatic transformation. Ron from KungFu.ai shares an insider's perspective on this evolution, drawing from his experience dating back to AI research in the 1990s.
The conversation reveals the pivotal factors driving AI's enterprise breakthrough: exponentially increased computing power, unprecedented data availability, and the democratizing effect of open-source libraries. These elements have converged to create capabilities early researchers could scarcely imagine, requiring millions of times more resources than initially anticipated.
What distinguishes KungFu's approach is their unwavering focus on production-grade AI systems that deliver tangible business value rather than impressive but unreliable demos. Ron shares a striking success story of a financial services client whose AI implementation reduced loan decisioning time from 48 hours to just 9 seconds while simultaneously reducing fraud rates – all without eliminating human jobs but rather redirecting human attention to the complex cases requiring judgment.
The discussion tackles the profound challenges enterprises face during implementation. AI systems differ fundamentally from traditional software in their probabilistic nature, making human-like mistakes that can be difficult to predict or debug. Data quality emerges as the critical determinant of success – "garbage in, garbage out" applies more powerfully to AI than to any previous technology. Ethical considerations, especially regarding bias and explainability in regulated environments, demand sophisticated approaches that go far beyond typical software development concerns.
Looking ahead, Ron provides a sobering yet optimistic assessment of agentic AI systems, suggesting that failure rates may exceed Gartner's 40% prediction while maintaining that these technologies will ultimately revolutionize business faster than most anticipate. For companies navigating this complex landscape, the talent equation remains daunting – building effective AI systems requires a blend of mathematical expertise, domain knowledge, and hard-won intuition that remains in critically short supply.
Ready to transform your business with AI that delivers real results rather than just impressive demos? Connect with Ron at ronallen@kungfuai.com or explore their "Hidden Layers" podcast for deeper technical insights into the future of enterprise AI.
Discover how technology is reshaping our lives and livelihoods.
Listen on: Apple Podcasts Spotify
More at https://linktr.ee/EvanKirstel
Speaker 1:Hey everybody, excited for this chat today about AI and enterprise transformation with a true expert and innovator in the field at KungFu.ai, Ron. How are you?
Speaker 2:I'm great. Thanks for having me on.
Speaker 1:Well, thanks for being here. Let's kick off with introductions and also your origin story. What inspired the founding of KungFu.ai? Great name, by the way.
Speaker 2:Oh, thank you, thank you, thank you. Yeah, so you know, I did a master's in artificial intelligence way back in the 90s and I've been working in AI for decades now; I'm one of the old guys. I sold my last company in 2017, and I knew I wanted to do something in AI, but we felt it was just a little early for products. Things were moving so fast (they still are), but it was definitely clear that if you spent a bunch of time and money on a product, 18 months later it could just be washed away by some new breakthrough, so it wasn't really a great time for product development. What we believed companies would need eight years ago was guidance, strategy, help understanding and implementing AI, building custom solutions based upon their proprietary data, and it's worked out great. Since ChatGPT came out, the world has woken up to the full promise of AI and we're just having a great time helping companies build AI solutions.
Speaker 1:Fantastic. So, as you know, AI has gone from being a kind of cool science project to mission critical, seemingly in a year or two. What's changed in your opinion? What's driven that sea change?
Speaker 2:You know, it's several things. You know, in the 90s we knew we needed more data, we knew we needed more compute. Honestly, I think if you'd asked us, we just said, oh, 10, 100, 1,000 times more. We did not realize we needed like millions of times more and billions of times more compute. So it's really a combination of, I think, a few things. One, we just didn't have the compute necessary for the type of capabilities that we see today. We were literally off many, many, many orders of magnitude.
Speaker 2:The other really big element is the data. Even if we'd had the compute back in the 90s, we didn't have the data. Most AI systems today are supervised learning systems, meaning they are trained on really large amounts of data, and until that digital data existed, we weren't going anywhere, even with the compute. And then the other really big part of this is the fact that there are these open source libraries. All the best libraries in the world around AI, PyTorch and TensorFlow and NumPy and all this sort of stuff, are open source, so anybody can get involved and stand on the shoulders of those who came before us. There are many other factors, but I really feel like those are the top three.
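The supervised-learning recipe Ron sketches (fit a model to labeled examples and let it soak up the data) can be illustrated in a few lines. This is a toy sketch in plain NumPy, standing in for what PyTorch or TensorFlow do at vastly larger scale; the data and model here are made up for illustration:

```python
import numpy as np

# Toy supervised learning: recover y ≈ 3x + 0.5 from labeled examples
# by gradient descent on mean squared error -- the same loop that
# PyTorch and TensorFlow run at vastly larger scale.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=200)
y = 3.0 * X + 0.5 + rng.normal(0, 0.05, size=200)  # noisy labels

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = (w * X + b) - y
    w -= lr * 2 * np.mean(err * X)  # d(MSE)/dw
    b -= lr * 2 * np.mean(err)      # d(MSE)/db

print(round(w, 1), round(b, 1))  # close to the true 3.0 and 0.5
```

The point of the toy: nothing here was hand-coded about the relationship; the parameters came entirely from the labeled data, which is why the data itself decides what the model learns.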
Speaker 1:Interesting. So it's been fascinating to watch all of the big tech services firms jump into AI, from Accenture to PwC and the long, long laundry list. How do you define your role as a services company, helping businesses become, I guess, sort of AI native? What's your perspective there?
Speaker 2:Yeah, that's a really good question. When we started eight years ago, we were absolutely adamant that we were going to build production-grade AI systems, not toy systems that demoed really well but had so many reliability issues or edge cases that you just couldn't put them in production; they couldn't deliver real business value. I think that decision back in 2017 was critical for us becoming who we are today, and I think it's more important now than ever. So the way we help companies is really through this journey, and part of it is understanding that just doing AI for AI's sake is probably a waste of time and money. You have to have real business ROI associated with it, and just because you identify an initiative and can see that, if it's successful, it'd be worth the effort, that's only part of the battle.
Speaker 2:Do you have the data? Do you have the buy-in? Do you have the ability to deploy and manage those types of solutions? It's not a 180 from software, but artificial intelligence as it exists today is so data dependent that you have to start with the data. You have to start with an analysis of the quantity, the quality, the distribution of that data. It will determine your success or failure more than any other aspect of an artificial intelligence engagement, and that's very different than traditional software. In software, it's just your ability to execute. With AI, you are beholden to the data: garbage in, garbage out.
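That first-pass look at quantity, quality, and distribution can be as simple as a script run before any model work begins. A minimal sketch, assuming tabular records with a `label` field; the field names and records are hypothetical:

```python
from collections import Counter

def audit(records, required_fields):
    """First-pass data audit: quantity (row count), quality (missing
    fields), and label distribution -- the checks that come before
    any model work."""
    n = len(records)
    missing = Counter()
    labels = Counter()
    for r in records:
        for f in required_fields:
            if r.get(f) in (None, ""):
                missing[f] += 1
        labels[r.get("label")] += 1
    return {
        "rows": n,
        "missing_rate": {f: missing[f] / n for f in required_fields},
        "label_distribution": {k: v / n for k, v in labels.items()},
    }

# Hypothetical loan records for illustration.
rows = [
    {"income": 50_000, "amount": 1_000, "label": "approve"},
    {"income": None,   "amount": 2_000, "label": "approve"},
    {"income": 80_000, "amount": 9_000, "label": "deny"},
    {"income": 20_000, "amount": 500,   "label": "approve"},
]
report = audit(rows, ["income", "amount"])
```

A 25% missing rate on a key field or a heavily skewed label distribution is exactly the kind of finding that decides an engagement's success or failure before a single model is trained.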
Speaker 1:Fantastic, and you've worked with Fortune 500, startups, everything in between. What's a big misconception you see in working with your clients across the board?
Speaker 2:A common one is the client who comes in thinking that you can just take AI off the shelf, point it at the data, and it will just go figure things out. We also occasionally get clients who think AI may be almost mistake-free or omniscient. No, these are probabilistic systems. They can absolutely make mistakes. One of the challenges with modern AI is that it makes very human-like mistakes, so it can be unpredictable in the way that a human can be unpredictable. For example, you could be an expert speaker, but that doesn't mean you won't misspeak occasionally, right? Well, we see that with AI systems, and that's not really something we had to worry about with traditional software, which was much more deterministic, less probabilistic. And there's also sort of an underestimation, as I already mentioned, about the data.
Speaker 2:A lot of companies have really strong intuitions now (things have changed quickly) about how AI could help them, but they underestimate the need for the data. So, for example, they may have some process that's highly manual and they want to automate it, but they haven't been collecting the data, the inputs and outputs that the humans relied upon to accomplish that task. It can still be automated, but they need to do the data collection first, and that will often mean putting the project on hold for a year or two or three while they collect that data; then we can build a system to mimic that capability.
Speaker 1:Got it. So you know these systems are gaining traction, autonomous systems. I've been driving around in Waymos. It's been super exciting. I have a couple of apps that are sort of agentic in nature. But what about real business problems? Are you seeing a lot of them being solved with AI today, and not just in the lab?
Speaker 2:Oh, absolutely, absolutely. We just wrapped up a project for one of our clients, a publicly traded company that does billions in loans a year, has a lot of fraud, a lot of manual processes. For non-disclosure reasons, I won't go too deep into what we built.
Speaker 2:This is a great example of coming full circle on what I was mentioning before. They had decades of data collected about loan decisioning from their experts, and so we were able to build a system that could mimic those capabilities. And the beautiful thing about this system was they're doing billions in loans a year, but that decisioning was prone to fraud, and the system that we trained allowed them to move from a 40-hour-a-week business stance to 24-7. It reduced their decisioning turnaround from 28 to 48 hours to nine seconds. Fraud dropped dramatically, chargebacks were reduced, and all of the hundreds of people that were doing that task are still employed; they now focus on the cases that the AI flagged as suspect and that really require human intervention. It's really one of those examples of where you can leverage AI to automate parts of your business and it's just a win-win-win across the board.
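The pattern behind that win, auto-deciding only the high-confidence cases and routing the suspect ones to people, can be sketched in a few lines. The thresholds and function below are illustrative, not the client's actual system:

```python
def route(score, approve_at=0.95, deny_at=0.05):
    """Route a loan by model score: auto-decide only when the model is
    confident, and send everything in between to a human reviewer.
    Thresholds here are illustrative."""
    if score >= approve_at:
        return "auto-approve"
    if score <= deny_at:
        return "auto-deny"
    return "human-review"

decisions = [route(s) for s in (0.99, 0.50, 0.03, 0.90)]
print(decisions)
```

The humans stay in the loop exactly where Ron describes: on the ambiguous middle band, where judgment is actually needed.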
Speaker 1:Brilliant. So let's talk a bit about ethics and governance. You have a unique position as an independent, not wedded to big tech, which, you know, has an agenda. How do you think about walking that line between innovation and responsible AI use?
Speaker 2:That is one of our main offerings, actually. We started as just a pure engineering firm, so all of our early clients would come to us and say, hey, would it be possible to solve this problem with AI? And we realized over time (I think we were about four or five years old) that there was a sort of missing piece to our offering. Businesses often needed help figuring out what to pursue, and they almost always underestimated the ethical and governance issues. So, for example, on the ethical side, as I mentioned earlier, most artificial intelligence systems today are based upon a technique called supervised learning, where you train these models on a bunch of data. These models will soak up that data and they will get really good. They will get so good that they will mimic the bad, biased behavior in the data, even if you don't want them to, and we've seen this over and over. You go to build some system to make predictions, and if there is a legacy of racial discrimination, that model will soak up that behavior and replicate it. So it's really, really critical (and we do this with all our engagements) that you take the time to understand the nature of your data. It can be biased in many different ways, not just in sort of socioeconomic ways. I'll give you another example. We've built systems that can accurately predict the risk of breast cancer years in advance, but these tricky little models are so smart that the model was actually learning to do things like accurately predict the patient's age and race and weight, and it was even accurately predicting what model of machine the mammogram was done on.
And the reason that's problematic is that it's very common for sicker patients to go to higher quality machines, and so, if you don't take these types of data issues into consideration, we could have easily built a model that we thought predicted the risk of breast cancer but was actually just really good at identifying the version of the mammogram machine the mammograms were done on. So we did a ton of work to make sure that the model literally was no better than just guessing at any of those different attributes, and that let us build a model that is radically less biased than the traditional risk models out there, like the Tyrer-Cuzick model. This model was recently approved by the FDA. So that's sort of the bias side.
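One way to operationalize "no better than just guessing" is to measure how well the model's scores separate each confounder, such as which machine a mammogram came from. A hedged sketch using a rank-based AUC; the helper and data are illustrative, not KungFu.ai's actual audit:

```python
def auc(scores, flags):
    """Rank-based (Mann-Whitney) AUC: the probability that a flagged
    example outranks an unflagged one. Near 0.5 means the scores carry
    no information about the flag, i.e. no leakage."""
    pos = [s for s, f in zip(scores, flags) if f]
    neg = [s for s, f in zip(scores, flags) if not f]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Risk scores vs. a confounder flag (e.g. "scan from a high-end machine").
leaky = auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])  # fully leaked: 1.0
clean = auc([0.9, 0.1, 0.8, 0.2], [1, 1, 0, 0])  # chance level: 0.5
```

Running this check per confounder (age, race, weight, machine model) and demanding a result near 0.5 is the quantitative version of "the model is no better than guessing at any of those attributes."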
Speaker 2:The point there is that there are real issues you have to be concerned about. And then governance: if you're in a regulated environment, there are lots of things you need to think about, not just data distribution, skew and things like that. You may have explainability requirements. So, for example, if you're doing loan approvals, like I mentioned earlier, and you build a system that's a black-box AI, it may be really, really good at making predictions about loan repayment, but if you can't understand why it's making those decisions, if it doesn't have explainability, it's not going to pass regulatory muster. You need to explain to somebody why their loan was rejected. So there are all of these really complicated issues that come up with AI initiatives that a lot of companies are just starting to get their hands around.
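A concrete way to get the explainability Ron describes is to use a model whose per-feature contributions double as reason codes. A minimal sketch with a hand-weighted logistic model; the feature names and weights are invented for illustration, not a real underwriting model:

```python
import math

# An interpretable score whose per-feature contributions double as
# adverse-action reason codes. Features and weights are invented
# for illustration.
WEIGHTS = {"income_k": 0.02, "debt_ratio": -3.0, "late_payments": -0.8}
BIAS = 0.5

def score_with_reason(applicant):
    contrib = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    z = BIAS + sum(contrib.values())
    prob = 1 / (1 + math.exp(-z))           # probability of repayment
    reason = min(contrib, key=contrib.get)  # most negative contribution
    return prob, reason

prob, reason = score_with_reason(
    {"income_k": 60, "debt_ratio": 0.6, "late_payments": 2}
)
# A rejection can now be explained: the dominant factor is debt_ratio.
```

This is the trade-off regulators care about: a black box may predict better, but a model like this can tell an applicant exactly which factor drove the decision.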
Speaker 1:Interesting. So Gartner just came out with an interesting blog suggesting that 40% of agentic AI projects will be canceled by 2027, which probably has a lot of people scratching their heads about their own projects. How do you see the balance of experimentation with delivering real value to real clients in the enterprise? What does that look like?
Speaker 2:I will be surprised if the number is not higher than 40%, and the reason is that all of the AI systems I've talked about to date (and we've built well over 100 production systems), all of them, if you'll notice, are examples of narrow AI, meaning they're domain specific. They have superhuman capabilities, but often along just one or two dimensions, right? Then these agentic systems, which it's still very early days for, are going to be incredibly powerful but, just like I mentioned earlier about humans, they're going to have a sort of jagged edge along the frontier of reliability, and what that means is they're going to fail in unpredictable ways, and that is what's going to make them very difficult to deploy in enterprise environments.
Speaker 2:We see the same thing with generative AI. If you are using generative AI on a personal basis and you're interacting with Claude or ChatGPT, you interact and you tweak the prompts and you go back and forth until you get what you want. And sometimes you'll say, are you sure? You may massage the interplay a little bit, but you definitely don't just ask it a question and, whatever it gives you, take it as gospel and send it out into the world. That's the challenge with generative AI, and that's the challenge with agentic systems right now.
Speaker 2:Now let me be clear, so I don't come off as too pessimistic. These are just going to be the challenges along that jagged edge. Agentic AI is going to revolutionize business, and it's going to happen a lot faster than people think. But there are going to be a lot of tears and a lot of wasted money as people realize that that 2% error rate is not something their enterprise can live with, that that probabilistic capability is something they're going to have to get used to.
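The "are you sure?" habit Ron describes for personal generative-AI use can be made mechanical when these systems are wired into a workflow: validate every output and escalate to a human when checks keep failing. A sketch with a stubbed model standing in for a real LLM client; the helper and prompt are invented for illustration:

```python
def generate_with_check(model, prompt, validate, max_tries=3):
    """Don't take the first answer as gospel: re-ask until the output
    passes a validation check, then fall back to a human. `model` is
    any callable returning text -- a stand-in for a real LLM client."""
    for _ in range(max_tries):
        answer = model(prompt)
        if validate(answer):
            return answer
        prompt = prompt + "\nAre you sure? Please re-check your answer."
    return None  # escalate to a human reviewer

# Stub model: fails validation once, then returns JSON-looking output.
attempts = iter(["not json", '{"status": "ok"}'])
result = generate_with_check(
    lambda p: next(attempts),
    "Summarize the loan file as JSON.",
    lambda a: a.strip().startswith("{"),
)
```

The `None` branch is the point: an enterprise deployment needs a defined path for the cases where the probabilistic system never produces a passing answer.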
Speaker 1:Interesting. Let's talk about the talent equation as you see it. Everyone's chasing the same talent pool out there and trying to upskill and build internal talent, but of course there's a big gap between that and the people that are available. You're helping partly bridge that gap, but how do we scale up when it comes to talent and know-how?
Speaker 2:That's a great question. I think it's going to be quite a while until we're at a point where talent supply and demand have sort of equalized. And the reason is, you know, building state-of-the-art AI systems is significantly more complicated than traditional software because of some of the things we've mentioned around the probabilistic nature. There's quite a bit more math involved than in traditional software. But there's another part of it, and this is funny.
Speaker 2:I remember in the 90s in college, and I remember one of my computer science professors saying something to the effect of like you know, we're really struggling, um, as an industry in software because you can have like one character, like you can be missing a semicolon in your entire code base breaks, like the entire program breaks. And he, he kind of, you know, wistfully said it wouldn't be great one day if we had computer systems that were more like biological systems. You know they didn't just completely tip over, you know any part of them being damaged and they had redundancy. Well, sometimes, you know, you got to look out for what you wish, because we have that now.
Speaker 2:These systems are incredibly redundant and resilient, but the black-box nature and the complexity of these systems mean they're really hard to train. We've been in many instances (I'm not afraid to admit this) where we're training models, and I work with some of the smartest people on the planet in AI, and we'll get stuck. The model will stop learning and you can't figure out: is it a data issue? Do we have a bug? Have we hit some sort of weird hyperparameter issue that's preventing us from going down the gradient further? Then you have to rely upon the heuristics you've built up over decades to get yourself out of these situations. So it's still quite a bit more art than science, and I think that's one of the reasons it's so hard to predict the future. We don't know where we're going to be in five years because there are these sort of emerging capabilities. So I think it's going to be quite a while until we see a balance on the supply-demand curve.
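The stuck-model triage Ron walks through (data issue? bug? hyperparameters?) usually starts with noticing the stall in the first place. A cheap first check, sketched here with an invented loss history; real pipelines would watch gradient norms and validation metrics too:

```python
def plateaued(losses, window=5, tol=1e-3):
    """Flag a stalled training run: the last `window` losses moved less
    than `tol`. A cheap first check before digging into data bugs or
    hyperparameters."""
    if len(losses) < window:
        return False
    recent = losses[-window:]
    return max(recent) - min(recent) < tol

# Invented loss history: rapid early progress, then a stall.
history = [1.0, 0.6, 0.4004, 0.4003, 0.4002, 0.40015, 0.4001]
print(plateaued(history))  # True: the last five losses barely move
```

Automating the detection is the easy part; as Ron says, deciding whether the stall is data, a bug, or a hyperparameter problem is where the hard-won heuristics come in.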
Speaker 1:Let's talk a little bit about your business model and the industry. I've worked for several services consultancies and software services companies over the last 30 years, and there wasn't necessarily much creativity there. You know, we just had a bench of 10,000 engineers and we threw them at problems and, you know, did good work. But are we heading into a new kind of services landscape when you're doing code creation with AI and you don't necessarily need the same kinds of skills? What does the future look like for services, for professional services?
Speaker 2:That is such a good question, Evan. I honestly don't know. I think the AI coding assistants have matured faster than almost anybody anticipated, myself included, and I was extremely bullish about them from day one. I really expected them to make an impact; even I didn't think it would happen this fast. So if I was forced to place a bet, I would say this: I think closing that last gap without human oversight is going to take a while, meaning I think it'll be probably closer to 2030 than 2026 before we see these coding assistants where you can describe what you need at the highest level and they just get it bug-free on the first try. But I don't think it's super far away. And then that just begs the question: is this going to reduce the need for software developers? There was a lot of talk last year about the Jevons paradox. As something becomes more affordable, demand rises, and I think that's entirely possible as well: instead of this being the end of the software development lifecycle for engineers, it could just be the early days. Demand could go through the roof. Honestly, though, that's one of those areas I'm probably most confused about. I don't have a strong opinion.
Speaker 1:Yeah, I think we have seen really weak demand for computer science and software engineering graduates, and that's a little bit of a red warning light. So we'll have to see. What about KungFu.ai? What are you focused on over the next year or two? Where are you building and, you know, marketing and selling?
Speaker 2:Yeah, we're focused, as we have always been. We really want to do one thing, and I love being able to say this: we want to help our clients build real AI systems, things that actually go into production, things that actually work, that actually make their business better and stronger, and we want to help them through that whole journey, whether it's strategy or governance, roadmapping or literally hands-on-keyboard model building. That's where our passion lies, and that's part of the reason I think we're able to hire such elite talent: people come to KungFu.ai because they want to build stuff that changes the world, that makes a difference, and they can come here and do that.
Speaker 1:So you're in Austin, one of my favorite places. Where can people meet you, either there or out and about? Any events this summer or the fall that you're excited about?
Speaker 2:Yeah, I'm not doing a bunch of events this summer. It is literally the case that we are so overloaded with work right now that I'm actually canceling some vacation plans, in fact, even to make this happen. But if you ever want to reach out to me personally, it's ronallen@kungfuai.com. And we have our own podcast called Hidden Layers. It's a bit of a technical deep dive on AI, a little glimpse behind the curtain; I would encourage people to please check that out.
Speaker 1:Fantastic. Well, enjoy the summer. Stay cool, whatever it is, 110 degrees in Austin, but you guys are used to it. Thanks so much, Ron, for joining and sharing.
Speaker 2:Thank you so much. This was a ball.
Speaker 1:And thanks everyone, and be sure to check out our new show at techimpacttv now on Bloomberg and Fox Business. Take care everyone.