
Tech Unboxed
Stay ahead of the curve with BBD's Tech Unboxed, the podcast that unpacks the latest trends, innovations, and transformative digital solutions driving tomorrow's world.
The Open AI Revolution: What Flipped the Coin?
Lucky Nkosi, head of BBD's ATC team, shares insights on AI's evolution from closed research to public accessibility, exploring how we've moved past the peak of the AI hype cycle into the trough of disillusionment.
• OpenAI's release of ChatGPT marked a pivotal moment in AI adoption, reaching millions of users while validating their models
• Microsoft's billion-dollar investment in OpenAI was strategic, with the money returning to Microsoft through compute costs on Azure
• We're now past the peak of the AI hype cycle, with studies showing 93% of enterprise AI projects failing completely
• Heading into the "trough of disillusionment" means many AI investments will prove fruitless, requiring more responsible approaches
• The "time filter" framework helps identify valuable AI applications by asking: Do I need to do this task? Can I automate it? Can I delegate it?
• AI technologies are non-deterministic by nature, making them challenging to integrate with deterministic systems like financial services
• Hardware remains the ultimate frontier for AI impact, as "what truly matters is what you can impact in the physical world"
• BBD helps companies navigate AI by filtering through the noise and conducting rapid R&D to identify practical solutions
Koen Vanderhoydonk:Here we are live for another BBD Tech Unboxed, and today I'm in Utrecht for a live recording together with Lucky. Lucky, who are you and what are you here for?
Lucky Nkosi:Oh, what an incredible welcome and introduction. Thank you so much. My name is Lucky Nkosi. I am an executive here at BBD and my job is to head a team called ATC. We do research and development work, we do specialized consulting and we're responsible for training all of BBD's talent.
Koen Vanderhoydonk:All right, so you're going to give me a lot of good content today, hopefully. Hopefully I give you good content. I'm just raising the bar, that's it. No, I was looking at your keynotes and there were a couple of points that I really liked, so I'd like to zoom into those a little bit. Sure. You mentioned there was a shift from closed to open AI, and I think you mentioned there was a single moment when this happened. What flipped the coin?
Lucky Nkosi:Yeah, it's a very good question.
Lucky Nkosi:I think it depends on who you ask and they'll give you different answers.
Lucky Nkosi:For me, just from a spectator's perspective, if we look back to the likes of Musk and Sam Altman, when Google started focusing on Google Brain a little bit more, this would be around 2014, 2015.
Lucky Nkosi:When OpenAI was founded, there was this view that we had gotten this utopian idea of what artificial general intelligence would be. And a lot of that came with massive risks, and so a lot of these billionaires met and said we should probably bring some of this AI research out into the open and conduct it openly, because if it's all done behind closed doors, we have no way of knowing what's possible and what the technology could be used for. And I think that's when OpenAI was formed. But the thing that truly brought it in front of everyone, I think, was when OpenAI needed people to validate the work that their agents were doing, the work of what we now know as ChatGPT, and so they decided to open it up to the world. And if you remember back to the start, that was a big gamble in its own right. A gamble, yes, but if you think about it, they reached about 5 million users, or something ridiculous like that.
Lucky Nkosi:Very quickly, the fastest ever, and, as my tech lead used to say, prod is where you have the most testers. So they just took it to the public, with the primary goal of validating the work that their models were putting out, and it turned out to produce something even more incredible. So that's the point that I think made everyone aware of where this technology was. It's not that no one else was near what OpenAI was doing. Yes, they were advanced. But I think putting it into everyone's hands changed how the world sees it.
Koen Vanderhoydonk:Well, there's also a theory that says that they were burning cash or they had to validate to get more cash. Do you believe that story too?
Lucky Nkosi:Absolutely. I mean, if you think back to, I think it was 2019, one of the CTOs of Microsoft was looking for workloads to put on Azure, and there's an email that came out during one of the cases where he wrote to the CEO of Microsoft and Bill Gates, saying: what really shocks me is the ambition that the OpenAI team have that we don't seem to be matching up to. And so that first billion dollars that Microsoft gave to them was truly that thing. But again, to validate it, they were hiring people to actually go and validate the work that the LLM was putting out.
Lucky Nkosi:And so burning cash meant that you have to prove value, yes, but they also then needed to find a much more novel way of validating this without hiring all of these people. And, as the old saying goes, if the technology is free, you're the product, so it actually was a mega growth hack from them. Absolutely. I mean, I think it's probably one of the best marketing and business strategies or solutions we've seen in recent times.
Koen Vanderhoydonk:Now, if you think about the idea and the notion that they were burning cash and therefore had to go out, then you obviously have to set a point at which you go, and you have to decide what's in and what's out. So, in your opinion, what did they leave out, and what did they gain, or could they have gained but didn't, in the end?
Lucky Nkosi:I think what we saw there was a very critical step that a lot of investors in AI technology today seem to be forgetting, which is the measurement of success. You need to be able to draw that line, and until you actually put something out into the hands of the people who use it, it's really difficult to measure value, and so you need to be able to define what that value is. For Microsoft, for instance, the big value was getting compute up on Azure, getting Azure used and getting it proven that it's capable of doing all of this work. So a volume trigger, where volume actually means money.
Lucky Nkosi:Absolutely. I mean, they were giving OpenAI this billion to spend on compute, which basically comes back to Microsoft. Also genius on their part, right? And so what did they lose? I think they took a massive risk of letting the cat out of the bag, but I think they gained a whole lot more than what they could have potentially lost, because you don't know what's good or bad about your product until people actually use it. Absolutely right. So, intuitive processes, absolutely. I think they got the fundamentals of product development right, in that you need to build quickly, you need to get it into the hands of people and you need to validate the work that you're doing. And so, if you think back again to the first few iterations, it would always ask you which of these responses was the better one. That was them retraining their models, right.
Koen Vanderhoydonk:And they continue to do that, absolutely. So I think it was critical. Let us zoom out and zoom in at the same time. So imagine that this very moment of ChatGPT coming to the market did not exist, even though it actually happened. A bit of a quirky idea. But what would be the other moment of truth in terms of the pickup of the hype cycle of AI?
Lucky Nkosi:Interesting question. I think the Google Brain team was already doing this work. In fact, in that very same email that the Microsoft leadership got, Google is mentioned there as well, and I think they've continued to innovate in the space, albeit slowly, maybe less open to the world. But I think they would have released a product suite that maybe would not have shocked us into it; I think they probably would have improved search a little bit and then slowly, sort of, conscientized us to what's possible. And so do I think that technology would not have happened? No, I don't think so. However, to contradict myself, what was I?
Lucky Nkosi:I think another very important step was the 2020 paper from the OpenAI researchers, right, which said: our solution has always been to look at how we train these models, but their paper concluded that we actually just need more compute, we need to throw more resources at this thing, and eventually it'll get to a point where it's able to deal with issues that are much more general. All the research before that was saying you need hyper-focused and hyper-specialized models, and for the first time they were saying actually quite the contrary: what you need is more data, you need more compute, and throw as much at it as possible and it'll start being this really practical, general thing. So while that was a really, really new and different way of doing it, research was happening even during the AI winter, and all of it came together to do this. So I think it definitely would have come out, but probably delayed by a decade or so, and it would have come out in drips and drabs, in my opinion.
Koen Vanderhoydonk:Well, it's interesting that you mentioned winter, because by saying winter, you talk about seasons, you talk about cycles. So, realistically, where are we on the Gartner hype cycle?
Lucky Nkosi:That's a very good question. So, consider the fact that the hype cycle is kicked off by a trigger. The trigger, in my opinion, was that 2020 paper and ChatGPT launching. That was a trigger that made everyone look at AI and say this is a thing, and start asking questions about how we can use it. So that's the number one trigger. It then goes to the peak, and along the peak we typically see massive investments into it. If you consider the fact that something like 60% of venture capital right now is going to one sector, which is AI technology, that is terrifying, because it says to us that we're probably approaching a bubble or we are in a bubble already.
Koen Vanderhoydonk:Hence the Gartner question, I guess.
Lucky Nkosi:Absolutely. And then consider that the peak is where you see the biggest investments, where you see the biggest, I'll say, grift for lack of a better term, but that's where you see the biggest promises happening. And then, straight after that peak, as we start going down, you start seeing a lot of failures being made public.
Koen Vanderhoydonk:That's the start of the disillusionment. But isn't that exactly the report about 80% of them not working?
Lucky Nkosi:Absolutely. In fact, there's an MIT study that came out today that postulated that over 90%, I can't remember the exact number, but I think it's around 93%, of enterprise AI projects fail completely, and we know that enterprises will under-report this because no one wants to be seen as failing at implementing AI. But what that says to me is that we're probably already over that peak. We're now in the trough of disillusionment and we're now heading south. One of the other indicators is that you're already seeing some of the companies merging together to build bigger things or to build better things, and they're starting to consolidate, likely from pressure from their investors as well. And so I think we are now facing that trough, that point where we say, actually, this thing doesn't work, while the people who truly believe and can actually productionize this technology start working to get us to that plateau of productivity. So I think we've gone over the peak. I think we're heading down.
Lucky Nkosi:I think we're about to see a lot of companies start failing in the next couple of years. We're going to see a lot of investments proving not to be fruitful. And so, just to add to your question of where I think we are: I think we're going downhill now, and what's important for everyone now is to invest responsibly. So before you take on any project, ask what value it is going to give you. Be sure that you can measure success up front and that you know how to measure that success, so that you don't find yourself at the bottom. So welcome to sanity. Absolutely, this is when we're starting to wake up to sanity.
Lucky Nkosi:I mean, the hype cycle isn't all bad, right? Yes, a lot of the companies would have failed, but we would have learned a lot of lessons, and a lot of infrastructure will have been built.
Lucky Nkosi:There are more data centers being built today in the US than there are office spaces, right? And so instead of building places for human workers to work, we're building more spaces for AI agents to work, and all of this infrastructure will be left over for us to reuse and utilize for useful and productive things.
Koen Vanderhoydonk:Well, last week there was Escape, a large event that you organized with BBD, and I listened to your keynote and you mentioned the term time filter. Can you explain to me what that really meant, what you actually meant by that?
Lucky Nkosi:Time filter is a borrowed term, and where it comes from is my need to try and maximize my time management, or maximize what I focus on, and it basically forces me to ask three questions whenever I need to do a task. Is this a task that I absolutely need to do? Because if I'm wasting time on things that I shouldn't be doing, I'm actually doing myself and everyone around me a disservice. So that's the first question: do I actually need to do this task? And if I do need to do it, and it's a task that keeps coming back more and more, I then need to ask: can I automate this task?
Lucky Nkosi:And if I can't automate it, and it's still coming, and it's something that I have to do, I need to ask: can I delegate it to someone else? And then, failing those, I need to either give it attention immediately or deliberately postpone it to the future, because if I postpone it, I'll eventually be brave enough to eliminate it entirely. And I use the same filter to think about how we get value from AI projects. So when someone says to me, we need an AI that does ABCD, I ask them: what are you trying to do with it? What repetitive task are you trying to automate? What are you trying to delegate to it? And that has helped us sift through a lot of ideas. Clients tell us about a process that they don't think can be automated or delegated, and then we find ways of using AI technology to actually help them out there.
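As an aside for readers who think in code, here is a minimal sketch of that filter in Python. The Task fields, the Action names and the example request are illustrative assumptions rather than anything from the episode; the function simply walks the three questions in the order described above.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    ELIMINATE = auto()  # question 1 failed: the task does not need doing at all
    AUTOMATE = auto()   # question 2: recurring and scriptable, hand it to software
    DELEGATE = auto()   # question 3: hand it to someone (or something) else
    DO_NOW = auto()     # none of the above and urgent: give it attention now
    POSTPONE = auto()   # none of the above: deliberately park it for later review


@dataclass
class Task:
    name: str
    must_be_done: bool    # do I actually need to do this task?
    keeps_recurring: bool
    can_automate: bool    # can I automate it?
    can_delegate: bool    # can I delegate it?
    urgent: bool = False


def time_filter(task: Task) -> Action:
    """Apply the three questions of the 'time filter' in order."""
    if not task.must_be_done:
        return Action.ELIMINATE
    if task.keeps_recurring and task.can_automate:
        return Action.AUTOMATE
    if task.can_delegate:
        return Action.DELEGATE
    return Action.DO_NOW if task.urgent else Action.POSTPONE


# Screening a hypothetical "we need an AI that does X" request with the same questions
request = Task("summarise weekly status reports",
               must_be_done=True, keeps_recurring=True,
               can_automate=True, can_delegate=False)
print(time_filter(request))  # Action.AUTOMATE -> a reasonable automation/AI candidate
```

The point is the ordering: elimination is checked before automation, and automation before delegation, which is exactly the order in which the three questions are asked above.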
Koen Vanderhoydonk:Do you see in this scenario that AI is the lifesaver, the single point that you needed to solve that problem, or is it just a tool?
Lucky Nkosi:Absolutely not. I think it's a really useful tool because, if we think about just these three steps, right: eliminating processes is something that we've solved already. So, if we consider capturing forms, if there was a process where we capture forms and we need to scan them into a digital system, we've eliminated that by giving people apps and saying, actually, capture the data yourself. So we've eliminated that tedious process. When it comes to automating, instead of you needing to go and pay your bills every month, we now run debit orders and we've automated that process. But this thing of delegating is something that's been quite tough for us as human workers: delegating actual pieces of work. And what agentic AI, in particular, has started equipping us to do is to delegate very specific tasks to AI agents, and so I think it's a really, really critical tool.
Lucky Nkosi:but if we think about it, software is inherently deterministic, which means that if we're building financial systems, a financial system needs to know that if I give you these inputs, this is the number I'll get at the end. We can't say, oh, this is probably the number you're going to get most of the time, if it's my own balance.
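To make that tension concrete, here is a minimal sketch, assuming a hypothetical llm_extract_amount call that stands in for whatever model or service is actually used: the non-deterministic step proposes a value, and a deterministic validation layer either accepts a well-formed amount or refuses and routes the document to a person.

```python
from decimal import Decimal, InvalidOperation
from typing import Optional


def llm_extract_amount(document: str) -> str:
    """Hypothetical stand-in for a non-deterministic model call.
    In reality the raw text it returns may vary from run to run."""
    return "1,234.50"


def validate_amount(raw: str) -> Optional[Decimal]:
    """Deterministic layer: either a well-formed, in-range amount or None."""
    cleaned = raw.replace(",", "").replace(" ", "")
    try:
        value = Decimal(cleaned)
    except InvalidOperation:
        return None
    # business rules are explicit and auditable, not probabilistic
    if value < 0 or value > Decimal("1000000"):
        return None
    return value.quantize(Decimal("0.01"))


def post_to_ledger(document: str) -> None:
    amount = validate_amount(llm_extract_amount(document))
    if amount is None:
        # the non-deterministic step failed validation: escalate to a person,
        # never guess at someone's balance
        raise ValueError("Model output rejected; manual review required")
    print(f"Posting {amount} to the ledger")


post_to_ledger("scanned invoice text ...")
```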
Koen Vanderhoydonk:I prefer not, exactly. That's a tip I favor, I'm just saying.
Lucky Nkosi:Which means that we now need to find ways to use this non-deterministic piece of technology to produce deterministic results. And so, anywhere we integrate it, especially in things like fintech or mission-critical things like health, we have to ask what we can delegate and what requires our particular attention. So I don't think that it's the be-all and end-all of technologies. I think it is a really incredible tool. And what you're likely to hear very soon is that, since we've gone through that peak, the big CEOs of the big companies are probably going to need to say that they've achieved AGI at some point, because the hype cycle needs to keep going forward. I think what we're going to see is something that imitates AGI.
Lucky Nkosi:Absolutely right, and it'll be a thing that is sold to us as AGI when, in the background, it's just multiple different things simulating it. We've seen it with chatbots: we're told this is AI, and as soon as we get the technology working, we rename it and call it something else.
Lucky Nkosi:We saw object detection and we said this is AI. So it seems that AI is a term that's just used for something until we can figure out what the technology actually is, and then we name it that thing, and then we keep chasing AI. So I see it mostly as a really good marketing term to get us to invest. So here we are, our second marketing world wonder that we've discovered in this podcast. Absolutely, and so I wouldn't be surprised if, in a year, maybe two, some of the CEOs talk about being close to AGI and give us a version of what they believe they can sell as AGI, because the share prices depend on that. Absolutely, Lucky.
Koen Vanderhoydonk:What's the end goal of AI? Sorry if I now use that very generic term again.
Lucky Nkosi:It's a really good question. I think we've gone through a lot of recent hype cycles that were technologies looking for an application, and the true believers, the people who truly find value in them, are able to operationalize them. Think about blockchain: there are people who are finding incredible value in it, but there are also a lot of people who've dumped a lot of money into it and found zero value, trying just to say that they're using blockchain. I think AI is different. It's absolutely different. I think it's an incredible technology that is able to disrupt tech by its non-deterministic nature and its generative nature as well.
Lucky Nkosi:The end goal of the technology itself? I don't think a technology can have an end goal. The question is about how we want to use the technology.
Lucky Nkosi:The father of AI just got a Nobel Prize, and in his speech he warns of the dangers, heavily warns of the dangers, of us not using this technology appropriately, and we've already seen some misuse of it, especially in the defense space. So the technology itself, I don't think, can have an end goal. It's about how we use it; we are the ones with an end goal. If we look at a lot of the robotic stuff that we're being socialized to, we're seeing a lot of humanoid robots now coming out of China, coming out of the US, in sort of friendly, funny instances. I think that's probably where the next frontier is, and we need to be asking the difficult questions. Back to hardware.
Lucky Nkosi:It'll always be hardware, because everything needs to exist somewhere, and what truly matters is what you can impact in the physical world. Right? And then, at the end of the day, everything around technology is about adding value to humans, and so value will always come back to something that's tangible.
Koen Vanderhoydonk:Well, talking about being tangible with BBD, how do you help companies to deal with AI?
Lucky Nkosi:That's a cool question. I think the central role that we've played is twofold. Number one is that we've helped them with their thinking so that we can cut through all the noise. By being able to research at such large volumes, with so many different types of technologies, we've sort of learned what works and what doesn't, and so we have clients coming to us saying, we want to do this, and we're able to point them to an instance where we've done that and it didn't work because of ABCD. So a lot of research and development work has been conducted already, and we're continuously doing it, so we're able to eliminate the noise for them.
Lucky Nkosi:And because we've existed in these sectors for over four decades, we also know, or we have people who've seen, similar projects go downhill, and they can warn us against taking certain approaches.
Lucky Nkosi:So the biggest value is our ability to cut through the noise and actually point them in the direction of technology that has true potential. And then, secondly, by having so many technologists who try so many things, we're able to actually do that research and development for them. I can give you a really good example: we recently helped a bank, I won't mention any names, but they're a digital bank, and we helped them go through things like product selection when they wanted to have a chat interface into their bank, to actually give you a virtual banking assistant that will help you, even through WhatsApp. We were able to get them to test all of the options that they had in terms of products, and they found that none of them could scale to the level that they needed. Once they decided to actually do it internally, we were then able to help them operationalize that really, really quickly. So our ability to test technology out really quickly, I think, is a massive asset, and our experience helps us filter the noise.
Koen Vanderhoydonk:Well, that's a very nice explanation of what BBD can do. Is there anything else you want to mention? And please do not use your ChatGPT for this.
Lucky Nkosi:It's a good thing I didn't prepare for this podcast, because otherwise I would have. Damn it.
Lucky Nkosi:I think the only thing that I'd like to add, really, is that, as a web developer myself, the idea of technology changing very, very quickly and coming at us fast is not new to me.
Lucky Nkosi:As web developers, we have a new framework every single week, and one of the things that I've learned from that is that a lot of the noise is very distracting, and you need to be able to look at what's valuable and what's not. The only way that you can do that is by engaging with people who are playing and experimenting with all of this technology, so that you can see where to invest your efforts. You can't possibly try every single AI tool out there, and so build community. Keep interested people around you, attend events such as meetups, listen to podcasts, see what people are playing around with and thinking about, and in there you will find what strikes a chord with you. And if you don't do that, you're just constantly going to feel overwhelmed and inundated with all of these changes. Engage your community and you'll see what is valuable to you, and then play with that.
Koen Vanderhoydonk:Well, thank you for that great advice, Lucky. Thank you also to the listeners, and please stay tuned. Thank you for having me. You're welcome.