DataTopics: All Things Data, AI & Tech

#93 The Most Misunderstood AI Statistic of the Year: Lessons from Tech Expo on Hype, Failure, and Innovation

DataTopics


In our final episode of 2025, we sit down with Tim Van Erum to unpack what really stood out at Tech Expo Amsterdam.

Together we revisit the most misunderstood AI statistic of the year, exploring why the “95% of AI projects fail” headline is misleading and how hype versus reality played out across the conference. Tim shares why failure and experimentation are not setbacks but essential drivers of innovation, and we highlight Reddit’s Scaling Safety strategy as a powerful example of machine learning in action.

This candid conversation closes out the year with lessons on what AI truly delivered in 2025.




SPEAKER_05:

Hello, welcome to Data Topics, your casual corner of the web where we discuss what's new in data, every once in a while. Today I'm sitting with Tim. Hey Tim. Hello. You had a nice conference experience and you wanted to share it with everyone. But before we dive into that, Tim: this is not the first time you're on the pod, it's not the second time either. You've been here what, three, four times already? Something like that, yeah. Something like that. I think the last episode that I remember, at least, was on sports analytics.

SPEAKER_01:

Oh yeah, that might be, yes. With Thibaut, I think, at that point.

SPEAKER_05:

Yeah, maybe Thibaut. So people are always welcome to go back in our feed and listen to the previous episodes with Tim. But for people hearing about you for the first time, Tim, would you like to introduce yourself a bit?

SPEAKER_01:

I would like to try. Go for it. So I'm Tim. My official title is Customer Solutions Lead. Is it still? At this point, it still is. Okay, yeah. I have no idea what it's gonna become after January 1. I used to be part of our pre-sales team, so I did the first deep dives, kickoffs, these kinds of conversations. But I will transition into a new role: I will head up our FLASH team, together with another friend of the podcast. So, Tim Leers. Tim Leers.

SPEAKER_05:

Very cool. It's gonna be a Tim Team.

SPEAKER_01:

Yeah, we joke around and say it's a tiny Tim team.

SPEAKER_05:

Tiny Tim team, yeah. They're gonna have a Team Day. You get it? Tim Building. Team Building. Yeah, yeah. Well, I wanted to be in the Flash team too, but they said no. They wanna keep the joke going, so okay, it's fine. Very cool. What is the Flash team?

SPEAKER_01:

The official acronym is the Fast, Lean AI Service Hub, which is obviously an acronym that was devised after we found the FLASH team name.

SPEAKER_05:

Yeah, it's like you pick the name and then it's "let's find something that makes sense," okay.

SPEAKER_01:

But the idea is really... so Dataroots is part of the Talan group, and the idea is to bring the AI knowledge that we have within Dataroots to an international platform. So collaborate with our colleagues across the UK, Switzerland, the Americas, the Middle East, and then Spain, Poland. There are more countries than I can name.

SPEAKER_05:

Peru, I think, Canada, Tunisia. I don't know if you mentioned those already.

SPEAKER_01:

And try to bring our AI knowledge to their teams as well.

SPEAKER_05:

Yes, very cool, very cool. Excited to see it take shape. It's official in January?

SPEAKER_01:

Officially January 1. The transition is ongoing. Yeah, okay.

SPEAKER_05:

And is it gonna change a lot for you? The scope, right, the kind of projects. Is it also gonna change what you're gonna be doing on those projects as well?

SPEAKER_01:

For me, it's a shift towards delivery, right? Until now I was mainly involved in the initial conversations with customers, really trying to devise what we could potentially do, and then afterwards a delivery team would come in and actually build it. Now I'm shifting towards being the person that actually has to solve the situation, so it's a change. But I really like it, yeah.

SPEAKER_05:

Yeah, exactly. It's exciting, and I'm sure you're gonna do a great job. So what do we have for today? You went to a conference, the Tech Expo in Amsterdam. What was the conference about?

SPEAKER_01:

It's a big conference about everything tech related. I initially registered for what you see on the screen, the AI and Big Data part of the expo. But it was a big expo which also covered cybersecurity, which also covered digital transformation as a whole. It covered blockchain, so there was a stage on blockchain. You see there the stage on cybersecurity, and it featured a lot of leading companies, very big companies. Obviously it's Amsterdam, so more Netherlands-focused companies, but also from across the entirety of Europe. I mostly followed the data and AI track. We were very happy to be invited by the organization, so we were able to follow whatever tracks we wanted to follow. So sometimes, both me and Sophie, the other person that went, we jumped into the digital transformation track, or sometimes cybersecurity when there were interesting topics there. But mostly the AI and Big Data part of the track, which is a series of presentations, both from consultancy partners and non-consultancy partners. Very interesting topics, very inspirational, which was nice.

SPEAKER_05:

And so you mentioned there was also the digital transformation track. Is that still the same conference? Because what we have on the screen now says AI and Big Data Expo, but it's a different track within the same conference.

SPEAKER_01:

Yeah, so you see that the color of the AI and Big Data Expo is this light purple-ish. It was basically one big conference hall, and you have multiple presentations ongoing at the same time, and I hear you think: how the hell do you follow all of this? So they had the silent disco concept. You wear headphones, you sit in the audience, and you can tune into the presentation you want to follow. So there's no physical barrier between the presentations, it was just one big conference hall, which was really cool to see. It's interesting.

SPEAKER_05:

Do they have background music as well, or is it just people walking around and talking in the background?

SPEAKER_01:

It's like the individual booths of people talking. Yeah, yeah.

SPEAKER_05:

So, okay. And they also had booths, right? That's also what we see here. So you have the main stages with the talks, and you also had booths. And the idea with the booths, is it products, is it companies hiring? What kind of people went there?

SPEAKER_01:

A bit of everything. There were a lot of companies that have AI products these days. Then there are a lot of large consultancy partners that are, I think, mainly there for recruitment. Yeah, I think so. And there were some very interesting companies. There was one company that was really working on a standard for talking to PLCs and machines, etc. It's an open foundation. No, correct. It's like the Linux Foundation, but they really invest in these open standards. Maybe IEEE is a better comparison: really safeguarding the standard, working on how we can improve software-to-machine communication, how we can open up the data ecosystem of these machines and PLCs and everything there.

SPEAKER_05:

Cool, cool. And do you have any general impressions? How was the vibe of the conference, what kind of talks were there, what are the main things that you heard over and over?

SPEAKER_01:

I think there was a very specific... I've talked to a number of colleagues here at the office about it. There was a very big trend at the AI and Big Data Expo, and I think in general you see the same trend if you go on LinkedIn: there were a lot of companies presenting on "you need to jump on the AI train, and this is not just a hype train, this is real," and multi-agent orchestration and everything, the really crazy cutting-edge agentic AI, and I'm obviously a big fan of that. But then there was a second movement, a second stream of presentations, which was more about real-world applications of AI today, a bit less focused on Gen AI, a bit less focused on agentic AI, more... So you preferred that type of talk? Well, the first one was more theoretical, abstract, more "look at the art of the possible." Yeah, I see. Which I sort of hated. Why did you hate it? Because it was not grounded in anything. I see, it was just fluff. Yeah. So the second stream of presentations... the first one was obviously given more by, and I'm a bad person to talk about it because, consultancy partners.

SPEAKER_05:

A bad person because I've done this a lot. I've done this.

SPEAKER_01:

But it's a difficult thing, right? All of these, especially the consultancy partners, they come with "oh, you have to jump on this, you have to jump on that," but very few things that are really directly applicable for a lot of companies. Yeah, I see. It's like, oh look, multi-agent orchestration. And if something doesn't work out, just throw another LLM on top of it and it's gonna be great. LLMs all the way down. Let it make decisions for you, it's gonna be great, no questions asked. And then from the real, the non-consultancy companies, really the companies that are applying AI to achieve whatever purpose they have, it was a way more realistic mix. It was way more like, okay, we try some things out, it doesn't work, it's fine.

SPEAKER_05:

But it feels like they're not trying to sell you something necessarily. It's just like: we tried this, this worked, this didn't work, we were a little bit disappointed, yada yada. More like that.

SPEAKER_01:

Also, it was the complete opposite of "oh yeah, everything needs to be Gen AI and agentic AI, this is the greatest thing ever." No, it was just like, we have a healthy mix: some of these things are just basic clustering and regression, and if we can solve it with regression, that's fine. Not everything needs to be this great heavy foundational model, and some things were, because you know, if you want to generate marketing videos very easily, then probably a Gen AI model is gonna be the best way to do that. Don't recreate it from scratch. So it was a very healthy mix there, and a way more realistic way of looking at things. So, cool. I preferred that part, the second stream of presentations. And I do feel there were people there, which I talked to, who had the feeling that especially the first stream was more interesting, because for them it was more about understanding what is possible, and they are more faced with the day-to-day reality of things. But I think maybe you and I, we're in a different space, right?

SPEAKER_05:

That maybe we are a bit more saturated with this. Well, first, I think we have a better idea of what's possible. And the second thing is, if you go on our LinkedIn or whatever, because of our network, you are naturally exposed a lot to these things.

unknown:

Yeah.

SPEAKER_05:

Right. So maybe for us it was a bit much, but for someone that is not in this space, maybe they see this and go, oh, I never thought of this, or I hadn't heard that.

SPEAKER_01:

It's true. One thing that I try to avoid... obviously, I'm very heavily focused on the data and AI space, because that's literally all my professional experience. And that means that a lot of my LinkedIn network, for example, is people from the data and AI space, or people that are outside of it but still doing adjacent activities, and that means that I see the same post passing by on LinkedIn every once in a while. I saw the same thing at this conference. I started a count at some point, because the first two presentations that I followed both cited the same study, and I was joking with Tim Leers, my co-FLASH member, like, this is the second time, oh, this is now the third time I've heard this MIT study, and now it's the fourth and the fifth. And I see it passing by on LinkedIn every once in a while as well. A lot of people citing the same study and just revolving around in the same echo chamber, and at some point I'm like, okay, but let's try to do something which is actually thought leadership, something that people can actually do something with. And that's especially one of the individual things that I wanted to highlight from this conference: so many companies citing this one study, the MIT study, which says only five percent of agents are actually generating value, and then the same conclusion being drawn five times from one same study. Which is always the same: oh yeah, we do not succeed in turning agentic AI into value, and that's a problem because obviously it shows that either we're investing in the wrong places, or we're doing XYZ, or people are experimenting too much and they should stop experimenting, we need to get out of this POC world and we need to make sure that we can get this operational. And I don't believe that.

SPEAKER_05:

Okay, so maybe for people hearing this for the first time: you mentioned five percent, but I think the headline that I remember seeing said that 95% of AI projects fail. Yeah, and I already heard someone saying that even the report doesn't say that. The report doesn't say that they fail, it just says that 5% have proven revenue impact or whatever. So basically, just because something hasn't proven success doesn't mean it failed. That's also what I heard. But I heard there was also way more criticism. Maybe also, before we talk more about the study and your opinions on it, to be a bit more meta: I also think it's interesting that people keep repeating the study, because there's no risk in it, right? If you just go and repeat something that MIT said, nobody's gonna say, ah, that's wrong. It's something that made a lot of noise in our bubble, let's say. And if you start seeing too many people repeating this and really driving on this point, I feel like, come on, what are you telling me? What are you bringing here on the stage today?

SPEAKER_01:

You know, yeah, and it's the same thing: at face value it looks like an obvious conclusion, like, oh yeah, we're failing in turning this into business value. But then at the conference itself, I had some conversations with people who are doing this as a day-to-day job. The people whose presentations I listened to, I saw the presentation and I was like, wow, they are doing great stuff with AI. They were showcasing some of the use cases they had across their company, and I was like, well, this is really impressive, this is really cool, there seems to be a strategy behind this. And then you have a conversation with these people afterwards, like, how did they get to this? Because it's nice seeing the results at the end, but there's a journey to get there. And a lot of these people were like, yeah, we just crashed and burned a couple of times. We tried out so many things. I think about 10% of our initiatives actually made it to the end, and we probably kill about 50% of these initiatives after the first two weeks. We try it out, especially with Gen AI today: it's easy to try it out, we have a lot of the agent kits and these kinds of things where you can build something really fast. We build it, it doesn't work, it's fine. That's the whole point of R&D. That's the whole point of innovation: to try out things that potentially don't work.

SPEAKER_05:

But so that's your opinion as well in regards to this study, right? That it misses the point a bit: this is new, this is R&D.

SPEAKER_01:

I personally feel like there are a lot of points to be drawn from a study like that. If the study is legit, which has also been drawn into question a couple of times, with people asking: is it actually an MIT study or an MIT-associated study? Yeah, I also heard those question marks. So I don't know at this point. But there are a lot of conclusions you can draw from this, and everybody draws the one single conclusion, which is like, oh yeah, this is a failure, or we're going too fast, I don't know. It's just like a couple of months ago, everybody was advocating, and I still am advocating for the point: experiment, try it out. It's the best way to learn. And this is my own personal first-hand experience across a number of projects here at Dataroots: whenever people experiment with this, whether it works or it doesn't, their understanding of how it works gets better. So it improves the future success of this.

SPEAKER_05:

No, I agree. I can definitely see that perspective, but I'm also wondering about people that come from the other side, that don't know the Gen AI thing, and just think, okay, this is a sure thing, right? I mean, I have seen this throughout my career as well, even before Gen AI. You have people that say: I have this idea for an AI model, we're gonna take our customers, we're gonna cluster them, we're gonna have like six clusters, and then we're gonna do personalized marketing. And then you cluster, and either you have 300 clusters, or you have like three, and each cluster is supposed to have an identity, like sports and this and this, and we're gonna send personalized marketing stuff. But then you get the cluster and it's like, yeah, this is kind of like sports, but kind of like lifestyle as well, and this one also has a bit of sports, and it doesn't work. And then I think there's still a bit of bringing across that this is experimentation: we're gonna try, we have a hypothesis, but it is possible that it's not gonna work, right? I think here at Dataroots as well, we try to be very pragmatic: if this is not gonna work, let's figure it out early. But you see a lot of companies that invest so much, almost like in a waterfall: we're gonna do this, and then by next year we're gonna deploy, and by this time it's gonna bring in this much money, which is not the case. So I also think that maybe this study was used wrong by some people. And to be very fair, I haven't read the study in detail myself, right? But from the things I have heard people say, and the things I've read, just the abstract and the title, it could be helpful for some people to say: hey, this is not a sure thing, because there's so much money invested in Gen AI these days. Just to bring a bit of, hey, it's not a guarantee.

SPEAKER_01:

I think my main frustration with the conclusion that is being drawn is when I have conversations with people leading or establishing AI functions within companies, and they go and use this thing as a metric, like: how many of my AI projects actually make it to production? Some companies start with 10 initiatives and two of them make it to production. Is that a good thing or a bad thing? I think it depends. Of course it depends. So it's a faulty metric.

SPEAKER_05:

Yeah, that I agree with. And I think also, you're talking about initiatives as if we spend the same amount of time on all of them, the same investment in all of them, which I also don't think is how we should be looking at things, right? Like you said, if I try something today and abandon it two weeks from now, that's different than if I spent a year on something and then abandoned it, right? It shouldn't be weighed the same. So that I fully agree with. I think it's a more nuanced conversation, and sometimes the nuance is a bit lost when people are trying to make a point and they show this and show that.

SPEAKER_00:

Yeah, right.

SPEAKER_05:

Sure. Now, on the actual talks: we had synced before on what kind of talks stood out for you. And the first one here is "Scaling Safety: Machine Learning Strategies to Protect Online Communities." What was this talk about and why did it stand out?

SPEAKER_01:

I think the reason why it stood out, as a start, is because it's Reddit, and it's the director of machine learning. So, and you probably do this sometimes as well: if I try to find reliable sources on machine learning and AI, you can go to research papers, which are obviously a reliable source, but sometimes you have these subreddits where there is just such a tremendous amount of knowledge. I consider Reddit to be a very nerdy community, and if you're then the director of machine learning at Reddit, I consider you to be kind of trustworthy, and he proved it as well. Which is why, when I initially looked at the program, I was like, oh wow, director of machine learning at Reddit, the king of nerdiness maybe. Alex, do you ever go to Reddit?

SPEAKER_05:

No... yeah, it's nerdy, it's a nerdy thing. I watch YouTubers who go through Reddit threads. Do you like the nerdy YouTubers? Not really, I don't know. Okay, so she's a proxy Redditor. Yeah, a proxy: I don't do Reddit, I'm not an addict, I have my means, you know, someone else does it. Yeah, I also like Reddit, because they seem to be very developer friendly as well, right? There's a lot of stuff you can do with Reddit. I like the whole setup, how it's community driven, how they found a balance between moderation but also keeping it open for people to do things and to create these different communities. I also enjoy it. And I also like how, you know, Gemini in the beginning, when they had their AI summaries, they were also looking at Reddit for answers. So one of the questions was how to make the cheese stick on the pizza, and someone had already said, ah yeah, you should put glue on it or something, clearly just trolling, and Gemini was like, oh yeah, maybe you should do this. And then it tried to justify it, like, make sure you use a non-toxic glue, and all these things. It's like, yeah, this is great. It's perfectly leaning into the topic of Alexander G there.

SPEAKER_01:

Yes. So what he wanted to achieve, the use case he was talking about, is: how can we leverage machine learning to better protect the communities? You have the Reddit communities, and each of them has its own rules, and within these, your messages, your responses, should stay within those rules, otherwise you can be blocked. Maybe a quick question: is it machine learning or is it Gen AI, or is it both? So initially, they've been trying to tackle this with machine learning from before the Gen AI boom, so they used to have classical NLP techniques: named entity recognition, keyword matching, these kinds of things. And then obviously the entire LLM thing happened, and they also realized, okay, we need to adapt to this. And it should be a relatively straightforward thing: you have a limited set of rules, which are stated in the community on Reddit, and you evaluate every message against this set of rules. Should be fine, right? And he was talking about how, on the one hand, they have such a tremendous amount of messages that it would be very costly.

SPEAKER_04:

So they had to... that's the first thing I thought. Because what you're saying is, maybe to backtrack a bit: you have the community guidelines, which are basically text.

SPEAKER_05:

Yeah, and then what you could do is just put that in the LLM prompt and say, okay, for every message, does it respect these guidelines that I'm telling you? That's very naive; it probably would work, but you would probably be bankrupt in like half a second.
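
To make that naive approach concrete, here is a minimal sketch, assuming an OpenAI-style chat client; the model name and the rule text are placeholders, not Reddit's actual setup:

```python
# Naive approach discussed above: check every message against a community's
# rules with one LLM call per message. Assumes an OpenAI-style client; the
# model name and rules below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COMMUNITY_RULES = """
1. Be civil: no personal attacks or harassment.
2. No spam or self-promotion.
3. Posts must be on-topic for this community.
"""

def violates_rules(message: str, model: str = "gpt-4o-mini") -> bool:
    """Return True if the LLM judges the message to break any community rule."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "You are a content moderator. Answer only YES or NO."},
            {"role": "user",
             "content": f"Community rules:\n{COMMUNITY_RULES}\n"
                        f"Does this message violate any rule?\n\nMessage: {message}"},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

# One call per message: fine for a proof of concept, far too costly at
# Reddit-scale volumes, which is exactly the scaling problem raised next.
print(violates_rules("Buy my crypto course, link in bio!"))
```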

SPEAKER_01:

Yeah. Right, okay, carry on. So that was the first challenge: how do we scale this? Because the proof of concept works kind of okay, and I'm gonna get into why it's only kind of okay later. But how do we scale this? Then you have to look at more efficient models, and there's a lot of progress being made in that area, having more energy- and cost-efficient, not per se large language models, but medium-sized or small language models, language models in general. That was the first thing. They were also really looking into self-deployed language models that they could use, and stuff like that. So from a technical point of view that was really interesting. And then, he's the only person I've ever heard complain, well, not the only person, but in this kind of context, complain about the post-training of an LLM. What do you mean by post-training of an LLM, for people that are not familiar? Basically, there are a lot of YouTube videos on how LLMs are trained: you feed them a lot of text, and the first thing they try to do is next-word prediction, predict the next word. And a second thing they try to do is make sure that answers are aligned with how people would answer, and there are a lot of QA pairs that they give; there's a whole bunch of things happening when training an LLM. But then a lot of the commercially available LLMs also do a sort of safety post-training: making sure that you cannot jailbreak the LLM or make ChatGPT tell you how to make a bomb, it won't tell you; making sure you cannot get profanity in an answer, or generate nude images of somebody. So there's a lot of that safety being put into these models, and they were facing massive issues with this. Because in a lot of communities, you would think "how can I make a bomb" is obviously something an LLM is gonna refuse: I cannot answer that question. But they have r/gaming, and you have games where, as part of your gameplay, one of the things is: I don't know how to make a bomb, I need the ingredients. Imagine you have a character, and you put ammonia and all your things together and all of a sudden you have a bomb; it can be a very violent... So they were running into these profanity filters.

SPEAKER_05:

Oh wow. And these safety filters, because a lot of things got classified as violations that shouldn't be, basically a lot of stuff that shouldn't be flagged was flagged. False positives. Yeah, I was mixing up false positives and false negatives, the two can be tricky, but basically it had so many false positives. A lot, yeah. And I also think, at the scale of Reddit, even if it was a small percentage, it's too high a volume, right? Okay, so how did they solve this, or did they not?

SPEAKER_01:

Well, one of the things was actually training their own LLMs. So using a base model, a way smaller, open-source base model, and then training it themselves, aligning it themselves. Right, because they probably have a lot of labeled data as well. They have that, and that's one of the things that makes this possible for Reddit and not for 99% of other companies: they have content moderators.

SPEAKER_04:

Yeah, they have people that spend entire days labeling content, just saying: this is okay, this is not okay. Yeah. And so this tremendously improved upon the original way they were doing it, the classical NLP way. And the model that they trained, is it open source as well, or did they just keep it proprietary, do you know? I'd be curious to see. I don't know, I should ask; I don't have the guy on WhatsApp.
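
As a rough illustration of that "train a small model on your own labels" route, here is a minimal sketch using a small open-source model and Hugging Face tooling; the model name, labels, and hyperparameters are assumptions, not Reddit's actual pipeline:

```python
# Fine-tune a small open-source classifier on moderator-labeled messages.
# Toy data stands in for a large corpus of moderator decisions (1 = violation).
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

data = Dataset.from_dict({
    "text": ["Buy my crypto course!", "Great write-up, thanks for sharing."],
    "label": [1, 0],
})

model_name = "distilbert-base-uncased"  # small open-source base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized = data.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="moderation-model",
    num_train_epochs=1,
    per_device_train_batch_size=8,
    logging_steps=10,
)

# In practice you would also pass an eval_dataset and track precision/recall,
# since false positives were exactly the pain point discussed above.
trainer = Trainer(model=model, args=args, train_dataset=tokenized)
trainer.train()
```

Once trained, per-message inference on a model this size is far cheaper than one hosted-LLM call per message, which is what makes the approach viable at volume.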

SPEAKER_01:

Give him a call. Cool. Yeah, but that was really cool, because we get a lot of questions from companies like, okay, how do we tackle this whole challenge, when do we do training ourselves? And I think I've never said to a company: actually, you should do fine-tuning yourself, or you should host your model yourself. It might be that at some point somebody at Dataroots has had this, but I personally haven't encountered it, and this was one of the cases where I was like, yep, actually that makes sense, you need to do this.

SPEAKER_05:

Wow. I think if they have the data... it's more about the data they have, and what's the cost and what are the benefits, even though this is also a very narrow use case, right? Well, a large part of being such a content-heavy platform is having actual proper content moderation.

SPEAKER_01:

So I think the costs for content moderation are quite high for them.

SPEAKER_05:

Yeah probably makes it a business case.

SPEAKER_01:

I feel like this is probably the thing they spend the most time on, as a whole. Yeah, might be. Probably. Cool. Anything else you want to share on this, anything else that stood out? Just that I went to talk with the guy afterwards, because he mentioned that they try to do everything in a very MLOps-safe way. Every new thing they start, every new project, from the beginning they immediately start with a new repo, experiment tracking, everything like that. Our MLOps competences would be like, damn, this is actually best practice. And I went to talk to him and I said, doesn't that stifle your innovation cycle? Because innovation is about experimenting and trying things out, and I've seen enough machine learning engineers who are just like, just give me a notebook and I'll try something out, and afterwards I'll do it best practice. And he was like, it's just because they grew that way. They came from more traditional software engineering, so they're so used to this that it immediately got introduced, and it was part of their culture to do it that way. It was part of their way of working, and every new person that came in was immediately told: this is how we do it. And it was easy and seamless for them. For them, requesting a new repo is just a click and, boom, it's there. Okay, interesting. It seems like they have a really mature setup. And he was like, yeah, actually it's not always that perfect, but voilà, it's really nice.

SPEAKER_05:

I'm also curious what kind of tooling they have, what kind of setup. Do you know, did he mention it? I think he did.

SPEAKER_01:

Yeah, but this was two weeks ago, and I had a week of holidays in between, so it's fine. What do we have next? We have one from PepsiCo. Yes. So this is "The Next Frontier in AI Applications: Transforming Data into Actionable Intelligence." This is one of those titles that reads a bit clickbait, you know? Yeah, yeah. What was this about? So this was, as you can see, the VP of Data, Analytics and AI. I actually walked into this presentation accidentally. I had a call and... You were looking for a meeting room? I was looking for a meeting room. No, actually I had just finished the call and got back to this, but I hadn't planned for the call to end that soon, so I was walking in there like, oh cool, I can follow this. And he was talking about something which, outside of the conference, I was actually working on and thinking about, and having a lot of discussions about with people inside Dataroots. So he mentioned they have about 40,000 reports inside PepsiCo. Which, 40,000 per how much time, per year, per month? No, just in general, standing assets, reports. Ah, okay. Imagine Power BI reports, 40,000 of them. Which is a lot to maintain, which is difficult, which is a challenge, and this is a question that we as a company also got, so I related very much. And they were very far ahead compared to what I typically see, which is a lot of companies looking into: how do we clean up this process? Because we've been doing traditional BI for quite a while, and I'm talking about really traditional BI, the thing from before my time, where we created these structured reports, and reports are drawn up in a management cockpit, and all these things. And he was like, but a lot of these questions really don't warrant a report. A lot of these are bloated reports: somebody had a question once, and the only way they knew how to interact with the data was creating a new tab in the Power BI report and messing around in there. And he's like, we don't need that, it's bloated, and every night we do a recalculation of these reports and it costs us so much money, so we don't want to do this.

SPEAKER_03:

Maybe the information gets outdated, or some definitions change, and it just stays there, yeah.

SPEAKER_01:

You cannot govern this; there are so many implications. And he was just like, what if we get rid of them altogether and just put a natural language interface on top that answers all these questions? You don't need any reports. So his vision, or PepsiCo's vision, was a bit extreme for my taste, which is: we get rid of these 40,000 reports and we just go to 50 AI consoles. Okay. Which to me is a bit... What is it, like a ChatGPT-like interface? You type your question, you get an answer, and the answer consists of a direct answer, a visual. And what's the difference between one console and the other, because there are 50? Well, you have to scope this in some way. It's like: I have a question about finance numbers, so I know I need to go to this ChatGPT instance, quote unquote, to ask my question. So I think that's also where the research comes in, and again another topic where I spent a lot of time talking to Tim Leers. But theoretically, the benchmarks these days, if you go just text-to-SQL, assuming we're only looking at structured data, text-to-SQL gets about, what is it, 61% accuracy in general at answering the question correctly. As a rule of thumb you get about 61% accuracy, and you can improve upon that, right? If you set the right guardrails, the right scope, the right domain, make sure you have the metadata, the documentation, all these things, yeah, for sure. But out of the box you get 61. So if you just try to apply one AI console to all the questions being asked in your company, you're probably not gonna get great accuracy.
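
To show what "scoping" a console might look like in practice, here is a minimal sketch of a domain-scoped text-to-SQL prompt, assuming an OpenAI-style client; the table schema, definitions, and model name are illustrative assumptions, not PepsiCo's actual setup:

```python
# One "AI console" per domain: each console only sees the schema and business
# definitions of its own domain, rather than the whole company's data estate.
from openai import OpenAI

client = OpenAI()

FINANCE_SCHEMA = """
Table sales(order_id INT, store_id INT, order_date DATE, net_revenue_eur DECIMAL)
Table stores(store_id INT, country TEXT, region TEXT)
Definition: "net revenue" excludes VAT and returns.
"""

def finance_console(question: str, model: str = "gpt-4o-mini") -> str:
    """Turn a natural-language finance question into SQL, scoped to the finance schema."""
    prompt = (
        "You write SQL for the finance domain only. "
        "Use only the tables and definitions below; if the question is out of scope, say so.\n"
        f"{FINANCE_SCHEMA}\nQuestion: {question}\nSQL:"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

# Scoping the schema, metadata, and definitions per console is the kind of
# guardrail that pushes accuracy above the ~61% out-of-the-box figure above.
print(finance_console("What was net revenue per region last quarter?"))
```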

SPEAKER_05:

It's gonna degrade the performance a lot, for sure, for sure.

SPEAKER_01:

But so my vision, compared to this guy who obviously knows way more than me, is that there is an in-between. I think he personally knows there's an in-between as well, but the vision is a North Star to get to. Maybe we can get there; maybe they'll still need about a hundred or five hundred reports, but those are reports that are actually being used day to day. Every day somebody's looking at them, somebody's using them; then I can warrant that it's a report you need to keep. But if you can remove the remaining 39,500 and replace them with a couple of AI consoles that can answer those questions, so people don't need to recreate unused reports, then that might be a good thing.

SPEAKER_05:

True, true. And I think it sheds a new light on how data teams need to look at governing their BI. Yes. But also, as you're saying these things, my thought is: before we go into the AI, can we just monitor how many times each report has been viewed? It maybe takes a bit more work, and maybe he also did this, but if you have, how many, say 4,000? 40,000. Like, how many of those have been viewed in the past year, the past two years?
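
A minimal sketch of that "measure before you migrate" idea, assuming you can export a view log from your BI platform; the column names and file path are made-up, not any specific platform's audit log format:

```python
# From a report view log, find which reports nobody has opened in the last year.
import pandas as pd

# Expected columns: report_id, viewed_at (one row per report view)
views = pd.read_csv("report_view_log.csv", parse_dates=["viewed_at"])

cutoff = pd.Timestamp.now() - pd.DateOffset(years=1)
last_viewed = views.groupby("report_id")["viewed_at"].max()

stale = last_viewed[last_viewed < cutoff]
print(f"{len(stale)} of {last_viewed.size} reports had zero views in the past year")
print(stale.sort_values().head(20))  # oldest ones first: candidates for retirement
```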

SPEAKER_01:

You know, one of the big questions often is also that people create the same, semantically the same, report in different ways across the organization, and they don't know about it. Yeah, I was also thinking: maybe, besides having this AI tool, somehow having a layer before you create a new dashboard, like, maybe this was already created, maybe this is what you need. That's obviously something a lot of companies are looking into as well, right, improving the findability. Yeah, exactly. But findability becomes more difficult the more things you have to search. So do you have to improve findability across 40,000 reports, or across 500 reports? Yeah, for sure, it's easier; it makes your data governance a lot easier. And to be fair, a lot of these questions are ad hoc questions: somebody has a question once, and not everybody has the skills or the access to go into, say, a query workbench and try things out; that's just the reality. But a lot more people have knowledge of Power BI, and maybe they just like the drag-and-drop thing, and they can find some information, like, okay, look, this report, and then it exists. Yeah, and it stays there.

SPEAKER_05:

Yeah, creating stuff is easy, and then maintaining it is sometimes the tricky part.

SPEAKER_01:

So I do like that you have this additional way of giving people access to data, if you implement it in the correct way. I think there are a lot of safeguards you have to put around it: you have to look at what types of questions, what the intent of the person is, you have to evaluate the types of questions very carefully. But I do think there's a lot of value in this way of doing BI. Or at least in including it, sorry.

SPEAKER_05:

Yeah, exactly, I agree. I think this is a more moderate take, right? It's not "this is better all the time"; some cases this, some cases that, but there is an opportunity to bring more of these Gen AI applications into the space, for sure, absolutely. But I also think that, like you said, the text-to-SQL performance is not amazing yet. There's a lot of stuff we can do, and I think we also need to figure out what we can do and what's the best approach. Maybe even among the 500 most popular reports that we keep, there are a few questions that are very tricky to get from natural language to SQL queries, right? So maybe for those it's better to have a report, because from experience you can say: these types of questions the AI is going to find very hard to get right. It's very nuanced.

SPEAKER_01:

I think today, and this is actually a conversation from like two days ago with Tim Leers, the idea for approaching it, I think, is: have a look at the types of questions that people are asking, and the types of questions people would like to have answered, maybe that they don't know how to ask today but that they would like to have answered, and then try to classify them into buckets. You classify them in terms of complexity, intent: are these just conditional questions, are these simple questions, how can you classify that? And then, based on these buckets, determine what kind of solution you need for a specific type of question. And then I think the difficult part is how you bring that to the people, how you bring it to the end user that maybe does not want to ask themselves the question: what kind of question is this?

SPEAKER_05:

Do I need to go to this solution? And that's where the difficult part comes in again. But it's true, and I think you touched on a good point: if you have this AI interface, you also have an opportunity to capture user intent more closely. Yeah, right. Maybe there's a big report but people are just looking at one small section of it. Or if you also have feedback, did you get your question answered or not, you can see which things the models perform better on and which they don't, so you can map what the most important things are and cluster them: okay, these are the things we should go for. So if you set these things up, there's also an opportunity to understand more what kind of questions people are having. All right. And maybe one other thing I'll say before we move on: I also have my own experience of having to interact with these agents or these LLMs. For example, I want to go through customer support, I've looked at the documentation and I don't find it; I just want to send an email to someone and say, hey, just answer my question. And then, no, you have to talk to the LLM. And I'm always like, fuck, you know, I don't want to talk to this guy. I know it cannot help me, because I know the information is not there. But I have to do the dance and talk to it. "Would you like to talk to a person?" Yes, please send me to a person. And then in the chat they say, okay, we'll ping you, we'll send you an email, and then it gets solved. I don't like it, but I also understand, from the other side, from the person providing the customer support, that having that layer is super useful, you know.

SPEAKER_01:

I recently had a conversation with somebody inside the AI space, and they were like, I hate these systems, I really hate these systems. So every time I get into this loop, I try to get out as quickly as possible. And from his experience, the easiest way to get out is just to start throwing profanity at the thing. Really? Yeah, because you start cursing at it, really bullying it, and it automatically gets flagged and escalated, and somebody comes in like, oh, you cannot do this, and he's like, oh, I got a person. It's like jailbreaking it.

SPEAKER_05:

"You cannot do this." "Well, while you're here, let me ask you, kind sir..." Completely changing the tone. All right, what else do we have? Let's see. The last one that we selected for this little chat is the one from Albert Heijn. Yeah. I think, for people that don't know, Albert Heijn is a supermarket chain. Yeah, right. I think they're Dutch.

SPEAKER_01:

Yeah, yeah, you hear it from the name, Albert Heijn. This was, and both Sophie and I agree, by far, well, for me together with the Reddit one, Sophie was not able to attend the Reddit presentation, but these were the two most interesting presentations. And it was really the way this woman presented how they do AI at Albert Heijn: this is what we want to achieve as a company, we experimented a lot, this worked out, this didn't. She was really honestly talking about some things that didn't work out.

SPEAKER_05:

So the name of the talk is "AI in Action: Accelerating Innovation Across the Enterprise."

SPEAKER_01:

Yeah, and this is actually... so she really talked about how they, Albert Heijn, what they tried to do, like: can we reduce food waste? How can we reduce food waste, and maybe we can set dynamic pricing on some of the items. Look, by 3 p.m. we still have 700 loaves of bread in our supermarket, so maybe if we discount the price... and what is the highest price we can still set where we get rid of all the loaves of bread? That's one of the things they do. And obviously this is one of the examples where I said they're really still looking into traditional AI, and there are still so many things to be done with traditional AI. And there were a couple of other things, like creating promo videos: we want to sell this product, and they generate an ad with AI, the first version. Oftentimes it's the first version, and maybe then they perfect it with a real studio or something. But it was really AI across the board, how they source use cases across the board, how they get to these ideas. Because, I don't know how you think about this, but as a data scientist or a machine learning engineer or a Gen AI engineer or whatever you want to call yourself today, somebody that is familiar with AI technology and is able to implement it, you don't always have the perfect idea of how AI can be applied in an organization. And getting the idea to you is still a big challenge in a lot of companies, because sometimes the people don't know what AI can do, and you're stuck there with: I know what I can do, but I don't know what you need, so I have no idea how to get there.
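
As a rough sketch of the markdown idea described here: given the stock still on the shelf and a simple demand model, pick the smallest discount expected to clear it before closing. The elasticity number and demand model are made-up illustrations, not Albert Heijn's actual pricing logic:

```python
# Pick the smallest discount whose expected extra sales clear the remaining stock.

def expected_sales(base_demand: float, discount: float, elasticity: float = 3.0) -> float:
    """Very crude demand model: each extra fraction of discount lifts demand multiplicatively."""
    return base_demand * (1.0 + elasticity * discount)

def choose_discount(stock_left: int, base_demand: float, max_discount: float = 0.5) -> float:
    """Return the smallest discount (0.0-0.5) whose expected sales cover the remaining stock."""
    for step in range(0, 51, 5):           # try 0%, 5%, ..., 50%
        discount = step / 100
        if discount > max_discount:
            break
        if expected_sales(base_demand, discount) >= stock_left:
            return discount
    return max_discount                      # even the max markdown may not clear everything

# e.g. 3 p.m.: 700 loaves left, we normally expect to sell ~400 more at full price
print(choose_discount(stock_left=700, base_demand=400))   # -> 0.25 (25% off) with these numbers
```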

SPEAKER_05:

Things need to come together, right?

SPEAKER_01:

You need to have the right person at the right time with the right intent, yeah. And really, how they set that up, and she mentioned it a couple of times: this culture of experimentation is what led to all of these results, all of these cool things. We experimented, we killed a lot of things. And she gave such a great presentation that afterwards there was a big line of people asking questions, even after the official Q&A moment was already done. There were still about 20 people, and then the 15-minute break between her presentation and the next one was over, so she left the conference area. There were still like five people left, and I was the second to last in line; I waited about 20 minutes to ask her a question. And I really challenged her on the MIT study and said: so what do you think about this?

SPEAKER_05:

She mentioned the MIT study, or she did not? She did not.

SPEAKER_01:

Okay, and I was like, I've had five people mentioning this study these days, and they just say, oh yeah, there's an issue with how we implement agentic AI; how do you think about it? And she was like, we kill so many initiatives, and that's fine, because she talks about innovation, and innovation means that you sometimes have to make bets. The best way to diversify is like investment analysis: you place a lot of bets, you look at which ones do best, and you continue with those. She was like, we killed so many of those initiatives. And I think that's closer to the reality, and closer to thought leadership for a lot of companies, than whatever multi-agent orchestration is supposed to be.

SPEAKER_05:

Yeah, but then, so in this talk she goes a bit over the actual use cases, more low-level, this is what we did, this is what worked, but also a bit over the general theme of what leaders should be thinking about for their organization. So she also found a balance there, that's what you're saying.

SPEAKER_01:

Yeah, I think that's why it really encompassed a lot of good things, best practices, not in a software way, but best practices of how to do data in a company. It started from: this is what we want to achieve as a company. Not "we want to apply AI for this," or "AI is gonna make our numbers look better."

SPEAKER_05:

AI is not the answer that you're trying to find a question for; it's more that AI is the means, exactly, to get to this.

SPEAKER_01:

Okay, and she started from that, and then she went into how that translates: we want to reduce food waste across the company because we value ESG. Okay, how can we do that? We can do that in this and this and this way, and that's how AI contributes to it, and not everything is AI, obviously. She could talk about it because, well, she's the vice president of data science and analytics. That was also a really cool presentation, I think. All three of the presentations that I wanted to talk about are from non-consultancies, because I think that's where the real thought leadership is. The funny thing is, because I'm in this bubble, this agentic, generative AI bubble, the state-of-the-art technology, what ChatGPT etc. can do today, if you had asked me ex ante what AI would mean at Albert Heijn, I would immediately go into: oh, you can do these and these and these cool things. And then one of the first cases she mentioned was: oh, we just do dynamic pricing on some of the products to try to avoid food waste, sell them at a lower price but actually make sure they're gone. And I was like, damn, that's really... yeah.

SPEAKER_02:

Yeah, like, I should have said that. It's kind of like, ah, I was focusing on the wrong thing.

SPEAKER_01:

There are so many steps to be taken before you get to the point where you really need cutting-edge technology. There are still so many, and we talk about it with multiple companies, where we say AI is a means to an end, and there are steps to be taken. Yeah, for sure. And I forget a number of those steps sometimes. Yeah, yeah. And I think that's why for me it was a reflection, like, damn. Whenever we, Tim and I, go into conversations with people, really ask the question: could you not solve this in another way? Which is a difficult thing to say if you're an AI expert and you're gonna advise people to solve it in another way than the thing you're really good at.

SPEAKER_05:

But yeah, I think for us as consultants it's sometimes a bit tricky, because people want to hear these things, right? They want to invest in Gen AI. If you tell them, okay, but for you to do this you need to have the data centralized, you need to have this in place... Management is like, no, I want to do something today, and that's what they want to hear. And I think our job is to say: okay, you want to do this today; doing it this way, taking shortcuts, comes with these risks, this is what could happen, right? And in the end the clients will always make the decisions, it's always up to them. But I think we have an advisory role to burst the bubble a bit, right? And it's not always an easy position to be in, because there are so many people, and you probably saw it at the conference as well, that really reinforce that bubble, that hype it up even more. And if people want to hear that, they will easily find a lot of people to encourage it, instead of saying: maybe you should slow down and think about what we should do.

SPEAKER_01:

But I started off this session with saying that I do feel you should experiment, for sure. That was one of the key messages I started with. So I do believe that today, for 99% of companies, 99% of people inside companies, it is still the best way forward: try this out. Because you and me, we're in this space 24/7, well, not 24/7, let's say about 12/5, but we are busy with this so much of the time, and even for me, oftentimes I'm still surprised by what it can actually do, understanding, oh, actually it can do this, or seeing applications of this technology and ways people are using it. So there is still a lot to learn and find out and experiment with. And if that's the case for us, it's especially the case for other people, people that are not in this space 24/7. But your company, or somebody in your company, needs to be able to give you a little bit of a safe space in which you can experiment safely. Yeah. And there should also be this second part, which is: just understand that getting from zero to 70% does not equal getting from 70% to 100% in some of these cases; the effort required is not linear.

SPEAKER_02:

Yeah, yeah, definitely.

SPEAKER_01:

I think that's indeed one of the main caveats, and one of the main things people want to address with the MIT study. I feel like they're doing it the wrong way, because you do not want to discourage experimentation. But there is a point to it.

SPEAKER_05:

Yeah, when you say these things I'm thinking a lot about the culture inside the organization: a culture where it's okay to try things, but at the same time it's not okay to keep insisting on things that don't have a high likelihood of succeeding, right? So staying a bit grounded, but also trying new things and giving people that space. I also think about what that means for Albert Heijn: what culture do you have inside the company where, amid this whole gen AI hype, people still ask, okay, but what is the value? What do we want to do? Let's try to use gen AI, yes, but that's not the end goal; it's a means to another goal, right? Let's think about these things critically, let's stay grounded. And I imagine that for a lot of organizations today that's not easy, right?

SPEAKER_01:

No, no, and that makes it more difficult to adopt some of these things, that's for sure. I think that's true, and I think Albert Heijn, and this is personal experience, I don't know if it's backed by market evidence, is known as being a little bit innovative and different within the supermarket space, very much focused on being digital, so for them, being part of that culture makes it easier to do some of these things. For other companies, if your culture is different, if you're more of a "let's wait things out" company, not a first mover, focused on mitigating risk and all these things, I think the difficulty is: how do you stay true to yourself while still doing this? You don't want to be a laggard, you don't want to be the one not moving, but in software, in AI, you are not a very experimental company, and there's a reason your culture is that way. It's not wrong to not be a first mover; you don't always have to be one. But how do you still make sure that you adopt and stay curious enough? And that's one of the things I actually didn't mention: there was one presentation that was very interesting, by a journalist, about the power of curiosity. Let me see if I can find it. I think I found it; the guy, I assume he was American, this one? Yes. He wrote for the New York Times, among other things, and he had interviews with people like Sam Altman, so he's well connected. He said he gives a lot of these workshops across companies, and he always focuses, as a journalist obviously, on the power of curiosity: how really having and cultivating a culture of curiosity today gives you a strategic advantage. Which, sure, I think is something you should encourage, but again, try to stay true to who you are as a company, of course.

SPEAKER_05:

I think, yeah, you have to mix and match. Yes, you should be curious, but there's a whole context, there are always nuances, right? Who are you? Being curious is important, but it's not just blind curiosity; it's also being grounded and understanding where to invest and in what. So I fully agree. So, as we're nearing the end of the episode, I know, Alex, but we'll get there. Anything you would like to say on, I don't know, who would you advise to go to this? Would you go back next year? What's the audience like? For people listening who are thinking, ah, this was actually pretty interesting, I'm considering joining: who would you say this is best suited for? I also saw here 2026, the 20th to the 21st of October; I'm not sure if that's accurate, but they already have a date. If you're looking for a job, is this a good place to go? If you're someone trying to expand your network, is this a good place? If you're trying to learn more about AI, is it for leaders, is it for engineers? What was your impression having left this conference, and who would you encourage to go?

SPEAKER_01:

So I personally went without too many expectations. I come from a sales background, I mentioned that, which means that as a salesperson you often go to a conference with a very specific purpose, which is to network, to get connected to people, and maybe, if you can, get a couple of leads in, right? So I very purposefully went to this conference without any expectations, just, I'll see what it brings. For me that was a good thing, because I was sitting there with an intrinsic curiosity, like, what can I learn from this? Very open-minded, not too focused on whether I should add this person as a lead in my CRM. Which was really nice. So if I can recommend anything, it's to go there without too many expectations, which is a weird thing to say, but it's general advice for going to conferences. For whom would it be interesting? They actually cater to a very broad persona. As mentioned, they have some very technical presentations, people really talking about how they fine-tuned LLMs, and there was one presentation on how generative AI and agentic AI are changing the world of hacking and what that means for cybersecurity; like I mentioned, there was a cybersecurity track. The person who went with me, Sophie, is way more strategically oriented, and there were a lot of presentations on how to change the culture of your entire organization and how you set up these large change programs and things like that. So there's a lot for a lot of people. I'm actually doing the sales for TechEx right now, please sponsor me. But I think it was a really nice format, and there were a lot of cool companies to talk to across the board. So if you're looking for a new job, it's a good place to go, because there are cool companies to talk to. A broad persona.

SPEAKER_05:

Indeed. So basically, anyone interested in any of the topics they had tracks on, so data and AI, big data, cybersecurity, all these things, across a range of profiles, whether you're technical or not technical, they always have something for you, and even if you're just looking for a next opportunity, you would advise people to go.

unknown:

Yeah.

SPEAKER_05:

Very cool. Thanks a lot, Tim. Thank you for the nice chat. Thanks, Alex, for keeping us in line as well.

SPEAKER_01:

Thanks for inviting me.

SPEAKER_05:

No problem. Happy to have you. Thanks, everyone, for listening, and see you next week.

SPEAKER_06:

You have taste in a way that's meaningful to software people.

SPEAKER_00:

Hello, I'm Bill Gates. I would recommend a type.