AI Operating System Diaries

Episode 2 - The Mindset Gap AI Is Exposing

Iceberg Digital Season 1 Episode 2


Duration: 19:55

This episode starts with a real moment inside an estate agency.

We released a new AI feature: Contact Intelligence and Property Intelligence, designed to surface everything the system knows about people and properties in seconds.

The reaction split immediately.

Some agents saw it for what it is: leverage, clarity, and time saved at scale.
Others found one issue and defaulted straight back to manual work.

That contrast is the story.

Not the feature. Not the bug.
The mindset behind the reaction.

This episode explores the gap that AI is now exposing inside all businesses, not just estate agency. The difference between teams that can operate in probability and those still anchored in certainty. Why AI gets judged by the wrong standard. Why visible errors outweigh invisible leverage. And why this is a leadership problem before it is a software problem.

Because the shift isn’t about tools.

It’s about learning how to work in a system that doesn’t behave the way software used to.

And right now, that gap is widening.

SPEAKER_00

Okay, welcome to this episode. Today is an exciting day. We just released a new feature inside our software, a new AI feature, and it's going to form the basis of this episode. Not to talk about how the feature works, but to give good examples of some of the learnings around AI, some of the ways you have to think about things as we move forward into this new world. A little bit of context: the feature we released is for the estate agents who use our software. They have people and they have properties in the software, and traditionally they have a timeline of everything that's happened with each person, whether that's conversations or actions the person has taken. Because we have a complete ecosystem, they might be able to see what the person's been looking at on the website, when they were last active, what emails have been sent, what notes have been left, what conversations have happened, all of that sort of stuff, and in theory someone might look through all of those notes. That's the idea, right? The reality is that no one's really got time to do that, so it's kind of pointless. So now there's something called Contact Intelligence. You go to a contact and the AI tells you what's going on with this person. For instance, it might tell you this person came into the software two years ago because they completed an online valuation. They weren't particularly active for the first 12 months, but over the last three months their activity online has been rising. They've been checking out your blogs, and they recently filled out a contact-us form. Now might be a good time to speak to these people about getting their house on the market.
Or you might go and look at a contact who, for instance, is in the middle of sales progression with you, and it would update you to say that Steve in the team spoke to this guy on Thursday, they had a good conversation about the search, and so on. The reaction when we released this into the group was pretty crazy. I've got a list of some of the things our clients were saying: it's super impressive; loving the Property Intelligence. Someone else wrote that it's blowing our minds this morning. Someone else wrote that this system is insane, I can't wait for the rest of the year for all the other stuff that's coming. Someone put: this is bloody amazing, I love it. The list goes on; everybody's pretty blown away by the whole thing. But here's something interesting. On the one side, minds are blown, this is insane, this is going to save so much time. But I also read a message from somebody who said: this isn't working, we're going to go back to doing it manually until it's perfect. And that is the fault line with AI. What we believed previously, and I've seen this internally with my own company over the past couple of years as we've transformed into more of an AI-first company, is that if you give people something better, they'll use it. If it saves them time, they'll lean into it, they'll see the benefits, they'll see the upsides, right? But what actually happens is that I've seen two completely different reactions to the same product, whether it's the products we're releasing or things we've rolled out to our company over the last 24 months. Same feature, same release, same beta label, two different interpretations.
One group sees it as a way to increase their leverage; the other group sees it as a risk. An agent the other day, for instance, was telling me this story. He's got a lister who works for him who's really good at what he does, very good at getting houses on the market, but he's been doing it many, many years, and he still uses a dictaphone to walk around the house and talk through all the details and features, and then that recording gets given to somebody back in the office who types up the property descriptions. The owner of the business, not the valuer, had been to our AI summit and seen me talking about how the voice-recording side of things worked. They got themselves one of these AI voice-recording tools, gave it to their valuer and said: next time, talk into this. He did that, and then they went to the person who normally types it up, showed them how to get the transcription, put it into our AI, into Azair, and said: write the property description. Bang, it just wrote it perfectly. Wow, amazing! Everybody's amazed at that point. The owner goes away, because they've got multiple branches, and goes back a couple of weeks later, and they're back using the dictaphone. And he says: why? And the person who writes up the descriptions, I don't actually know if it was a guy or a girl, explains that when they go to copy the transcription, it's a bit messy. It hasn't got the full stops in the right places, it's not formatted correctly, so they have to clean all of that up before pasting it into the AI. And the owner of the business says: no, no, it doesn't matter. The AI doesn't care; it'll figure it all out. You can literally just copy and paste it. And even after that, the person who does the property descriptions says: no, I just don't like it.
I'm just going to carry on doing it the way I was doing it before. That's my brain exploding. It doesn't make him wrong or me right, but it's inexplicable. You can carry on doing that for a while, but it's going to catch up with you, right? That poor person is feeling a bit lost with the whole thing. Maybe they're feeling unneeded, I'm not sure, but somehow they've got to wake the fuck up from that. This isn't self-preservation; saying "no, you know what, I'm just going to carry on doing it the way I was doing it before" is not going to help you. It's just a crazy thing to say. Now, it's easy for us to turn around and blame the staff for that, but that's a leadership thing. What are we doing wrong somewhere along the line? Because that person has been shown what you think they need to be shown, but it hasn't landed. How else do we need to think about this? There are no books on it so far. We're the first people to walk through this ever in history. I guess we can point to things that have happened in the past: any change, any technological change, is difficult. But these are the things I'm seeing. So anyway, going back to my original point: we released this feature, an amazing feature, nothing like it ever seen before in this industry. People are exploding with excitement left, right, and centre. Then another one of our clients sends an email in saying: I've looked at the summary on one of our properties and it's wrong. The status is wrong, the summary's wrong, this is so simple, I'm not going to get my team to use this until you can confirm it's working properly. Until then, we're doing it the human way. I get the sentiment, I get the understanding. It's the way the world has worked for the last 10 to 20 years in terms of software, even 30 years maybe.
The system's supposed to work properly from day one. But AI doesn't work like that; it's so complex. You have to figure it out as you go along. If you wait for it to be perfect, you're never going to get it, and you're never going to use it. Look at something like ChatGPT: billions and billions of pounds being invested in it, probably billions of people using it, billions of changes, all of that. It's not perfect. So what, are you just not going to use it? Just not use it for the last three years because it's not perfect? It can't be perfect. It's the probability, it's the leverage, it's the speed you can move at. I had a sales guy who worked for me once, and I wrote a custom AI bot to help coach the sales team. It knew exactly how we do sales, it knew about the company, it knew loads of stuff, and it was very good. You could ask it questions specifically around some of the objections you might be facing with a sale you were trying to do, and it would help you get to a conclusion, lead you down a path where you'd think: that's really good, actually, I'm going to go down that path. I showed it to him, and then I checked in once and said: are you using it? And he said to me: no, I'm not using it, because the sales coach told me that we do property management. But at the time we didn't do property management. He said: so, you know, I'm not using it. And I said: what the fuck are you talking about? He said: well, do we do property management or don't we? And I said: you've worked here for, I can't even remember, like a year. Do you think we do property management? No, no, I know we don't, but that's my point. What is your point? What are you talking about?
This AI is helping you close deals, move faster, do things, but you're telling me that because it said one thing wrong, because I hadn't sat down and specifically taught it about that, you can't use it? Wake the fuck up. So this is the change, but as I say, over the months and years I've started to realise you can't put that on your staff. That wasn't really his fault. Something in there was my fault, whereby he had some sort of misunderstanding that this incredible robot knew everything. I knew how it got built, so I'm sitting there thinking: are you on drugs? But he didn't know how it got built, so he's thinking: well, it's just not reliable, it's not that good. Maybe he's trying to prove how useful he is as a salesperson rather than using the sales coach. Maybe the person doing the dictaphone write-ups is trying to prove the same. Maybe the person sending the message about how the Property Intelligence we released was wrong in one instance across their entire database is trying to prove how important humans are. And that's fine, but we have to recognise that as leaders, because otherwise you're going to get frustrated with all of your team. I promise you that. I've been there. There is no member of my team that I have not been frustrated with at some point over the course of the last two years, and gradually over time I've started to realise that much of that is my own problem. I'm not explaining something correctly, or I'm expecting too much of somebody who doesn't know that much about AI. So it's the assumption. The assumption that software is supposed to be finished; that's the old model. You buy it, it works, it's stable, it's predictable. AI breaks that completely.
It completely breaks it. Because now the system is learning, outputs are probabilistic, and the value comes from pattern recognition, not certainty. So the person who complained isn't reacting to a bug; they're reacting to a loss of certainty. I think it was the property status that wasn't right: our system was saying the property was let, maybe, when it was available. You can see that with your own eyes. Yes, there's an issue, and it needs to be raised, but that's how the system gets better; that's how AI works. You can't test AI in every single set of circumstances. It can't be done. If you try to, you'll never release anything, and you'll never use it. And that's the bit that your team, and you as a company owner, and everybody has to get their head around. You're going to use what you consider to be broken stuff. It's not broken; it's never been done before. So we have to reframe the instinct that says: if it gets something simple wrong, how can I trust it with something complex? Actually, it's the opposite with AI: it's quite weak at isolated certainty and strong at pattern recognition across scale. Humans are good at simple tasks. A human could easily see whether a status is correct, no problem. Humans are very bad at seeing 10,000 data points at once. AI is occasionally wrong on obvious things, which leaves you thinking this thing is dumb, but it's unmatched at connecting hidden signals. So the mistake is judging AI by the wrong standard, and judging your team by the idea that they should already know how this stuff works. This is a whole new world. We need new leadership, new team structures, new processes, new jobs, new everything, and no one has the answer, unfortunately.
Everyone's looking around to see who's got the answer to this. It's one of the reasons I'm making this podcast, and probably one of the reasons you're listening to it. We're all trying to find the answers and share this information with each other, right? So the real shift is this: you're no longer implementing tools, you're introducing a new way of working. It means teams need to learn how to interpret outputs, not just consume them, and not just trust or reject them. It's not one or the other; it's not "this didn't work, so nothing works". It doesn't work like that. And of course, you need to give the feedback that you don't think something is right, and then it gets solved quickly. So let's widen this out. Two types of agencies are forming. Certainty-led agencies wait for perfection. They're not even working with companies like Iceberg that have this AI operating system; they're waiting for perfection. In huge corporations, like the global one my wife used to work in many years ago, they'd wait 10 to 15 years before implementing the latest operating systems because they wanted all the bugs taken out. I can remember it was way after Windows XP had come out that her company was just rolling it out, and I was thinking: what? That's not going to work with AI. If you wait for it to be perfect, you're going to be long, long lost by then, cast adrift, focusing on errors, defaulting back to doing it manually. No. We're not defaulting back to doing it manually. We have to be system-led agencies: accept imperfections, look for where the leverage is, improve outputs over time. Those two sets of companies will not perform the same.
The ones looking for the leverage, accepting the imperfections, seeing the benefits, seeing the three things out of six it's doing amazingly for them: they're the ones who go fast, the ones who really start to embrace it, and the ones who start to drive the change as well, because you get constant feedback from those people. So the risk is not that AI might get something wrong. The risk is that you ignore what it gets right at scale. Because while one agent is checking one contact manually, another is seeing patterns across 10,000 contacts instantly. We didn't release a feature this week. We exposed who's ready for the next version of estate agency and who is still trying to make the old one work.