101 Stories to cement your AI Leadership

The worst waiter in the world - no action = no value = no sense

Bart Van Rompaye Season 1 Episode 1

In this first episode, we look at the smart computer systems our organizations build and use, and ask: are AI systems that don’t lead to action as powerful as they could be?

When people think about the least effective colleagues on their team, they often point to their inability to take action. In this first episode, we’ll explore whether the same shouldn’t be said about all the smart computer systems our organizations build and use: are AI systems that don’t lead to action as powerful as they could be? I personally really don’t like the word ‘insights’ when describing what AI delivers – and I bet a story might help you understand just why!

I am certain that many of you, when you were young, have spent whole summers, weekends, and perhaps regular evenings waiting tables in a bar or a restaurant. Now imagine for a second that you are the manager of such a bar, and you have hired some young student to wait tables for you. It is a very busy evening, so you’re tending the bar while the student is going around checking on tables, and at one point he – or she – walks up to the bar and says: “Hey.” You look up, what’s up with this guy, you’re busy, right? And then the student says, “you know, I’ve been over at table 6.” Uhm, ok-ay?? See, now you’re getting a bit annoyed, get to the point, man! So the student says: “Well, I’ve noticed the people at table 6, euhm, their glasses are getting empty, so probably you should go and ask if they want a refill?” Oompf! Tell me, would you not want to fire that guy? Really, you hire these students to do things, to actually serve the customers, not just to walk around and observe things, correct?

Well, now consider the same situation, but the waiter is not a person, no, it’s a robot. Would you then still be as annoyed? Surprisingly, in our daily lives and our businesses, this is exactly what we settle for all the time. What that young waiter is giving you is an insight. He has diagnosed the situation perfectly, and explained it very clearly to you. But the reason you don’t like it is that you hired him to also take the next step, to take the needed action. Only if the waiter takes action is value created for you. With our smart computer systems, however, we are for some reason perfectly happy to limit them to the insight alone. There’s a whole field that thrives on this, which we call business intelligence. In business intelligence, you typically get a dashboard that visualizes some more or less simple analysis, and then a human is supposed to understand it and take the appropriate action. In a great many cases, this just doesn’t make sense. Why would you not demand more from your systems, why would you not have them take the needed action directly?

Many people in business still want to pass through a human channel to explicitly decide and act. But if you build a smart system that can access and digest more information than you can as a human, a system that can decide free from emotion or distraction, and that can do so a million times per second, then what does that human still have to offer that is indispensable? Like the waiter, computer systems only create value if they lead to an active intervention, and in that ‘insights’ paradigm, where you digest some data and then feed it to a human to decide and act, you introduce three major risks into your process.

First, the action doesn’t get taken. The bartender may forget the information about the empty glasses, he may not recognize its significance, or perhaps he simply disagrees with the conclusion. Second, the action is taken too late. By the time the bartender gets around to listening to the waiter, and then finds the time to serve the customers, they may have left. Without paying. Mmh. Third, there is a risk that the wrong action gets taken. The bartender misunderstands and serves table 16, not 6, just as so many dashboards get misunderstood every single day. Or perhaps he doesn’t believe the waiter, like that time when the people at Chernobyl decided not to trust the readings on their console.

Whatever the reason, it is clear: in deciding and acting, humans often get in the way, leading to error-prone processes with issues of speed and scalability. With well-designed AI systems, we have the opportunity to avoid this, and in AI we should strive for a system that fully automates the task at hand, rather than merely facilitating it. This also opens some of the most common paths to value for AI: making processes straight-through digital, which lets you handle higher volumes at lightning speed, while also avoiding problems when volumes vary over time. Therefore, the burden of proof should be on including human bottlenecks in processes: only allow them if there is some clear and compelling reason to do so. As an example: setting a customized price for a customer is one of the most important aspects of commercial activity, and if you have a system to do this automatically, you can have efficient online sales. So why would you still want a human to okay that price, taking away the benefits automation gives you?

Don’t get me wrong, business intelligence certainly has its place in any organization, but it should not be the default way to go in a future-proof organization. Building automated systems is expensive, and those systems can only create maximal value for you if they lead to actions. So while business intelligence strives to better inform tasks executed by humans, AI should strive to execute the task itself. As an AI leader, you should push the whole organization to make the distance between the AI system and the action as small as possible, and this focus on the practical action should be ever-present: during strategy setting, when devising the governance of AI, during projects building AI systems, in your anticipation of needed change management, and in so much more. So my personal baseline, which I advise every AI leader to use as often as possible, is: no action equals no value equals no sense.

Thank you for listening to this first episode. I hope you enjoyed it, or that it at least got your thinking going. Because this was just one of 101 stories to cement your AI leadership, and many questions remain: under what circumstances can we trust AI systems with fully automated actions? Where do we get good ideas to apply AI? Luckily, we still have a few episodes to go – a hundred, if the title is anything to go by. Next time, I’ll tackle another big topic: is AI really just one thing, or should organizations perhaps realize that there are vastly different types of AI?
