The Not Unreasonable Podcast
Joshua Gans on Prediction Machines
August 23, 2018 David Wright

How are we supposed to think about machine learning? How are businesses going to change? This week I interview Joshua Gans, Professor of Strategic Management at the Rotman School of Management at the University of Toronto and the Chief Economist at the University's Creative Destruction Lab. Joshua is the co-author, along with Ajay Agrawal and Avi Goldfarb, of Prediction Machines: The Simple Economics of Artificial Intelligence.

Are you an actuary? Someone you know? Check out the Not Unprofessional Project, for the price of a CAS webinar you get unlimited access to content dedicated to Continuing Education Credits for Actuaries, especially Professionalism credits. CE On Your Commute!

Subscribe to the Not Unreasonable Podcast in iTunes, Stitcher, or by RSS feed. Sign up for the mailing list at notunreasonable.com/signup. See older show notes at notunreasonable.com/podcast.

Episode Transcript

Speaker 1:0:00 My guest today is Joshua Gans. Joshua is a Professor of Strategic Management at the Rotman School of Management at the University of Toronto and Chief Economist at the University's Creative Destruction Lab. Joshua's research specialties are the nature of technological competition, innovation, and economic growth. He is the co-author, along with Ajay Agrawal and Avi Goldfarb, of Prediction Machines: The Simple Economics of Artificial Intelligence, which is the subject of today's conversation. Joshua, welcome to the show. It's great to be here. So the subtitle of the book says that the economics of AI are simple, and that struck me as an interesting contrast to the strategic, call it, complexity or confusion around what to do with AI. I'm wondering if you could elaborate a bit on what exactly is simple about the economics of AI.

Speaker 2:0:46 The title of the book is telling you about the content of the book. There are complicated things to do with artificial intelligence, but the first place you should start is with what is simple. The general approach in economics, when you're trying to understand a new phenomenon like this, is to identify what is being sold and then see what happens to supply and demand. That's really what we mean by simple economics. So in the case of artificial intelligence, we analyzed the technology and realized that in the current wave, what was being sold was prediction: machine prediction, as we would call it. And the technological advance was to make those predictions better, faster, and cheaper. In particular, when you make something cheaper, what happens? You end up using it more. So we would say, okay, this is going to increase the total amount of prediction being used in life, or in the business world. But the other thing that happens when something becomes cheaper is that people find new uses for it, and that's particularly tantalizing when it comes to artificial intelligence. Simple is basically those two effects.

Speaker 1:2:25 And maybe you could dwell for just a second on "cheaper" and how you might measure that, because the intuitive reaction I have to what's changed about AI is that I don't necessarily think of it as a cost reduction. I think of it instead as a capability expansion. Now, I suppose those things are related. Maybe you could talk about that for a second.

Speaker 2:2:44 Yeah, they are related to one another. I think that's how an economist would look at it too: you are able to produce something of greater quality for the same price, so that's implicitly a cost reduction. We just like to focus on it that way, since it's a little bit easier to comprehend, even if it's not as exciting. The problem when you think about capability expansion is that you run straight into the question of which capabilities. A cost reduction is a little bit more universal.

Speaker 1:3:20 Exactly. And the book has an interesting framework of advice for organizations that want to think this through, and I see that as kind of the motivating force of the book; maybe you can tell me if that's right. Given these insights, this framework of economic thinking, what do we do with all this?

Speaker 2:3:37 So the book was, in the first part, a reaction to what we saw going on out there in the discussions regarding artificial intelligence: enormous amounts of hype, sure, and a dollop of panic. Am I missing out on something?

Speaker 1:3:56 Yeah. Or panic over what's going to happen to us all.

Speaker 2:3:58 No, there's that as well, the existential threat. But from the point of view of business people it's more like: what should I be doing about this? And there were terribly unclear answers. Oh, it's going to replace cognitive capabilities, this and that, which weren't necessarily true; and if that were true, what would you do with it? So our book was designed to say: what is this, really? Let's strip away all the hype and throw in some economics, which is something business people at least would be familiar with, and then say, well, once we've got that, exactly how would you read this book over the weekend and turn up on Monday morning knowing what to do about it? That was really our intention with the book: to provide that entree into the whole area.

Speaker 1:4:51 One thing that interests me about a comment you made there is the familiarity of economic thinking to business people, because I actually find that's less the case than I would expect. Many, maybe most, maybe all business people, certainly senior folks in organizations, study economics in some manner, one way or another, be it through some kind of extra training program or as an undergraduate, as I did. And yet I think the jargon of economics is absent from the business world for the most part, and the concepts, though maybe related, aren't actually explicitly thought about. One of the things I enjoyed about the book, maybe more so than most business people would, is that it's a framework for thinking that I am familiar with: the use of ideas from the economics profession. But why don't I see that more often?

Speaker 2:5:34 Well, that's a difficult question. There's economics and there's economics. When you're just talking about the basics of supply and demand, which is really where we start in the book, even though people don't like using those terms because it doesn't sound like you're doing something complex and thoughtful, it still has that familiarity. Somewhere at the heart of it, it's very hard to be a business person without having some grasp of those concepts, whether you've learned them formally or not. And even more so when you reduce things to cost; that's something easier to understand. There's a lot of economics that gets built on that which is more complex, and in the book we deal with some of it, for sure, because we start talking about how you'd make a decision under uncertainty and things like that, and then it definitely gets hairier.

Speaker 2:6:37 I think it depends on the industry. Some businesses are starting to use economics explicitly more and more, beyond the likes of currency trading; finance is a bit different, with macroeconomics and that sort of forecasting. Some years ago Google was, I think, the first to employ a chief economist; Microsoft and others have done the same, and Amazon has, I think, over 100 economics PhDs right at the core of the organization. So that notion is spreading. Interestingly, it started with technology, but it's spreading elsewhere. To be sure, economics in its academic variety had for a long time been fairly inaccessible to business practice, and it used to come to business schools through things like strategy, but not really in its full glory.

Speaker 2:7:43 But that's starting to happen now. And do you think that's, to use some economics jargon, supply or demand? Is it that the economists aren't interested, or that the business people aren't interested? I think it's been demand led. The change has been people saying, what do you guys have to say? Or starting to realize: that company over there has an economist who said something good, maybe we can get one. It's the herd mentality. And coupled with that, the career options for economists, especially people with PhDs, were basically either government or academia, the two main options. I think it's really interesting to see that the private sector has opened up as a significant avenue. I know many economists who would otherwise have been on a clear academic path who have moved into the private sector, saying, I don't want to deal with this stuff; I can do real economics over there, and I couldn't here.

Speaker 1:8:56 In some ways I might even question my own premise in asking this, because it is unique amongst academics for economists to have a significant presence in government and/or business. Not many academic disciplines make any kind of crossover whatsoever. So why should we think that economists should?

Speaker 2:9:19 Well, that's true, although that's also changing as well. I mean, we're talking here about artificial intelligence, and if there's ever a place where academics and business have fused together, it's in that field. Almost all of the main pioneers of artificial intelligence are now not purely academics; they may be half time, but also at some company, and the students as well. So I think there are situations in which maybe there always has been more integration than in others. It's rarer amongst the social sciences, and economics has had a big influence all over the place in that regard.

Speaker 1:10:06 And maybe you've identified an important idea there, which is that a lot of technology emerges from academia as well, from engineering departments and computer science departments, and so it naturally dragged along a few of their friends: maybe you should join us instead, you'd have something to say here. Right?

Speaker 2:10:20 Right. And I think that's definitely true of the recent advances in artificial intelligence. It was all developed in academia first. All of it. It's quite incredible, really.

Speaker 1:10:32 It is. So let's go back to artificial intelligence. I asked the question a minute ago, but then pulled us off track: what is the evaluation framework? Maybe you could briefly address how you put it in the book: here's how you should think about it, Mr. Business Person.

Speaker 2:10:51 Right. So artificial intelligence is this word that connotes a machine taking over thought processes of various kinds, and certainly, 50 or 60 years ago, that was the image of it in popular culture; it's still the image of it. But what has really developed in recent times is the advance of what are called neural networks. Thirty or forty years ago some people had the idea: it's hard to think about how we would create intelligence, so let's take what we know about the brain, a bunch of neurons communicating with each other, reinforcing links and things like that, and do the computer equivalent, which is a vastly simplified version. Very ambitious. And even so, the number of neurons that we, or even insects, have is far more than what we could model and simulate on a computer.

Speaker 2:11:52 But that was the vision, and nothing much came of it for a long time. Then these three or four pioneers of artificial intelligence, Geoff Hinton, Yoshua Bengio, and Yann LeCun, about 15 years ago, came up with various advances. The idea was: rather than having one layer of neurons, what if it was a whole hierarchy? So you had more complicated sorting; you could have deep neural networks, deep learning, as the term became. And they had to do some other things as well to advance that. Yann LeCun was able to think about: if I want my neural net to look at images, it doesn't help if I do the usual computer thing and have it look pixel by pixel, right?

Speaker 2:12:54 It matters which pixels are next to each other. So he came up with a way of reading in image data that allowed those proximity associations to be part of the data coming in. And those advances yielded results. Take image recognition, which is basically where you present a computer with a bunch of images and ask it to label them: what is this? What is this? And what we're asking, and I'll come back to this in a bit, is what would a human think this is, and sometimes, what is it actually. But initially it's: what would a human think? So you had a bunch of images that had already been labeled by people, and then they wanted to see how accurate the machines were at doing the same.
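LeCun's way of reading in image data so that nearby pixels are seen together is, in modern terms, the convolutional layer. As a rough illustration (my sketch, not from the book, with an invented kernel and toy image), here is a minimal 2-D convolution in plain NumPy:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel across the image so each
    output value summarizes one local neighborhood of pixels.
    (Deep learning libraries actually compute this cross-correlation
    and call it convolution; the difference is only a flipped kernel.)"""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# A made-up vertical-edge detector: it fires only where left and right
# neighbors differ, which requires adjacency to be preserved.
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

# Toy image: dark left half, bright right half.
image = np.zeros((5, 5))
image[:, 3:] = 1.0

response = conv2d(image, edge_kernel)  # strong response only at the boundary
```

Each output value summarizes one small neighborhood, so spatial proximity is built into the representation rather than lost in a flat list of pixels.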

Speaker 2:13:53 And it was plugging along, plugging along. Then Geoff Hinton and his team threw deep learning at it, and accuracy just skyrocketed, to the point where, depending on the domain, it can now be more accurate than people at identifying these images. That's amazing. That happened over a very rapid period of time, and at a time when I believe no one outside these pioneers, and maybe not even them, really thought this was going to be a promising area; there was a lot of skepticism. But it worked. And it worked not just on image recognition: speech recognition, translation, a whole lot of problems that were like the cornerstones, all the way up to the ability to play Go and now be the best at chess in the world.

Speaker 2:14:49 And chess was already sort of cracked using older methods, but now you can do this too. And a whole lot of applications like that. So that was really the advance. Not surprisingly, it got people very excited. And I think partly they also had to wait for a few things to occur: they had to wait for it to be very easy to read in lots of images and data to train these neural nets, and they had to wait for computing power to get good enough to do the same. That all came together in the early two thousands, and here we are today.

Speaker 1:15:27 So there's an interesting framework from Benedict Evans, an analyst who works for Andreessen Horowitz, whom you've probably heard of. He wrote a blog post a couple of months ago where he said there are really three ways of thinking about the applications. The first is doing the things we already do, but better. The second is asking new questions of existing data that we already have. And the third is bringing in new data. Now, the third one is the most advanced, the most, let's call it, sexy, and that's where people spend their time. But the first two, I would argue, are probably where we're generating a lot more of the value. So how should we think about the evolution of the ability of AI to do things we already do a little bit better, and about the new applications we might find a bit closer to home?

Speaker 2:16:11 The issue I have with that setup, of asking what it is doing with data, is not that it's wrong, but that it makes it hard to see what's interesting. Yes, we're going to learn stuff from data, and that whole framing, data, more data, new data, tends to put the emphasis on finding the data. The way we see it, it's more about finding the problem. We talked just a second ago about image classification, and image classification is fundamentally a prediction problem. We tend to think of prediction as being about the future: what's the weather going to be like tomorrow? And of course it is, and we use data from the past to give us a signal of that. But image classification is a prediction of the present, a prediction of what someone would call this, all the way leading up to what it actually is.

Speaker 2:17:13 Actually, someone has had to tell the computer what it is; that training has to be programmed. But once you've worked out what it is, then it's a matter of taking some image, which could be anything, and saying: is it this or not? Because, aside from that, the machine doesn't know what it's looking at. It doesn't have a label for things with which to describe to us whether, say, a tumor is malignant or not. But if you associate the word with the thing enough, it learns, kind of like a baby learning to speak. Babies learn to classify things at a much faster rate than the computers; the computers need millions of images, but they'll do it. Maybe babies are almost prewired for certain categories? Well, that's one possibility, but probably not. You teach a baby what an apple is, and once they can speak, they can recognize what an apple is the next time it's in front of them.

Speaker 1:18:21 I have a two-year-old right now; I have three, actually: six, four, and two. Each of them has gone through each of those stages very recently, and watching them I find myself thinking, as I sometimes do, about AI, and just marveling at how they learn.

Speaker 2:18:37 There's much more going on there. Yeah, I mean, it's the same function; there's a predictive element to it. It's just that, compared to our AIs, it's so far advanced, whatever's going on in there.

Speaker 1:18:52 It makes me think a little bit of one way of being amazed at the original neural network research and what's emerged since: it occurred without much of an underlying theory of what the brain is doing. And yet there's a physical reality, right? You have the set of neural networks, and the deepening of the neural network is what mattered. You speak a bit in your book about what I think is maybe the most compelling, or most complete, theory of the mind that I've seen, which is the way in which the brain is a forecasting machine: that's how it developed, and that's what it's supposed to do. Maybe you could talk a bit about that and what we think we know about it.

Speaker 2:19:26 So there are a couple of layers to that. One is, there's a view out there that all we do is prediction; we may think we're doing more than that, but we're not. This guy Jeff Hawkins, who I think was the founder of Palm, decided to write a book putting forward this view. I'm not sure I believe it fully, I think there's more going on, but I understand that perspective, and what he's basically saying is that you can get a long way with prediction. But there's another element to this, in terms of the mystery of the whole thing: it's actually less mysterious than people would like to believe. You get a lot of articles saying, I don't know how it came up with this, and it seems to work, and no one knows why.

Speaker 2:20:07 Well, economists tend to have a close relationship with statistics, and what is going on inside the computer can't be magic. It has to be equations; it's the only way it works. And so several people, most recently an economist at Yale, have gone back and looked at all these different methods and found that they are really just versions of statistical methods that we already knew about. They're just operating on steroids, and they've got some extra optimization algorithms to get you to the least-squares solution much, much quicker, and things like that. But fundamentally it's just statistics, the same sort of processes. And that's useful to know.
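The "statistics on steroids" point can be made concrete with a toy example (mine, not the book's, using synthetic data): a one-neuron network with no activation function, trained by gradient descent on squared error, converges to exactly the coefficients that ordinary least squares computes in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a known linear rule plus a little noise.
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=200)

# Route 1: classical statistics, closed-form ordinary least squares.
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Route 2: a "one-neuron network" (same linear model), trained the
# deep learning way: gradient descent on mean squared error.
w_gd = np.zeros(3)
lr = 0.05
for _ in range(2000):
    grad = 2.0 * X.T @ (X @ w_gd - y) / len(y)  # gradient of the MSE
    w_gd -= lr * grad

# Both routes land on essentially the same coefficients.
```

Deeper networks add nonlinearities and far more parameters, but the estimation machinery underneath is recognizably this same least-squares-style optimization.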

Speaker 2:21:06 So what that's basically telling us, and this is why I come back to the Evans framing, is that it's not just about the data; it's about the model that's being built and other things like that. There's more to statistics than just the data; it's how you put these things together. And it really is just that. Now, that doesn't mean it isn't exciting, important, revolutionary. But the reason this matters is that any prediction you get out of this has to obey the laws of statistics. So there are situations in which, yes, it can come up with predictions, see things in complex data, and filter them down in a way that we can't, and that we were unable to do with previous statistical tools.

Speaker 2:21:58 So that's very exciting. Consciously or unconsciously, it seems to do things we're maybe not even able to do, and it seems sort of magical in that regard. But because we understand that this is statistics, we also know how far it can go. For instance, we like to call these prediction machines, and we like to experiment with the idea: what if you dialed that up and made it better and better and better, what would you be able to do? But in some situations there's a limit to how far you can take that. When you toss a die, it's going to be one of the six sides, and no prediction machine is going to tell you which one is going to come up. People are going to think, oh, what if it could analyze the structure of it? But suppose it truly is a fair die being tossed.

Speaker 2:22:52 They can't do any better than what we already know. So there is a limit there, and it doesn't matter how many dice rolls you have in terms of data; you're not going to improve on it. That surely exists in the world as well: there is some fundamental randomness that you'll never be able to quite work out, and that's going to affect where AI performs, today and in the future. And I think it's a way of at least thinking to yourself about whether this thing is going to have legs, whether it's going to be able to perform consistently over time.
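The fair-die point is easy to check numerically. In this sketch (my example, not from the book), even a "model" trained on tens of thousands of past rolls can do no better than always guessing the most common face, which on a fair die yields accuracy of about one in six:

```python
import random

random.seed(42)

# 60,000 simulated rolls of a fair die: there is no pattern to learn.
rolls = [random.randint(1, 6) for _ in range(60_000)]
train, test = rolls[:30_000], rolls[30_000:]

# "Training": learn the empirical frequency of each face, then always
# predict the most common one, the best any predictor can do here.
counts = {face: train.count(face) for face in range(1, 7)}
best_guess = max(counts, key=counts.get)

accuracy = sum(1 for r in test if r == best_guess) / len(test)
# accuracy sits near 1/6 no matter how much training data you add
```

More data narrows the estimate of the frequencies, but it cannot push predictive accuracy past the irreducible randomness of the process.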

Speaker 1:23:32 There's another set of ideas I came across recently; I think you're probably familiar with Robin Hanson, the George Mason economist. He had a post recently, which I thought was really thought-provoking, where the observation was about the diversity of genotypes versus phenotypes in the world. What he was saying was that what's amazing is that there are lots and lots of different gene combinations, and yet they create only a few different body types. And you can generalize that to something like AI: there are lots of tools we can potentially use, which achieve a narrower set of functions than we might think. His kind of implied theory of the brain is that the brain is actually a collection of a huge number of different capabilities, and we don't necessarily touch all of them all the time; there's a switching from one capability, one, call it, forecasting model, to another. And here's how it comes back to AI: the implication is that you actually need dozens, hundreds, thousands of different models, each distinct in how it might predict something, and the assemblage of them all, the interplay between them, the ability to switch between them, is actually what intelligence really is.

Speaker 1:24:46 And there's no general solution, no single neural network that can handle it all.

Speaker 2:24:50 Right, right. You know, I can see that, but I have no idea. I think at the moment, in terms of practical applications, we're far from wanting all these different models. We're just trying to find applications for a model.

Speaker 1:25:08 What I find encouraging in thinking about it that way: there's another theme in the AI world that progress doesn't happen, right? Every time there's a new bunch of hype, it always fails. What's encouraging about that framing of the problem is that what we're actually doing is adding more and more models over time. There's this inventory we're building up, and so there really is linear progress here; we might just not observe it very well. So, say, in a particular business, you're just waiting for your model to be built.

Speaker 2:25:37 Right. I guess, yeah, I think so. Well, you can push that forward and have it build itself. I think what's different is this: AI has had hype before, and the hype died, but it never filtered into practical use the way it currently has, or currently has the potential to. Previously the hype died because it hit a limit very quickly. I don't think there's a feeling now that we are even close to hitting limits on this stuff. Certainly in specific applications there are limits: you can't eventually get above 100 percent accuracy in image recognition, and you probably won't ever reach it, so we're going to hit points of diminishing returns on specific applications. But there is so much more to do. So I suspect this will probably have a decade's worth of legs before we say, okay, we've gone as far as we're going to go.

Speaker 1:26:38 Was there a particular business or two that you'd had a lot of experience in and had in mind? Was there a kind of business audience you had in your mind when you were writing this book?

Speaker 2:26:51 Well, our audience wasn't the Googles, Facebooks, and so on. Our audience was the typical CEO of a large service or manufacturing company, sitting there saying: what is this thing, is it going to disrupt us, what's going to happen, and where exactly is it going to impact our business? They're asking this question at a time when we have some very good tools, and we also know that, for the most part, those tools have to be developed specifically for your business. We might yet see someone selling generic AI platforms you can build on, and there's a bit of that going on, but it's not at the level of, say, the ability to outsource building a web page. You still need someone who knows your business in order to make good use of it.

Speaker 2:27:54 But that was basically the audience: where do those people start? And our message was: you don't have to panic and rush in and start spending millions of dollars on AI and whatnot. But there are probably places in your organization where you could look to start pilots and start building up those capabilities. Now, there's the separate question of what will occur once AI gets better and better and better; there might be bigger changes then, but right now the question is: can I use this right now? And for us, if it's all about prediction, you want to identify situations where you're dealing with a lot of uncertainty. A few days ago I read an article, and I'm unfortunately going to forget the name of the company, but it was the typical thing: a company, I think in the grocery business, noticed that something like 28 percent of their products had to be thrown out.

Speaker 2:29:00 So that's 28 percent waste. Wow. And I don't know if that's typical, but one can imagine it is. In most of these food service industries, trying to match supply and demand on basically a daily basis is a constant struggle; it's really, really hard. So they realized that, got a data science team, started doing machine learning, got much tighter predictions, by some margin I can't remember, and were able to significantly slash that waste. That's the low-hanging fruit. That's stuff that's not impossible to do; it was probably possible with normal statistical tools, but now you get a much bigger bang for the buck, and people have been collecting the data for long enough that it can be thrown into fairly standard machine learning algorithms and have an impact.
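As a stylized illustration of that waste calculation (all numbers here are invented; this is not the company from the article): stocking to a worst-case level throws a lot out, while stocking to even a crude seasonal prediction plus a small safety buffer cuts the discard rate sharply.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented daily demand for a perishable product: weekly cycle plus noise.
days = np.arange(365)
demand = 100 + 30 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 5, size=365)

def waste_fraction(stock):
    """Share of stocked units that go unsold and must be thrown out."""
    return np.maximum(stock - demand, 0).sum() / stock.sum()

# No-prediction policy: stock every day for the busiest day ever seen.
naive_stock = np.full(365, demand.max())

# Crude predictive policy: stock last week's same-weekday demand plus a
# small safety buffer. (The first week just reuses its own demand.)
predicted_stock = np.concatenate([demand[:7], demand[:-7]]) + 10

naive_waste = waste_fraction(naive_stock)           # large discard share
predictive_waste = waste_fraction(predicted_stock)  # much smaller
```

The point is not the specific model; it is that once the demand signal is predictable at all, even a simple forecast moves most of the waste off the table.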

Speaker 2:30:05 Now, there's a secondary danger to that, and this is the problem with these things: they yield great predictions quickly, but potentially in the worst possible way. You run it and say, I want to predict better, I've got my corpus of data, and I've now reduced the predictive error on it. And for the next short while it does a wonderful job. The problem is that, whether hidden, or because of the data you've collected, or something like that, the underlying model of the world that the AI has written for itself to construct these forecasts can start to change. Or other things may be changing: obviously there are things we know about, seasonality and so on, but there can also be changes in the economy and whatnot that start to shift that supply and demand.

Speaker 2:31:03 And so the forecasts start to break down. The only way around that is to realize it from the start and, for want of a better term, push your AI to be more robust. In other words, don't take "it's performing really well" as a good sign; take it, potentially, as a bad sign. Interesting. And you want to knock it about a bit, for want of a better term, to get something more robust. That's a very subtle, insidious form of the overfitting problem. Actually, it basically is overfitting; that's the problem that arises. I was trying to avoid the technical term, but it's the overfitting problem. Essentially, if the AI is super good, it can come up with the theory that explains the past, and if you apply that to the future you are going to be wrong, because there is no such thing as the theory that explains the past.

Speaker 2:32:04And so what you try to do is say, no, don't do that. And so you throw in some noise and other things, as I said, basically knocking it about to make sure it doesn't get cocky. And what's scary about the way you framed it there is that it actually does work for a bit. Yeah. No, that's the problem. It does work for a bit. And this happened with Google Flu Trends: Google was able, out of search results, to predict where the flu was, and it worked wonderfully the first year, and then it broke apart after that and was pretty much abandoned. Why, do you happen to know? Well, I think they just overfitted that model. Every year is slightly different, and why it broke down that quickly I don't know, but the search terms for cold medicine and other things were not giving reliable information the next year, as they were initially, which was amazing. Maybe you could speak for a minute about that.
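
The failure mode Gans describes, a model that "explains the past" exactly and then breaks on the future, can be sketched in a few lines. This is a deliberately crude illustration with made-up numbers; the memorizing model below is the extreme case of fitting the past perfectly, and the weekday-average model stands in for a simpler, more robust predictor.

```python
import random

random.seed(0)

def demand(day):
    # Hypothetical "true" process: a weekly spike plus noise no model sees.
    return 100 + 30 * (day % 7 == 5) + random.gauss(0, 5)

history = [(day, demand(day)) for day in range(60)]
future = [(day, demand(day)) for day in range(60, 90)]

# "The theory that explains the past": memorize every training day exactly.
memorized = dict(history)
global_mean = sum(d for _, d in history) / len(history)

def memo_model(day):
    # Perfect in-sample; has nothing to say about unseen days.
    return memorized.get(day, global_mean)

# A deliberately simpler model: average demand per weekday.
by_weekday = {}
for day, d in history:
    by_weekday.setdefault(day % 7, []).append(d)
weekday_avg = {wd: sum(v) / len(v) for wd, v in by_weekday.items()}

def robust_model(day):
    return weekday_avg[day % 7]

def mae(model, data):
    return sum(abs(model(day) - d) for day, d in data) / len(data)

print(mae(memo_model, history))   # exactly 0: the past is "explained"
print(mae(memo_model, future))    # breaks down out of sample
print(mae(robust_model, future))  # the cruder model generalizes better
```

Zero training error is exactly the "performing really well" signal Gans says to treat with suspicion.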

Speaker 1:33:03How, and in what way, does AI think differently from people? It takes in information with a much narrower view and generates results which are almost independent of what a person might create.

Speaker 2:33:16Well, you know, I don't know how people think about this either, but what an AI is doing is still effectively correlation, right? You can train an AI to predict when the sun is going to come up. Now, the way you do that is not by going to Isaac Newton, writing down the laws of gravity, and giving yourself observations of the planetary objects, which is the traditional way of coming up with a prediction, or Einstein if you want to get really, really good. What you do is sit there gathering data. Suppose you had data on the times of sunrise and sunset going all the way back to whenever, and other characteristics, the temperature, the seasons, position on the earth, what have you, and you could train an AI to predict when the sun is going to rise tomorrow, or in three years' time, or whichever one you want, based on that.

Speaker 2:34:24But it would be a statistical association. Now, we know it's going to do a good job of coming up with a statistical association because we already have Newton's theory, so we know there is an answer, right? And so it will be able to do that. Well, it might; I don't know if anyone's tried it. There are some constraints: Newton's theory and the mathematical equations don't have all the sort of linear constraints that sometimes arise in AI models, but let's ignore that for the moment. But that AI, every time the sun comes up, would go, well, I'm not surprised, I predicted that. It's not like it knew it was going to occur because it understood that the earth is orbiting the sun and the earth is turning and all that sort of stuff.

Speaker 2:35:17So I think that's the difference. It's this mere association, and we do things all the time on mere associations and get used to them, probably very implicitly, but when we're pressed on it, no one says, well, it just has tended to be that way. And sometimes people are wrong. If you ask people why they win or lose at a casino, one answer that comes up is, well, I understand the probability of winning here, and this is how many times I won; or alternatively they come up with some other explanation regarding luck. But to an AI, these are just associations: this is my best guess, and that's what happened.

Speaker 1:36:03And what struck me about that, and I even wrote it down a second ago to make sure I touched on this, is that you're training the AI the same way you're training my kids: just look at the sun every day, it comes up, you kind of notice the seasons change the effects a little bit, and they could come up by themselves with the Julian calendar, maybe, right? But you're going to wind up messing up leap years and other kinds of weird subtleties that you might not pick up.

Speaker 2:36:23I mean, certainly years and years ago that's probably, I guess, what was going on. People came up with a thing: they could predict at the time that it was going to come up once every 24 hours but change a little bit over time with the seasons, and they would have seen that association, though what the theory behind it was would have been much different. But then Newton came along and could do a better job. Well, actually other people did a better job before; you could come up with some models, some causal things. I don't know if that's the way your kids are learning stuff. I mean, there is a sort of intuitive grasp of physics that seems to come to all living creatures somewhere, while I think we can all pretty much claim most of us do not know those equations.

Speaker 1:37:09Yeah. That's another idea I read about recently, I forget the publication, but a certain framing of intelligence as having evolved for physical acts. And the description that stuck in my head was that the great apes became very intelligent because they're swinging through trees, and the stakes are high when you're swinging through trees, right? Because you could hit stuff, you need to know where the next branch is, and you've got to be able to handle all that. So there's this physical quality to intelligence, and as I reflected on that, I found it really appealing, because I think when I feel like I'm at my most insightful, it's when I'm actually using my intelligence in a physical manner. So I'm actually putting myself in a situation, right? I don't think to myself, oh, let's do the supply and demand graphs. I'm like, so when I talk to this person and tell them this argument, does it feel right? It's different from just writing it down on paper. I think there's something special about embodiment, about feeling in space and time and putting yourself in a room, even if you're just imagining it, that powers up your intelligence.

Speaker 2:38:11I mean, certainly there's a thesis about that, and certainly in terms of thinking about how to make robots and other things like that, there's something to the physical body. It's hard to know. One of the things that has come up in recent years is, you know, everybody knew we weren't that much different from all sorts of animals in terms of our genetic material, and while there are differences in brain size and brain composition, every animal had a pretty sophisticated brain compared to computers or something like that. But there was a sort of bias for so many years of trying to say what is different about people, why are we the ones? And I think the recognition now is that we're not the only ones who do things like this. There are, for instance, crows who have demonstrated the ability to have a theoretical model inside them.

Speaker 2:39:14What they do is they place inside a test tube a treat sitting in some water, and it's too far down for the bird to grab that treat. But what the crows realize is that if they drop little rocks into it, the treat will rise to the top. Incredible. Yes, exactly. Or better still, a situation where there's something to pick up in the same test tube, and they're given a bendable stick, and the crows fashion it into a hook and then hook it out. Now, that is a pretty amazing set of things. And I think they're finding this with all manner of animals; animals really differ in this regard as well. These particular crows apparently can do this; other birds can't. Why is that? I don't know.

Speaker 2:40:15No one sat there and said, the crow, that's going to be the smart one. It wasn't a forecast. But there are interesting things, and elephants have shown all manner of ability to remember and know people and stuff like that. So I think that's actually fairly comforting in some sense. It just shows that the distance between us and the animals is far closer than we were willing to admit. It also shows that focusing your artificial intelligence research on just being able to reproduce some of the things even animals can do is probably going to be a good place to start getting you further. So it's really quite interesting.

Speaker 1:41:07Yeah. One of the ideas I liked in the book, and it resonated with me, is on the topic of uncertainty: you mentioned that in humans, a difficult decision is manifest in the amount of time you spend on it, right? And so you deliberate, you're not sure, you're thinking about it. Machines don't have the same kind of cost, or don't experience the same kind of deliberation.

Speaker 2:41:32right? Oh, I mean, you know, machines

Speaker 1:41:34can take a while to reach an answer. I think the way we sort of think about it in the book is that the problem is machines can't be told. If you want to automate something, you have to say, if this happens, then you should do that, and you have to specify all of that. You can shortcut that a bit by saying, whatever you do, try to achieve this goal. But either way, someone has to tell the machine the goal. Yep. The machine isn't really coming up with a goal. We can have machines adjust something that looks like their rewards, but they're going to have to have a place to start and things like that. And so that process of thinking about what to do, how to complete those if-then statements, is still something that we as people have to do. Now, it may be that only one person has to do it once, and then it can be coded and spread out; that's a different matter. But for a lot of decisions that have to do with us personally, it can only be us doing it. And the idea, maybe you can talk about it more specifically, is the idea of complements. So because AI becomes cheaper and you use it more, humans actually become more valuable, which people might not expect.

Speaker 2:42:58Right? I mean, the way I like to think of it is: without the predictions that AI is giving us, how did we deal with uncertainty? Well, the answer is we tried to insulate ourselves from it. You build a factory floor with very tight environmental conditions and things like that. When you suddenly have better predictions, you are able to adjust your behavior. So rather than having a rule, "I will always take my umbrella," you have a rule, "I'll take my umbrella if the forecast says it's likely to rain," just a contingent decision. You can see why you'd want to do that. And so you can make that trade-off, and when you have predictions coming from AI, you can do that for very, very complicated things and potentially have very, very complicated decisions.
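
That contingent umbrella rule is, at bottom, a one-line expected-cost comparison. A minimal sketch, with made-up cost numbers:

```python
# Illustrative costs only; the point is the comparison, not the values.
COST_CARRY = 1.0    # nuisance of carrying the umbrella all day
COST_SOAKED = 10.0  # cost of getting rained on without it

def take_umbrella(p_rain):
    # Contingent rule: carry only when the expected soaking cost
    # exceeds the certain carrying cost.
    return p_rain * COST_SOAKED > COST_CARRY

print(take_umbrella(0.05))  # False: expected cost 0.5 < 1.0
print(take_umbrella(0.30))  # True: expected cost 3.0 > 1.0
```

The fixed rule "always take my umbrella" is what you fall back on when `p_rain` is unknown; a cheap prediction is what makes the contingent rule usable at all.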

Speaker 2:43:50But "if I do this, then that" is a very complicated thing. However, because you've not made that decision before, chances are you haven't thought about what you would do if you could. It's much the same as, and I like this one, when people buy a lottery ticket. Why do they buy a lottery ticket? Before you buy a lottery ticket, there's no chance of you winning the lottery. After you buy a lottery ticket, there's also almost no chance, but now there is a state of the world in which you are the winner. Sure. And so people say, well, now that I've bought the ticket, I can think about that state of the world. I can think about that outcome where I win and what I would do with it, which is not that stupid, but at least it's not possible before you bear the ticket's cost.

Speaker 2:44:36You should think about that before you buy the ticket, though. Yeah. But I think the idea is that now that the AI can tell us when this sort of edge event is going to occur, maybe it behooves me to think about what I would do with it. And if I'm not going to think about what I'm going to do with it, then what use is the AI? Because if I'm just going to make the same decisions as I did when I didn't know anything, the AI is of no use to me. It's only useful if it changes what I'm going to do. And if I don't know what to do, it can't do that. And that's the complementarity.

Speaker 1:45:11It makes me think a bit of cognitive bias. So one thought that came to mind as you were talking: somebody who buys a lottery ticket probably then overestimates the probability that they're going to win the lottery, now that they have the ticket in their hand. That's availability bias, well documented in the psychology literature, and Daniel Kahneman won the Nobel Prize in Economics for this kind of work. Now

Speaker 2:45:30machines, do they have cognitive biases, or do they just have machine cognitive biases that we think are ridiculous? Well, I think there are three buckets. One is, if you do it properly, they won't have that sort of cognitive bias, right? So you could eliminate it. Two is, if you train the machines to make predictions that are the same as what a human would make, well, they're going to come up with the same documented biases. If they're looking at what the humans do, they're going to do that, so they're going to have the same bias. Now, the good news is they might be more transparent, and so you can tell the machine, don't do that: do everything the human does except for that, and see what happens. So there's some hope there, which is more than we can say for most people.

Speaker 2:46:17And then there's this third thing, because we know this from statistics: biases creep up. In fact, with the very notion of these predictive algorithms, if anyone's thinking back to their statistics courses, if they ever did them, the goal was to come up with an unbiased estimator. Yep. These AI, machine learning algorithms don't have that restriction in them. They're quite happy to come up with a biased estimate as long as it works. As long as it works, because there's a difference between working and being unbiased; unbiased is some sort of scientific thing, that we must know the truth and so on. They don't care about the truth; they care about what works. So that's what's motivating them, and it's going to introduce those biases, and we know less about what those will look like in particular domains. We may not even be that great at picking them up. So I think there is this "oh yeah, they could do it," but the lore of statistics tells us you've got to be careful.
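
The point that a biased estimator can beat an unbiased one "as long as it works" is easy to demonstrate. A small simulation sketch, with arbitrary numbers I've chosen for illustration: shrinking the sample mean toward zero introduces bias, but it cuts variance enough to lower the overall squared error.

```python
import random

random.seed(1)

TRUE_MEAN, NOISE_SD, N, TRIALS = 1.0, 5.0, 10, 20000
SHRINK = 0.5  # deliberately biased: pull every estimate halfway toward zero

def one_trial():
    sample = [random.gauss(TRUE_MEAN, NOISE_SD) for _ in range(N)]
    xbar = sum(sample) / N        # the classic unbiased estimator
    return xbar, SHRINK * xbar    # a biased, lower-variance alternative

def mse(estimates):
    return sum((e - TRUE_MEAN) ** 2 for e in estimates) / len(estimates)

unbiased, shrunk = zip(*(one_trial() for _ in range(TRIALS)))

# The shrunken estimator is systematically wrong on average (biased),
# yet its squared error is lower because its variance is a quarter as big.
print(mse(unbiased))
print(mse(shrunk))
```

Regularized machine learning models make this trade routinely, which is why "it predicts well" and "it is unbiased" come apart.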

Speaker 1:47:25Well, I think it's a great bridge into another really important idea in the book, which is that of trade-offs. So another way of framing that, if I understand it right, is that the introduction of bias is a trade-off for accuracy, right? And there are lots of trade-offs. Maybe you can talk a bit about some of the examples you had in the book of what you must sacrifice in order to gain.

Speaker 2:47:45So, I mean, one of the things we concentrate on in the book is where the thing you're trading off is the customer experience or something like that. For instance, we know that when you use an app like Waze to navigate through traffic, the way Waze collects the data on when there's a traffic problem is from Waze users who are stuck in traffic. Okay. So imagine there are two paths to go: using the freeway and using the backstreets. The freeway gets clogged, so the app sends people down the backstreets. They do better. Now, if it does that to everybody who's using Waze, and that's what they do, they go on the backstreets, how does Waze ever find out that the traffic jam is no longer there?

Speaker 2:48:44How does it ever find out? No one's on the freeway anymore. And I believe this is actually what they do: at some point they decide, oh well, maybe the traffic jam has gone, let's send some people that way. Okay, now that's great for optimizing the system. It's not so great for the sacrificial lamb that's now stuck in traffic, because it could turn out to be really bad. I mean, if they're still stuck in traffic, that's really bad. Now, maybe that's part of the deal of using Waze, and you can accept that sometimes you're going to bear this cost. But that's the sort of thing: anytime we have to put stuff out in the real world, especially if we want it to learn and update, well, it's not going to be as good initially, so someone's going to be bearing the cost of that.

Speaker 2:49:36And so there's this issue of how well you train it within your own confines, in your own company, before you put it out there. We know that in the long run, things trained out in the environment they're operating in are probably going to be better, but the short-term cost is that initially it could be crappy; it could be far, far worse. So this is just a trade-off that people are going to have to work out, and it sort of depends on the nature of the business.
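
The Waze "sacrificial lamb" trade-off is the classic exploration-exploitation problem, and a toy epsilon-greedy sketch shows both sides. Everything here is invented for illustration (route names, travel times, the jam clearing at step 200, the learning rate): pure exploitation never discovers that the jam has cleared, while a little forced exploration costs a few drivers dearly and lowers everyone's average travel time.

```python
import random

random.seed(2)

ROUTES = ["freeway", "backstreets"]

def travel_time(route, t):
    # Hypothetical world: the freeway is jammed until t = 200, then clears.
    if route == "freeway":
        return (50 if t < 200 else 30) + random.gauss(0, 2)
    return 40 + random.gauss(0, 2)

def run(epsilon, steps=600):
    est = {"freeway": 45.0, "backstreets": 38.0}  # prior beliefs
    total = 0.0
    for t in range(steps):
        if random.random() < epsilon:
            route = random.choice(ROUTES)   # a "sacrificial lamb" explores
        else:
            route = min(est, key=est.get)   # everyone else exploits
        obs = travel_time(route, t)
        est[route] += 0.1 * (obs - est[route])  # let old observations fade
        total += obs
    return total / steps

print(run(epsilon=0.0))  # pure exploitation: never learns the jam cleared
print(run(epsilon=0.1))  # some drivers pay the cost; the average improves
```

The epsilon parameter is exactly the dial Gans describes: how many users you're willing to send down the worse road to keep the system's predictions fresh.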

Speaker 1:50:09And I think another way of thinking about it, let me know what you think of this, is that AI is to some degree an alien intelligence. And I'm thinking in particular about the chess algorithm, I forget what it was called, the recent one, the neural-networks-based one.

Speaker 1:50:28Anyway, you know what I'm talking about, right? And there was a bit of an uproar in the chess community because it was playing very differently than a lot of the other algorithms, because it had learned the game just on its own. And there's an example in my business, the insurance business, where this really can manifest as well. A lot of underwriting of risk is done on an algorithmic basis now. So we'll have a machine which will select the risk and sell the policies. But what happens is you wind up with policies, or opportunities as they're called, that fall out of the box and to a human look like a very stupid decision by the machine, and so it actually scatters some fruit on the ground.

Speaker 1:51:11One can pick it up if you're willing to take a very human approach to evaluating the risk, one that's distinct, completely separated from the machine's look. And it's easy to despair at the machine and say the machines are stupid, look at all this dumb stuff they've thrown out. But actually what you're not seeing there is an enormous amount of value. The machines are currently doing a lot better, but they're making different kinds of errors than humans would make, and so they're shooting out all this other stuff that humans can then pick up.

Speaker 2:51:38And so the reaction to that, and I think this is going to come up quite a lot, the reaction of "oh, stupid machines, we shouldn't rely on them as much," is not the right reaction. The right reaction is: have they discovered something? Right? Because going off the path and other things like that can lead to discoveries, and I think one of the things we've noticed in some of the more powerful use cases for machine learning is that when they go into a new environment, you run these techniques, and then rather than blindly using them, you say, well, why are they suggesting this? And you go back in, and you can play with these computers. You can basically do the equivalent of interrogating them: ask them questions, find out what's really driving the results.

Speaker 2:52:28And you like discovering, oh my goodness, it was this; I didn't think that drove demand at all. I mean, I can see the theory is there, but it was implausible to me, and yet there it is. And so you now say, oh well, I've discovered something. So I think, aside from the automation, people coming in using these tools in areas like insurance in a big way are going to discover some other things, discover some drivers of risk that just weren't transparent before. And obviously, once you discover the driver, you can mitigate and insure against that risk, and I think that will be happening. In precisely what avenues and where, I don't know, but even in the oldest and most stable things, driving risk and things like that, it might turn out to be quite revolutionary.

Speaker 1:53:24I'm wondering what kind of company is good at this, and I want to relate this to a book I read recently called Capitalism Without Capital. That book makes the observation, based on quite a lot of literature that preceded it, that intangible capital, let's call it, is becoming a bigger part of organizations. And one of the hypotheses about what's going on there is that organizations are doing things like building AI, and there's software development, and that's embedded in the organization, and that's what they're creating, and that's intangible, and that's what's driving the value. There's an observation, of course, that Amazon and these large tech companies are driven by software at their core, and that's what's going on, and that's what an AI-driven organization will look like. Is that the way we should think about it? What do you think?

Speaker 2:54:13Uh, not really. I mean, I understand the pushes and things like that. My thought is that I'd be surprised if there's such a thing as an AI-driven organization that has basically automated a whole lot of things, or if there is, it just won't look that significant. It's going to be a thing we do. I think the reason is that when those sorts of tasks get automated, it frees up people to do so much more, and there's no way for sure to know this, but just imagine all the stuff you could do in your own job. If I didn't have to do this, what would I fill the time with? The answer is rarely nothing. It's just things that are not a priority right now, and I think there's just a lot more scope for that. So you're going to see AI used as tools and things like that. I mean, maybe, like agriculture, which became automated, so again very different, maybe there are some parts of organizations that will start to look like they're all driven by some computer somewhere, but I just don't see that as widespread. I don't think it's going to fundamentally change things.

Speaker 1:55:46Maybe I focused too much on AI in that framing of the problem. So I'll put the question to you: what is the difference, then, between the tech startup giants and other large companies, or organizations generally?

Speaker 2:55:59Yeah. No, so I think the tech giants had two things going for them. One is they had the capabilities, so we kind of discount that. Two is that they were already dealing with, already trying to solve, AI-like problems. So, you know, Google is trying to predict what people will want to click on when they put in a search term, or what ads to serve. I mean, that's basically their business. So they have prediction at the heart of their business, and they already have the data. It's like they're fertile ground for this sort of thing. Other businesses, yes, they have prediction problems, but they don't have the capabilities and data all set up for it. They've been spending all their time avoiding predicting, and so it's going to be later when those industries unlock and say, okay, I can now do this, because I can start to formulate predictions and think differently about the business. And that's a longer process. So, you know, I imagine insurance, an industry we always picture dealing with uncertainty and risk and so on, and already becoming a big data industry, will be one where we'll see these tools get deployed fairly quickly. And so I think that's going to happen, but as you go further and further down, it's going to take a while.

Speaker 1:57:15I've been in the insurance business a long time now, and it definitely is the case that a lot of AI and AI-like tools, or maybe just predictive modeling, let's call it, are making big inroads. But the rest of the core of the insurance business is what I like to call a moral economy, which is actually what humans do in the face of enormous uncertainty. You're right, they avoid it, but they also make deals with each other and trust each other, right? So it creates this network of people and organizations that collaborate, and they more or less say, we don't know what's going to happen here, so let's just agree not to screw each other, and have this really long game that we're playing where we're all in this together. It's kind of an insular business in some ways, and there is something deeply human about that, right? And there's something that feels to me very hard for computers or machines to replace at all.

Speaker 2:58:09Well, it can't replace it. I mean, if the industry has already optimally configured itself... the insurance industry can't reduce risk. Whatever the risk is out there, it's out there. It just is. It's only in the business of allocating it, so that some people feel it, so that we reallocate it. Now, if the insurance industry has already worked out how to optimally allocate all the risk that's out there, nothing more can be done.

Speaker 1:58:33You can't improve that efficient allocation, but you can make it cheaper, right? So, back to the original thesis of the book: it's getting cheaper and cheaper to find what that allocation should be. Yes, yes. A shorter path to the answer. Exactly. There's one other thought on the insurance business, going back to the complementarity idea, the idea of complements. People who are evaluating each other for trust don't trust machines. I think that's what it comes down to. Yes. And do you think that's something that can change, or will change, or was it always going to be a separation?

Speaker 2:59:06Uh, I think, you know, when machines are reliable, we put an enormous amount of trust in them, in the sense of what we drive around in. Yeah. And what we fly in. That's right. So what we do observe is that there's a period of not trusting something, and you can remember when you didn't trust your internet connection, or you didn't trust some piece of software somewhere, and then it's fine, and we start relying on it. And I think the same will be true here.

Speaker 1:59:37So we're about out of time, but maybe we could close on any thoughts you have about how you think the industry, or call it the function of AI, will evolve over the next immediate period of time, and how the cheapness will manifest itself. Any forecasts? No, I try very hard not to make too many forecasts. I've noticed a few. Well, the problem is that really shortens the lives of these books. Yes, yes.

Speaker 2:60:00From the past, you know, the predicting-the-future type of book doesn't tend to last. I don't have any particular insight into how quickly this is going to roll out and things like that. There's a lot of activity going on. I think what will be more interesting is that in the next five years there'll be a startup somewhere that manages to reformulate what wasn't a prediction problem as a prediction problem, solve it, and have it impact broadly on our lives. The one thing I know about these radical innovations is that how they actually end up manifesting themselves is always different from what people imagined at this stage, and I think the same is going to be true of AI.

Speaker 1:60:49Great. Maybe you can tell the audience where they can get your book, and how they can reach out to you on Twitter or any other platform.

Speaker 2:60:57So the book is available at all good bookstores, including as an audiobook. You can go to predictionmachines.ai for links to your favorite ones. And I'm also available on Twitter at @joshgans.

Speaker 1:61:13Great. My guest today has been Joshua Gans. Thank you very much for being part of the show. Thank you.