The Answer Is Transaction Costs

Permissionless Innovation: Unshackling Potential or Unleashing Chaos?

November 21, 2023 Michael Munger Season 1 Episode 20

What limits innovation? Is that good? I talk to Adam Thierer, senior fellow at the R Street Institute, exploring the concept of permissionless innovation and its far-reaching implications. From ancient Mesopotamia to the digital revolution, we unpack how policy contexts shape the trajectory of innovation and, consequently, our society.

With Aaron Wildavsky saying "Go!" and my son Kevin Munger saying "Not so fast there, Scooter!", we venture into the contentious territory of innovation and intellectual property rights in an era of digital sharing.

FOUR TWEJs (trying to keep THAT weekly, at least), and some great letters.

LINKS:
PAPERS and BOOKS by ADAM THIERER:

Money Pump: 

 Aaron Wildavsky, SEARCHING FOR SAFETY, https://www.routledge.com/Searching-for-Safety/Wildavsky/p/book/9780912051185

Calestous Juma, INNOVATION AND ITS ENEMIES  https://academic.oup.com/book/25649

William Baumol, ENTREPRENEURSHIP (article)  https://www.sciencedirect.com/science/article/abs/pii/088390269400014X

Epstein and Munger on Capitalism and Stagnation https://www.youtube.com/watch?v=4o-s541UKgI 

 Munger on “Permissionless Innovation” on Econtalk https://www.econtalk.org/michael-munger-on-permissionless-innovation/ 

If you have questions or comments, or want to suggest a future topic, email the show at taitc.email@gmail.com !


You can follow Mike Munger on Twitter at @mungowitz


Speaker 1:

This is Mike Munger of Duke University, the knower of important things: permissionless innovation, the difficulty of knowing what's next in artificial intelligence, and the transaction costs of regulation. My interview with Adam Thierer, senior fellow at the R Street Institute, and four new twedges, plus this month's letters and more. Straight out of Creedmore, this is TAITC.

Speaker 2:

I thought they'd talk about a system where there were no transaction costs, but it's an imaginary system. There always are transaction costs. When it is costly to transact, institutions matter and it is costly to transact.

Speaker 1:

I've long been an admirer of Adam Thierer's work, especially his terrific book Permissionless Innovation. It was great to get a chance to talk to Adam about innovation, a subject on which he has thought a lot. My guest today is Adam Thierer, senior fellow at the R Street Institute. Let me ask you, Adam, if you will say a little bit about your background and how you got here and became interested in the problems of innovation and public policy.

Speaker 2:

Sure. Thanks for having me, Mike. I've spent the last 31 years covering the intersection of emerging technology and public policy, working at five different nonprofits and an academic institution, going all the way back to my time in 1991 at the Adam Smith Institute in London, where I got my start, and then continuing on in a variety of other think tanks and academic institutions, including 12 years at the Mercatus Center at George Mason University, before I became a senior fellow at the R Street Institute. And what I've done over that time is try to stay on the cutting edge of all of the emerging technology, specifically the information and communications technologies, that have come about over the last 30 years. It's like drinking out of a fire hose sometimes, but that's a wonderful opportunity for academic study and coverage of emerging tech and innovation issues, and it's just dominated my life from the start, so it's been a fun ride.

Speaker 1:

It strikes me that one of the things about working in the innovation and digital space is that it's almost meta policy, because it's not just a policy to regulate or promote one industry; it's the backbone on which everything else is hung. And so you wrote a book that really affected me when I was writing my book Tomorrow 3.0, which came out with Cambridge in 2018, and that was entitled Permissionless Innovation. The reason that I gave that setup about being meta policy is that I was a student of Douglass North, and he always thought that innovation was important, but it's really hard to study. Real innovation by its nature is hard to predict, because it may be something that leaps over, and it may be something that is disruptive. So that idea of non-incremental change, and disruptions that extend to other spaces, is the way I, at least, would think about innovation. What do you mean by innovation, and can you give some examples of innovations that you thought were important in communication, transportation or industry?

Speaker 2:

Yeah, well, first, you're exactly right, Mike, that this is the ultimate meta question, because really what I exist to do is to go to the core of the fundamental question that political scientists, economists and economic historians care about, which is, to paraphrase the way Joel Mokyr kicks off his wonderful 1990 book The Lever of Riches, on technological creativity and economic progress: why does economic growth occur in some societies and not in others? That is the fundamental and most important question that we should be asking ourselves as academics, as policymakers, as a society, and it, for me, comes back to the all-encompassing importance of economic growth fueled by technological innovation. And so you can look at innovation very much as a meta issue, as you described it, as sort of a macro thing, and then you can look at the micro elements of it, the various drivers of it that Douglass North and Deirdre McCloskey and many others, including Joel Mokyr and so many other great economists and political scientists, have attempted to diagnose.

Speaker 2:

And of course there have been raging debates about this, and everybody can sort of pick and choose their favorite policies and/or institutions. But what I've learned from those scholars and others is the incredible importance of cultural attitudes, cultural attitudes that then transition into being political attitudes, and then how they connect back to real-world outcomes. And I have found that when you take a look at the last 30 years of information technology policy, we have a particularly powerful case study, a real-world case study of this in action, of how cultural and policy attitudes connect back to actual innovative outputs and economic growth. And of course I'm speaking about the rise of the digital revolution, e-commerce and the internet. And you and I, Mike, are old enough, I don't want to date you, but I was born before humans set foot on the moon, and we can remember a time when, if we talked about telecommunications innovation, it was moving from a black rotary-dial phone to a push-button princess phone with a longer cord.

Speaker 1:

It was not mounted on the wall. Just the fact that it was not mounted on the wall was a major innovation.

Speaker 2:

It was huge, the fact that I could pull the cord down the stairs and close the door so that my mom couldn't hear me talking to my girlfriend, you know, racking up a long-distance charge.

Speaker 1:

That was outrageous. And a long-distance charge, too? What is this that you speak of?

Speaker 2:

My kids are mystified by the concept. I've had to explain to them what area codes are and why we still use these things, and why we even mention long distance. My daughter once asked me, Dad, what do you mean by hanging up the phone? It's a weird metaphor. It is, and yet we still use it, because that's the way it worked.

Speaker 2:

But in the old days we had very incremental innovation, at best, in the field of information technology, and then suddenly something changed, and changed radically, in the early 90s. And what I've argued in my work, to build on what North and McCloskey and Mokyr and so many other great scholars have pointed out, is that we got the policy prerequisites right. By getting cultural attitudes towards risk-taking in the field of information and communications technology right, we allowed people to experiment with new and better ways of doing things, which is to me the heart of what we mean by innovation. And specifically, we allowed that to happen in a more permissionless fashion. We allowed for greater experimentation and trial and error, and learning through the process of trial and error, and that is, for me, the secret sauce that drives economic growth.

Speaker 1:

Well, I want to get at that idea of permissionless, and I have an example that I always use. I'm not sure if you will find it as clever as I do, because few people find me as clever as I find myself, but one of the first examples, I thought, of the potential for permissionless innovation was from about 4,000 years ago. In a couple of cities in Mesopotamia they had these large marketplaces called souks, and in the souk you could rent space to sell stuff. What stuff? They didn't care. There was nobody at the gate to the souk saying, I want to check to see if we're going to allow you to sell that. You could sell whatever you thought you wanted to try to sell, and if you made money from doing it, good for you. But we're going to charge everyone a fixed fee to have a stall at the souk.

Speaker 1:

And so it might be that 4,000 or 3,500 years ago it wasn't clear what things could be usefully commodified and sold, but they tried a whole bunch of them, and it became clear that people might be willing to buy bags of wheat or certain kinds of clothing. So there was no requirement, there was no restriction on what you could sell, but the souk reduced the transaction costs of being able to get access to a lot of people, because consumers would go to the souk, they would walk around, they would look at stuff. They didn't have to depend on actually coming to my farm to look at something. So it is permissionless in the sense that it wasn't regulated. Anybody who wanted to sell at that little stall was welcome to do it, and you might find out fairly quickly.

Speaker 1:

These are the sorts of things you should concentrate on, and these are not. So that was the smallest, most incremental example that I could think of of permissionless innovation: when we don't know what's going to work, but we have this institution that makes no distinction. Now, probably, maybe you couldn't sell human body parts, so it wasn't entirely permissionless, although maybe you could have, I don't know, for sacrifice. But to me the idea of permissionless innovation comes down to, at its most basic level, the ability of some people to try to think of ways to serve other people in a way that costs less than the other people are willing to pay, because it's subject to the profit test.

Speaker 2:

Yeah, I love that example.

Speaker 2:

If I recall, and I don't know if you've read the book Innovation and Its Enemies by Calestous Juma, the late Harvard scholar, a Kenyan scholar who worked briefly at the UN and then went to Harvard, he wrote this wonderful book just before he passed away, where he goes through these histories, going way back, and I believe he actually mentions that. He also talks about the rise of the coffee trade, and he goes into all sorts of other stories, the story of margarine and all sorts of things, and he talks about these wonderful little moments.

Speaker 2:

He didn't call them permissionless innovation, but they're little moments of permissionless innovation and free trade, where it was starting to take root but then, unfortunately, just as quickly was derailed by defenders of the status quo and people who had a vested interest in making sure that this sort of freewheeling experimentation was just not allowed. And this is what we come up against again and again and again. Today we have formalized these rules and approaches and policies, and we often call it the precautionary principle: the general idea that we shouldn't allow that sort of experimentation and that, basically, innovation is guilty until proven innocent, and you need a permission slip from somebody, some authority, before you can move forward and engage in these sorts of open forms of trade and innovation.

Speaker 1:

Well, I will put up a link to that in the show notes. That's very helpful, thank you. It does seem to me there's a kind of Baptists and bootleggers problem. Most innovations, by definition, are going to be disruptive, and that means that they're going to do harm to the incumbents who have invested in the existing technology, the existing way of doing things. And then there are people who are just afraid of things that they don't know, and so you may get these odd coalitions. In public choice we call it the Baptists and bootleggers coalition, where you have one group that's operating according to self-interest, that's the bootleggers, who think it's great that the government restricts the sale of alcohol, and then the Baptists, who have moral objections, and a very odd coalition forms between people who otherwise wouldn't share very much. So thank you for bringing back attention to the disruptive part. Significant change and disruption are the ways we would describe innovation. It seems to me that innovations might take place at several levels.

Speaker 1:

My usual description of the way capitalism works is that it kind of works on three different concentric circles. The largest is exchange, and the fact that difference means that people value things differently. We can often benefit by exchanging; both parties to a voluntary exchange are better off. And at its most basic level, any friction to the ability to exchange reduces welfare in ways we can't measure, because we never see the exchanges that don't take place. The second level is industrialization and the division of labor. So specialization means that I may be able to produce at a lower cost.

Speaker 1:

You're an artisanal shoemaker and I have a factory that employs four people who work on a production line, each of them making one fourth of a shoe. Well, we have a romantic attachment to the artisanal shoemaker. These are handmade; those others are made by a machine, but they cost one tenth as much, even though it's only four times as many people, because there are increasing returns to scale from the division of labor. That doesn't seem fair to the artisan, so we should protect the artisan out of this sort of romantic concern.

Speaker 1:

And then the third is that capitalism actually relies on liquidity and the allocation of liquid capital that has not yet taken physical form, so that it can become embodied in a physical form that I can then invest in. If I have access to liquid capital and you don't, then it seems unfair. But I have to pitch my idea in order to attract that kind of capital. So it seems to me that permission-fullness, or the requirement of permission, at all three of those levels is an important obstacle to growth. As you said, the big problem is the difference in growth between different nations. So restrictions on exchange, restrictions on the division of labor and restrictions on liquidity, all of those things operating in the background, will prevent a nation from growing, even if it has a lot of entrepreneurs.

Speaker 2:

Yeah, absolutely, I agree with that framing. And it's not just that the defenders of the status quo and those who oppose innovation minimize these gains from trade and the gains that we get from liquidity and the division of labor; it's also that they're limiting knowledge. When you undermine trial and error, you undermine the error part. This is what I learned from the great political scientist Aaron Wildavsky, who wrote a book that changed my life in 1989, called Searching for Safety. It was the best critique of the precautionary principle I've ever read, and he said the most important thing to understand is that you can have no progress without the error part of trial and error. People just ignore that. But the error is so important, and you have to be able to get to it, to learn from it and move on.

Speaker 2:

What was the famous Edison quote, when he was ridiculed by some newspaperman: you've had all these failed light bulb experiments, something like 10,000 times you've failed at creating a light bulb. He says, sir, I've not failed 10,000 times, I've learned 10,000 ways how not to do it, and I'm about to get it right. And he finally did.

Speaker 2:

And so this is what's really important about permissionless innovation: it represents this general freedom to tinker and develop new ideas in a relatively unconstrained fashion. And it doesn't say that it's all going to be sunshine and roses. It says we will encounter problems and risks and even some harms, but then we learn how to address them with still more innovation, and we apply human learning to each of these processes until we get it right. And this is what is so disturbing about defenses of the status quo when they are premised upon some moralistic argument or safety argument, that somehow we are better off holding on to stasis as our baseline for human civilization. That, as Wildavsky pointed out, is fundamentally unsafe. That is not moral at all. That is holding back the ability of humanity to improve its lot in this world. And so I always try to make that added point about limiting knowledge: you can't claim the moral high ground by doing that, and you can't claim safety as your premise, because you are wrong when you take that approach.

Speaker 1:

Well, that raises an interesting question about intellectual property, and we may not want to go into it. I'm interested in your thoughts on this, because it's a hard problem and I'm never sure exactly what I think of it myself. I always believe whoever I talked to last. But there was a famous conference in Silicon Valley, I think in 1990 or 1991, so a long time ago by the standards of this kind of innovation, and the discussion was about the problem that information wants to be free.

Speaker 1:

And one of the advantages of innovation is that, once I discover something, nobody else has to, because we can all use it, and the digital space allows us to send it all over the world very quickly. And in the case of music or books, there's no production cost. We can share it, and there's no less of it available for me. So it's non-rival in consumption; it's almost like a pure public good, but it's excludable. So the question is, there are two senses of free: one is gratis and one is libre. Can we really make it gratis, in the sense that the person who invents it doesn't get compensation, that there's no intellectual property, that once it exists I should set it free? Libre means it should go everywhere because it benefits human society, but it also means that I don't get paid anything for it. So how can we make innovations widely available and yet make sure that the innovator is compensated?

Speaker 2:

Well, this is a true story. Twenty years ago, I wrote a book with my colleague Wayne Crews at the Cato Institute called Copy Fights, and it was about the future of intellectual property in the digital age. This was during the Napster wars and music file sharing and so on. We tried to be peacemakers and split the difference in a hotly divided building there at the Cato Institute. How did that go? It did not go well, I will tell you this. The story ended with people on our board trying to get Wayne and me fired, both because we went too far and because we didn't go far enough. I would think so. Everybody thought you were wrong.

Speaker 2:

Yeah, exactly. We did nothing but make enemies by trying to be peacemakers in the middle, and that's because, when you look at the classical liberal movement and libertarians and so on, you can divide the world into four quadrants. On one axis you have natural law versus utilitarian, and on the other you have pro versus anti-IP. You can put people in all four of those quadrants. You can find utilitarians that are very split, pro and anti, and then you can find natural law people who are pro and anti. In fact, when I worked there, our chief philosopher, if you will, Tom Palmer, held very strong views that there was no such thing.

Speaker 1:

About everything. He has strong views about everything.

Speaker 2:

He does. He comes from a natural law perspective, and the natural law camp goes to the stark corners of the spectrum, because you're either completely for or completely against. And his argument was, to have the law constrain my ability to use my body or mouth to project ideas or repeat things or dance or whatever, that's a fundamental violation of our natural rights, and so there can be no such rights in intangibles. On the other end of that spectrum you'd find someone like Ayn Rand, who wrote eloquently about why this was the product of man's mind and we had to protect it until the cows came home, and maybe we should be paying Shakespeare's descendants royalties.

Speaker 2:

Still, there were crazy extremes on the natural law part of the spectrum. Now, the utilitarians are much closer together, and so you could look at somebody like Hayek or Friedman versus someone like Richard Epstein, who I think you just recently spoke to. You can get very different views, but they're not as stark as the natural law adherents'. In fact, one of the last things Milton Friedman did before he passed away was a debate at the Cato Institute about drug re-importation from Canada, and he was up against Roger Pilon from the Cato Institute. These guys were good friends and they agreed on everything else, but they did not agree on that issue. And so there is no easy answer, I think, on this. This is an incredibly difficult thing. In fact, I argued in my book with Wayne that it is the single most contentious issue within classical liberal circles, and for that reason I promptly abandoned it after 2000 and never touched it again.

Speaker 1:

Fair enough.

Speaker 1:

One of the things that is important for permissionless innovation is that the rules, whatever they are, are known and relatively fixed. So provided, and this is actually Epstein's argument, provided we have a rule about patents and intellectual property, and maybe even more so for copyright and registration of a trademark, because those things allow me to cultivate a reputation. Not everyone believes I can own my reputation, but the fact that I can use a reputation, and you're sure that this is Bayer aspirin and nobody else can use Bayer to put on theirs, that's usually not so controversial. How long should the patent be? Those are all interesting questions. But so long as those things are fixed, people can probably adapt to them in the way that prices capitalize expected future values. The difficulties come when we're constantly changing those things in response to rent-seeking claims, maybe by the large drug companies that want to extend drug patents. Having the rules known and relatively fixed already gets us quite a bit of the way towards something like permissionless innovation, because then I can be pretty sure that's the framework in which I'm going to have to work.

Speaker 2:

I generally agree, except for one thing. First of all, I think Richard's framing and your framing right there are good ones. But obviously, as you pointed out, you have a rent-seeking problem. You have Disney filing for an extension of copyright every time Mickey Mouse is about to go into the public domain, and they get it. So the rules always change because-.

Speaker 1:

There are clever monkeys they can hire. Really, that's what rent-seeking is.

Speaker 2:

Absolutely, I mean-.

Speaker 1:

You have clever people who could be working for a living, and instead you pay them to do this crap. Absolutely.

Speaker 2:

So that is a legitimate problem, but there is a different problem: the problem of just ongoing technological change, which really confronts traditional rules and standards with conundrums that are difficult to resolve. I just mentioned file sharing. That came along in the late 90s and early 2000s. Before that, in the old days, you and I would walk into record stores and we had to buy our music on a physical piece of media. It was either on vinyl or tape, or then CD, and then all of a sudden it was up there in the cloud, in the air. You didn't touch it anymore. That was a big deal. It was a big change. So it wasn't a rent-seeking thing or a policy thing; it was a technological thing. And now we're confronted with this again in the AI age. Artificial intelligence can now generate new images, new text, new other things, and we're asking, who's the author? These are legitimately complicated matters. So I'd like to say that Richard's onto something, except that you still have to grapple with the fact that times and technologies change.

Speaker 1:

So that would be an argument for changing it relatively slowly, which actually brings me to the next topic.

Speaker 1:

As it happened.

Speaker 1:

I just listened to an excellent podcast you did with the IEA, and I wanted to ask some of your thoughts on permissionless innovation in artificial intelligence.

Speaker 1:

As a provocation, let me say: my son, Kevin Munger, who is a professor at Penn State, gave a keynote address at the European Association for Computational Linguistics meeting in Dubrovnik, of all places, in May 2023, where he argued, and I'm not sure what this means, so I'm not endorsing it, I'm just saying he said it, he's the black sheep of the family, but he argued that we should enforce a restriction that all large language models, all AI applications, should not be allowed to use first-person pronouns in referring to themselves and should not be able to say "we" in combining themselves with humans, so as not to allow the eliding of the difference between sentient humans and potentially sentient AIs.

Speaker 1:

And that was an example that he gave of a kind of restriction that we might start with, at least, on AI. Now, I'm not asking you about that specifically, but he is a slow-downer: what we need to do is slow down the rate of change so that we can get used to it and have some kind of perspective. What do you think of the slow-down-AI movement, and how would it affect the rate of innovation?

Speaker 2:

Yeah, I'm very troubled by it, because right now there are a number of different forces aligned toward not just slowing down but stopping the advance of the algorithmic and computational sciences, and at some point it becomes an all-out war on math and computation and experimentation utilizing these next-generation technologies. I think we should be very, very cautious about proposals like this, but understand that there's a kernel of truth to them. I mean, what Kevin has said, I've read that, and I think we communicated on Twitter about it once. I think the better approach is to say, well, look, there's a sensible middle ground here that says we probably do need some steps, or maybe even some policies, that help us identify what is human-generated versus machine-generated. But that gets into more of a transparency or labeling approach, which would be a less restrictive approach that would still allow innovation but facilitate more information, information that would help markets evolve and help people understand this technology. That's always the better approach, as opposed to slamming the door shut, saying thou shalt not, and not allowing it in the nation, which, by the way, I don't even really understand how that works in the global environment we now live in, where you have global innovation arbitrage as the new norm. I mean, just to give you one little anecdote, Mike.

Speaker 2:

On June 21st, Facebook, now Meta, released the world's biggest open-source AI large language model. It was called LLaMA. It was a 70-billion-parameter model. Don't worry about the numbers, but 70 billion parameters, that's a big model. And so for a while they were the kings of the open-source AI world. And then, less than two months later, the government of the UAE announced that it was releasing, through its Falcon program, the Falcon 180B model. That was two and a half times the size of Facebook's LLaMA model. And so Facebook's reign as the king of global AI was two months long.

Speaker 2:

And the government of the UAE, I'm not usually a fan of industrial policy mechanisms, but they created this lab and they've open-sourced the world's biggest open-source model. Now you can go and access it; the UAE government owns it. Well, in that kind of world, what does it even mean when we say we should bottle up something or stop something? Is Joe Biden going to tell the sultans in the UAE, you've got to shut that thing down, brother? No, he's not, and he's certainly not going to get anybody in China or Russia to shut down their systems. So we have to be realistic about this and find the least restrictive kinds of approaches to addressing concerns, all the while understanding and appreciating that some of these concerns are quite legitimate, including the ones that your son has raised.

Speaker 1:

Yes, but it is interesting. There are always these apocalyptic predictions. In the other podcast, and I'm just shamelessly a fanboy, let me admit it, in your other podcast you mentioned the famous prediction by Bertrand Russell that unless there was a single world government by the end of the century, the world would cease to exist. I am reminded of the same sort of thing: all we need is a single world government for AI, and then these scofflaws in the UAE could not be doing this sort of thing. Which reveals my libertarian sensibilities, I guess; it's more like a presumption in favor of liberty. Now, it's rebuttable, but it is a presumption, in the sense that I'm pretty sure that a single world government is going to be more repressive.

Speaker 2:

Absolutely. There could be no greater existential risk than global totalitarian control. That would be the greatest of all existential risks, for sure. It's not a chance of it; it's probability one.

Speaker 2:

Yeah, yeah, I mean, it is extraordinarily dangerous.

Speaker 2:

And the shocking thing about Bertrand Russell saying what he did, in 1956 or '57 I believe it was, is that he really believed, quite naively, like a lot of philosophers at the time, that the United Nations has the best of intentions and will only do good, so if we can just somehow consolidate it all there, it'll all be peaceful and everything will be fine. Well, history records some different realities in the way things played out at the UN. I always try to remind crowds that in 1972 the UN negotiated the Biological Weapons Convention, which sounded like a dandy idea at the time: we should reduce not only nuclear stockpiles but chemical and biological stockpiles, and in fact we should have outright bans on biological weapons, because of the fear that they would, you know, destroy the planet. Well, the reality is, after a whole bunch of nations, including the Soviet Union, signed on to the Biological Weapons Convention in 1972, the Soviets probably went right back to the USSR and ordered their scientists to double down on biological weapons research.

Speaker 1:

It makes perfect sense. If the other people are going to do what they say, I should do this, and if they're not, I should do it anyway. It's a dominant strategy.

Speaker 2:

Yeah, absolutely. Right back to game theory and everything we know about cheating from the first day of game theory. This is so simple. And so when I see people today make these same arguments, like, well, we've got to have international AI control through a new AI body or agency, and we can trust the UN or some other body like that, I'm like, I'm sorry, but are you really going to get Vladimir Putin at that table? Are you going to get China? And what about the UAE and these other players? Maybe not.

Speaker 2:

You know, rogue nations are just going to say, hell no, we're not going to sacrifice our computational capabilities as a nation. And why should they? Why should they? And why should the United States? Why would we want to shoot ourselves in the foot and undermine our own ability to respond to those threats? Right? And so I'm not some crazy hawk, I'm more libertarian. I'm generally skeptical of large national security schemes and so on, but I also don't believe in completely abandoning the important prerequisites of securing and defending your nation against hostile and adverse attacks and systems.

Speaker 1:

My last question is more of a kind of meta question. My first job was at the Federal Trade Commission. I worked in antitrust enforcement in the first Reagan administration, so yes, I'm really old. And antitrust by then, this was after Michael Pertschuk, who had been a very aggressive, pro-consumer antitrust enforcer; that movement had begun, but it had been stopped. A lot of regulatory agencies' main job had been law enforcement, and law enforcement said: these are the things you can do, these are the things you can't do. Actually, mostly these are the things you can't do, and everything that is not prohibited is then permitted. But many of the federal agencies, and I think this was true of the telecommunications regulators and is true now at the Federal Trade Commission, have changed from being law enforcement to being basically building inspectors.

Speaker 1:

And the problem with a building inspector is, you know, I make the building, it's all ready to go, but I have to schedule an appointment with the building inspector, who's then going to come in and look around and say, is this okay? I can't know in advance if it's going to be okay. I have to wait until it's done, and then wait maybe quite a while before the government signs off on it. That's sort of the ultimate in permission-full innovation, because I can't know in advance whether my innovation, my merger, my new contract provision, my new product is going to be acceptable until bureaucrats look at it, using standards that may be subjective. I think, and old people are always apocalyptic, that we're moving way more in the direction of the building-inspector mentality in the US. Now, the European Union is lost; in the European Union you have to get permission for everything. But I think things are getting worse, not better, on the permission-required-before-innovation front in the US.

Speaker 2:

Absolutely so. This is the perfect framing you've just provided there, Mike, because in theory we should want agencies like the Federal Trade Commission, agencies of general jurisdiction, to be there as a backstop for when harms or risks or fraud develop. The FTC, as you know from when you worked there, had and still today has unfair and deceptive practices authority and can address various other types of harms. There are other agencies that have that same sort of ex post enforcement approach and enforce after the fact. That's the way we want it. Permissionless innovation is not anarchy. It relies on a bedrock of property rights, contract law and some basic consumer protections, but the key thing is that it doesn't, as you just pointed out, enforce them from above, preemptively and in a precautionary way, such that it becomes a building-code kind of thing, where basically you're locking down everything by design and saying, well, you have to come to us for a permission slip to get out of that regulatory hell, or what I call the innovation cage. And this is the way I look at it in my book on permissionless innovation. I sort of divide the whole world into two general buckets when it comes to technologies and their regulation: you're either in the born-free camp or you're in the born-in-regulatory-captivity camp. And we had this amazing history in the United States.

Speaker 2:

In the field of information and communications technology that I've always studied, you had one set of ICT providers, namely broadcasters, radio and television broadcasters, who were very heavily regulated by a centralized agency with a licensing scheme and all sorts of censorship and other BS. But then you had another body of media providers, called newspapers, that had an absolute gold standard of First Amendment protection and were pure capitalists: you know, do whatever you want, kind of thing. What a weird, bizarro world that was we lived in.

Speaker 2:

And so when the internet came along, we had a choice to make as a nation: were we going to put everybody in the innovation cage along with broadcasters, or were we going to put this new technology in with the born-free world of newspapers? And we chose wisely. We chose to put the internet and e-commerce and digital technologies in the born-free camp as opposed to the innovation cage. And the rest is history. And as you pointed out, Mike, look what happened to Europe, which took the exact opposite path. This is a great transatlantic, real-world experiment for political scientists and economists to cover. In fact, when I taught at George Mason over the last 12 years, I'd always ask my students in class: can anyone here name a leading digital technology innovator emanating out of or headquartered in the European Union today?

Speaker 1:

And it's not for lack of talented people.

Speaker 2:

It's not for lack.

Speaker 1:

It is an almost perfect experiment. In fact, in some ways, they have people who may be more qualified, in the sense that their knowledge of technology is tremendous. They're just prevented.

Speaker 2:

Absolutely, and it's not just the people and the labor, who of course left and came to America to invest in technology; it's the investors. Let's go back to your point about liquidity and capital that you made earlier. Right, it wasn't just that the talent left those shores, it was that the money followed them. And so we should learn from that. That's a powerful real-world experiment: the Europeans put it in the innovation cage and they got what they paid for, nothing. Right? Well, are we going to repeat that mistake for AI? I hope that we don't in the United States. But the danger is that more and more people and more institutions, including the Federal Trade Commission, have made this turn, and the Biden administration is encouraging it, saying, hey, go forward, agencies, even the FTC. And this is why I've said that we now run the risk that the FTC becomes the Federal AI Commission, because Lina Khan and company are ready to go with all sorts of preemptive strikes, and a lot of jawboning and threats to boot on top of it.

Speaker 1:

Well, can we close then with one optimistic thing? What do you think is the best thing that's happening now in the innovation space?

Speaker 2:

Well, I always say this and it makes some people angry, but the single most important fact to understand about the world of emerging technology, especially information technologies, is that the pacing problem is a relentless reality. The pacing problem refers to the fact that technological innovation happens very, very fast, sometimes even exponentially in the information space, whereas policy change happens incrementally at best, if at all, and that gap is growing. That's the pacing problem. But one person's pacing problem is another person's pacing benefit. I like the pacing problem when it forces reassessments of old, bad policies that lock in past norms that need to be changed.

Speaker 2:

And this is why in my last book I talked about how, quote unquote, evasive entrepreneurs, a term I got from Pete Becky, are shaking up the world by taking advantage of the pacing problem. They know that if they can put ideas and new products and services out there on the market and let people get a taste of those waters, people like that taste of freedom and choice, and it's hard to pull it back, right? I mean, think about the case study of Uber and Lyft, which you're very familiar with, especially given your previous history at the FTC. Economists and political scientists spent the better part of something like 70 years studying inefficiencies in the taxicab and limo market and getting absolutely nowhere with policy reform. And then Uber and Lyft came along, offered the service as an alternative, and overnight the whole world changed. They just broke it. You know, I hate to say it, because those of us who care about economics and political science hate saying we don't do any good, but the

Speaker 2:

reality is, in the policy world, we don't get a lot of change that way. I've written two different books on just abolishing the FCC, and that agency is bigger than ever. I've stopped writing books on abolishing the FCC. Instead, I just encourage people to go out there and try new and different things. This is how Elon Musk got to where he's at. This is how the whole sharing economy came about. It's when evasive entrepreneurs take advantage of the pacing problem to push new ideas and products out there, and then people fall in love with them, and we move the needle on progress forward.

Speaker 1:

Well, Adam, thanks so much for being part of TAITC. I appreciate it, and let's talk again.

Speaker 2:

Absolutely, Mike. Thanks, it was great to be with you today.

Speaker 1:

Whoa, that sound means it's time for the twedge. I should change it to to-medge, I suppose, but I have four econ jokes, so they're still sort of weekly. Now I know you're thinking, yeah, "weakly" is right; they certainly aren't "strongly." Good one. First: why did the large language model cross the road? To get to the orthogonal, overparameterized training set. That is a knee-slapper. An AI algorithm walks into a bar. It looks around and then tells the bartender, I'll have what most other people are most likely to order. People want to build a border wall between artificial intelligence and human social spaces.

Speaker 1:

A chatbot heard about this and laughed: huh, who do you think's going to write that code? In Silicon Valley there was an exhibition of a new-generation AI computer which integrated GPS mapping and data on identities. A man from Iowa took his 10-year-old kid to the Silicon Valley exhibition. I will hide in the next room, said the man, and you ask the computer where I am. Let's see if this works. So the man hides, and the 10-year-old kid asks the computer, where is my father? The computer immediately replies: well, your father's driving a mail truck in Iowa. But maybe you shouldn't tell that to the half-wit who's hiding in the next room. Let's catch up on letters. HR writes:

Speaker 1:

The money pump, a thought experiment involving exchanging alternatives for money, is the classic argument for why intransitive preferences are bad, even though real preferences are experimentally known sometimes to be intransitive. What do transaction costs mean for the money pump? Well, I had never thought about that, HR, but that's a great question. The money pump is a gedanken experiment about being able to take advantage of a non-convexity in preferences. For discrete choice analysis, this cashes out as transitivity. If I like apples better than broccoli and I like broccoli better than carrots, then it can't be true that I like carrots better than apples. Now, we're used to numbers working this way: seven's bigger than five and five is bigger than one, so seven should be bigger than one, and of course it is. Now, it may help to write these examples down if you're having trouble visualizing the problem. In principle, there is no reason that the following set of preferences is impossible: I like apples better than broccoli, I like broccoli better than carrots, and I like carrots better than apples.

Speaker 1:

Suppose, for simplicity, that I start with carrots and you have broccoli and apples hidden behind the counter at your store. You offer me broccoli in exchange for my carrots plus a nickel. I am happy to make this trade because I like broccoli a lot better than carrots. Now I have broccoli and you have carrots, plus the nickel that I paid you. But remember, you also have apples, and now I'm again willing to exchange. You give me the apples and I give you the broccoli plus a nickel. Remember, I'm happy to do this because I'm better off. You now have broccoli and carrots, plus two nickels. So you offer me carrots in exchange for my apples. I'm happy to exchange again, giving you the apples plus a nickel, and I get, well, carrots. Taking inventory: I have carrots, which I started out with, but I also spent 15 cents on three trades in a circle. You have apples and broccoli, which you started out with, but you made 15 cents on the circle of transactions. This could continue, of course, since I still like broccoli better than carrots, and so on. You make 5 cents on every transaction, I spend 5 cents on every transaction, and by my own lights I'm better off for doing so.
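For listeners who like to see the arithmetic laid out, here is a minimal sketch, in Python, of the circle of trades just described. The nickel-per-trade price comes from the example; the function name and the integer-cents bookkeeping are just illustrative conveniences, not anything from the episode.

# A minimal sketch of the money pump described above; names and numbers are
# illustrative only. Prices are kept in integer cents to avoid rounding noise.
def run_money_pump(cycles, fee_cents=5):
    """Simulate a buyer with cyclic preferences trading around the circle.

    Assumed cycle from the example: broccoli beats carrots, apples beat
    broccoli, carrots beat apples. Each trade costs the buyer fee_cents.
    Returns the total cents the buyer hands over.
    """
    next_good = {"carrots": "broccoli", "broccoli": "apples", "apples": "carrots"}
    holding = "carrots"
    paid_cents = 0
    for _ in range(cycles * 3):          # three trades per full circle
        holding = next_good[holding]     # each step looks like an improvement to the buyer
        paid_cents += fee_cents          # ...and costs a nickel
    assert holding == "carrots"          # back where we started, just poorer
    return paid_cents

print(run_money_pump(cycles=1))    # 15 cents for one lap around the circle
print(run_money_pump(cycles=100))  # 1500 cents: the pump keeps draining money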

Speaker 1:

Now, the listener might object that this makes no sense and could never really happen. Well, right, that's the reason that transitivity is a requirement for rationality. Intransitive preferences mean that someone could literally take all your money, because the money pump drains your savings while, trade by trade, you think you're better off, even though you started with carrots, spent all your money, and ended up with the same carrots you started with. If people did have such preferences, they would be eliminated from the gene pool by evolution, because they would always give away all their wealth for no gain. And it's not altruism: they give away all their money because each trade selfishly makes them better off. It's a contradiction.

Speaker 1:

This money pump example was discovered, or first stated, by the famous philosopher and mathematical economist Frank Ramsey. He was born in 1903 and died in 1930, just before his 27th birthday, having produced one of the greatest sets of achievements in the philosophy of economics and mathematical economics in history. And he died before his 27th birthday. What have you done today, HR? But enough fanboying over Frank Ramsey.

Speaker 1:

HR's question is a good one: what about transaction costs? It might easily cost me something to engage in each transaction: the time spent, the inconvenience of repackaging, travel, et cetera. Suppose it costs six cents in transaction costs for each transaction. Then mild, and perhaps even moderate, intransitivities could easily survive, because they would never actually manifest. I would not be able to make those trades and have all of my money drained away. That suggests the real hidden point of HR's question. Now, I happen to know HR rather well, and he's a clever and devious monkey, so I know what he was going for.
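To make that six-cent point concrete, here is a tiny extension of the sketch above. The size of the buyer's perceived gain from each step is a made-up number for illustration; the episode only says the intransitivity is "mild."

# Hypothetical numbers: the buyer's perceived gain per step, in cents, is invented.
def pump_turns(perceived_gain_cents, price_cents=5, transaction_cost_cents=0):
    """The pump only turns if each step still looks worthwhile to the buyer."""
    return perceived_gain_cents > price_cents + transaction_cost_cents

print(pump_turns(8))                            # True: in a frictionless world the pump runs
print(pump_turns(8, transaction_cost_cents=6))  # False: six cents of friction stops it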

Speaker 1:

Might transaction costs actually protect people from a variety of problems and social ills? You bet. There are obvious examples. Suppose we charge one-tenth of a penny for each phone call. That's not enough to deter real phone callers, but it would sharply cut back the expected value of robocalls. Charge one one-hundredth of a penny for each email or text message: again, not enough for most of us even to notice, but it would help cut back on spam messages and emails. Any setting where one side is hoping to benefit from a large volume of transactions can be affected by transaction costs. We usually think of transaction costs as bad, a friction to be eliminated or at a minimum reduced, but in this case quite the opposite is true.
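The back-of-the-envelope arithmetic behind that per-call fee looks something like the sketch below. The revenue per robocall and the value per call to a normal caller are invented for illustration; only the tenth-of-a-penny fee comes from the example above.

# Invented numbers, except the 0.1-cent fee: a tiny per-call charge is invisible
# to a normal caller but flips the sign on a high-volume robocall campaign.
def net_value_cents(calls, value_per_call_cents, fee_cents):
    """Expected net value of a calling campaign under a per-call fee."""
    return calls * (value_per_call_cents - fee_cents)

for who, calls, value, fee in [
    ("normal caller, 10 calls",   10,        50.0, 0.1),
    ("robocaller, no fee",        1_000_000, 0.05, 0.0),
    ("robocaller, 0.1-cent fee",  1_000_000, 0.05, 0.1),
]:
    print(f"{who}: net {net_value_cents(calls, value, fee):+,.0f} cents")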

Speaker 1:

Well, there was another letter: I was wondering if you could do an episode where you talk about your favorite books that have influenced you on some topic you've talked about on the show so far, especially if there are any that are more obscure that some of us might not know about. Thanks so much, and I look forward to more episodes. DJ. Well, DJ, that's a good suggestion, but I think I'll amend it a little. Rather than an entire episode, I'm going to make it a new feature of every show. I'm going to call it the book of the month.

Speaker 1:

For this first segment of the new feature, let me recommend Brian Doherty's terrific book Radicals for Capitalism. I've read it three times and it never gets old. It's a history of the liberty philosophy, and it both talks about some undeservedly obscure thinkers from the past and gives some interesting background on people you may have heard of but don't know that much about. It's a huge book, but it's episodic, and you can read it in pieces. Most importantly, it describes how transaction costs, regulatory, intentional or otherwise, can restrict the number of transactions, and it's the many decentralized, small but valuable transactions that make a market economy work. Finally, there's the letter that I read from last month that I'd like to go back to.

Speaker 1:

I had an interesting experience the other day at work on a space flight project. The team I'm on has been developing procedures for many years, but, as flight approaches, we're only beginning to exercise some of these in tests and simulations. The most recent simulation revealed a process change that a few of us scientists and engineers thought would instantly save us at least one person-year of future effort during mission operations and also lead to clearly better results. But management rather quickly shot us down, because we're too close to flight, in what NASA calls Phase D of the project life cycle. Is this a failure to ignore sunk costs, my initial thought, or is it rational because of transaction costs? Since the answer is always transaction costs, I think the transaction cost case is interesting. In Phase D, NASA purportedly creates substantial transaction costs, through process and paperwork, for any changes. The reason they do this is that experience has taught space systems engineers that any kind of late change is, in all caps, A BAD THING that can cause hard-to-catch bugs. A change, even to a likely better option, will likely create unknown unknowns; those are the hidden transaction costs, and in space they could be sufficient to kill a mission, literally. So if you stick with the process you know, you at least have been thinking about it for a few years and you've developed knowledge of its deficiencies. Signed, C. Well, C, that really is a great example, and it's in keeping with HR's question.

Speaker 1:

Under some circumstances, the imposition of transaction costs to make it harder to change something can actually work to our benefit. It might be tempting, you know, a month out from the launch, to say, oh, we found a better way to do this, but you haven't thought through all of the problems with it yet. And so, as you say, having a rule which seems irrational, saying no, we're going to stick with the probably inferior procedure, means that you're ruling out a new procedure that might have unforeseen disastrous consequences, whereas the one that you already have is probably just inconvenient. So imposing transaction costs that make it very difficult to change at the last minute makes sense. But notice that that's not the same as ruling out change at the last minute. If you have a really good idea, it may be worth changing even at the last minute. Raising the transaction cost just increases the friction of change; it doesn't make it impossible.

Speaker 1:

The next episode will be released on Tuesday, December 19th, the week before the Christmas holiday. We'll talk about distributed electrical consumption and generation, and have another book of the month. Plus we'll have four more hilarious twedges and more, next month on TAITC.

Cultural Attitudes and Permissionless Innovation
The Power of Permissionless Innovation
Permissionless Innovation and Intellectual Property
International AI Control Concerns and Limitations
Inspections and Innovation
The Pacing Problem and Evasive Entrepreneurs
Transaction Costs' Impact on Change