Taboo Trades

Truth Bounties with Mike Gilbert and Yonathan Arbel

July 10, 2023 Kim Krawiec Season 3 Episode 15

My guests today are my UVA Law colleague, Mike Gilbert, and University of Alabama Professor, Yonathan Arbel. We’re discussing their paper, Truth Bounties: A Market Solution to Fake News, forthcoming in the North Carolina Law Review.


Mike Gilbert is the vice dean and a Professor of Law at the University of Virginia School of Law. He teaches courses on election law, legislation, and law and economics, and his current research focuses on misinformation, corruption, and the role of “prosocial” preferences such as empathy in law. 


Yonathan Arbel is an Associate Professor of Law at the University of Alabama School of Law. His work focuses on commercial, consumer, and private law; and his methodology combines doctrinal, economic, and socio-legal analysis. 


Further Reading:

Truth Bounties: A Market Solution to Fake News

Slicing Defamation by Contract by Yonathan Arbel

How Do You Stop Fake News? Guarantee the Truth

Carlill v Carbolic Smoke Ball Co

Michael Gilbert Bio, UVA Law

Yonathan Arbel Bio, University of Alabama Law

Transcript

[00:00] Mike Gilbert: We have sometimes been asked why major media players like The New York Times, for example, or Fox News would be interested in our system. And the basic logic has run like this: your system creates credibility, but these actors already have, or have enough, credibility. They don't need your system. And I think Yonathan and I are in complete agreement that this argument does not make sense to us. The New York Times is a for-profit entity, but I can't imagine that they wouldn't be interested in cracking into those millions and millions of Fox News viewers who believe everything in The New York Times is false and they'll never buy a subscription and so on.

[00:39] Kim Krawiec: Hey. Hey, everybody. Welcome to the Taboo Trades podcast, a show about stuff we aren't supposed to sell but do anyway. I'm your host, Kim Krawiec.

[00:55] Kim Krawiec: My guests today are my UVA Law colleague Mike Gilbert and University of Alabama professor Yonathan Arbel. We're discussing their paper, Truth Bounties: A Market Solution to Fake News, forthcoming in the North Carolina Law Review. Mike Gilbert is the vice dean and a professor of law at the University of Virginia School of Law. He teaches courses on election law, legislation, and law and economics, and his current research focuses on misinformation, corruption, and the role of prosocial preferences such as empathy in law. Yonathan Arbel is an associate professor of law at the University of Alabama School of Law. His work focuses on commercial, consumer, and private law, and his methodology combines doctrinal, economic, and socio-legal analysis.

[01:49] Mike Gilbert: Hi.

[01:50] Kim Krawiec: Hi. How are you today?

[01:52] Mike Gilbert: Good. How are you doing?

[01:53] Kim Krawiec: I'm not bad. Well, thanks to both of you for being on the podcast. I'm so excited to talk to you about this. The article that we're talking about today is Truth Bounties: A Market Solution to Fake News. As you already know, we love market solutions on this podcast. I mean, right out of the gate, you're ahead of competitors in terms of proposals. Tell me, first of all, this hasn't been published, is that right? Is it a draft, or has it been published, or has it been accepted for publication?

[02:26] Mike Gilbert: The paper is forthcoming in the North Carolina Law Review.

[02:29] Kim Krawiec: So tell me a little bit about the article. First of all, I'm interested in how the idea came about and how your collaboration on it came about, and then we'll talk a little bit more about the specifics of the problem you're attempting to solve and how you're seeking to solve it.

[02:45] Mike Gilbert: Sure. Well, okay. So I'm happy to start. Thank you so much for having us on. Yonathan and I, let's see, we met at ALEA, the American Law and Economics Association meeting, for the first time maybe five or six years ago, but we don't otherwise routinely run in the same academic circles. But our collaboration started when I discovered a blog post that he had written, and the blog post was, I believe, titled Defamation by Contract. And in that blog post, he explored in brief form some of the ideas that are at the core of our paper now. And I reached out to him after reading it because, although I hadn't written anything down, I had been thinking about exactly the same ideas, coincidentally. So I reached out to him and we've been thinking about this ever since.

[03:39] Yonathan Arbel: It's always nice when someone reads something you write someplace. And so Mike reached out to me and said, I read it. And I was like, okay, somebody actually read that? That's very nice. He said, I want to talk about it. And then we started talking, and we realized that we have thoughts that align much more broadly on how it might fit in. And so I study contracts, I study defamation law, but I don't know a lot about election law and sort of the big picture stuff over there. And so his ideas, I felt, were able to bring much more depth to this project. And we thought, well, maybe we can write something together that might actually be put into practice, maybe even policy-wise. Maybe it's something that we can actually accomplish and do.

[04:31] Kim Krawiec: I had actually assumed that those were the two sets of expertise that you guys brought to the table. And we'll talk about it later in the interview today, but as you might expect, Yonathan, I really liked the section on puffery a lot and want to talk to you about that. Tell me a little bit, then, about your proposal itself and what the problem is that you're seeking to solve, and then we'll talk about some specifics of how your proposal will solve it.

[04:58] Mike Gilbert: Sure. So I'll give an overview of the system we envision, and then Yonathan can add in corrections or edits or improvements. So the basic problem we're trying to address is one that lots of people are thinking about, and it's the problem of misinformation and disinformation, particularly on the Internet. And the way we think of the problem, in economic language, is that it grows from a kind of information asymmetry. Usually, not always, but usually, it's the case that the speakers know whether the things they're speaking about, the messages they're communicating, are true or false, or whether there's a good reason to think they're true or supported by evidence. But the listeners or consumers of this information can't tell. They can't distinguish what's true from what's false. Some people can, some people are better than others, but many people can't. And this leads to really problematic outcomes, as when people believe things they shouldn't and take actions as a consequence that they regret or shouldn't take, or whatever else. So solving or addressing this problem is very difficult for lots of reasons that are probably familiar to any listeners out there, one of which has to do with the First Amendment. It's quite difficult to regulate speech directly, especially in the political arena and so on. So we've devised a solution that we think is pretty clean and pretty elegant and, importantly, is voluntary. And the system relies on what we call truth bounties. And I think it's easiest to explain by way of an example. So suppose that you're a journalist and you've done good work and you're going to publish this good work and you want people to believe it. When people encounter this story or article or whatever, you want people to understand that it's true, that there are good reasons to believe it.
So under the system we envision, at the same time that you publish your article online, you would deposit some amount of money that we call a truth bounty, essentially in a kind of third party escrow. And when your story appears online, it would have an icon or a badge next to it, telling everyone in the world who sees it that you're so confident about the veracity of your claims that you've put money on the line; you have skin in the game. And that's a very powerful and important signal, we think, for helping people distinguish what's true from what's false. Now, if anyone in the world, in the ideal operation, anywhere in the world, which avoids a bunch of jurisdictional challenges and hurdles and things that maybe we'll get to a little later, sees that story and thinks that it's inaccurate or false, they can, just by clicking on that icon, file a challenge to your story. And in brief, the challenge goes to private arbitrators. We stay out of the courts; we avoid First Amendment constraints and other problems with the formal court system as well, one of which is long delays. And if the arbitrators decide that the challenger is right, the story is inaccurate or it's misleading, the challenger gets the truth bounty. And if it comes out the other way, no, the story is accurate, then the money remains in the third party escrow for the author of the story. So a couple of points to make about that then. First of all, the prospect of claiming the bounties incentivizes people to seek out and challenge falsehoods. And we think that's good. A second point is that people who are knowingly telling lies presumably don't want to lose their money. In general, anyway. We can discuss some specific cases later maybe, but in general people don't want to lose money. So liars will tend not to use the system. And of course that's valuable: when you don't see a badge or an icon next to a story, that means someone has opted not to use the system.
That tells you something about whether you should trust it. And then the last thing I'll mention is that that icon would update dynamically. There are lots of ways to operationalize this, but one way: the icon could be green when the story is first posted and there's a bounty. And when there's a challenge pending, it could turn yellow, so anyone knows, well, there's a bounty here but there's some question. And if the challenge succeeds, the icon might turn red, so now you know this was challenged. And if the challenge fails, the icon is green but, I don't know, it has an extra check mark or something next to it. So all along the way the system is trying to provide ongoing, updated, useful information to information consumers.
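The lifecycle Mike describes, from posting a bounty through challenge and resolution, amounts to a small state machine. As a rough sketch only (the class, state names, and escrow handling below are invented for illustration and are not part of the authors' actual design), it might look like this:

```python
from enum import Enum

class Status(Enum):
    BOUNTIED = "green"        # bounty posted, no live challenge
    CHALLENGED = "yellow"     # a challenge is pending arbitration
    REFUTED = "red"           # arbitrators sided with the challenger
    VERIFIED = "green+check"  # a challenge was brought and failed

class TruthBounty:
    def __init__(self, amount: int):
        self.amount = amount           # funds held in third party escrow
        self.status = Status.BOUNTIED

    def file_challenge(self) -> None:
        # No live bounty to challenge once it has been refuted,
        # and only one challenge may be pending at a time.
        if self.status in (Status.CHALLENGED, Status.REFUTED):
            raise ValueError("no live bounty to challenge")
        self.status = Status.CHALLENGED

    def resolve(self, challenger_wins: bool) -> int:
        """Return the payout to the challenger (0 if the story holds up)."""
        if self.status is not Status.CHALLENGED:
            raise ValueError("no pending challenge")
        if challenger_wins:
            self.status = Status.REFUTED
            payout, self.amount = self.amount, 0
            return payout
        # Failed challenge: the bounty stays in escrow and the icon
        # upgrades, signaling extra credibility.
        self.status = Status.VERIFIED
        return 0
```

Note that a fresh challenge is still allowed after a failed one, which matches the sequential-challenge point discussed below.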

[09:11] Kim Krawiec: One point of clarification, Mike. I assume that there's only a single challenger, that it's sort of whoever files first and then you're done. It's either verified or not verified and that's the end of it. I mean, that was my assumption, but I just wanted to be sure I was right about that assumption.

[09:27] Mike Gilbert: So it's a good question. And we've thought a lot about this actually, and maybe Yonathan will have some things to say here too. So we don't think it can be limited to just one challenger, at least if those challenges fail. And the reason for that is it would be possible, not necessarily easy, but possible, to game the system. So I could post a story with a bounty and my friend could immediately challenge it. Their challenge is frivolous.

[09:54] Kim Krawiec: How clever. I would not have thought of that. Yeah.

[09:58] Mike Gilbert: So it seems to me we have to do our best to avoid that problem. And one way to avoid it is to allow ongoing, continuous, albeit sequential challenges. So okay, you bring your friendly and frivolous challenge and I win. And so I keep the money and the icon in fact updates. That gives reason, extra reason to think you should trust this story. But then someone else can file a challenge tomorrow and I suppose if you had enough friends lined up you could keep up this ruse, but that would be increasingly difficult to do.

[10:31] Kim Krawiec: Do you worry then about the costs on speakers of having to defend against these potentially then multiple challenges? Or is there some sort of system in place to sort of harness economies of scale so that they can build on the prior defenses, for lack of a better term?

[10:50] Mike Gilbert: Yonathan, do you want to take a stab at that one?

[10:52] Yonathan Arbel: Yes, we've thought a lot about this, and I think there are a couple of ways to make sure that we are doing it at scale, and we can do it very well. And I think it's worth remembering also that the legal system, the judicial system, the courts, they don't scale very well. That's a problem we've seen in various parts of the law, where we use adjudication for things that don't scale well. In our case, we've thought about having an initial filtering system built in, such that even before a claim reaches full arbitration, there will be a sort of plausibility stage, just facial plausibility, where if somebody is just making the moon-is-made-out-of-cheese claim, then the system will filter it. And there is a little bit of money that has to be paid, just to make sure that the filtering is effective. So that, I think, contributes to the realism of this system. And that connects to a more general reaction, I think, that a lot of people have, because when they first hear about it, they think, is it realistic? Why would anyone ever put down money and say any person in the world can claim it if what I'm saying is not true? And so when thinking about how realistic this thing is, which we think it is, one point to keep in mind is that we're not the first ones to think about the need to create credibility for speech. You mentioned puffery earlier, and we'll talk about cases where people said, I want people to believe me, and they put down money. They haven't done it effectively, but they tried to; they figured that putting down money, putting down skin in the game, will make people believe them more. And so we think about it as: truth bounties are to the truth what warranties are to people who try to sell refrigerators. They want to say, I'm going to give a warranty and I'm going to pay out money, real money, if my fridge is not working well. And they do it not because they want to pay anyone money for repairing fridges, but because they believe that more people will buy.
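The two-stage screen Yonathan sketches, a small filing fee plus a facial-plausibility check before full arbitration, can be summarized in a few lines. The fee amount, return strings, and screening predicate here are placeholders of ours, not figures from the paper:

```python
FILING_FEE = 25  # illustrative amount; a small fee deters frivolous filings

def route_challenge(challenge: str, fee_paid: int, is_facially_plausible) -> str:
    """Screen a challenge before it reaches arbitrators.

    Stage 1: the challenger must pay a small filing fee.
    Stage 2: the challenge must pass a facial-plausibility check
             (is_facially_plausible is any callable str -> bool).
    Only challenges that clear both stages go to full arbitration.
    """
    if fee_paid < FILING_FEE:
        return "rejected: filing fee not paid"
    if not is_facially_plausible(challenge):
        return "rejected: fails facial-plausibility screen"
    return "forwarded to arbitration"
```

So a moon-is-made-of-cheese filing is turned away cheaply, while a serious, paid-up challenge proceeds to the arbitrators.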

[13:06] Kim Krawiec: Mention of warranties brings me to another question I had, which was you use the example of manufacturer warranties in the paper a lot, understandably, but one of the thoughts that occurred to me is I care less about manufacturer warranties if my credit card is providing one, as most of them do these days. And it just made me wonder whether you envision a role for third party sort of staking or funding of bounties. You talk about it on the consumer side in the paper, but there's less discussion of the speaker side. And I'm just sort of envisioning Peter Thiel or a comparable person from the left staking smaller speakers with bigger bounties than they could afford to put up on their own. Do you guys envision that? And if you do, do you think it's a good thing or a bad thing? I can imagine arguments on both sides of that.

[14:04] Mike Gilbert: Well, maybe I can take a first pass at this. And let me say first that, in kind of simple terms, we might think of potential users of the system as being people or entities with means, and then there's another category of people or entities without means. So the people with means don't need, they might still want, in the right circumstance, but they don't need, third party stakers to put up the bounty and so on. They can do it themselves. And there are lots of potential mechanisms by which they could do it. So when I introduced the system, I did it intentionally in the kind of simplest way: you just deposit some sum of money in a third party escrow and it sits there. But is the New York Times going to make a deposit every time they place a bounty on a story? I think the answer is no. They're going to have an underwriter, and they're paying a premium to the underwriter, and the underwriter is contractually obligated to pay the bounty. For the users of means, there are lots of potential ways to do this. And I just want to say one other thing about that, which applies actually to both categories, including the users without so many means. If the information that you're spreading and producing is honest, you won't lose your bounty. The system could make some errors now and then, and we can discuss that, but in general you will not lose your bounty. And so it isn't so much a big cost to you as it is a temporary loan that you make. Now, that's not costless. But just to be clear, if you're doing this right, you're not routinely shelling out lots and lots of money. It could be relatively inexpensive to use the system. So then maybe I'll just say something briefly about this other category, the users without means. They're the ones who, it seems to me, might rely on stakers. We don't envision, and we don't even know how one could enforce, a kind of prohibition against third party staking. So it could happen.
And as you suggested, Kim, I guess we see upsides and downsides to that. So the upside is, it's a bit like the example I started with. I'm an honest person who's done good journalistic work, but I don't have many resources available to me. How can I nevertheless afford to attach a bounty to this? Well, maybe some third party will help me out. Maybe the New York Times will pick up my story and I come under the umbrella of their insurance coverage, that is, effectively, a bounty posted on every story there. And third parties who have the resources could be helpful for spreading truthful information and using the bounty system. The downside, I suppose, is that you could have wealthy third parties whose motives are nefarious, and they're going to stake stories that they know or believe are untrue, but they're going to do it anyway because they have an interest in circulating this false or misleading message. And that could happen. The system can't entirely prevent that. But there are some disincentives. So the first and obvious one is that the third party staker would be losing their money. Someone is incentivized to challenge the false stories, and someone is claiming the bounty every time. The other thing I'll say is that this may be impossible to track perfectly, but the system would at least try to track reputations, so that if I'm consuming information and I see the truth bounty icon next to a story, I can, I don't know, hover over the icon or something, and I can not only see, for example, how much money is at stake here, whether there have been any challenges to it and so on, but I would also be able to see: oh, the author of this story has posted 116 bounties, and only eleven of the bountied stories have been challenged, and none of those challenges have succeeded. And that tells me a lot right there about whether to believe this. And of course it could go the other way too.
Oh, this person has put up 40 bounties, and 36 have been challenged, and 35 challenges have been successful. So we can't entirely root out the problem you're getting at, but there are some mechanisms here to mitigate it, I think.
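The hover-over track record Mike describes is just bookkeeping over past bounties. A minimal sketch, with a display format we made up for illustration:

```python
def track_record(bounties: int, challenged: int, upheld: int) -> str:
    """Summarize an author's bounty history for display next to the icon.

    `bounties`   - total bounties the author has posted
    `challenged` - how many of those drew a challenge
    `upheld`     - challenges the challenger won (story found inaccurate)
    """
    if challenged == 0:
        return f"{bounties} bounties, never challenged"
    rate = upheld / challenged
    return (f"{bounties} bounties, {challenged} challenged, "
            f"{upheld} refuted ({rate:.0%} of challenges succeeded)")
```

Mike's two hypothetical authors then read very differently: `track_record(116, 11, 0)` signals a trustworthy source, while `track_record(40, 36, 35)` signals the opposite.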

[18:00] Kim Krawiec: To be honest, I don't necessarily view it as a problem, although I can imagine that there are some downsides that you mentioned. But in some ways it seems to me consistent with your theme in the paper of using the bounty to signal the strength of one's belief, right? And so a person with lesser means might be able to offer a bounty, but a lower amount, and it actually is not a function of their belief in the view being expressed, but a function of their constrained liquidity or something. Right? But I can then imagine a wealthier staker saying, no, we are serious about this, this is good information, and the bounty will be 100,000 or whatever it is. And so, while I can imagine the downsides as well, I actually can see a number of positives from these third party stakers.

[18:56] Yonathan Arbel: Yeah, just to add to that, I think one of the nice things about the system is that suppose this extremely wealthy deep pocket organization, maybe an individual wants to sponsor falsities at scale. Well, the falsities have a very short shelf life. It's like milk.

[19:16] Mike Gilbert: Basically.

[19:17] Yonathan Arbel: Somebody will come in within three days, seven days, two weeks and claim the bounty. And that person, by the way, might well be a think tank or an NGO or an organization that is sort of going after the truth and use the bounties as a way to fund themselves. So there is sort of an interesting dynamic that might arise out of it that will mitigate this specific concern.

[19:43] Kim Krawiec: I want to talk some more about the specifics of the proposal, but one thing I want to ask you about, sort of early in this discussion, is what you've pledged, because if I understand it right, you guys have actually put your money, or your effort, where your mouth is, right? And you actually started a nonprofit to try to implement this. It's one of the first things that appealed to me about this. I was like, wow, they are really committed to this. Can you tell us a little bit more about that, and sort of what you're doing and what stage you're at right now?

[20:16] Mike Gilbert: Yonathan, do you want to take this one?

[20:17] Yonathan Arbel: So we would like to see it actually being implemented. And the way we think about it, I think there is a very important role to be played by a nonprofit organization that creates the standards, the rules of the game, an organization that's ideologically independent and not necessarily profit motivated, just to create the framework. And then there are ways in which markets can interact with that. And so we have started conversations with people with expertise who might have more money than we do to help us get started. It's initial stages in terms of searching for grants and funders; we're on the market, we're looking for more. We've also started a lot of work on thinking very practically about the rules of the game. What would they look like? There is the big picture, for which we wrote the academic paper, but there are a lot of small things about timelines and protocols, and even user interface, and what sort of logo we want to use, and what databases we want to use to present and represent information as it changes, when you have a dispute, when you have a refuted claim. All of these questions are very practical and deserve attention, and we're actually working on writing the protocols.

[21:48] Kim Krawiec: I'm interested in the extent to which the act of trying to implement your proposal in a practical way influenced your scholarly thinking. In my experience, you'd never think of everything in the course of just writing a paper. No matter how hard you try and how many people you have working on the problem, there are always new things that you didn't anticipate, and then sometimes there are things that actually work better for reasons you didn't foresee, right? You're like, oh, we actually intended this to solve this, but it solves this other thing as well. Did that happen with you guys? And if you have any examples that are worth sharing that, I'd be interested in them.

[22:26] Yonathan Arbel: I think one of the things we didn't appreciate fully was: what is the standard that we're going to use to determine whether a story is true or not? We had a fairly abstract sense that, well, if somebody says something that's not true, then you can claim the bounty. Well, you run a story and you mention 24 facts. You quote someone; you have a typo in there. Is that enough? So we started thinking about that: what exactly is the standard? Another very practical question that interacted interestingly with academic work was fee shifting. Who has to pay for the cost of arbitration? Do we want it to be the claimant? Do we want it to be the other party? Who should pay for that? The loser? The winner? And here it was very interesting, because to answer this very practical question of who has to pay, we looked at the academic literature, the economic literature, on fee shifting. And one of the things this literature teaches us is that if you have a loser-pays system, then what you're going to have is fewer lawsuits that are marginal, but more lawsuits that are very strong. And we think, in terms of the system, it's actually good that the claims that go through will be the powerful ones, and not to have a lot of iffy determinations that people might not believe in, because trust in the system is also something that we want to worry about. So there was this very interesting interaction between very practical considerations and academic work.
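The fee-shifting intuition Yonathan cites can be checked with a one-line expected-value model. All the numbers here (win probabilities, bounty size, fee figures) are invented for illustration; the point is only that, with equal fees on both sides, loser-pays lowers a challenger's expected gain whenever the win probability is below one half and raises it above one half:

```python
def expected_gain(p_win: float, bounty: float, own_fee: float,
                  other_fee: float, loser_pays: bool) -> float:
    """Challenger's expected net gain from filing a challenge.

    Under loser-pays, the loser covers both sides' arbitration fees;
    otherwise each side always bears its own fee.
    """
    if loser_pays:
        # Win: collect the bounty, pay no fees. Lose: pay both fees.
        return p_win * bounty - (1 - p_win) * (own_fee + other_fee)
    return p_win * bounty - own_fee
```

For example, with a 1,000 bounty and 250 in arbitration fees per side, a marginal claim with a 30% win probability is barely worth filing when each side pays its own fee, but has negative expected value under loser-pays; a strong claim at 80% becomes even more attractive under loser-pays. That is exactly the selection effect described above: marginal filings drop out, strong ones come through.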

[24:07] Mike Gilbert: This is partially responsive to your question and partially just a more general observation. Again, we're often thinking, as we work on this project, in kind of rough and ready categories. So there's speech that's true, there's speech that's false, and then there's some stuff in between. And the purer you are as an academic, I think, the easier it is to ignore the stuff in between. The closer you are to the ground in law, and defamation law in particular, and just thinking First Amendment, the more that middle category sort of looms larger. Okay, there is stuff where it's just really hard to know if this is true or not true, or we could suss it out if we had access to a bunch of information to which we don't have access. Right. I think we would readily admit that for speech in this middle category, where it's sort of on the borderline, our system can't get it right every time, just as the formal adjudication system in defamation cases can't get it right every time. But we think it might be able to get it right more often than not. And at the same time we think it can nearly always get it right in the cases where the speech is just false. With a kind of pure academic hat on, that's maybe not fully satisfactory, but the closer we get to on-the-ground operationalization of the thing, that actually sounds pretty good. It seems to us that if we can do something about the most egregious false speech in a way that is voluntary and effective and avoids all these First Amendment constraints and jurisdictional problems, and if we can get at least some of the kind of marginal speech that we're less sure about, that's successful.

[25:48] Kim Krawiec: That is a really interesting observation, and it leads into a question that I had for you and that you have now partially answered, but maybe we can discuss it a little bit more, which goes to the issues of errors and predictability. And I think we all know the disadvantages of, say, the tort system and defamation suits in terms of predictability of liability. My question was, how much better is your system? It certainly is much, much better in predicting the amount of liability, because you've staked a bounty. Right. But do you envision it being better than the existing system in terms of error rates and predictability about truth or falsity? Or perhaps the better words are accuracy or inaccuracy, or misleading or not misleading, or something like that. I think that's what you use in the paper. And your answer, Mike, in part goes to this already, which is, when it's really just made up, we'll be good at that. But I am interested in some of the in-between cases, because of course I am an academic and so that's what occurs to me. Because the Alito stuff was just in the news the past couple of days, it made me wonder what you thought about that with your bounty system, and whether that is the type of case that it would be successful with or would struggle with. The reason I bring it up is because all three of us have friends that would probably be considered experts in this, and none of them agree, in my discussions with them. Granted, it's not as if any other system is going to be better than yours, to be clear. Right. I'm not suggesting that there's a better alternative, but I am interested in how you think your system would handle that: well, or not well, or not at all.

[27:32] Mike Gilbert: I have some scattered thoughts, though they're not directly on point. Well, I'll just say a few things, you can edit, and Yonathan can come in, if that's okay. Yeah, I guess one thought I have is about the age old distinction between assertions of fact and statements of opinion. So the system is designed to root out inaccurate information, meaning statements that purport to be factual but in fact are not. The system is not designed to penalize people, take away people's truth bounties or whatever else, for statements of opinion. So then this raises the question of whether we can distinguish statements of fact from opinion, because sometimes the line can be fine. And I guess this is a potential connection to Alito. It seems to me some of what he is saying in his defense of himself is kind of borderline opinion, like, well, no reasonable person would think that this was improper. Okay, well, that's sort of a statement about how people actually think and sort of an opinion about how people ought to think. Right. I don't know if Yonathan would agree with this, but it seems to me that here the risk of error in the system actually works in its favor. I don't mean there's no downside to making errors in adjudication or arbitration; of course there are. But it works in our favor, in the system's favor, in the following way. If you're right on the border, such that what you intend to be a statement of opinion could be construed as a statement of fact, and you could then lose your bounty as a consequence, then it seems to me the right response to that is to rewrite your statement.

[29:17] Kim Krawiec: I had not focused on the potential incentives to reframe the way in which you word things that could be independently helpful. That's a great point.

[29:27] Mike Gilbert: Right. And just to emphasize it, we think that really is independently helpful. You can express whatever opinions you want, however crazy they may be, or some people may think they are, as long as they're clearly opinions, such that readers are not misled into thinking that's a fact. Maybe this is, I don't know, too subtle or something for some people, but in my mind there's a really important difference between saying the election was stolen by fraudulent votes and I believe the election was stolen by fraudulent votes, or it is my opinion that so and so should still be president. I think those are really important nuances, and the system incentivizes users to get those nuances right.

[30:12] Yonathan Arbel: I like that. Just to take a cheap shot at the competition, because this is about markets, and the competition here being content moderation, here is what I have in mind. When we wrote the paper, people had a rosier view of content moderation done by Twitter, Facebook, and other social media organizations. And I think, given cases like these, and given what we have learned between when we started writing and today, public opinion has shifted somewhat on the viability of content moderation by private firms. With content moderation there will be a very strong incentive to silence people on one side of the aisle. And we don't want that. I think it's not good to give the public access to only one side of the debate. And so content moderation, I think, would falter here, especially if done behind closed doors. An adversarial system, where people present their claims and evidence, where everyone is welcome to participate, and at the end there might be someone who is a winner and someone who is a loser, I think has institutional and public value that we might lose with secret content moderation.

[31:32] Kim Krawiec: Great. Yeah. And this reminds me, I just want to say, for listeners who might not have read the paper yet, you can download it on SSRN; I'm going to put a link in the show notes. But you guys do go through a lot of alternatives that have been proposed by others, because many people have thought about this problem. And you go through them, I think, very carefully and point out what you think are some of the positives and some of the problems with them, and why you think that your proposal will work better. And I don't think we're going to have time to talk about all of those today. But I just want to highlight for listeners that you do that in the paper, and it's worth reading the entire paper to also get your discussion of these other alternatives that have been put out there.

[32:12] Mike Gilbert: If I can just chip in one thing here too. So I think the term content moderation is sometimes used to refer to at least slightly different circumstances. Insofar as content moderation is used to refer to, for example, platforms removing child pornography from their sites, we have no objection whatsoever to that. We have no objection whatsoever to content moderation that is designed to prevent violations of law. When we think of a comparison between our system and content moderation, we just mean insofar as the two different approaches can combat the problem of misinformation, that's it. And there we think the choice is between content moderation, where some people you don't necessarily know, through a process that's not transparent, are just deciding you won't see this information, versus our system, where everybody can see everything. We're not stopping anyone from circulating any information; we're just creating a labeling system that helps people distinguish what they can trust from what they can't.

[33:10] Kim Krawiec: Well, this brings me to another question I guess I had about the paper that I wanted to get your reaction to, which is that one of the problems with the type of content moderation you're talking about, right, not to remove child porn and stuff, but to remove disinformation, is that there's a lot of distrust in it, because it is perceived to be biased. Whether that's true or not, I actually have no idea. But in any event, that's what people think. And so one of the questions I had for you is the extent to which your proposal will fare better on the trust metric. On the one hand, I can imagine it faring better than that type of content moderation for precisely the reasons that you talk about in the paper. At the same time, I kept hearing in the back of my head Donald Trump saying, yeah, we don't use their stupid truth bounty system because they hate us, they don't like us, they're never going to rule in our favor, they're only going to rule against us, we don't use that stuff. And you have mechanisms built in to avoid bias, such as the method for selecting arbitrators and that type of thing. And yet, for the type of person susceptible to that type of talk, I just don't think they're going to appreciate that. And so maybe the answer is just that some people can't be helped and they're not for us. But I did want to get your thoughts on the trust problem and the extent to which you think that you are helping to alleviate it, and whether there are just some speakers and listeners that are sort of beyond the pale.

[34:42] Yonathan Arbel: Yeah, I'll just start, because it's a difficult question. I tend to think that people as a whole are responsive, or truth-seeking, although they might be misguided in many different ways. We all have our own limitations; we all live within our ideological bubble. So we might not all be persuaded. A more Bayesian approach would be to say some of us have very strong priors, too strong to be dislodged by any evidence. And that's a nice way of saying, well, it just takes a lot to persuade some people. We actually ran an experiment. This is not in this paper, but in a different paper. We present people with a news story and we tell them there are legal sanctions if what is being stated is false. And we ask them, well, how much do you believe the story? And it turns out that people do adjust: when there are sanctions, they believe more of what they read. There is a credibility effect. So people generally understand this dynamic. They understand it very intuitively, because of very common sayings like "put your money where your mouth is." So people do react to and do believe systems like that. Now, you might say, well, what about the speakers who might want to disavow the system? Because when they're made to pay, nobody wants to say, well, okay, what I said was false. And here's one advantage this system might have over more conventional litigation. You don't get to choose whether you are bound by defamation law or not; it applies universally. Our system is voluntary, so people opt in to the system. And if you think the system is completely bogus and run by a cabal of elites, you can choose not to use it, if your audience doesn't believe in the system. So that's one mechanism. 
There is another, smaller mechanism that we adopted here, which we think lends a lot of legitimacy to the system. The arbitrators are not people Mike and I choose. Every claimant, every side to the dispute, decides which arbitrator they want to use. We have a panel of people who are credible, established, who have experience, but every person can choose their arbitrator. And the two arbitrators together choose a third one, who will be the tiebreaker. This sort of process, we know it from legal proceedings, we know it from legal jurisprudence: when the process is fair, the outcomes are perceived as more legitimate. And that's important. That's something that's built in to our process. Mike, do you have anything to add?

[37:36] Mike Gilbert: Well, I'll just add, and this might be a little redundant, but I do think there are some people out there, maybe not many, who really don't care about truth. And for those people, there's just nothing we can do. But again, that's okay. From a pragmatic perspective, if we can help some people make better decisions and do a better job of filtering what's true from what's false, then that seems like a project worth exploring, even if we can't help everyone. Just one other point. For the system to work, we need, among other ingredients, the arbitrators to get it right in the main. They don't have to get it right in every single case; it would be nice if they did. But just like ordinary legal systems, they need to at least get it right in the main, and they need to be perceived to be getting it right in the main. And this is the trust issue. And you asked earlier, Kim, about the intersection of academic thinking and practical thinking on the project. And this is another area where that has really come together. It's one thing to talk about the importance of trust and legitimacy in the way that academics often do. It's quite another thing to think, okay, how do you build a system from the ground up that people will trust, and that trust will grow over time? And this is something we've done a lot of thinking about. And as Jonathan said, we think the selection of arbitrators matters, especially early on, when the system is just getting going. If you choose your own arbitrators and you lose, it's a lot harder to complain. Right? And we have some other ideas, too, but you put your finger on a real challenge. The system won't work if people don't trust it.

[39:16] Kim Krawiec: One of the things I do like about your paper is its optimism. You discuss why you think optimism about human nature and our ability to govern ourselves is important, and it is refreshing. I am less optimistic than you, quite frankly, just sort of by constitution. Right? But it is refreshing to assume, with some evidence, right, I'm not saying you're just being optimistic with no basis, that many people do in fact want to know the truth and just find it hard to distinguish. Right. And it's very easy to focus on the people who are not interested in the truth, as you say, and to let them carry the conversation, when they are perhaps outliers and not what we should be focusing our attention on anyway. I really appreciate that part of the paper. I was going to ask about the puffery cases, but is there something different you guys would like to focus on instead?

[40:10] Mike Gilbert: I'm happy to talk about the puffery, though, Jonathan, this is really your expertise, not mine. The only thing, I don't know, Kim, one thing that occurs to me that didn't come up earlier that I would like to mention: Jonathan and I are trying, through this nonprofit, to actually build and operationalize the system. It's challenging, in part because we both already have full-time jobs, but we're trying. At the same time, though, to be clear, our paper is available for the world to see. If somebody else comes along and picks up the idea and operationalizes it better than we do, or could, we are not opposed to that.

[40:48] Kim Krawiec: You are true market advocates.

[40:53] Mike Gilbert: There is no trade secret here. I mean, we've done some thinking that others probably haven't about how to operationalize it. But just to be clear, and I don't know why I feel like I need to make this point, if somebody else wants to do this, have at it. We would be thrilled if someone more competent than us put this into practice. We just assume that that's sufficiently unlikely that we need to take some steps ourselves, too. We're trying to do that.

[41:17] Kim Krawiec: Great.

[41:18] Mike Gilbert: If anybody in the world listening to your podcast wants to partner up, we are happy to talk.

[41:22] Kim Krawiec: Okay, great. I think probably a lot of the discussion around your paper, and the focus in the paper, is on political speech and these types of political questions, because they are both very important and have been in the news a lot. I was very drawn to the section on commercial speech, and in some ways I almost thought the use case there was even clearer, in part because there have been attempts to use bounty-like features, not always successfully, especially in that space. It's happened in the political space as well. But I was really drawn to that section. I wondered if you guys could talk about that a little bit. And Jonathan, I assume that you're the primary one who would speak to this, but obviously I'm happy to hear from both of you. Do you see any differences in the commercial context and the political speech context that are worth talking about, that perhaps aren't in the paper or that you want to say more about?

[42:17] Yonathan Arbel: So in general, a system where people want to gain credibility, want people to believe them, addresses a general problem. We see it in political speech; we see it in commercial speech, where advertising is meant to make people believe that your product is good. And there are other areas where we want to persuade our peers that what we say is true, that they should believe us. So this is a general solution, and it plays out a little bit differently; the dynamics are different in different domains. And so the thing to get the contracts professor excited is the Carbolic Smoke Ball case, of course, and very recently, and that's political, we got the Mike Lindell case, where we saw someone pledging $5 million, engaging in a "prove me wrong" contract, which is a variant of the truth bounty system. And the Carbolic Smoke Ball case, maybe just a quick introduction. It's a case from 1891 from England, when the Russian flu is all over England. People are very worried about it. And a commercial firm says, we have this perfect medicine, the carbolic smoke ball, a mysterious element, nobody knows exactly what it is or how it works, but if you take it three times a day for two weeks, you will never get the flu. This is how they advertise, and they want to show, they think, this is the birth of modern advertising, they say, we're so certain that you will never get the flu that we're putting down 100 pounds, which in today's terms is the equivalent of, say, $15,000. We're putting down that money, and if you use it as directed, you're not going to get the flu. And this is not real medicine; it has no real backing. And so this lady, she's the wife of a local lawyer, and she takes it as directed, for two months, actually. And eventually she gets the flu, and she goes and she asks the company, please pay up. And they say, no, we don't have to pay. They're trying to get out of the contract and argue their way out. 
And so this is an attempt to create a truth bounty, to say, we will pay money, and we see all the difficulties, because when it's time to pay up, the other party is going to say, oh, no, actually there is a section somewhere in the fine print, something we didn't read. The Carbolic Smoke Ball Company said, oh, you should have used it in our offices, otherwise we don't know if you actually used it as directed. And also, we were just kidding about the whole thing; it was just puffery.

[45:04] Kim Krawiec: Right? I've always been uncomfortable with the puffery defense and with some of the cases, and I think that's part of why I was drawn to this part of the paper. What was interesting to me is I had not focused on this part of the case before, but now, having read your paper, I am focused on it. My recollection is that the court's reasoning for why it was not puffery depended in part on the seriousness that the bounty conveyed. Right. They posted a bounty; therefore they couldn't have meant it as a joke. I just found that very interesting. At the same time, you bring up a good point, right, which is that in a lot of these cases, in most of the ones you discuss in the paper, I think people are very happy to get the short-term benefit of the credibility of the bounty and are very eager to disavow it. And I don't know whether that says anything about the long-term viability of a bounty system. Right? I mean, will people like it until they lose once or twice and then they're like, no, we're out of here, no more with the bounty system? Or is there something about your more formalized system that will discourage the type of behavior that we see in Carbolic Smoke Ball itself and in some of the other cases that you guys discuss, both in the political speech context and in the commercial speech context?

[46:20] Yonathan Arbel: So the Mike Lindell case came out after we finished our draft, but it's definitely a case in point there. Mike Lindell says, I have data here that proves that there was Chinese interference in the election. And I'm putting down $5 million to anyone who can prove that this information doesn't relate to the 2020 election. This information is supposedly something called internet packets or captured packets, which is a fancy name for just internet metadata. He says, I'm putting down $5 million. And so this person, a cybersecurity expert, comes in and says, I want to claim the $5 million, because I can show that your information has nothing to do with the election. And then the lawyers open up the rulebook, and the rulebook says, you have to disprove Mike Lindell's claim with 100% certainty, and if there is any doubt, Mike Lindell gets to decide, at his sole discretion, how to resolve issues. That's an almost impossible standard to meet, but there the claimant was actually able to persuade a panel of arbitrators that the data had nothing to do with the election. So what does it tell us? It tells us that we want to have a separate body that sets up the rules of the game, apart from the players that play the game, because if the player gets to choose the rules of the game, he gets to hide various nefarious provisions, like, everything is at my own sole discretion. Yeah, you could pull it off once or twice, but then after a while, people will just say, this entire system is rigged; it doesn't work. If you want to reap the credibility benefit, if Mike Lindell wants people to believe him, he will not be able to use his own homebrew truth bounties. He will have to rely on a different system. Will he want to use that system? I think the answer is no, and I think that's great. 
I don't want him to gain credibility when he doesn't have evidence. But just sort of suspend disbelief and imagine a world where the truth bounty system is up and running, and say you're Mike Lindell and you want people to believe you. You don't want to lose $5 million. But on the other hand, if you are not pledging any money, why would anyone believe you?

[48:52] Kim Krawiec: I guess the other thing is that your example made me realize that not only were these speakers making their own rules about verification, but they often changed those rules after the fact. In some cases, at least, they didn't have an ex ante set of rules. Right. It was only once they were challenged that they were like, oh, and you have to be in our offices, or you have to do this, that or the other. Go ahead, Mike.

[49:15] Mike Gilbert: Well, I was just going to say, the problem or the potential challenge you raised, Kim, the regular legal system has that, too. Lots of people are happy to play, and they go to court until they lose, and then they say, wait a minute, I don't want to pay the judgment, the system is fixed or rigged or whatever else. And I think Jonathan was getting at it. The root issue is that these systems have positive externalities. They don't just help the participants; they help lots of other people too. And so it creates a kind of collective action problem: the participants would like to pull out, they'd like to disavow the system. But that doesn't mean that the system isn't producing lots of social value for the world. And we think that would be true with our system too, just as it is with the formal judicial system, our legal system. The other thing I would say, too, is just briefly one more example that's sort of on the margin of the commercial context and the political speech context. We have sometimes been asked why major media players, like the New York Times, for example, or Fox News, would be interested in our system. And the basic logic has run like this: your system creates credibility, but these actors already have, or have enough, credibility; they don't need your system. And I think Jonathan and I are in complete agreement that this argument does not make sense to us. The New York Times is a for-profit entity, and I have no inside information, I don't know anything about the Times other than that I read their stories sometimes, but I can't imagine that they wouldn't be interested in cracking into those millions and millions of Fox News viewers who believe everything in the New York Times is false and they'll never buy a subscription and so on. If they could convince those viewers, no, we're telling you the truth, and if we're wrong, you can claim the bounty, it seems to me they would have an incentive to do it. 
So some of it's political speech or political-related information, but they're a commercial actor, and it seems to me Fox News is the same way. We know they're interested in making money. And there are millions, probably tens of millions, of people who won't believe a thing they say. Maybe they would actually like this system. And as with Mike Lindell, they can't build it themselves; their own homebrew system isn't going to work. But if you had a working third-party system, the big players might like it more than anyone else.

[51:37] Kim Krawiec: So thanks to both of you. This was a really fascinating conversation and I learned a lot, even beyond what I had gathered from the paper already. So this was very enlightening.

[51:47] Mike Gilbert: Thank you so much for having us.

[51:51] Kim Krawiec: Great, thanks. Bye, guys.

[51:53] Mike Gilbert: Thank you.