Privacy is the New Celebrity

Cory Doctorow on the Fight to Legalize Encryption, Anti-Trust Law, and Holding Power to Account - Ep 25

May 19, 2022

In this episode Josh interviews Cory Doctorow, science fiction author, activist, and journalist. Cory has written dozens of books, including the bestselling Little Brother and the critically acclaimed How To Destroy Surveillance Capitalism. Josh and Cory have a wide-ranging conversation, diving into everything from antitrust law to secure hardware in computing to Cory's years-long activism fighting for transparency and accountability in tech. Cory also shares what it was like working at the Electronic Frontier Foundation in the thick of the battle to legalize encryption, a technology that many of us now take for granted.

Speaker 1 (00:03)
Hello and welcome back to Privacy is the New Celebrity, a podcast about the intersection of tech and privacy. I'm Joshua Goldbard, and today on the show, we're excited to welcome Cory Doctorow. Cory is a bestselling science fiction author, activist, and journalist. He has written many books, including the nonfiction work "How to Destroy Surveillance Capitalism" as well as the award-winning science fiction novel Little Brother and its sequel, Homeland. Cory is also an activist, campaigning against digital rights management and copyright laws and in favor of file sharing and the free exchange of ideas. In this capacity, he helped establish the Open Rights Group, and he still makes many of his books available online for free. So many of the themes in Cory's work intersect with what we do here at MobileCoin. So let's get into it. Cory, thanks for being here on Privacy is the New Celebrity.

Speaker 2 (00:58)
My pleasure. It's great to be talking to you. It's always nice to talk to people who are interested in the same weird, esoteric stuff as me because I've been locked indoors for two and a half years with two people who are completely bored to the back teeth of hearing me talk.

Speaker 1 (01:11)
Oh, man. So I wanted to talk a little bit about this idea of surveillance capitalism. MobileCoin as a company is actually sponsoring an art exhibition at the Berlin Biennale that's coming up in June, where we've helped to sponsor an exhibition inside the former Stasi headquarters, and the Stasi, of course, were the secret police of East Germany during the Cold War. Inside this exhibition, which is entitled Surveillance Capitalism, a number of the world's top artists are presenting pieces, like Hags Avedesian's piece on public lighting as the enforcement and expansion of police powers. And there's another artist, whose name escapes me right now, presenting a series of works on the dehumanization of people in India as the thumbprint becomes your identity, so people who lack thumbs are unable to authenticate. There are a whole lot of very interesting angles on that subject. I really want to understand, though, from your perspective: what does surveillance capitalism mean?

Speaker 2 (02:14)
Well, I am someone who is four square against surveillance and extremely skeptical of capitalism, but I'm also someone who is also skeptical of the surveillance capitalism hypothesis as articulated by Shoshana Zuboff. So I have maybe a slightly different set of concerns to the mainstream set of concerns about surveillance capitalism. So maybe it's worth talking about what I think Zuboff believes and then explaining how I differ from it. That might be a good way to get into the subject, if that suits you.

Speaker 1 (02:46)
Yeah. Just so everyone listening to the podcast knows, Shoshana Zuboff is an incredible author who's written extensively about surveillance capitalism in a very long book, but it's quite well done. So I would love to hear what your thoughts are on it.

Speaker 2 (03:01)
Yeah. So Zuboff, she got the term, and I don't know if she independently reinvented it or got it from its originators, some Canadian West Coast academics who used the term to critique capitalism in particular, and to say that surveillance capitalism was a way of building out what we might today call bossware, and of doing lots of other stuff that I think we can be pretty worried about. Just today I was listening to some of the writers at The Intercept talk about how the capabilities being demonstrated in the private surveillance industry could very easily be purchased, for example, by a large firm that could use location data from apps to figure out which of their employees were going to union meetings and fire those employees pretextually before they even tried to organize a union, and thus avoid National Labor Relations Board scrutiny, that kind of thing. So that was the OG version of this. But under Zuboff's theorizing, it took on a very different cast. The basis that I think supporters of market capitalism have for believing that markets are good allocators, even though all market systems are highly unequal, and there are losers and winners in them, people who end up with giant mansions and people who end up living in cardboard boxes, is that markets, by dint of aggregating all the decisions that people make about what they're willing to pay and what they're willing to offer, are able to do a better job of allocating all our resources than anything else, particularly more than central planning.

Speaker 2 (04:46)
This goes all the way back to the 1920s and something called "the calculation debate," about whether you could ever build a computer big enough to figure out who gets what, or whether that computer is, in fact, just the market. They were basically asking: given what we know about electromechanical relays, could you build a computer big enough to do this adding up and figure out who gets what? And the conclusion was no. Obviously, we have faster computers today and so on. But what Zuboff says in her book is that machine learning and surveillance data collection have created a world in which those decisions that produce good allocations have been replaced by algorithmic manipulation, that by combining big data and data mining and psychology and psychometrics, the ad tech companies are able to effectively take away our free will, to build a mind-control ray, basically. And that this confiscates what she calls the right to the future tense, the right to your behavioral surplus, the right to know what you're going to do in the future instead of being told what you're going to do in the future. And I think that this is wrong.

Speaker 2 (06:05)
It's wrong just as a technical matter, and I think that there's a better explanation for what's going on. So I'll start with the technical stuff. I don't think that the research cited to support the idea that this manipulation works really well is very good research. Things like micro-expressions, sentiment analysis, Big Five personality types, these are all caught up in what's called the replication crisis, where there's a bunch of bedrock psych research that no one can replicate. There are papers where people claim to have these very strong effects, and when people try to independently recreate them, they don't see that effect. Some of it seems to be academic fraud, and some of it may be inurement. Anyone who's ever had a refrigerator humming in the background only notices it once it stops; you very quickly become inured to stimulus. I actually have a little Post-it note on my mic here that says "turn off the washing machine," because I'm recording from my office, which has the washer/dryer in it, and it's really loud, but I don't hear it after five minutes.

Speaker 2 (07:15)
And so it may be that whatever manipulation techniques worked in the past got overused, and we just stopped responding to them, in the same way that nobody sees a sale price of $9.99 and goes, wow, that's a lot less than $10. Clearly, at one point that was a good tactic. It's pretty widespread today, but it really doesn't seem like anyone treats $1.99 as anything different from $2, or $9.99 as anything different from $10. So the research doesn't really support these claims. The best research there is on this is the big Facebook study where they exposed 60 million people to a stimulus that was predicted to increase voter turnout. And what they found was that about 300,000 more people went and voted than they'd predicted. That's interesting and impressive. 300,000 votes is 300,000 votes, but it's a 0.39% effect size. So it's not as impressive as you might think, because that's 300,000 votes across 60 million people all over America, and there are not precincts that swing by 0.39%. That's not how American elections get decided. And, you know, it may be that they could refine this technique in the future and come up with a better outcome, but it's, I think, just as likely a priori that if they repeat this technique over and over again, you'd see the reverse, a declining effect size, just like you do with $9.99 not being perceived as any different from $10.

Speaker 2 (08:56)
But there is something really important going on in that study, which is that Facebook non-consensually did a psychological experiment on 60 million people, and that really does permanently disqualify them from running even a lemonade stand, much less a social network of 3 billion people. That's pretty grossly inappropriate. And I think this is maybe where Zuboff and I dovetail. I think there is something very rotten going on in these tech companies, but I don't think that we have to indict them by saying that they're evil geniuses. I think they'd like to be evil geniuses. Lee Vinsel calls it criti-hype, right? When someone says, "I'm a genius!" and you say, "but you're an evil genius!", they're like, "But I'm still a genius!" Maybe instead we could say, "you're an ordinary mediocrity, not a genius, a hustler who's claiming to have a mind-control ray." And just like everyone who claims to have a mind-control ray, you're kidding yourself, and us. Whether that's Rasputin or Mesmer or MKUltra or, like, sad misogynist pickup artists on Reddit, everyone who claims that they can use science to reach past your volition and manipulate your behavioral outcomes, bypassing your moral sense and your cognitive abilities, all of those people have been wrong.

Speaker 2 (10:17)
Maybe someday someone will be right. But all those people have been wrong.

Speaker 1 (10:20)
I think so much about the banality of evil in those cases, because it's sort of the most basic thing you could do to say: we're the CIA, what if we just dose people on acid and watch them freak out? It's not a particularly interesting experiment.

Speaker 2 (10:35)
Well, the part of that experiment that actually worked, and now we're getting into what I think is going on, is that the CIA did, in fact, dose a lot of people by tricking them into taking LSD with sex workers, and then blackmailed them. So it turns out that you can, in fact, alter people's behavior by threatening to expose the fact that they were patronizing sex workers.

Speaker 1 (10:59)
This is why entrapment is illegal.

Speaker 2 (11:02)
Yeah. Well, and you know, there's a lot of stuff that tech does where there's a much simpler explanation for why you use it even though you don't like it than the idea that you're addicted to it or that they can bypass your critical faculties or whatever. So, for example, Facebook holds all your friends hostage. They've deliberately blocked interoperability. The memos that we see in the antitrust cases show product managers asking, how can we increase the switching cost? How can we make it more painful to leave Facebook? And when people actually do start leaving Facebook, like they did for Instagram, Facebook just buys the companies that people are switching to. And when they can't buy the company, like Snap, they use surveillance tools that they have built into their app to figure out what features Facebook users like about Snap and just clone them. None of this requires that they bypass your critical faculties. It only requires that they violate antitrust law, which is what I think they're actually doing. And we had 40 years, from Reagan until now, during which we didn't really enforce antitrust law. And we are in the midst of quite a pivot that I think normies who don't find this stuff interesting have completely failed to see.

Speaker 2 (12:16)
But, you know, the FTC right now is being run by a woman named Lina Khan, who four years ago was a third-year Yale law student who wrote a seminal paper for the Yale Law Journal called Amazon's Antitrust Paradox. It refuted Robert Bork's book The Antitrust Paradox, which was the book that changed how antitrust enforcement worked in the Reagan years. And this paper was so powerful and sent such a ripple through the entire antitrust bar that now this woman, who was a law student four years ago, is running the FTC, and she is doing a bunch of stuff that upends the orthodoxy of the last 40 years, like promulgating new merger guidelines that will prohibit firms from buying their competitors, and like treating surveillance as an antitrust matter, so that if firms use their market power to effect surveillance, they're treated as monopolists engaged in abusive conduct.

Speaker 1 (13:18)
I'm really interested in whether they're going to allow the purchase of Twitter to go through.

Speaker 2 (13:22)
Well, the thing is that the purchase of Twitter doesn't really have an antitrust dimension, because Musk isn't proposing to merge it with anything.

Speaker 1 (13:30)
I see.

Speaker 2 (13:31)
So he's proposing to buy it. It's not like he would make Twitter an adjunct of his ISP, right? He's got that satellite ISP.

Speaker 1 (13:37)
There are a lot of rumblings inside of DC about potentially blocking the deal, though.

Speaker 2 (13:45)
Well, I don't know. Now we're getting into securities law, which is outside my domain. My understanding is that those considerations relate primarily to securities law, and to the fact that he is the CEO of an S&P 500 company, one that all the indexers like Vanguard are in, that they have to be in, and that comes with a bunch of duties and responsibilities and restrictions. He's flouted those restrictions, and allowing him to acquire another firm when he has demonstrated that he does not behave in accordance with the law, that he flouts the law, is maybe a bridge too far. But I can't even handicap the likelihood that any of that stuff will work, because that's just not my domain.

Speaker 1 (14:30)
Sure. Can I ask you an abstract question about surveillance?

Speaker 2 (14:34)
Sure.

Speaker 1 (14:34)
So I wonder. We've talked a lot about corporate surveillance, and we haven't talked much about national surveillance, and they are somewhat different things, but they have a lot of overlapping characteristics. I think the question that I have is, is there any amount of surveillance that is acceptable in a free society?

Speaker 2 (14:53)
I mean, sure, with those kinds of absolutes, the answer is almost always going to be yes. I have a clever 14-year-old daughter who likes to do things like, "Dad, is it ever okay to murder someone?" and then, "but what about in this case? Well, what about in this case?" So based on my parental experience, I'm going to say there's probably surveillance that I would tolerate. I do believe in enforcement of the law. So, for example, if the FTC believed that Tesla was building unsafe cars, and they thought that senior management was involved and knew it, and they got a warrant, and they used that warrant to intercept conversations or in some other way engage in surveillance, and then that evidence was presented in court, and the other side had the opportunity to quash the evidence on the basis of whether or not the warrant was improperly issued? That sounds okay to me.

Speaker 1 (16:00)
So let me hammer in on this a little bit. Where do you think the line is between surveillance of speech and surveillance of financial activity? Because I think they have different boundaries, right? Yelling "fire" in a crowded theater and moving $100 million anonymously, maybe they're different things.

Speaker 2 (16:18)
Right? I mean, I think that code is speech, but I don't think money is speech. Maybe because I've written a lot of code and don't have a lot of money.

Speaker 1 (16:27)
Of course, you realize that code as speech and the idea of money as code start to rub up against that line pretty quick, right?

Speaker 2 (16:35)
I understand. Right. So let me give you the answer that I tried to give when I went and spoke at Devcon, the Ethereum conference, in 2018. I should say I'm what they call a no-coiner, so it was awfully nice of them to invite me to talk about this stuff, because in the rhetoric of Web3 and blockchain and coin stuff, the decentralization parts are things I completely recognize and like, but there's other stuff about it that I'm concerned about. And what I said is: let's think about the history of the crypto wars, where "code is speech" came from. The original basis for this was that governments around the world treated working encryption as a munition. And they said that if civilians were able to access encryption that was provably strong, that we didn't know how to break, then bad guys would use it and do bad things. I think that's true. But I also think that there is a good reason to support it, that there's lots of good things that can arise from it, too. You can make a parallel argument where you say: if there are spaces that are not near microphones, then bad guys might say things that can't be captured on mic and later used to indict them or to prevent their schemes.

Speaker 2 (17:58)
And so I'm not convinced by the argument that just because bad guys can use it, we should prohibit it. And there were a lot of arguments along those lines made over this prohibition on publishing strong crypto. There were arguments made by the finance sector, which even then was very powerful, and is even more powerful today, about the need to have ciphers that worked well enough that corporate spies and foreign governments and organized crime couldn't break into them. There were arguments made by computer scientists about the inadequacy of the cipher that was proposed for civilian use. So I'm actually sitting at my desk here next to a bar-refrigerator-sized computer called Deep Crack that John Gilmore designed and built in the 90s for a quarter of a million dollars. It's a then-massive DES cracker.

Speaker 1 (18:50)
The DES cracker, right?

Speaker 2 (18:50)
Yeah, the DES cracker. John was tired of having it in his garage, so he let me have it on permanent loan. I took out one of the circuit boards and framed it, hung it up in my office. You can find it if you Google Deep Crack. It's really cool. So they built Deep Crack, John built Deep Crack, and they brought it in front of policymakers, and they said: look, you've got the NSA saying that DES is so strong that organized crime will never break it, foreign spies will never break it, corporate espionage will never break it, and it'll be safe for all of the things that America needs to protect. Here is John Gilmore, who is a very smart guy but not the smartest guy that ever lived, and a very well-resourced guy but not the best-resourced guy that ever lived. And he built this computer for a quarter of a million dollars, and it can brute-force DES in a couple of days. So if John can do it, so can, I guess back then it was the Russians, right? But so can lots of people.
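For scale, here's a minimal back-of-envelope sketch of that brute-force arithmetic. The search rate is an assumption drawn from public accounts of the EFF DES cracker, not a figure from this conversation:

```python
# Rough arithmetic for the Deep Crack story above; a sketch, not a spec.
DES_KEYSPACE = 2 ** 56        # DES uses a 56-bit key
KEYS_PER_SECOND = 90e9        # assumed search rate for Deep Crack

worst_case_days = DES_KEYSPACE / KEYS_PER_SECOND / 86400
print(f"worst case: {worst_case_days:.1f} days")       # ~9.3 days
print(f"on average: {worst_case_days / 2:.1f} days")   # ~4.6 days
```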

Speaker 1 (19:43)
I mean, you could argue that today it's still the Russians, right?

Speaker 2 (19:46)
Yeah. Well, no, I think it's mostly the Chinese now, but that argument didn't go anywhere. It just failed. I think it convinced a lot of people that John was right, but it didn't change the policy. What changed the policy was the woman who now runs EFF, who was then a lawyer working for EFF, my boss, Cindy Cohn. She went to court in the 9th Circuit, and she argued that Daniel J. Bernstein, who was then a grad student at UC Berkeley and is now a prominent cryptographer, had the First Amendment right to publish the source code for a strong cipher on the Internet, on Usenet, because that's what the Internet was in '92.

Speaker 1 (20:24)
And importantly, can't be compelled to change that code.

Speaker 2 (20:27)
Right. So the First Amendment protects it. It's a form of expressive speech. And there were lots of people who came forward to talk about it, literary scholars, programmers. My colleague Seth Schoen, who now works for John but worked for EFF for a long time, at one point, when the code for unwrapping the encryption around DVDs was published and then prohibited, rewrote that code as a series of haiku, to show that if you're going to ban DeCSS, you're banning poetry, and poetry is speech that no one claims the First Amendment doesn't protect. It's well understood to be within the scope of the First Amendment. So code is speech, and it's important. And that's the way we got strong encryption, working encryption, which I also think is really important, a really important hedge against all kinds of bad conduct: stalkers, supply-chain attacks. How do you avoid supply-chain poisoning of the over-the-air update for the antilock brake system in your car? Working cryptography. How about the firmware for your pacemaker? Working cryptography.
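To make the over-the-air update example concrete, here is a minimal sketch of what working cryptography buys you there, using the Python cryptography package. The vendor key and firmware bytes are hypothetical stand-ins; a real OTA pipeline adds secure key storage, rollback protection, and more:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical vendor build server: signs each firmware image with a
# private key that never leaves the build infrastructure.
vendor_key = Ed25519PrivateKey.generate()
firmware = b"antilock-brake-firmware-v42"
signature = vendor_key.sign(firmware)

# The car ships with only the public half of the key. Before flashing an
# update, it verifies the signature; a poisoned image fails the check.
public_key = vendor_key.public_key()
try:
    public_key.verify(signature, firmware)
    print("signature valid: safe to flash")
except InvalidSignature:
    print("tampered update: rejected")

# A supply-chain attacker who alters even one byte gets caught:
try:
    public_key.verify(signature, firmware + b"!")
except InvalidSignature:
    print("altered firmware rejected")
```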

Speaker 1 (21:37)
How do you protect anything on the Internet?

Speaker 2 (21:38)
Right. You need working cryptography. So I think it's super important. Now, how did we get working cryptography? We got it not by building a computer that won the argument, although we tried that. We got it not by having powerful people in industry argue about it, although we tried that, too. We got it by appealing to the rule of law, which may sound quaint in the wake of the leak that occurred just as we're speaking here, where you have the Supreme Court making some pretty weird arguments about rolling back over 100 years' worth of jurisprudence about privacy. But I actually think that kind of makes my point, and I'll get to that in a minute. What got us working cryptography was not the absence of the rule of law, not treating the government as a defect and routing around it, but having a government that was responsive to its own values, so that when you made the principled argument in front of it, it had no choice but to get out of the way. Now we live in an era in which our government is a lot less responsive than it was then.

Speaker 2 (22:44)
And even then, it wasn't very responsive relative to how responsive it had been, say, 15-20 years earlier. And a lot of that has to do with financial secrecy.

Speaker 1 (22:54)
That's right. Well, on the original easing of cryptography law in the United States: almost all cryptography was restricted to military applications until the advent of electronic funds transfer. Then we started to see IBM and other organizations get licensed by the government to do those protections for businesses, and the domestication, allowing civilians access to the technology, really started with Phil Zimmermann and PGP.

Speaker 2 (23:20)
Yeah.

Speaker 1 (23:21)
And that was, like, the late 80s, early 90s.

Speaker 2 (23:22)
For sure. And I touched earlier on monopoly and the role of monopoly in tech. I think the way to think about this causally is that the gradual deregulation of the 70s produced some pretty good-sized fortunes that were used to accelerate deregulation. Among the deregulation it produced was the neutering of competition law, which allowed the creation of monopolies, which produced far more money in far fewer hands. And some of that money was mobilized to further influence policy. It's been a long time since the term rubber-hose cryptanalysis was coined. It's a 30-year-old in-joke among cryptographers: it doesn't matter how many bits you add to your cipher if someone ties you to a chair and hits you with a rubber hose until you give up your passphrase. And the way that you defend yourself against rubber-hose cryptanalysis is the rule of law, and there's really no substitute for it. So that's one of the reasons I get worried about financial secrecy, not financial privacy, and by all means ask me what the difference is, because I don't know for sure.

Speaker 1 (24:31)
Well, I think the hard-line difference between secrecy and privacy is the ability to resist the rule of law.

Speaker 2 (24:38)
Yeah. I don't know if that's the case. So here's the thing. I have a colleague at EFF, the Electronic Frontier Foundation. EFF is a human rights group, a digital human rights group. I've worked with it for 20 years. It's a little over 30 years old. And as you heard, one of our seminal victories was legalizing cryptography. So if you're involved in anything using Internet security, but particularly cryptocurrency, it exists because we won that lawsuit.

Speaker 1 (25:02)
Thank you.

Speaker 2 (25:02)
Yeah, it's good. I have worked with them for 20 years, and I'm a major donor every year, because so much of what I care about comes out of that group. And I've watched how they work. They squeeze a dollar till it hollers.

Speaker 1 (25:17)
I will note that MobileCoin is also a major donor to the EFF. We really believe in what they're doing. What I want to ask you about is this: we live in a sort of halcyon era of very strong, extremely cheap, very fast cryptography. Arguably the ciphers that we have today are not going to be broken until we have a quantum computer, if you believe in quantum computers, and that's a really kind of crazy thing. We live in a place where, on your phone, you can encrypt something and nobody's going to be able to read it without a material advance in the math, the hardware, or the algorithms. And that is very strange compared to the world that we came from 30, 40 years ago. So I guess my question is: how does strong cryptography change the game?

Speaker 2 (26:05)
Well, I'll start by saying I try not to use the term strong cryptography. I try to use the term working cryptography.

Speaker 1 (26:11)
Fair enough.

Speaker 2 (26:11)
Because, to me, working cryptography is cryptography where you have enough bits in the key space that if you took all the hydrogen atoms in the universe and turned them into computers and gave them until the heat death of the universe to guess the key, you'd run out of universe before you run out of keys. And I don't know why we would build anything less when, as you say, we have distraction rectangles in our pockets that can convert a bitmap into a ciphertext so quickly that we don't even notice. Why would we use something worse than that kind of consume-the-entire-universe-grade cryptography for any privacy application? And I think that it does create some new challenges and opportunities. Given all of this stuff about rubber-hose cryptanalysis, what our privacy tools really give us is not, as the cypherpunks I think dreamed of, a kind of demi-monde that you can secede to, where you can basically climb inside your cryptographic bubble and zip it up behind you, and then the foolish laws and policies of the outside world don't matter.
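A rough sketch of that key-space arithmetic, with order-of-magnitude assumptions: the search rate below is roughly the whole Bitcoin network's hash rate, the largest brute-force-style computation running today, and it is not a figure from the episode:

```python
# Why nobody brute-forces a 256-bit key: a back-of-envelope sketch.
KEYSPACE = 2 ** 256               # e.g. an AES-256 key
GUESSES_PER_SECOND = 10 ** 21     # assumed planet-scale search rate
AGE_OF_UNIVERSE_S = 4.3e17        # ~13.8 billion years, in seconds

searched = GUESSES_PER_SECOND * AGE_OF_UNIVERSE_S
print(f"fraction searched since the Big Bang: {searched / KEYSPACE:.1e}")
# ~3.7e-39, which is effectively zero

expected_years = (KEYSPACE / 2) / GUESSES_PER_SECOND / 3.15e7
print(f"expected years to hit the key: {expected_years:.1e}")
# ~1.8e48 years, about 10^38 times the current age of the universe
```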

Speaker 2 (27:29)
This is one of the problems with all those libertarian exit projects where they say, we'll find a country where the government is so weak that they'll let us just build an island where we can make our own laws. And it's like, you do realize that all the countries where the government is that weak are countries where they're totally happy to come over to your island and steal all your stuff, right? The thing that all of the countries that respect the rule of law have in common is that they don't want you setting up a sovereign state in the middle of them. So you're left in this kind of weird situation where the dictator's idiot nephew might come over with a speedboat and some heavy munitions and just take all your stuff. And that, I think, is the tension there. We don't get a demi-monde; we don't get to declare stupid laws irrelevant and go away. But what we do get is a limited kind of operational security that can be used to foment political change, to hold the powerful to account and require them to honor the rule of law. Which brings up something you mentioned: the Stasi.

Speaker 2 (28:38)
One in 60 people in the GDR worked for the Stasi when the Wall came down, right?

Speaker 1 (28:44)
Isn't that crazy? It's a crazy statistic.

Speaker 2 (28:47)
It's a shocking number, right? Many of them were just paid informants and not operatives, but still, it's an absolutely bonkers thing. I mean, they had jars full of stolen clothing from people, kept sealed so that sniffer dogs could be put on the scent if those people ever disappeared. They would steal clothing from people and keep it in jars in warehouses. It was an absolutely bizarre project. So getting together under those conditions is very hard. Fomenting political change under those conditions is very hard, especially in an era of digital surveillance, because the Stasi used one in 60 people to surveil their whole population. If you accept that the NSA has effectively the entire planet under surveillance, plus or minus, their ratio is about one to 10,000. So it's like two and a half orders of magnitude in a generation. Really, the efficiency gains are massive. So what you get from our cryptographic tools is a temporary shelter in which you can organize to hold the powerful to account. What you don't get is something that makes holding the powerful to account obsolete because they can't touch you inside your cryptographic bubble. And that, I think, is a powerful thing.

Speaker 2 (30:13)
The ability to keep secrets, even as imperfect as crypto is. And it's imperfect not because the ciphers aren't good, but because the people are imperfect. If you read any of the forensic accounts of people unwinding blockchain heists, they're pretty amazing, right? And it's always something like the Dread Pirate Roberts one time signing an email with his real name and then deleting it, or signing a message board post with his real name and then deleting it, but not until after someone had replied to it and quoted the message, so the reply with the quoted message with the email address at the bottom of it was still there. And that was what unwound the whole thing. Perfect OpSec is impossible. Given enough time, you will eventually make a mistake. Your adversaries don't have to be perfect. They have the attacker's advantage.

Speaker 1 (31:02)
That's right.

Speaker 2 (31:02)
They just have to find one mistake you make. You have to make no mistakes. And what's more, you're just you. So you get tired, you get distracted, your pocket rectangle starts buzzing because something is going on and you make a mistake. They have three shifts. They can rotate people on and off watching you. And so what you get is a little bit of privacy, a little bit of security.

Speaker 1 (31:26)
So speaking of the NSA, are they still requiring their new recruits to read Little Brother?

Speaker 2 (31:34)
I don't think the NSA required that. At the Cyber Institute at West Point, it was a course book.

Speaker 1 (31:42)
I see.

Speaker 2 (31:44)
It was at the Air Force Academy, and I believe the Naval Academy. So there are lots of potential spooks who are made to read that book, as a counterinsurgency text, I think. But it also influences, I hope, the way that they think about this stuff. Anyway, the point of this is that not only does crypto give you a place where you can organize to hold the state to account, it also changes the game theory of being an oppressor, because if you know that when you push people they can't push back, then you might push them harder. The more means people have of pushing back, the less hard a rational adversary pushes. And if they're an irrational adversary and they do push hard, then we have access to these tools. So that's a better equilibrium than the equilibrium without cryptography in the world. And as I say, the OpSec stuff is so hard that I'm not hugely worried about strong crypto, sorry, working crypto, ending the rule of law. I'm far more worried about a kind of complacency, the belief that working crypto makes the rule of law unnecessary, because we say, oh, we don't need to reform our governments or hold them to account, we can just walk away from them.

Speaker 1 (33:14)
So I would love to get into a little bit of controversy here. This podcast has been fantastic and the time has really flown by, but I want to make sure we get to this before the end. So maybe you can talk a little bit about the book you're writing, and the concepts around secure hardware and computing, and how you feel about that.

Speaker 2 (33:33)
Sure. Yeah. This is a pretty esoteric thing that I've been worried about for a little over 20 years now. About 20 years ago, Peter Biddle from Microsoft came to EFF to present this thing that he and his team were working on called Palladium, or Next Generation Secure Computing Base, which has since become trusted computing. In Peter's conception, you would have a cryptographic coprocessor on the board that would be tamper-evident. It would have, like, some acid in an epoxy-sealed compartment, so if you tried to decap it or remove it from the board, it would just melt down, and if you managed it, you could obviously see, so the user would know. And it would have a secure hardware path to your keyboard and your screen, something like DTCP, with cryptographic handshaking with your keyboard. The idea was that this cryptographic coprocessor would observe your boot process and produce a manifest: here's a hash of the boot loader, here's a hash of the drivers that loaded and the kernel modules, here's a hash of the OS, and so on. On the one hand, it could be programmed to just interrupt the boot process if it didn't recognize any of that stuff.

Speaker 2 (34:51)
So if it thought you'd been blue-pilled, that your machine was running inside a VM you didn't know about that was there to spy on you, it could interrupt that process and just say: I'm not going to let you boot, because you've asked me to secure you. It could also provide a secure attestation. So if someone said, hey, before I send you this email, I want to make sure that you don't leak it, please can you ask your computer to send a hash of the manifest so that I know you don't have a rootkit or some other dangerous piece of software that would compromise our digital security.
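As a toy illustration of the manifest idea Cory is describing, here is a sketch in Python. The component bytes and policy are invented for illustration; a real TPM chains measurements into PCR registers and signs a quote rather than working with a plain dictionary:

```python
import hashlib

# Toy measured boot: hash each stage of the boot chain into a manifest,
# refuse to boot on unknown hashes, and expose the manifest for remote
# attestation. The component bytes here are placeholders.
boot_chain = {
    "bootloader": b"bootloader image bytes",
    "kernel":     b"kernel image bytes",
    "modules":    b"kernel module bytes",
    "os":         b"operating system bytes",
}

manifest = {name: hashlib.sha256(blob).hexdigest()
            for name, blob in boot_chain.items()}

# Local policy: a whitelist of hashes provisioned in advance, e.g. signed
# by a toolsmith you trust (as in the dissident example below).
TRUSTED_HASHES = set(manifest.values())

def boot_allowed(observed):
    """Interrupt the boot if any measured component is unrecognized."""
    return all(h in TRUSTED_HASHES for h in observed.values())

print("boot allowed:", boot_allowed(manifest))           # True

# Remote attestation: a peer asks for the (coprocessor-signed) manifest
# before deciding whether to trust this machine with an email.
tampered = dict(manifest, kernel=hashlib.sha256(b"rootkit").hexdigest())
print("tampered boot allowed:", boot_allowed(tampered))  # False
```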

Speaker 1 (35:33)
Basically verifying that the computer you think you're working on is actually the computer you think it is.

Speaker 2 (35:36)
And there are a ton of benign applications for this, right? You could imagine, if you're a Russian dissident in St. Petersburg, where my family is and which they're trying to get out of right now, that you might not believe you have the technical chops to configure your computer to resist Russian spy agencies. So you might ask a toolsmith outside the country, or in another part of the country, to sign all the manifests that you're willing to trust, to sign the hashes of all the components you're willing to trust, and then ask your computer to reject anything that wasn't on this whitelist that this other person prepared for you. That's a super benign application. But then there's a whole bunch of really dangerous applications, because there are lots of times when you want your computer to lie to other people. Here's an example. Right now I'm going to hit Alt-Tab a couple of times and bring up my terminal, and at the top of it, it says TTY. That's because somewhere in this computer there is a process that started as a teletype process, that literally was meant to print to a line printer, like the one that my dad brought home from the university in 1977 that we connected to an acoustic coupler.

Speaker 2 (36:54)
And then someone wrote a layer that sits on top of it, a virtualization layer, and that probably allowed it to run as a glass teletype, like an old VT100. And then someone added another layer and another layer and another layer, and eventually you get this, where I have a modern windowing system and it's talking to a kernel process that goes all the way back, archaeologically, to those old line printers. And I don't want the software at the top of the stack to be able to say, I'm not going to run unless there's really a line printer at the bottom of the stack, because I don't need a line printer. I just need to send commands to my Linux system. That's maybe a trivial example. But think of how many processes on your computer are one thing wrapped in another thing: FFmpeg, or all the browser stuff where browsers wrap old modules or old compatibility components and render content in quirks mode. Or even back in the old days, when there was this war between the web and Gopher, the way it was resolved is that they just added Gopher as a supported protocol in web browsers, so that Gopher just became a corner of the web.

Speaker 2 (38:07)
There are lots of instances in which you might want to do this. And more significantly, if you're working from home, which is to say, if you're living at work, and your boss wants to install bossware on your computer that watches your eye movements with your camera, watches your keystrokes, listens to your speakers, and rummages through all of your files to make sure you're not doing anything they don't like, you might want to stick your computer inside a VM so that it can feed false telemetry to your boss. Or if you have an abusive partner who insists that you run stalkerware on your phone so they can keep track of all the places you go, you might want to run some code that gins up some false telemetry. There's a whole class of apps in Indonesia called tuyul apps, T-U-Y-U-L, made by and for gig motor-scooter riders, who are a big part of the Indonesian economy. These apps modify the dispatch apps used by the gig companies, which are really callous about their drivers' safety and economic futures. So, for example, the dispatch app will not give you a job to pick up a commuter coming in at the train station unless you're at the train station when the train arrives.

Speaker 2 (39:19)
The problem is that those traffic jams are literally lethal. People die in them. And so the tuyul app lets you spoof your GPS, so that you can tell the app that you're at the train station when really you're at a safe distance, not contributing to that traffic jam by circling and circling, waiting for a fare to come in. Then you go in, pick up your fare, and leave again. And it's better for everyone, even your bosses, even though they don't want to make that change. So this is seizing the means of computation. And remote attestation is a way to coerce people who have less power than you into revealing whether or not they are using technology to do what we were talking about before, to provide a temporary shelter from coercive authoritarian power so that they can hold that power to account and make a better world. It's a way to neutralize that. And so that is the thing I really worry about, and that I'm of two minds about, because the benign applications are great. There are a whole bunch of benign applications that I would like to have on my device using trusted computing.

Speaker 2 (40:23)
And today our trusted computing mostly comes in the form of pretty primitive TPMs or, even weaker, I think, the secure enclaves, where you just have Intel or some other silicon vendor drawing a rectangle around part of their die and going, everything in this rectangle is going to be really secure, trust us. But it's still on the die, and it still has lots of interconnectivity with the main processor, and failure modes like the Spectre and Meltdown attacks might be able to affect it, and so on. I also worry about the political economy of this stuff. For 20 years, security researchers in the security community have had my back when I have argued that people should be allowed to tell true facts about defects in computers, that companies should not be in charge of who gets to criticize their stuff. Well, trusted computing modules, by definition, aren't supposed to be field-updatable. They're supposed to be immutable, because if you can change the trusted computing module's code, then you can subvert its application. The whole point is that the trusted computing module is the constant in a world of ever-shifting, pluripotent, malleable code.

Speaker 2 (41:44)
But as soon as there are naive users who are exposed to risk and who can't make an informed decision about which app store to trust, about which code to trust, then we really need to just vest our trust in central authorities and demand that they act in a benign way, rather than holding them to account by publishing the evidence of their failures.

Speaker 1 (42:10)
Cory, that was a blistering podcast. We didn't even get to talk about the decentralization of blockchains. I would love to have you back on sometime to really dig into the decentralization of trust. But thank you.

Speaker 2 (42:22)
Yeah, we didn't even talk about the book, which I'm happy to send you a copy of. The book is called Red Team Blues. It's a heist novel about a forensic accountant who is working his last job after 30 years in Silicon Valley. He's recovering the keys for secure enclaves after they're stolen from a guy who's built a remote-attestation-based blockchain that has a billion dollars in it, and that he fears will all be stolen, and it turns into a multi-way fight between different criminals.

Speaker 1 (43:04)
Wow. As a guy who led a team that built a billion-dollar cryptocurrency based on remote attestation and secure enclaves, I am very interested in this novel. Cory, thank you so much for joining us on Privacy is the New Celebrity.

Speaker 2 (43:18)
My pleasure.

Speaker 1 (43:20)
We've been speaking with Cory Doctorow, bestselling author, journalist, and digital rights activist. Thanks for listening. Don't forget to subscribe wherever you listen to podcasts, and check out mobilecoinradio.com for the full archive of podcast episodes. That's also where you can find our radio station. I'm Joshua Goldbard, our producer is Sam Anderson, and our theme music was composed by David Westbomb. And remember, privacy is a choice we deserve.