Privacy is the New Celebrity

Ep 2 - Peter Eckersley on AI Ethics, Encrypting the Internet and What's at Stake

July 14, 2021 MobileCoin

In episode 2, MobileCoin's Chief Product Officer Henry Holtzman interviews Peter Eckersley, a leading thinker on AI ethics. Peter and Henry discuss the challenges in shifting from machine learning algorithms that purely serve the interests of capitalism to AI and tech that benefits the user and society at large. Peter explains why he's optimistic about the future of AI but a bit of a grump on cryptocurrency. He also thinks that if privacy is a celebrity, it's a dead one.

[00:05] - Speaker 1
Hello. I'm Henry Holtzman, the chief product officer at MobileCoin, and you're listening to Privacy is the New Celebrity, a podcast that hosts conversations at the intersection of tech and privacy. Today I'm super excited, because sitting across the desk from me right now is Peter Eckersley. Peter is one of the leading scholars and policymakers in the field of AI ethics. He thinks deeply about privacy, cybersecurity, and how to integrate artificial intelligence into our lives in a way that is safe and ethical. He is also co-founder of the AI Objectives Institute, a nonprofit organization that works on artificial intelligence and transformations of capitalism.

[00:52] - Speaker 1
Peter, thank you so much for joining us on Privacy is the New Celebrity.

[00:56] - Speaker 2
It's a pleasure to be here.

[00:59] - Speaker 1
Can you tell us a bit more about yourself? In your own words.

[01:01] - Speaker 2
I've had a career in technology policy and technology ethics. I spent many years at the Electronic Frontier Foundation, I guess it was about twelve years, and for a lot of that time I was the chief computer scientist there. We did a whole lot of different privacy projects, ranging from Privacy Badger and HTTPS Everywhere to this big push to try to move the Web from insecure, unencrypted HTTP to encrypted HTTPS. And the big crowning achievement there was co-launching Let's Encrypt and building Certbot, and those turned into really fundamental, widespread infrastructure for Internet encryption.

[01:42] - Speaker 2
And so that was my background. Then I started to get very excited about artificial intelligence and the types of transformations that it is going to make possible. I think there are a lot of ways we could do that really well, and a lot of ways it can go, and has been going, terribly wrong. And so, for the last maybe five years of my career, I've been focusing on that at EFF. I also spent some time at the Partnership on AI, where I was the founding director of research.

[02:08] - Speaker 2
And at the moment, I'm both co-founding this project called the AI Objectives Institute, to think about AI and transformations of capitalism, and serving as a visiting senior fellow at OpenAI.

[02:20] - Speaker 1
That's an amazing career, Peter. Maybe working our way through it from the end to the beginning: I've been really sensitized myself, maybe for the past five-ish years, to the question of what responsibility looks like for the AI industry. What do you see as the leading problem that AI is leading us towards?

[02:39] - Speaker 2
Well, I think we've had a few really spectacular failures of AI safety and ethics, and I think the biggest one, the civilizational one, the one that literally keeps me awake at night sometimes, is recommendation engines and social media feeds having been optimized for business objectives at big tech companies. So the Facebook news feed being optimized for engagement, or the YouTube recommendation engine being optimized for engagement, Instagram's algorithms, Twitter's algorithms. There's no guarantee that when we plug those optimization engines into civilization, the outcome we get is sensible, and we shouldn't underestimate how powerful those systems are.

[03:24] - Speaker 2
The news feed just looks like a bunch of stories from your friends going past, but actually, behind the curtain, there are very powerful language models that are increasingly able to understand in great detail everything that we're posting and writing, and reinforcement learning algorithms that are optimizing for particular goals. And it starts to matter a lot what those goals are and whether they're aligned with what we, as humanity, actually want from our future.

[03:54] - Speaker 1
Yeah, I recall, maybe ten years ago, that Facebook's own researchers published some work showing how they had manipulated people's news feeds, trying to learn whether this improved people's overall experience or interfered with it. Do you remember this event? It was controversial. It was a scandal.

[04:14] - Speaker 2
And it was one of these scandals that happens in the tech policy world, where the scandal itself ultimately seems kind of misframed, and the consequences of those events were actually really unhelpful rather than helpful. One rumor I heard was that Facebook was so burned by that scandal that it committed internally, as policy for years afterwards, to no social science research, because social science research creates these scandals. And so, as a culture, Facebook's response was: let go of all the people who could have helped you predict the calamities of 2016 and avoid them.

[04:52] - Speaker 2
Instead, Facebook went unstudied and just continued to optimize for whatever engagement or similar profit-oriented metrics it had. And then an actor like Cambridge Analytica could come along, or even just the human subconscious could come along, and say: we find the following types of content really provocative and engaging. And so we're going to get caught in a loop, doomscrolling and clicking on more of this stuff, and that actually drives our politics. It's like riding a bicycle: if you want to go off the edge of the cliff, the surest way to do it is to keep looking that way.

[05:25] - Speaker 2
And it felt like that's the loop that we're in now. I think Facebook gets a deservedly bad rap for this in a lot of ways, but it's really not just Facebook.

[05:35] - Speaker 1
Yeah, fast forward to just these past couple of years. And don't we see it all playing out again with Google?

[05:41] - Speaker 2
Well, certainly with YouTube, I think there is a plausible case to be made that several percent of the world population now believes the Earth is flat because the YouTube recommendation engine has spotted that some people are really engaged by it and provoked by it, fascinated by documentaries claiming that the world is flat or making similar kind of conspiratorial allegations about the world. If you just let a recommendation engine run, it will show people more and more of that stuff and persuade a fraction of our population to believe this crazy stuff.
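The feedback loop Peter describes can be illustrated with a toy simulation. Everything here is invented for illustration: the category names, the engagement rates, and the allocation rule are assumptions, not anything from YouTube's actual system. A feed that simply gives each category of content exposure in proportion to the engagement it earned last round ends up showing almost nothing but the more provocative category, even when it starts perfectly balanced:

```python
# Toy model of an engagement feedback loop (all numbers invented).
# The feed allocates attention in proportion to past engagement, so
# content that is 1.5x more "engaging" ends up dominating the feed.

ENGAGEMENT = {"provocative": 0.6, "informative": 0.4}  # assumed click rates

# Start with a balanced feed: half of impressions to each category.
share = {"provocative": 0.5, "informative": 0.5}

for _ in range(50):
    # Each category earns engagement proportional to its exposure...
    earned = {k: share[k] * ENGAGEMENT[k] for k in share}
    total = sum(earned.values())
    # ...and the next round's exposure follows the engagement.
    share = {k: earned[k] / total for k in share}

print(round(share["provocative"], 3))  # prints 1.0: the feed has tipped over
```

The point of the sketch is that no one asked for a provocative feed; a small difference in engagement, iterated by the optimizer, produces an extreme outcome.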

[06:19] - Speaker 2
And I think what we do instead is a really hard, important, complicated ethics problem, and it then needs to be threaded through the incentives of big corporations. Those are the two problems we have at the AI Objectives Institute, and we have a bit of a story that we're starting to tell about what it means to thread those incentives through and how to do it.

[06:41] - Speaker 1
Yeah. And the parallel I was also thinking of: I believe Google just went through a period where their AI researchers wanted to publish papers, and the papers cast Google in a bad light. So Google said no, the researchers insisted on publishing anyway, and so Google fired them.

[06:58] - Speaker 2
That's right.

[06:59] - Speaker 1
Absolutely.

[06:59] - Speaker 2
Timnit Gebru and Margaret Mitchell, two excellent colleagues who have done a lot of really impactful AI ethics work, wound up being fired over controversies about things that were and weren't disclaimed in various ways in their papers. And in general, I feel a lot of nervousness, from the outside, trying to figure out what the dynamics were inside the company. But it certainly looks like they were activists who were trying to push the company in a constructive direction. And then at a certain point, the big machinery of capitalism says: no, we're here for a different purpose, and we don't want activists inside our organization making a lot of noise and trying to change it from within.

[07:42] - Speaker 2
Yeah.

[07:42] - Speaker 1
Well, so now that you're working on AI and capitalism and how they come together, do you have a suggestion for how we correct this course?

[07:52] - Speaker 2
So we have a theory, which the group that's been putting the AI Objectives Institute together has come up with, about what's going on. It starts with the observation that a lot of people are afraid of AI, and afraid of it in different ways. But one of the deepest fears is that we have these powerful optimization systems and they have the wrong goal: they will be given some objective and pursue it past the point that makes any sense. There's a sort of joke thought experiment to explain this: you have a paperclip factory, and the people running it make an AI and say, make as many paperclips as possible, as cheaply as possible, and then it somehow destroys the world and turns it into paperclips.

[08:36] - Speaker 2
And the more realistic version of this joke story is: well, it's dollars rather than paperclips that we're going to optimize. We're not going to turn the world into paperclips; we're just going to pursue dollars past the point that makes any sense. Viewed through this lens, we spent a lot of time wondering what exactly this analogy is, and ended up concluding that it makes a lot of sense to think of capitalism, or market supply chain structures in particular, as themselves being a type of artificial intelligence. Not just in the hand-wavy, analogous sense, but fairly literally: they are large networks, like neural networks, of nodes that are moving goods and services in one direction and then sending a signal back in the reverse direction, which is profit.

[09:21] - Speaker 2
And that big system appears to use the same algorithm that we use to train neural networks: gradient descent by backpropagation. You actually see these firms all cranking the lever, saying, what can I do, hire more people, adjust my products, et cetera, to get more of this profit stuff that ultimately gets me the promotion or more resources for my team? And that big engine really looks a lot like an AI optimization system pointed at a misspecified goal. So I think what we need to ask is: how do we thread in the other considerations that we want taken into account, whether that's equality, or justice, or avoiding environmental destruction and externalities, or producing adequate public goods like investigative journalism, open source software, or social media networks that are not beholden to advertising interests?

[10:15] - Speaker 2
How do we thread those things into the objective function of the market, so that executives and boards can say: oh, the way we make more money is by figuring out better how to get YouTube or Facebook to be aligned with what humanity really wants? That's the problem statement this organization has, for humanity, for the politicians and economists who manage the market, and for executives at tech companies: how do we adjust the incentives acting on companies so they point in the right direction?
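Peter's claim that the market runs gradient descent toward a misspecified goal can be made concrete with a toy optimization sketch. The profit and harm functions below are invented shapes for illustration, not an economic model. An optimizer that ascends profit alone settles at a point where the ignored externality is large; threading the externality into the objective, as the conversation suggests, moves the optimum:

```python
# Toy sketch of an optimizer chasing a misspecified objective.
# The "firm" does gradient ascent on profit alone; the harm term
# is real but absent from the objective, so it is never corrected.

def profit(x):
    # Profit rises with output x but saturates (illustrative shape).
    return 10 * x - x ** 2

def externality(x):
    # Harm grows with output, invisible to the profit-only optimizer.
    return 0.5 * x ** 2

def grad(f, x, eps=1e-6):
    # Numerical gradient, a stand-in for backpropagation.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

x = 0.0
for _ in range(1000):
    x += 0.01 * grad(profit, x)        # ascend profit only

print(round(x, 2))                     # prints 5.0, the profit peak
print(round(externality(x), 2))        # prints 12.5, the ignored harm

# "Threading in" the harm changes the objective and shifts the optimum.
y = 0.0
for _ in range(1000):
    y += 0.01 * grad(lambda v: profit(v) - externality(v), y)
print(round(y, 2))                     # prints 3.33: output settles lower
```

The design point mirrors the argument: nothing about the optimizer changes between the two runs, only the objective it is handed.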

[10:46] - Speaker 1
When I think about some of my own experiences in the business world, I also see what you were just talking about. I think timeframes are part of the issue. A company is optimizing for dollars over a particular timeframe, and that timeframe, depending on the company, could be anywhere from weeks to years, rarely decades.

[11:14] - Speaker 2
Henry, do you want to tell a bit of a story about your experience? Like, where were you an executive? Are you able to talk about decisions that you felt would have gone differently if those longer timeframes had been in play?

[11:26] - Speaker 1
Well, without naming a company, I will talk a bit about the fact that this company had a policy, and probably still does, of reorganizing its executive staff on a yearly basis, with possibly entire divisions of the company seeing substantial change every year. That meant the leaders of that company had a year to prove themselves. In areas of technology where there was a clear roadmap, where there was clear, strong, long-term success for the company, there would be the freedom to do projects that had a horizon of not seeing the customer for five years.

[12:04] - Speaker 1
But in anything that was new, where the company merely thought it might be a good idea to go, if the company couldn't see results in a year, it was unlikely that executive would still be in charge of it.

[12:16] - Speaker 2
I get excited when I hear a point like this, because it feels like one of many examples of the value of thinking about capitalism as an optimization engine, rather than of markets as kind of a natural phenomenon. It's this big machine that we're building as a society, and we can choose the goals and adjust the objective of that machine. There are a lot of ways in which we're not getting it right right now, but the frame of having an adjustable objective really should allow the critique that you're making to be threaded through into policy.

[12:47] - Speaker 2
How do we say: actually, companies should be incentivized to reward executives five years later, both for how their products turn out for the company and for the societal consequences of those products?

[13:04] - Speaker 1
Yeah. I mean, one of the times I was surprised by the savvy nature of a company: I was working at MIT as a researcher, and we were doing a deep dive with an oil company, and they made the statement that: here's our focus. Our focus is on the fact that this is a limited supply. The petroleum we're pulling out of the ground, there's only so much of it; there'll never be more. So let's think about how we maximize its use over 50 years, 100 years. Let's not think about how we maximize our profits in the next year.

[13:40] - Speaker 1
And I was surprised to hear that. Surprised, and gratified.

[13:46] - Speaker 2
And this kind of thing is great. It seems like it is happening in some boardrooms, where companies are adopting so-called ESG goals, environmental, social, and governance goals, and threading those down from the board through the executives into the priorities of the organization. The sense we've gotten from talking to quite a few executives is that how deep the commitment to ESG goals runs is very variable. And ultimately, once companies are in danger, once they're looking less profitable, suddenly the constraint becomes quite binding: you can't spend real money on ESG objectives.

[14:24] - Speaker 2
Sometimes wealthy companies in good years can do it, and whether to is sort of a psychological and social trait in the leadership structure, a cultural question. And you'll get a mix of some companies that are really trying to do the right thing and others that aren't. But the machinery says: well, at the end of the day, in order to continue existing, you need to balance your books and make enough return for your investors to keep having money. So there's a binding constraint that says only once you're making enough money to be profitable can you have these other goals.

[15:00] - Speaker 2
And that's the thing we could adjust by policy.

[15:03] - Speaker 1
Right.

[15:03] - Speaker 2
We could look at the economy and say: actually, if you can show numerically that you're having a positive impact on people's lives, beyond the parts that you can capture with profit, then we could score that into the accounting system, too. I think we should have more conversations about how to do that.

[15:20] - Speaker 1
It seems like if we just say "the free market decides," then we're assuming that people have the luxury to pay more for things that they think are better for the long term. Whereas a large part of our society, both here in the United States and throughout the world, is living below the poverty line, and so they're having to make real decisions every day just to take care of their basic needs.

[15:45] - Speaker 2
And I think it's really useful, whenever you hear "the free market decides," to replace that with: the strange, janky machine that currently has some random settings decides. And we can go over to the machine and adjust the settings. We should be in the habit of doing that all the time.

[16:01] - Speaker 1
And that's a matter of using governmental policy.

[16:03] - Speaker 2
Well, I think it can happen through different types of institutions. Governments have some levers, and some responsibility to use those levers better. But there's a lot that can be done with new startups, with innovation within market structures, with new types of democracy as well. So I don't think it's limited to the old institutions of government, especially in settings like the United States where those don't always work at all well. We should be looking at all of the interventions we can take.

[16:32] - Speaker 1
So I want to key in on one thing you just said, which was new forms of democracy. Can you tell me a little bit more about what that might look like?

[16:39] - Speaker 2
Well, we're thinking about doing some experiments of this sort with the AI Objectives Institute. So, for those of you who've been following progress in AI, particularly in language modeling and natural language processing, that field has been moving extremely quickly. These large language models, developed perhaps most prominently by OpenAI, but also by Google, Facebook, and other labs, work like this: you take a type of neural network called the transformer, and you just feed it enormous numbers of documents. The initial task is to predict the next word in a sentence, and then you can do various things on top of that, fine-tuning the models to do particular tasks.
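The training task described here, predict the next word given the words so far, can be sketched at toy scale with a bigram counter. The corpus below is made up for illustration; a real transformer learns the same kind of conditional distribution, but with billions of parameters over vast document collections rather than raw frequency counts:

```python
from collections import Counter, defaultdict

# Toy "next word prediction" on an invented corpus. For each word,
# count which words have followed it, then predict the most common one.

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Map each word to a Counter of the words seen immediately after it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent continuation observed in training.
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))   # prints "on": "sat" is always followed by "on"
print(predict_next("on"))    # prints "the"
```

A language model generalizes this idea: instead of an exact lookup table of counts, it learns a smooth function from any context to a probability distribution over the next word.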

[17:19] - Speaker 2
But these models have demonstrated a growing and really impressive ability to understand and produce human language. They're starting to understand what we mean when we say things, and they're starting to be able to give thoughtful replies. We've done some experiments with the question of whether a model could hold up its end of a conversation about people's lives, their experiences of economic life, and how policy decisions by governments impact them. And it's not the case that you could use, say, a language model like GPT-3, OpenAI's model, to make policy decisions or to actually just do stuff.

[18:03] - Speaker 2
But what you could use it for is to hold up its end of a conversation with an ordinary human being. It's perfectly capable of riffing on whatever political topics, ones it's been trained on and ones that are completely fictitious and made up. We tried this a bunch of times. The experiment we're wanting to run is: can we get some kind of chatbot that can talk to lots of people about how their lives are going, and figure out, oh, is there stuff happening in their lives that isn't being accounted for by their governments or by economic policy?

[18:41] - Speaker 2
And can we surface that stuff quickly, backed by a lot of evidence? To take a hypothetical: during the pandemic, of course, a lot of people were worried about evictions, and governments passed moratoriums on evictions. The operative questions would have been: are these working? Are people getting evicted anyway? And then, maybe more deeply, is it working psychologically? Are people feeling secure and protected or not? And if they're not, what could we do about it? So this is an example of the type of responsive policymaking that we want to try: experiments with building those chatbots and seeing what happens when we try to use them to gather information about people's lives.

[19:23] - Speaker 1
And how would you wire up those results to those levers to fix this machine, this crazy machine?

[19:29] - Speaker 2
Well, if we succeed in being able to gather the stories, and we think of this as like a statistics of stories, then there'll be a policymaking problem of going to governments and saying: we've got the evidence, you should act on it. And here's the evidence framed for a partisan political situation. Here's the evidence from a conservative voter's perspective. Here's the evidence from a progressive voter's perspective. Here's the kind of bipartisan case for getting your moratorium on evictions to be watertight, if it wasn't to begin with, or for messaging it differently.

[20:04] - Speaker 2
And we can't promise that that's going to work, but we are excited to try.

[20:06] - Speaker 1
How are your experiments going in terms of making sure that this conversational bot isn't actually biasing those kinds of results, like wanting to dig deep in areas of pain rather than satisfaction?

[20:22] - Speaker 2
Ultimately, those biases are probably going to exist, bias in the mathematical sense that how you frame a question guides the answer, in a lot of ways, and in deep ways that are hard to disentangle. So I think all you can try to do at first is to measure those things and see, for a spectrum of different ways of approaching a conversation, what you get. Then there's a separate question, which is: how do we do this ethically? How do we ensure that we're not, by being the people asking the questions, essentially centralizing too much power to ourselves?

[20:57] - Speaker 2
And I think the things we're going to try to do there, and this is where we get back to democratization, is to really let the users of this platform, or the system that we're going to try to build, have a lot of say in shaping it. So it's not just going to be a bunch of us in an institute in San Francisco rolling a thing out; rather, the experiment will be: can we get our users to tell us what's important and how to ask about it?

[21:24] - Speaker 1
So tell me more: who are these users, and what do they think they're doing, for a product you haven't built yet?

[21:31] - Speaker 2
We don't have a definitive answer to that, but I think the thing we're going to look for is diversity. One thing that several members of our group have flagged is that we're a brand new nonprofit, so we won't be able to afford to do that much. But eventually, in the successful version of this, you could imagine millions of people every day talking to this democratic economic AI, telling it what's up, and that could be something that we want to pay people to do.

[22:01] - Speaker 2
We might eventually say: okay, this should be remunerated work. We really want a very diverse cross-section of society, and so people who are poorer and have less free time will need more help to be able to do that. But actually, their experiences are in some sense even more important to ingest into this kind of conversation, or to call into the conversation, because their lives are being less well served by the machine as it stands.

[22:34] - Speaker 1
Do you have concerns about the bot actually interviewing other bots, ones that are just there to farm the incentives you're willing to give?

[22:42] - Speaker 2
Well, this leads to a bigger set of privacy questions about how bots are going to transform conversation in digital spaces of all kinds. And I think that's not going to be a unique problem for this project; I think it's going to be a problem for everyone, if we get to do it at all. It's speculative that we can get the tech to be good enough, and get enough funding, to actually have the thing run. But we know for sure there's a problem with online conversation being derailable by bots that are now so good that we can't spot them through conversation.

[23:19] - Speaker 1
Yeah. Sounds like a big challenge there. What do you think is the biggest privacy challenge that you see in the world right now?

[23:26] - Speaker 2
So I think we have a really fascinating and strange problem as a result of AI, which is that the Internet's traditions have involved a lot of pseudonymous conversation. There are so many places where you have a forum, anyone can sign up with an email address and a username, and people talk to each other about all sorts of things, some of which they're not comfortable doing on Facebook under their real name. We have all these spaces for free, diverse conversation. Some of them, of course, we've realized are toxic, but many more are among the things that made the Internet great.

[24:01] - Speaker 2
So many of those subreddits and forums all over the place. With bots getting so good, we kind of know that that setup isn't sustainable anymore. If you want to, and you're a state actor or a well-resourced actor, and I'm sure everyone's going to point at Russia, but there are probably many others, Israel is probably as guilty of this or more, you can go into those places with enormous numbers of bots, and those bots will just be emotionally responsive to the people they're talking to and push a propaganda line, whatever it happens to be.

[24:42] - Speaker 2
So if we want to keep the Internet as a pseudonymous conversation space, we can't keep the identity infrastructure in the form it has been in. I think we need to figure out how to do a cryptographic proof that the person you're talking to is a human, and that they are a human who's only engaging in a reasonable human amount of conversation each day, and then maybe nothing else about them. Just: they're a human, they're rate-limited, and they're reasonably well behaved. I think we need infrastructure for that type of proof, and I don't think it exists right now.
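As a very rough sketch of the kind of infrastructure Peter is asking for, here is a toy token scheme. All of it is hypothetical: the issuer, the daily limit, and the use of a shared HMAC key are assumptions for illustration. A real deployment would need blind signatures or zero-knowledge proofs so the issuer cannot link tokens back to people, and public-key signatures so forums need not hold the issuer's secret; plain HMAC provides neither property:

```python
import hmac
import hashlib
import secrets
from datetime import date

# Toy sketch of "proof you're a human, rate-limited, and nothing else."
# A trusted issuer verifies humanness out of band (not shown) and hands
# out a limited number of tokens per day; a forum accepts any valid
# token without learning who presented it.

ISSUER_KEY = secrets.token_bytes(32)   # held by the issuer (shared key: toy only)
DAILY_LIMIT = 20                       # assumed daily quota per human

issued_today = {}                      # human_id -> tokens issued so far

def issue_token(human_id: str):
    """Issue a token if the (already-verified) human is under quota."""
    n = issued_today.get(human_id, 0)
    if n >= DAILY_LIMIT:
        return None                    # rate limit reached
    issued_today[human_id] = n + 1
    # The token binds today's date plus a random nonce: no identity inside.
    msg = date.today().isoformat().encode() + secrets.token_bytes(16)
    return msg + hmac.new(ISSUER_KEY, msg, hashlib.sha256).digest()

def verify_token(token: bytes) -> bool:
    """A forum checks the token is the issuer's and is valid today."""
    msg, tag = token[:-32], token[-32:]
    if not msg.startswith(date.today().isoformat().encode()):
        return False
    expected = hmac.new(ISSUER_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

t = issue_token("alice")
print(verify_token(t))                 # prints True: says only "human, today"
for _ in range(DAILY_LIMIT):
    issue_token("alice")
print(issue_token("alice") is None)    # prints True: daily quota exhausted
```

The token reveals only the date and a random nonce, so the forum learns "a human, today, under quota" and nothing else, which is exactly the property being asked for.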

[25:18] - Speaker 1
Do you know of anybody who's even working on that?

[25:21] - Speaker 2
I've heard that there are some startups in this space, but I haven't seen anything that has gotten substantial traction yet.

[25:27] - Speaker 1
As you talk about this, it feels very real to me, and I'm scared at the same time. I wonder how real it is to enough people. Do you think this is a threat that people recognize at enough scale for us to really take action?

[25:46] - Speaker 2
I think the way I've found myself thinking about this is that it's probably not something that's going to get discussed widely amongst everyone. It's going to be a conversation amongst the San Francisco companies and the people who run large Internet sites and forums, and obviously the big platforms have their own bot detection and defensive infrastructure. So I think it matters much more for that middle tier of the 100,000 sites that host conversations. And then I think what's at stake is really strange and fascinating.

[26:25] - Speaker 2
Do we have a future where all the AI systems and the humans become hard to disentangle from each other, and they're all yelling propaganda at each other on the Internet? I don't think that's as good a future as one where, okay, you can tell when something is a human, you can tell when something is an AI, and you can trace the source of opinions and arguments and evidence a little better, back to the types of sources they're coming from, even if the source is an anonymous user, like whatever their name is on some forum, but we know they're human.

[27:00] - Speaker 1
Do you think that, sort of in a free market of ideas, it could be as much an issue of trying to get sources of truth versus sources of fiction? Does it really matter if it's a human or a bot, if what they're telling you is true?

[27:17] - Speaker 2
Once again, the free market is, like, a dangerous metaphor to use here.

[27:21] - Speaker 1
Right.

[27:21] - Speaker 2
The free market of ideas, when those ideas are coming out of large language models that can produce beautiful, compelling text at, like, a megabyte a second? Whoever dreamt up the metaphor of the free market of ideas was not thinking about the future we're living in. So instead we need to think about the infrastructure, the architectures of who gets to have an opinion, and what kinds of evidence we look at if someone is going to say, well, there was an experiment done, a scientific or social experiment, to back a certain claim about the world.

[28:01] - Speaker 2
Can we figure out who actually did it, and did it really happen? And that infrastructure is going to be used by humans, but also increasingly by AI systems that themselves have to rely on the Internet for all of their truth. So this is the weird future that we're living in. People who thought about paperclip experiments or whatever thought that AI was going to be this kind of incredibly intelligent being that could control and run everything. Actually, what we're seeing is that the AI systems that are learning the most, the quickest, are learning on top of human culture, human writing, and human understandings of the world, and they're just as confused about everything as we are, which is to say, quite confused.

[28:43] - Speaker 2
I think we have a huge process of deconfusion that we need to go through.

[28:47] - Speaker 1
Yeah, I hear a somewhat dystopian view of a possible future there, but there might also be a utopian version, which is: I have an idea, and I can be quite good at conveying that idea to a large audience. We now have the means of distribution where I can do that quite affordably, so that everybody can basically get to hear my words about my idea. But maybe I could also employ some technology to make my idea even more compelling, by understanding your lived experience as an individual and translating my idea into something that you will be able to relate to better.

[29:29] - Speaker 1
Right. And if I can do that at scale, then that crystal clear idea that is so important for the future of humanity could actually become accessible to people with all kinds of backgrounds.

[29:41] - Speaker 2
And I don't want to be mistaken for a dystopian. I actually think I'm mostly of the view that things are probably going to go quite well, and we maybe have, like, a ten or twenty percent risk of running terribly off the rails with the way that we incorporate AI as a civilization. But a ten to twenty percent risk is certainly large enough to really worry about, and to spend a lot of time trying to control those risks. So I want to disclaim: I'm mostly an optimist. I mostly think we will figure out great uses of these technologies and answers to these epistemic problems.

[30:14] - Speaker 2
Your translation idea is absolutely a great one. One of the big successes of the Internet was really letting many more viewpoints flourish. There are unintended consequences of that success, but if we remember what the media landscape was like before the Internet, it was talking heads on television stations basically being the arbiters of our entire understanding of the world. Maybe you could find that obscure book in the library with a different view. But we're now really fundamentally in a much more open civilizational frame of thinking. And we're like: okay, we're more open.

[30:52] - Speaker 2
How do we also sort out our thinking collectively so that we are better at reaching agreement on hard things or agreeing to disagree, but in a civilized way with each other?

[31:03] - Speaker 1
Right. And fundamentally, you're not saying we should outlaw this kind of technology; you're saying we should try to create systems of accountability.

[31:11] - Speaker 2
I think the idea of trying to outlaw it is ridiculous. I don't think, in most cases, we have any hope of doing that, right.

[31:18] - Speaker 1
And what we do have a hope of is spotting it and labeling it, and letting you know when you're under its influence. Exactly. Josh landed on the name of this podcast after hearing a quote from one of Facebook's lead engineers, who made the claim that privacy is the new celebrity. Do you think this is true?

[31:39] - Speaker 2
I think if privacy is a celebrity, it's a dead celebrity, but it's a dead celebrity that we can't let go of. And so we have this strange situation where the systems that have been built really do collect so much more information about people than they did 10, 20, 30, 40 years ago, and we have so much less privacy in the modern world. And if you want to keep privacy, you have to live this very weird alternative lifestyle where you install the cypherpunk software, and you kind of have a PhD in computer science, and you're doing threat modeling on all the databases that are going to track you.

[32:19] - Speaker 2
But at the same time, we're not okay with letting go of privacy. And so we see, I think, herculean, heroic efforts to unwind these consequences. Europe's push with the GDPR is probably the most spectacular of these. But even before that, you saw companies not wanting to admit how much data they had about people, wanting to kind of hide it, wanting to behave as though all the structural commercial surveillance wasn't happening. And so for me it's like: oh, it's this thing that's gone, but we can't let go of it, and we kind of want it back.

[32:59] - Speaker 2
And I think we're caught civilizationally in this paradox zone.

[33:04] - Speaker 1
So if we tease apart privacy into that desire for something we want to preserve, what do you think is the root of it? What are the attributes of privacy that people really want and don't want to let go of?

[33:20] - Speaker 2
Yeah. Privacy is a really confusing concept to understand, and one frame that maybe can help is to think of it as protections against the adverse consequences that can follow from someone or something finding out things about you. And if you think about what that is, well, it could be an abusive partner, or a member of your family who doesn't agree with your politics or your gender identity or something. It could be your government, which wants to impose systems of control of various kinds.

[33:51] - Speaker 2
Maybe a government like a Western government that sometimes veers in that direction, or an authoritarian government. It could be corporations that want to build databases and use that information about you to make money, or to persuade you of things. And for each of these, there are all these different channels through which information can be gathered. So it becomes this very complicated matrix of who gets to know what about you, and what the consequences are. It's quite hard to think clearly about that stuff, so it's really helpful to have a threat model and to say: okay, which bits are going to bite me, and when, and why?

[34:32] - Speaker 1
So it sounds like some of it is about maintaining autonomy, and some of it is about predicting consequences and being able to be free.

[34:48] - Speaker 2
Exactly, those things. And also, I should say, there's a primal part of this, which is that, psychologically, some of us have this hardwired intuition that loss of privacy is dangerous, and that probably comes from evolution out on the prairie. If you're around the campfire and you can see some eyes out in the darkness, that's a deeply dangerous situation: whatever is watching you has power over you that you don't necessarily have any symmetrical answer to. And so I think there's the autonomy question, the safety-from-bad-consequences thing, this primal psychological consideration, and then also some things that are really tied to liberal ideas of freedom of speech: that we need space to play around with new ideas and new identities before committing to them, and that we need to be able to have unpopular views that can't be chased down every burrow and into every private messaging conversation, that there needs to be space for people to disagree with the popular views of their time.

[35:57] - Speaker 1
So this leads right to one of MobileCoin's tenets, which is that creativity requires privacy. Would you agree with that?

[36:08] - Speaker 2
It depends on the type of creativity, but in general, absolutely, many forms of creativity do. Which in particular do you think require it? Well, I think one kind is certainly any creativity that involves modifying your own identity, whether you're a queer teenager who has yet to figure out for sure that they're queer, or you're a person who wants to tell a story about a new organization you're starting, or you want to build a new system that the world might have strong opinions about. You may need privacy while you're figuring out what your plan and your story and your identity are: space to try them out before you commit to them.

[36:49] - Speaker 1
Would you say that societal change requires privacy in order to foment and have a chance to grow? Yeah. Absolutely.

[37:01] - Speaker 2
To run with the example of folks who are queer: for a really long time, though not forever.

[37:08] - Speaker 1
Right.

[37:08] - Speaker 2
There have been civilizations where homosexuality has been relatively open and tolerated, but in Western society, for centuries, it has not been. And so until the Stonewall riots, there had to be spaces where people could gather, and do so relatively safely, before they were sure enough of their strength and their numbers to be able to take a stand.

[37:33] - Speaker 1
Yeah. It's been amazing for me in my lifetime to watch it going from closeted to gay pride to gay marriage to that taking a little while to work its way into being a cultural norm. But it really started with people being able to get their message sorted in private. Yes.

[37:57] - Speaker 2
Huge amounts of privacy were hugely important along that path.

[38:01] - Speaker 1
Yeah. Or even just to figure out and explain the message, so that people understood who they were, and so that they weren't defined by the stigmatizing part of their identity but were defined by the core of their identity.

[38:18] - Speaker 2
Another example of this that I think we're seeing transform at the moment is the end of the war on drugs, which, of course, was this tragic, I don't know whether you'd call it four decades or 100 years, of the United States first and foremost, but really of the United States persuading the world, to target the users of certain substances as criminals for their recreational habits; and the slow realization that that was doing incredible damage, and that there's a case, at least medically, for some of those substances to not only be decriminalized but also potentially be used as treatments for a whole lot of medical and psychological conditions.

[39:05] - Speaker 2
And we're only just starting to see the science for that be properly explored. But along the way, people who were recreationally using these substances or experimenting with them medically needed huge amounts of Privacy to protect themselves because they would be dragged away and thrown in prison for it.

[39:25] - Speaker 1
So circling back to the idea that privacy is dead: it is largely dead because the technologies that reveal us, that reveal our behaviors, that allow us to be seen, are much stronger than the technologies that protect us from that sort of observation. But if we would like society to continue to evolve, we still need privacy. And so I come to the conclusion, I don't know if you would agree, that it's worth continuing to work on the technologies that protect us from the threats we've been discussing. Yes, absolutely.

[40:12] - Speaker 2
The way I've always thought about this is: I pull my phone out of my pocket and look at it, turn it over. I'm doing this right now, for those who can't see the video. And I look at this phone, and it's like, well, this thing has three cameras. It has a 4G LTE radio, WiFi, Bluetooth, some near-field communication things. It has probably two or three different microphones, GPS, and enough storage on it to record months or years of conversation, depending on how much AI it's using to transcribe things.

[40:48] - Speaker 2
This is an amazing surveillance device, and we're all just carrying these things around in our pockets. We've wrapped the planet in incredible sensors, and then we're trying to fight against gravity, saying: well, we've got all this recording apparatus, let's try to stop people from using it when we don't want them to. And that's really hard to do, particularly given that there are so many commercial incentives for the ad-tech teams to get more of this data and to use it against us, or use it to sell things to us, which is sort of against us, or for us, or a mixture of the two.

[41:25] - Speaker 2
And the team that's playing defense, that's building privacy-enhancing technologies, hasn't had anything like the same resources, maybe ever. Until recently, perhaps: we've seen a little bit more since GDPR, and since some companies have started to really stake out strategic positions around privacy, so maybe the ship is turning. But over the past 20 years, most of the time, the surveillance actors have had all the resources.

[41:52] - Speaker 1
Yeah. From what you're describing, there's a thought that is not new to me: that when it comes to privacy, we are, in many ways, our own worst enemies. The advertising industry exists because we don't want to pay for things, right? Fundamentally, we don't want to pay for TV. Fundamentally, we don't want to pay for news. And by we, I mean that the average person would happily accept an advertisement over paying some money.

[42:23] - Speaker 2
Well, I think one of the things there is that this is, again, a problem with the structure of capitalism, which is that there's a game-theory problem with privately paying for information goods.

[42:36] - Speaker 1
Right.

[42:38] - Speaker 2
Some of the news sources we have are trying to do donations, but it actually makes a lot more sense to say: rather than each of us voting with individual donations, I want there to be a tax, where everyone is going to put money into a pot, and we're going to send it out to the investigative journalists or the news sources that are doing the best job. That's an institutional arrangement with much better incentive properties than the thing where everyone gets their own dollar out and donates it separately. But we haven't had the civilizational conversation about what options to consider for paying for content online.

[43:14] - Speaker 2
Instead, we have some donation buttons, and then this terrible thing of paywalls, and it's a separate paywall for every single publication. It's like: Rupert Murdoch wants me to pay him some pile of money to read occasional Wall Street Journal stories. Some other newspaper wants me to pay them. The New Yorker wants me to pay them. Everyone wants a separate subscription. Why isn't there one subscription that I can get for everything? Where is the option to do that? It's missing. And we call that competition, right?

[43:47] - Speaker 2
It's a poorly designed machine. It's not set up the way any game theorist would tell you to set it up, because these are public goods, which we know from economic theory markets don't provide at all well. But we've been so caught in the kind of neoliberal story that markets are the only game in town that we haven't been creative about our institutional arrangements for funding the Internet. I think it's time for us to go back and look at those things again. And then: how does privacy fit in?

[44:20] - Speaker 2
This is a little bit of a tangent that I want to go on here. I think we do need better privacy-protection technologies, but I also think we need a better way of enabling the data collection and aggregation systems that serve civilizational goals we support. I'm not excited about US insurance companies and hospitals, which are essentially terrible institutions incentive-wise, getting all of my personal data. But what I would really love is for public-interest science to have anonymized, aggregated statistical access to all of my bio-data.

[44:58] - Speaker 2
All this data off my phone. I would buy a smart watch or a smart ring if I knew that the only place that data was going was open science.

[45:10] - Speaker 1
Yeah, I'm with you there. I got very excited when DNA sequencing started to take off, about the idea of getting sequenced and giving that data to science, and, in fact, finding a pool of money to allow all my friends to do it too, if the cost was a problem. But that was deciding, at that moment, that science having that data was more important than privacy in a big way.

[45:41] - Speaker 2
Well, I think there's a role for privacy-enhancing technology in trying to sort through this mess.

[45:47] - Speaker 1
Right.

[45:48] - Speaker 2
Maybe we don't have to all be cypherpunks the whole time, with all our data going through Tor and Signal and encrypted everything. Maybe there are two streams, right? There's the stuff that's personal, which we keep private with those kinds of technologies, and then the stuff that goes into a different kind of anonymized aggregation system that ensures it's used for the purposes we agree upon and not for other purposes. There are some exciting organizations trying to do this kind of work.

[46:14] - Speaker 1
Like?

[46:18] - Speaker 2
One is a nonprofit that's pushing hard on private machine learning technologies, trying to figure out: is there a way to have a big pot of data that no one can read, but from which you can train machine learning models and learn aggregated statistical things? This kind of tech is becoming real, and maybe there's a lot we could do to roll it out a lot further.
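For listeners curious what learning from "a big pot of data that no one can read" can look like in practice, here is a toy sketch of one standard ingredient, differential-privacy-style noisy aggregation. This is purely illustrative, not any specific organization's system; the function name and parameters are invented for the example.

```python
import math
import random

def noisy_average(values, epsilon=0.5, lower=0.0, upper=1.0):
    """Release an average with Laplace noise calibrated to one person's influence."""
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]  # bound each contribution
    true_avg = sum(clipped) / n
    sensitivity = (upper - lower) / n   # the most one record can move the average
    scale = sensitivity / epsilon       # Laplace scale: smaller epsilon, more noise
    u = random.random() - 0.5           # inverse-CDF sampling of Laplace noise
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_avg + noise             # only this noisy aggregate is published

# Individual values stay private; only the noisy statistic is released.
print(noisy_average([0.2, 0.9, 0.5, 0.7, 0.4] * 20))
```

The point of the sketch is the scaling: with many participants, the noise becomes negligible relative to the aggregate, which is what lets useful statistics escape while individual records stay private.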

[46:40] - Speaker 1
What do you think of cryptocurrencies in terms of being privacy-enhancing?

[46:44] - Speaker 2
Well, it's funny. Of course, we're on a cryptocurrency podcast, and I have a big confession here: when it comes to cryptocurrencies, I'm really a grumpy old man. I don't know that I'm that old, but they really make me cranky in a bunch of ways. And the prevalent designs of cryptocurrencies, I think, have a bunch of problems that keep me from being a full-throated, enthusiastic supporter.

[47:12] - Speaker 1
Would you care to expand on one of those? Yeah.

[47:15] - Speaker 2
So some of them have nothing to do with privacy and more to do with the fact that financial instruments and apparatus are shared agreements between human beings. The idea that we value a Euro note and not some other arbitrary object is a social agreement, and whether we choose to value cryptocurrencies is a social agreement. I think we should decide on the basis of whether this thing we've created is actually going to meet our needs as humanity and meet our economic objectives. When I look at, especially, the first generations of cryptocurrencies, I see a lot of things that look like very big red flags.

[47:56] - Speaker 2
So, the fact that you have Bitcoin with a preordained deflationary monetary policy, right? There's a schedule for when the coins appear: lots at first, then decreasingly, and then eventually none. That's designed to incentivize adoption, but it's not designed to meet the needs of any people down the road who are trying to use Bitcoin for anything. If one is being ungenerous, it looks a lot like a pyramid scheme. Maybe it's a pyramid scheme that eventually comes to rest on something; the wealthy people of the world can decide to make that happen.

[48:33] - Speaker 2
But I don't see how it makes economic life better.
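The "preordained deflationary monetary policy" Peter describes is concrete: Bitcoin's block subsidy started at 50 BTC and halves every 210,000 blocks until it reaches zero, which caps total supply just under 21 million coins. A short sketch of that schedule (the constants are the well-known protocol parameters; the helper function is just for illustration):

```python
# Sketch of Bitcoin's issuance schedule: coins appear "lots at first,
# decreasingly, and then eventually none."

def total_supply() -> float:
    """Sum the block subsidy over every halving era until it rounds to zero."""
    subsidy = 50 * 100_000_000   # initial subsidy, in satoshis (1 BTC = 1e8 sat)
    blocks_per_era = 210_000     # blocks between halvings
    total = 0
    while subsidy > 0:
        total += subsidy * blocks_per_era
        subsidy //= 2            # halving: integer division eventually hits zero
    return total / 100_000_000   # convert back to BTC

print(f"Asymptotic supply: ~{total_supply():,.0f} BTC")
```

Note that the same arithmetic also shows the divisibility point raised next in the conversation: accounting is done in satoshis, hundred-millionths of a coin, so the fixed cap does not limit how finely value can be subdivided.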

[48:36] - Speaker 1
But the fact that you can infinitely divide Bitcoin, well, not infinitely, but to quite a lot of precision, allows for there to be as much Bitcoin as you need, right?

[48:47] - Speaker 2
Well, the premise is that Bitcoin is supposed to be there to be transacted with, and yet this deflationary monetary policy means that if you have some, you should always hang on to it, because if you believe it's eventually going to be used for transactions, it will be worth a lot more in the future than it is now. So there's kind of a paradox there, which is that deflationary currencies can't be good mechanisms of exchange, under this argument. Now, how does that paradox play out in the world? I have no idea. But if I were excitedly designing a cryptocurrency, I'd say: how do I ensure that every human being gets some of this, and that poorer humans especially get more of it?

[49:25] - Speaker 2
And that's a very different kind of design starting place than the one that Satoshi, whoever they are, decided to roll with.

[49:32] - Speaker 1
Right. But it sounds to me like some of your deflationary theory would also rest on exclusivity: the idea that Bitcoin is the coin of the realm. But Bitcoin is not the coin of the realm, so the economy can grow, because there can be another coin and a third coin. And so the overall world economy can continue to expand without requiring that the people who already hold Bitcoin also hold all of the value, right?

[50:00] - Speaker 2
Maybe you can do that. Maybe you can roll out new coins over time in order to get this to work for people, and to have it decrease inequality rather than increase inequality. I've just yet to see the evidence and the worked-out theory of how that would actually happen. I could be persuaded by it. I also know you asked about privacy, so I want to take a moment to be a grumpy old man about privacy in particular and cryptocurrencies. It's that I'm not sure financial privacy is as closely tied to the deep reasons, as societies, that we want privacy as the other kinds of privacy are, like privacy about what you read.

[50:39] - Speaker 2
And so there are some things that cryptocurrencies protect, privacy-wise, or hypothetically could protect. Of course, the first-generation ones didn't do a great job of that either, by having the ledger public. But we want to protect people's ability to read freely and think freely and come up with new ideas. We don't want to protect their ability to evade taxes, or launder money, or engage in arms deals. And so there are all these things you get with strong financial privacy that look like they undermine the civilizational economic agreements we want to have in place because they make the world a better place to live in.

[51:21] - Speaker 2
There are some things on the other side there, too, so I feel conflicted about this. But a lot of me asks: as a privacy advocate, do we want financial privacy everywhere, all the time?

[51:30] - Speaker 1
I'm not sure. When I think about that, I think about the places where we clearly do want financial privacy, like what books I bought being my business, or what gatherings I showed up at, and who else showed up at those gatherings. I mean, those are things we just agreed upon; those are important privacies to preserve. And when corporations or governments have easy and complete access to knowing who's meeting with whom, when, and where, which they start to get when they can look at how you spend your money, those privacies go away.

[52:07] - Speaker 1
Whereas some of the other things you've talked about, like arms control, how do guns move through the world? There are actual physical things that have to move through the world: they have to be manufactured, they have to be shipped. And so there are other places in the system where accountability can be requested by governments.

[52:28] - Speaker 2
But look, I agree. In theory, it's absolutely the case that we do want privacy for which books you buy or which assemblies you gather at. But I'm not sure that cryptocurrencies are the necessary protection to get there.

[52:41] - Speaker 1
Who do you most look up to in the field of Privacy and technology?

[52:44] - Speaker 2
Well, this is a tough question, because through my career I have had the pleasure of working with many amazing, inspiring people. But if I had to pick one name, it would probably be Julia Angwin, who is a journalist. These days she runs an organization called The Markup, but for many years she was at The Wall Street Journal and then at ProPublica, and she combined the creative and almost subversive instincts of investigative journalism with really clear, deep technical thinking. And she'd pursue stories at great length. She broke so many tracking stories about the online advertising industry at The Wall Street Journal, and then at ProPublica told the story of bias in recidivism prediction tools in US criminal justice settings, where thousands of counties around the United States, way below the radar, were buying terrible, janky machine learning AI systems and then using them to decide who to throw in prison, and getting really terrible, unjust results from those poorly chosen tools. I've just been inspired over and over again by Julia and her work.

[53:58] - Speaker 1
All right, Peter, this has been a wonderful time spent with you. Thank you so much for coming on the show.

[54:05] - Speaker 2
Henry, it's been really fun. Thank you so much for having me.

[54:12] - Speaker 1
We've been speaking with Peter Eckersley, a scholar and policymaker on AI ethics and co-founder of the AI Objectives Institute. Thank you so much for tuning in, and please subscribe to Privacy is the New Celebrity on Apple or Spotify or wherever you get your podcasts. We'll be back with another episode in about two weeks. In the meantime, we'd love for you to check out MobileCoin Radio. It's a live stream of incredible musicians, DJs, and performers. You can catch it live every Friday at 1:00 p.m. Pacific Time.

[54:48] - Speaker 1 
You can find that, along with the full archive of shows, on mobilecoinradio.com. Thanks for listening. I'm Henry Holtzman. Our producer is Sam Anderson, and our theme music is composed by David Westfall. Have a great week, and remember: privacy is our choice.