Privacy is the New Celebrity

Ep 9 - Alex Feerst on Trust and Safety, the Distributed Web, and Keeping Our Backstage Front and Center

October 20, 2021

In episode 9, Henry Holtzman interviews Alex Feerst, who served as head of trust and safety at Medium and general counsel at Neuralink. Currently, he is the CEO of Murmuration Labs and also sits on the board of the MobileCoin Foundation. Alex shares why the backstage should be front and center, explaining Erving Goffman's fantastic "backstage" metaphor for privacy. Henry asks what it's like to work in the evolving field of trust and safety, and Alex tells us about the human experience behind content moderation. Alex also explains the distributed web and weighs in on Facebook's latest scandal over harm caused by Instagram, pondering whether it's even possible for companies to act "ethically" in a late capitalist world.


[00:02] - Speaker 2
Welcome back. You're listening to Privacy is the New Celebrity, where we interview some of the smartest folks we know about the intersection between Privacy and technology. I'm Henry Holtzman, the chief product officer at MobileCoin, and I'll be your host for episode nine. Today on the show, we're joined by Alex Feerst. Alex served as the head of trust and safety at Medium, and after that, he was the general counsel at Neuralink. Currently, he is the CEO of Murmuration Labs, and he also sits on the board of the MobileCoin Foundation.

[00:52] - Speaker 2
Alex, thanks so much for joining us today.

[00:55] - Speaker 1
Yeah, thanks for having me.

[00:58] - Speaker 2
To start off, Alex, can you tell us a little more about what you do and the issues that are most important to you?

[01:03] - Speaker 1
Yeah, sure. So nowadays I spend most of my time working in one way or another, either on sort of trust and safety content moderation things or Privacy things. So Murmuration Labs does trust and safety consulting for different tech companies. And in particular, we are working with blockchain and distributed Web folks on the question of how to do content moderation or work on trust and safety human impact issues in a way that is consistent with the values and the principles of the distributed Web. And so it's going to involve sort of decentralization of power and not leading to a world where there's a few decision makers, either like corporate or government, that are going to make decisions about people's speech.

[01:46] - Speaker 1
So that's what I'm doing with a lot of my time nowadays, and it's satisfying and fun because we're sort of at the vanguard of what is going to happen potentially with this technology that people are so excited about, and also getting a chance to potentially do better this time than with Web 2.0 on how we're going to deal with making interventions into human expression, which is very delicate and very important to try to think about and get right. And so for me, getting a chance to work on it this way is great.

[02:18] - Speaker 2
Your job title, or at least one of them while you were working for Medium, was trust and safety officer. That's a position that a lot of folks probably have never heard of before. What exactly does it mean? And what do you do on a day to day basis?

[02:34] - Speaker 1
A lot of tech companies now have this sort of job, and I think of it as, like, the people who deal with the messy human implications of the technology that's being built. And so for a lot of places, it means content moderation or thinking about abuse or harassment or misinformation and all the things that happen when human beings have powerful technological tools and can use them. Maybe another way to think about it is, like, the product people think about how people might use the technology well. And then I think the trust and safety people think about all the ways that humans will predictably misuse technology in ways that are selfish or abusive or creative of problems, while at the same time letting people do it in an environment of freedom, but also sanding off some of the edges that lead to some of the biggest human risks.

[03:24] - Speaker 1
So I think some people, maybe understandably, object to an Orwellian tone around the title. But I think the question of how to have affirmative values of trust in technology is very important, right? Which is to say, you're not just mitigating risks, and you're not just saying, let's protect the company brand. You're saying, how do we try to affirmatively create an environment, like in a product, that's going to help people be their best selves and create trust between people? And instead of creating collective action problems, trying to nudge people towards solving them and have an online environment that they're actually happy with, instead of one that people sort of use but complain about incessantly as not living up to the thing that we all wanted.

[04:10] - Speaker 2
So that's kind of like the view at the top, what the officer might do. I read a wonderful piece you wrote where you really dug in with the people who actually, on a day to day basis, take care of the trust and safety, the content moderators, and tried to understand their lives.

[04:32] - Speaker 1
So I sort of appointed myself, like, the oral historian of trust and safety at some point. I was fascinated with the work when we were doing it at Medium. And there's this question when you're trying to do the job well, which is, like, who else has been here before and what wisdom do they have? And this work is often done under a sort of secret service ethos, where people don't take victory laps and they don't talk about it publicly. But they're doing this work in a little bit of a way that's not public facing, and that's increasingly changing in a good way.

[05:02] - Speaker 1
But at the time, I was very curious about it. There was this problem of, like, a lack of trust and lack of communication between tech companies and folks who do this work and sort of government and the public. And there wasn't a chance for people to talk about, like, not the conceptual stuff that you're alluding to, which is, like, how do you reconcile free speech and safety things, but really the day to day, like, anthropological aspects of what does it mean to sit there and make a few hundred or a few thousand decisions a day about, like, should this piece of content come down or not?

[05:32] - Speaker 1
Should we ban this account or not? Do we think this is an authentic set of people, or is this a set of people posing as people for some other purpose? I decided to just interview, and because of the personal relationships that one has when you do this work for a while, interview a lot of other folks who did it, anonymously. And I think when I was a kid, I really loved Studs Terkel and the approach to getting people into a mood where they can tell you the really fascinating but mundane details of how they do their work every day.

[06:03] - Speaker 1
And because this is like a sort of a new job that didn't really exist before, it was a really fun thing to do. And I still do it, actually and still sort of interview people and publish transcripts because I think there's a feeling from the public of like, oh, this is a group of shadowy, powerful people who make decisions about speech, and it's worrisome and it's intimidating. And it's something that should be more transparent, which I think is true. But I think there's another aspect of which is just like there's a group of tens of thousands of people doing a job that didn't exist previously.

[06:34] - Speaker 1
And the sort of details of how that job gets done are sort of fascinating and revealing about, in some ways, the nature of the Internet itself. One of the things I arrived at in writing that thing was that trust and safety people need to have the mind of a philosopher, the gut of a police inspector, and the heart of a kindergarten teacher, because really there's, like, an investigative, inquisitive aspect to it. And there's an analytical, legalistic thinking aspect to it. And there's actually a caregiving, frontline worker aspect to it of wanting to help people.

[07:10] - Speaker 1
But I think that's what a lot of people who do the work are super drawn to, which is not so much the focus of the public discussion. So I think, again, nowadays people talk more about the secondary PTSD and other effects of doing this work. But I think at the time it was something I really wanted to put out there and get the chance to study a little.

[07:27] - Speaker 2
When I was reading about it and then thinking about it some more, it just really occurred to me how these systems, I imagine they start with a front end, that's kind of very much crowdsourced, like people flagging an article as potentially having an issue, and then it gets referred to people who have a look. And so they're like getting to sift through the dirt, so to speak, of what's being posted. And it felt like that could be really hard, like, hard in ways we don't think about.

[07:58] - Speaker 2
Yeah.

[07:58] - Speaker 1
And it's a certain kind of person that gets drawn to the work. And I sort of love this community and feel like a lot of the work that I do with that piece and also with the Digital Trust and Safety Partnership, which is sort of similar in some ways, is about supporting the people who do this work, because I think what happens maybe to caricature it for a second, like you have a tech company, as you say, you have a front end and then things just come in, right?

[08:23] - Speaker 1
People are like, I don't know how to log in or, like, my password's not working or I'm being abused and harassed. And so there's a huge range from relatively trivial problems to very deep personal and human dispute problems. And generally, the gnarliest human problems. There's usually one person at the company who's most adapted to or just like, the universe knows to send it to them because they care and they want to solve these problems. And they're not just going to be like, well, sorry, dude. They really want to dig in and think about, like, I want to help this person because we made this product and it helped cause the situation.

[09:00] - Speaker 1
And that person becomes the trust and safety person. That's the temperament. It's the person who the gnarliest, most profound, and I think sometimes traumatic human stuff gets put on at that company. And usually there's a temperament, I think, for the people who want to do it. But then at the same time, I think they have a sort of potentially tortured relationship with the product or the company, because they're dealing with the darker side of the implications of what you've built, and people both care,

[09:33] - Speaker 1
but they also sort of don't want to hear about it sometimes. And so you know what I mean? They're not the car designer. They're the seatbelt maker. They're the person who wants to say, like, maybe let's not do that thing. And in a world where you're trying to ship things quickly and you're trying to sand off as much friction as possible, that can be irritating. And so that sort of tension between advancing and innovating and thinking about the human impact and how to do things within, like, a reasonable ambit of safety, I think, is part of this hopefully dynamic, but sometimes very difficult, tension within companies.

[10:10] - Speaker 2
I'm sure, as a lawyer, while you're thinking about trust and safety as a brand promise, while you're thinking about it as a product issue, like, how do we make the very best product, there's also a part of your brain that has to be thinking about what's the risk to the company. It's been a fast moving bunch of years. How has that changed over, say, the last three, four years?

[10:36] - Speaker 1
Yeah. I think when I was doing this work, there are companies where the trust and safety and legal folks are sort of separate for good reason. And then there's places where there's overlap. And I think a lot of trust and safety folks are often ex-lawyers. I would think of the two hats as pretty different, because for me, when I put my legal head on, I'm a risk manager, and I'm thinking about, like you say, brand risk, sure, risk to people, like injury, harm. Are we going to get sued?

[10:59] - Speaker 1
How do we set ourselves up to be a good actor? And then the trust and safety hat, to me, was really this more humanistic thing that is in excess of what is legally required. It's not compliance. It's how do you structure things in a way that makes positive human outcomes more likely? And how do you mitigate things that are not necessarily legal responsibilities to all these people? But I think part of it is that you have these sort of concentric rings of, like, there's harm to individuals, there's harm to groups, there's harm to societies.

[11:30] - Speaker 1
And nowadays, people talk about harm to democracy, like harm to abstract values. And all of those things are potentially things you can cause or exacerbate with the technology. But it's not well internalized in concepts like brand risk. And so when you have that hat on, it says there needs to be somebody who's thinking about, are we putting ourselves on a course that makes harm to some important abstract value, like democracy or equality or tolerance, more likely? And the mechanics of legal thinking don't always put you in the right mindset to do that.

[12:03] - Speaker 1
And so I sort of like the bifurcation of these things. And I think the development of these two fields separately is probably good, although to address your point on the last three or four years, I think trust and safety is increasingly becoming, and maybe will become, a compliance thing as more countries are getting into the business of wanting to pass more specific laws around it, around abuse and harassment, around how do you respond to terrorist content, things like that, and that's very understandable. But I think the downside potentially is that instead of building systems on top of the law to try to accomplish something sort of affirmative, once you're in a compliance mindset, you're like, okay, how do we comply with the law? Here's the law, let's comply.

[12:42] - Speaker 1
Okay, we've complied. Okay, we're done. And to me, that's very different. What I think really good trust and safety folks are good at is thinking about structures that are on top of, in excess of, what is legally required, but will actually be sort of positive and affirmative. And I would hate for the passage of laws to make that less of a thing and to cause people to pivot to this compliance mindset of how do we just not get in trouble.

[13:09] - Speaker 2
At this moment, I feel like I have to think about the news of the past couple of weeks, and in particular, how Facebook has had a lot come out about research they did, and whether or not they took appropriate action on that research in order to keep its users safe. Now, Facebook has said pretty publicly, well, we need to do the research so that we know how to improve the product. And of course, we have been working on improving the product because of this research we've been doing.

[13:43] - Speaker 2
Do you have any thoughts about that position and, thinking back on your time at Medium, what maybe some key differences are between the tack that Medium took and Facebook has been taking?

[13:55] - Speaker 1
Yeah, that is a really interesting one. I think, to maybe go up a layer, and I don't want to get into the business of sort of defending the particulars of all the things, but I think the context for all of that action is really modern capitalism and Wall Street. And so to me, I'm sure to folks Facebook feels extremely powerful, and it feels like they have agency and they have choices that they can make. But they also sort of have a boss, which is Wall Street.

[14:24] - Speaker 1
And I think everybody has a boss. And so to me, you think about like, if Facebook chooses X versus Y, what are Wall Street analysts going to say? What's going to happen to the stock price? That's the sort of thing that I think is on their mind. And I think they can take ethical action within those parameters of, like, oh, we should make choices that keep people safe, do not maximize ad revenue, do not allow us to grow in X, Y and Z way and have a trade off.

[14:52] - Speaker 1
And our shareholders can just sort of understand and deal with it. But they're kind of also constrained in that way. Does that make sense? I think the larger structure is, like, when you look at your classic Silicon Valley startup of just, like, get some funding, issue some stock options, grow as much as you can, figure out monetization maybe now, maybe later, maybe IPO, maybe exit, those parameters already set a relatively predictable set of things up to happen. And I feel like all of these are sort of in the cards already.

[15:24] - Speaker 1
There are many decisions you can imagine Facebook making that I think, under our current system, would make people say, like, what are you doing? Maybe another way to say it is, I think one of the things that came out the last couple of weeks is the accusation that Facebook is putting profits over people, which seems maybe correct. I have no special knowledge about it. But also, that seems like more of a criticism of companies than it is of Facebook, and if you want to create guardrails that tell companies the right moments or the right ways to resolve that tension between safety to people, safety to communities, safety to people who are outside the US, how to protect things like democracy,

[16:04] - Speaker 1
we need to figure out incentives and mechanisms that put them into the way that we do capitalism, because right now, essentially, I think there's something slightly frustrating to me about asking people and companies to act ethically, whatever that means, when there's not a framework for them to do it. Like, whistleblowing, I think, is a profoundly ethical thing to do when you think the company is doing bad stuff, right? But if you're running a company like that, you have your responsibility to the shareholders, you have responsibility to all these other different stakeholders.

[16:34] - Speaker 1
Or even if you're a junior PM, there's the question of, like, what am I supposed to do that's ethical here that's going to help people out? I think there's a whole bunch of regulatory and other things that need to happen, and they're not, like, let's have a Ministry of Information and regulate speech. They're really questions about growth and about monetization and about how companies are supposed to resolve these corporate social responsibility questions. These are maybe boring topics to people, but I think they're where the action is, sort of, because telling executives at companies to do better or be ethical,

[17:08] - Speaker 1
when you have a very complicated set of incentives set up by late capitalism, is not really a way to solve it. You can't really ask individuals to act heroically and out of script with what their role is, consistently. To me, that's not a systemic solution. That's relying on individual heroes occasionally to break the script and do something special.

[17:28] - Speaker 2
What kind of regulation might we imagine that would get at the issues around the monetization model? Right.

[17:35] - Speaker 1
I think that is an extremely hard one. And when it comes to Privacy, I think people are rightfully saying there are some things that just have to be off limits. But then there are things that you can do where you can align incentives better. And I've worked on some of these questions about the assumptions embedded in whether tracking necessarily makes ads more effective or better or more valuable or more worthwhile. I think if you get into the nitty gritty of CPMs and how we're valuing what ads are supposed to accomplish,

[18:09] - Speaker 1
those are, again, sort of mundane and technical questions around, like, how do we not inadvertently overvalue tracking or overvalue personalization, things like that that make it rational to push to track people as much as you can.

[18:24] - Speaker 2
Right.

[18:25] - Speaker 1
So that's, like, one whole set of things. Then there's the corporate law stuff, where I'm not an expert, so I leave it to the folks who do this work. But I think things like, if you want a board to take some of these other things into consideration, there need to be stricter laws and guidelines around how exactly these incentives work. And what are the protections for a company? Like, if you have a board that says we're going to not grow, we're going to not take this ad revenue.

[18:50] - Speaker 1
We're going to protect people in X, Y and Z way. How do you protect them from the shareholder lawsuits that follow? Right. I think those are the sort of questions that people need to figure out.

[19:02] - Speaker 2
If I take the comparison of Medium versus Facebook down the road a little further, Medium very famously made a decision not to use advertising as its monetization methodology, which means that both Facebook and Medium probably live or die on engagement. That's not different, but it really makes Medium focused on the content itself as what's being sold. Whereas with Facebook, there's the saying that the people are the product, rather than the content or the service.

[19:38] - Speaker 1
Right. And there's lots of stuff that I'm really proud of from Medium, and one of them was that relatively quickly after the 2016 election, we decided to get out of the ad business entirely and just go subscription. I think at the time we were pretty early in saying, okay, maybe somebody else can solve these incentive misalignments related to advertising and the attention economy and creating histrionics and disputes and all these other attention hacking techniques that we didn't want to get into.

[20:09] - Speaker 1
So we were just like, okay, we're going to do straight up subscription. If people find value in it, they'll pay cash for it, and if they don't, then they won't. And I think it was cool in a lot of different ways. And I think it's to Ev Williams' credit that he was pretty early in saying, we're going to try it this other way, and we're going to see if subscription is like a cleaner fuel. I don't know if it's done yet, but I think it's sort of indeterminate.

[20:35] - Speaker 1
You have Substack and lots of others that have moved to subscription, too. I think one of the things that allows you to do is do a whole lot less tracking, which is great. So at Medium, I was always super proud that in the early days we adopted the Do Not Track standard, which I think virtually nobody else did, in terms of allowing people to browse truly anonymously. And I think we sacrificed up to 10% or 12% of our data sometimes because of that sort of stuff.

[21:00] - Speaker 1
And it was just a choice he made. And in other ways, the theory behind subscription was sort of that people will signal what they value with cash. And then we're going to get this much purer sense of what adds value to people's lives. And then we can focus on that. And that's sort of true to an extent. But I think one of the things we've learned in the last couple of years is that there are folks who pay for content and say, like, okay, this is valuable.

[21:26] - Speaker 1
It's hard to produce things well. It's hard to produce things that are fact checked and accurate and richly well written. It's worth paying for. And then, understandably, with the experience of getting things for free in an ad supported way, there are folks who are not going to pay for it. And then you sort of have a bifurcated public, where you have folks paying for content, which is arguably higher quality, and then folks who don't want to pay for it and aren't going to be forced to, but are potentially reading a whole range of things that have less certain provenance, less certain accuracy.

[21:55] - Speaker 1
And then they each are sort of in their own spirals. I think it fixes certain things. But then there's other things that it raises, that it's not so simple. So maybe a shorthand is like just getting rid of ads doesn't solve everything right? Like it solves a couple of things. It creates some new problems. But then it leads back to old questions that you may remember from the 90s, like the digital divide and things like this around pushing people to pay cash for something as opposed to being able to do it in an ad supported way.

[22:28] - Speaker 1
So I think that was one sort of learning. Another one was that it was always very hard for me to see that users cared about Privacy, except in the negative, where if people feel like their Privacy is being violated, they get angry, which is good. But most consumers are not in a position to pick things based on Privacy. And there's not even necessarily, like, a great audit trail where people can say, okay, I know this product is really good on Privacy. I know this other product is really crappy on Privacy.

[22:54] - Speaker 1
And so with Medium, I felt like we were great. We really bent over backwards. I think we had a trailing deletion of IP addresses, and we got rid of all sorts of data, because once we weren't doing ads anymore, it wasn't useful to us. And we were not trying to infer things that we thought would lead to, sort of, worse justice outcomes. But it was hard to get consumers to get that and hard to get consumers to act on that. And that's, I think, one of the big giant Privacy questions, which is, when you have the convenience versus Privacy trade off in everyday life, people's revealed preference seems to be that we like convenience.

[23:35] - Speaker 1
Or maybe the market doesn't give them enough really good Privacy options for them to manifest it and show that they care about Privacy. But I think the nitty gritty, everyday consumer dynamics of it are that if you go out there in the world as, like, a strong Privacy preferrer, you're a little bit of an idiosyncratic buyer, and there's not, like, a ton of stuff for you. Speaking from my own life.

[23:58] - Speaker 2
I kind of want to return to something you said right at the beginning, where you talked about the distributed web and how you're focusing your attention now on these issues and how they apply to the distributed web. Can you define for me the distributed web? I'm not sure I'm fully up on what that even means myself.

[24:16] - Speaker 1
Yeah.

[24:17] - Speaker 2
Fair enough.

[24:18] - Speaker 1
The main project I've been working on is IPFS, through Protocol Labs, but there's a whole bunch of projects out there. So I would say, at a high level, what people are calling, like, Web3 or the distributed Web, and the Internet Archive has been running a DWeb gathering since about 2016, is an enthusiasm for architectures that sort of, by their nature, do not cause a centralization of power or dependencies or other things. At a computing level, you'll have sort of more nodes on the network where there's different actors who have the ability to have an impact.

[24:55] - Speaker 1
And so you don't centralize as many decisions or as many resources in any one place. I think that's, like, the highest abstract way to put it. So with IPFS, it's storage. The idea is that the current world, where we have AWS and Azure and Google Cloud and a couple of other main cloud providers, leads us to a world where we have a couple of very powerful decision makers who are providing a ton of the storage architecture of the Internet. Whereas the alternative that IPFS envisions is a very large number of storage providers, each of whom might have their own preferences and policies, including about content, but also about other things.

[25:34] - Speaker 1
And so if you have that very large plurality of options with diversity, it's not necessarily a panacea, but it puts us in a way better position for people to act differently than what we have now, which is like a small number of options where people feel pretty locked in.

[25:54] - Speaker 2
I remember a movement a few years back that was, for example, trying to go from content being in a location to trying to have named content, so that it could be at many locations, it could move around. Is it that kind of idea?

[26:12] - Speaker 1
It can be. And so with IPFS, that content addressable storage system is part of how it's working. So instead of having a URL, or multiple URLs where you might go find the same file with different URLs, you'll have sort of one hash for a file, where it'll get pulled from different places. And so this gets into incorporating the CDN type of functionality more into the storage system itself, right, where you have the ability to fetch multiple copies from different places without having that CDN layer to make more copies and put them all over the place to increase your efficiency and things like that.

[26:51] - Speaker 1
So I think that is part of it. And I think the folks who are into this as a computing thing are very into the sort of, like robustness and resiliency and redundancy that this sort of thing adds to the architecture. And so, for example, people talk about, like, dead links as being a thing which hopefully will become less of a thing. Once you have multiple copies that are much more fetchable, you won't have expired links, you won't have expired images, and you can have an internet that has a little bit of a better memory and is more functional for people.
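To make the content-addressing idea concrete, here is a minimal sketch in Python. It is not IPFS itself: real CIDs carry multihash and codec prefixes, and the Provider class and fetch helper are invented for illustration. The point is just that the identifier is derived from the bytes, so any provider holding the same bytes can serve them, and the fetcher can verify what it received.

```python
import hashlib
from typing import Dict, List, Optional

def content_id(data: bytes) -> str:
    """Derive an identifier from the content itself (simplified stand-in for a CID)."""
    return hashlib.sha256(data).hexdigest()

class Provider:
    """A hypothetical storage node; many independent ones can hold the same bytes."""
    def __init__(self, name: str):
        self.name = name
        self.blocks: Dict[str, bytes] = {}

    def put(self, data: bytes) -> str:
        cid = content_id(data)
        self.blocks[cid] = data
        return cid

    def get(self, cid: str) -> Optional[bytes]:
        return self.blocks.get(cid)

def fetch(cid: str, providers: List[Provider]) -> bytes:
    """Pull the block from whichever provider has it, verifying integrity by hash."""
    for p in providers:
        data = p.get(cid)
        if data is not None and content_id(data) == cid:
            return data  # same hash means same content, wherever it came from
    raise KeyError(f"{cid} not available from any queried provider")

a, b, c = Provider("a"), Provider("b"), Provider("c")
cid = a.put(b"hello, distributed web")
b.put(b"hello, distributed web")       # replication yields the same identifier
print(fetch(cid, [c, b, a]))           # served by b or a; c simply lacks the block
```

Because the name is the hash, a link only goes dead when no provider anywhere still holds the bytes, which is the "better memory" property described above.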

[27:21] - Speaker 2
So in this future distributed web, it sounds really great for people who currently have issues with censorship, to be able to put their content up when there's, like, no one place to go and take it down. Right. On the flip side, with this trust and safety hat on, it also seems like it's now a structure that's hard to moderate.

[27:45] - Speaker 1
Right. So this is what I've been spending a lot of time both thinking about and trying to build software for, because the way that I'm thinking about it nowadays is that there's a very strong value of censorship resistance in the distributed web, which is good. You don't want a small number of choke points, or a single choke point, where a government or corporation or some powerful political actor or whatever can apply leverage and get something to become inaccessible. Right. And that's sort of what people feel like the world we live in now is. At the other extreme, though, I at least had this intuition that it probably can't be that censorship resistance means that all things are undeletable forever, because some stuff is really bad and really illegal.

[28:31] - Speaker 1
And I think the notion of having a system that can't be intervened into at all is attractive, potentially, in its extremity. But I think, also, given the pragmatics of human conflict and also governments, it's probably unrealistic. Right. So when you think about the extremes of content moderation, like CSAM, child sexual abuse material, and terrorism, things like that, there have to be ways to intervene into what's the most illegal and the most taboo content, to allow people to not have copies of it if they don't want to and to make some of these things inaccessible.

[29:09] - Speaker 1
But I think the sort of incredibly hard and interesting question is how to arrive at a place where it's possible for people to make interventions, but the number of actors is much larger, and the amount of power that each individual actor has is less.

[29:27]
Right.

[29:28] - Speaker 1
So that the ability to sort of kick something off the Internet is much harder. So I'm trying to imagine where if you have something that's very illegal, probably you'll have lots of actors concurrently deciding to not make it available, and you might even wind up with the same outcome as we have now. But what you have is like thousands of actors deciding that something is not what they want, as opposed to one actor making the decision for everybody. And so that I think is like where you get into the complexity of what I said earlier, like sort of doing content moderation, where it's consistent with these values of distributing power, of having a lot more actors in the network who can make these sort of micro decisions for themselves and others.
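As a rough illustration of that "thousands of actors deciding" point, here is a hypothetical sketch; the Node class, the denylist mechanism, and the numbers are assumptions, not any particular network's design. Availability ends up being the aggregate of many independent per-node policy decisions rather than one central takedown switch.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Node:
    """A hypothetical network participant with its own local content policy."""
    name: str
    denylist: Set[str] = field(default_factory=set)

    def will_serve(self, cid: str) -> bool:
        return cid not in self.denylist

def availability(cid: str, nodes: List[Node]) -> float:
    """Fraction of nodes willing to serve a given content identifier."""
    return sum(node.will_serve(cid) for node in nodes) / len(nodes)

nodes = [Node(f"node-{i}") for i in range(1000)]
bad_cid = "cid-of-clearly-illegal-content"  # placeholder identifier

# Many operators independently decline to serve it; no single actor decides for everyone.
for node in nodes[:970]:
    node.denylist.add(bad_cid)

print(availability(bad_cid, nodes))  # 0.03: effectively unavailable, with no central ban
```

The outcome can look similar to a centralized takedown, but it emerges from many small decisions, which is the distinction being drawn above.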

[30:12] - Speaker 2
So when I think about these power imbalances, it sounds like there's another maybe issue here, and I'm going to cast it in terms of trust and safety versus the Privacy side of what you like to do. And that is, how do I, as a powerless internet user, address a Privacy violation in a distributed web? Or, like, my phone number ends up out there because of a bad actor. What do my resources become? What can I do about that? Because I don't have the resources that multiple governments might have if it was something that was clearly really bad content that we're talking about.

[30:59] - Speaker 1
So, like, another way of putting it is, if you're getting doxed, right, what do you do? I think doxing is sort of interesting because it's not universally agreed upon. But it seems to me that most internet users feel like doxing is bad. And it's, like, a surprisingly unanimous sort of thing. And then, from a Privacy point of view, to me, one of the things that's interesting about doxing is that it's often the aggregation of publicly available information in a way that is menacing. But it is not exactly.

[31:28] - Speaker 1
People experience it as, like, a Privacy harm, and it is like a threat or a probability of having some harm. But then also people say, like, dude, your phone number is very available. It's in the phone book. So what's your problem? It's public. And that, I think, goes to the intent and other contextual aspects of it, which is, what's the Privacy harm we're trying to protect you from if you're, say, being doxed, as opposed to your phone number's in some database and it's available,

[31:57] - Speaker 1
But maybe not like, ubiquitously visible to tons of people.

[32:01] - Speaker 2
Well, it is often attached to, like, when people get doxed, it's generally attached to a subjective statement of misdoing and a call to action to a large community, often mostly unknown to the person who's making that call to action. So I think that's why it feels like a Privacy violation.

[32:23] - Speaker 1
Yeah. And I think it is. I think it gets at, to me, the harm aspect of it, because the call to action is almost the part that's the problem. And the phone number is sort of the mechanism. And the way that I think about a lot of that stuff, too, is that one of the promises of doing it in a decentralized fashion is that we don't need to make your phone number unavailable. We just need to make it less available, and to a large number of people. I think a lot about proportionality, about getting out of the paradigm of banning, not banning, take it down,

[33:01] - Speaker 1
don't take it down. In the same way that a lot of folks are fascinated with amplification now, the question is, we don't need to disappear your phone number from the internet. What we want to do is not make it easily available to a group of people who seem to be clearly doing it in a way to try to harm you. So in that one, I guess it really depends on what service we're talking about and what thing we're talking about. But I think in a DWeb environment, it's very possible, especially until, like, laws get clear on this stuff,

[33:33] - Speaker 1
that you may have less recourse than you do with a conventional web company, because there are going to be potentially a bunch more copies of it out there that are much harder to chase down. But I think what you'll want to be able to do is send a grievance that gets proliferated to a bunch of the nodes. One thing you need is a mechanism for filtering grievances in a way that's rational, like a traffic cop, because you're going to have more of them. The upside of having many providers is that you have this plurality and you have more actors.

[34:08] - Speaker 1
The downside, of course, is that you need to do a lot more traffic copping of grievances, because you now have to give people notice of, like, oh, something bad has happened. I think part of what I'm trying to figure out is how to invest in getting the information to people when something bad is happening, so they can figure out what to do about it. And then I think a second thing would be trying to figure out how to allow a provider or somebody else to decline to serve this, or to throttle it and make it less available.

[34:37] - Speaker 1
I don't have awesome solutions for all this stuff, but I think a lot of the answers will be in dimming things, as opposed to banning things.
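And here is one hedged sketch of what "dimming rather than banning" might look like, under the assumption that a grievance notice gets proliferated to many providers and each applies its own policy; the severity labels, probabilities, and function names are purely illustrative, not a proposal from the conversation.

```python
import random
from enum import Enum

class Response(Enum):
    SERVE = "serve"
    THROTTLE = "throttle"   # still reachable, but de-prioritized or rate-limited
    DECLINE = "decline"

def provider_policy(severity: str) -> Response:
    """Each provider applies its own policy to a propagated grievance notice."""
    if severity == "illegal":
        return Response.DECLINE
    if severity == "targeted_harassment":
        # most providers dim it; a few may keep serving under their own rules
        return Response.THROTTLE if random.random() < 0.9 else Response.SERVE
    return Response.SERVE

def broadcast_notice(cid: str, severity: str, provider_count: int = 1000):
    """Send the notice to every provider and tally their independent choices."""
    tally = {r: 0 for r in Response}
    for _ in range(provider_count):
        tally[provider_policy(severity)] += 1
    return cid, tally

print(broadcast_notice("cid-of-doxing-post", "targeted_harassment"))
# typical result: most providers throttle, a handful still serve, so the content
# is dimmed for the audience that was being rallied rather than erased outright
```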

[34:47] - Speaker 2
As I try to imagine that future, a thought that comes to mind very quickly is the idea that what at a very surface level might appear to be the same web will actually be a different web depending on who you are and where you're looking from. And we see this already today. Like if I type a search string into Google, I'll get a completely different set of answers than somebody else because it's learned about me, and it sounds like to some degree, the entire experience of the web may become more like that.

[35:22] - Speaker 2
Yeah.

[35:23] - Speaker 1
And I think this is, like, the current ambivalence about personalization, which is what I think Larry Lessig used to call Little Brother instead of Big Brother. There was the promise of all the goodies that come from personalization, and then all the tracking that's necessary to do it, and not knowing whether the personalizing force is acting in your best interests or not.

[35:46] - Speaker 2
Right.

[35:47] - Speaker 1
I think that, to me, is the next set of questions, because I think tracking, like having a bunch of data that's trained on your preferences or whatever information about you, can be nefarious or not, depending on a bunch of other things. But the structure of how you create a container for that that's going to act in your interest is one of the things that's hugely unaddressed.

[36:13] - Speaker 2
I think what I'm trying to get my mind around is, does this future of the distributed web, with a more dimming kind of version of moderation or censorship or course correction of whatever nature, does that help with the media bubbles that we live in, or does it actually solidify them? Like, right now, if you watch CNN or you watch Fox, you have a very different idea of the exact same current events and what's the truth. And if we have this distributed web where, depending on how I view it, I might get the parts that are left leaning dimmed, or I might get the parts that are right leaning dimmed, does the whole web start to have that bubble effect as well?

[37:10] - Speaker 1
Right. That is a really interesting question that I admit I haven't thought a lot about. I will say, I don't know that I'm a filter bubble skeptic, but I do think that the discourse around filter bubbles has a couple of things that are not generally discussed. One of them is, like you sort of allude to, the fact that Fox and other media outlets are as much part of the push as technology. I think the role that media companies play in talking to a more and more extreme audience is part of it.

[37:44] - Speaker 1
And I think the amplification effects exist in the web, but they don't exist innocent of the intent of people to create content that increasingly does that. The second, larger thing, and this probably makes me sound like an old web person, is that I question the era in which Americans of all classes and races sat down and dined together without filter bubbles. I don't think this happened in the 1860s. I don't think it happened in the 1930s. I don't think it happened. I think the era in which we all had, like, a shared national discourse is really a shorthand for the mono discourse that came at us from media that didn't care about most things that non mainstream communities cared about.

[38:26] - Speaker 1
So to me, the so-called filter bubble thing, I get it. But I also think that it's partly because people are able to talk about what they care about online, and that is more marginal. But I am not nostalgic for an era where we had, quote, unquote, one national discourse, which was defined by three networks talking at you. And so I don't think the fragmentation is great. But I actually think the many communities on Twitter that you can learn from, in the ways that they criticize the news, is great to me.

[38:56] - Speaker 1
When I lurk on Twitter and look at different communities talking about an event in a way that would never have been read at me on, like, the nightly news, I feel way more educated than I would have been, and it's not necessarily bad that those ten communities are only talking to themselves. I'm not actually sure where that goes with your question, but I think if individual actors are surfing through all of these different modes of the web, that's probably great, or potentially

[39:28] - Speaker 1
okay. I think the sort of fragmentation of people into tribes that hate each other is not great. But I'm not sure that the multiplication of nodes on the network necessarily leads to that, especially because any given person, in context, can hopefully exist in a whole bunch of them.

[39:49] - Speaker 2
So a few weeks ago, we interviewed John Swartz, and the context of our interview, then, was all of the congressional action pushing back on big tech and that this seems to be a place where the left and the right actually do come together. Everybody's upset at social media. Everybody is upset at the aggregation of power. And the question to him was, is there a sea change happening? And he seemed very firmly to think that there was that government is getting ready to act and to make changes in this regulatory landscape.

[40:27] - Speaker 2
Do you have any take on that? What's your opinion?

[40:32] - Speaker 1
So I'm not much of a congressional prognosticator. It does seem that if the only thing the right and the left can agree on is wanting to regulate tech and disliking tech, that's worrisome if you're in tech. I think also, for me, as somebody who got into tech because of the idealism of the people in it, it's just sort of disappointing, because the folks that I know want to build things that are useful to people and still do. I think the naivete that has led some companies down this road is dangerous and needs to be addressed, right, like the idea that you can go into things with naivete and idealism and wind up making things that are pretty nefarious in their outcome.

[41:18] - Speaker 1
I think that's a thing that we need to do a bunch more soul searching on. I think the fact that people no longer think tech gives them hope is also just sort of sad to me. I don't think they have to, or I don't think we should just have technocratic solutions to things. But I'm frustrated by the sort of Washington and Silicon Valley dialogue. I used to go to Washington a bunch to try to build bridges for Medium, and for, just generally, people who care about user generated content and online expression, and there's this sense of lack of trust, that companies are secretive and hiding things and doing bad stuff.

[41:54] - Speaker 1
And then I think the sense from companies that no good deed goes unpunished, that they can try to do a study, and then the study will be used to show them that they were bad, or they can make a gesture and say, we want to be regulated in X, Y and Z way. And people say you must be doing that because, you know, those regulations are going to help you kill your competitors. And so the low trust nature of the dialogue right now is just very disappointing.

[42:23] - Speaker 1
If that means regulations will happen, I think it's sort of like, sure, maybe. And I think Privacy regulation is probably overdue. I think the larger problem is that speech regulation would be generally bad. Maybe to reframe the question a little bit: one of the endlessly scary things for me, as a trust and safety person or a person who works on UGC products, is that it's a pass-through for people, for citizens.

[42:49]
Right.

[42:49] - Speaker 1
I had a conversation once with a regulator who said, like, oh, if a TV station had broadcast this bad thing, we would have pulled their license. So we're going to pull your license. We're coming after you. But the point is, social media and UGC, it's an indirect modality. The expression isn't Facebook's. The expression isn't Medium's. The expression is people's expression. So it's really a pass-through. Right. If you go after the company that you're mad at, it passes through and winds up hurting the people who are trying to express themselves on it every time.

[43:21] - Speaker 1
Right. This reminds me of trade protections, where the prices get passed on to people. And so with this, it's like being tough on tech, whatever that means, especially when you're in the speech area and you're starting to call balls and strikes in terms of which speech is in and out of bounds from the government. It's a pass-through. Right. You think you're being tough on companies, but what you're really being tough on is people, and it's very hard to try to persuade people of that and not be greeted with skepticism.

[43:54] - Speaker 1
Right. So I think that's one of the challenges now is to not wind up with a bunch of speech restrictive laws that we wind up regretting a ton later.

[44:05] - Speaker 2
Before we wrap up, I would kind of like to flip it and talk about something that really excites us. And I know that you've chosen to give your time to the MobileCoin Foundation, and I'm curious, how did we engage you? How did we snare you into that?

[44:22] - Speaker 1
I guess I've known some of the MobileCoin folks, like Josh and Brady, for a while. And aside from the ambition of the project itself, thinking about financial Privacy and how people choose to transact with each other is an interesting angle of the Privacy question. I was very drawn to how committed Josh and others were to exploring the question of Privacy in society, art related to Privacy, and getting people to think about this as a value that is sort of incorporated into your everyday thinking.

[44:59] - Speaker 1
So one of the things we're doing at the foundation is hopefully going to be some arts grants for people who are exploring the question of Privacy and how it fits into their life. Maybe I'll tell a quick angle, like, some of the writers who influenced me, because I feel like we talk about Privacy a lot in the context of data transfer and tracking and a lot of the mechanics of the Internet. But for me, there are a lot of writers that made me think about this that I still use as sort of my touch points today.

[45:30] - Speaker 1
One of them is Erving Goffman, who is a sociologist who sort of wrote about everyday life like a theater critic. There's a book called The Presentation of Self in Everyday Life, and there's a book called Stigma, and a whole bunch of other things. And part of what was so amazing in The Presentation of Self is the insight that social life is a collaborative performance, that when you catch somebody being awkward, part of what you're doing is catching people at a moment where the performance wasn't quite up to snuff, and you can sort of choose to be mad at them for it, or you can sort of choose to help them get back into character and do their performance well.

[46:10] - Speaker 1
And he has this notion of backstage, which is that everybody has a backstage where they need to prepare their social performance, where they need to explore and get into character and look themselves in the mirror and figure out who they are and figure out how they're feeling, and that there's something important about protecting other people's backstage for them, which allows them to be out there in the world performing with you. And there's something very important in acknowledging it, not intruding on it, and also really protecting it for other people.

[46:42] - Speaker 1
And so I use this concept of backstage a lot when I think about Privacy, because it's more of a humanistic concept as opposed to a technical concept. But when I think about what it is that you're trying to steward for people when you're trying to create products that protect Privacy, part of what you're doing is creating an ability for people to have a backstage that is rich and well defined and doesn't have things in it that they don't want. I just wanted to share that with folks because I just love the book and I love the concept.

[47:17] - Speaker 1
And whenever I think about a product like MobileCoin, or I think about other things that I'm working on, like a distributed machine learning thing I've been working on, the question that comes back to me is, for normal folks, if this becomes part of the products that you use every day, is this going to give you this bubble or this environment where the way that you act in the world, the way that you buy things, the way that you use technology, you have, like, a safe and rich backstage that is better defined and not being eroded and not being sort of commercialized?

[47:49] - Speaker 1
And so MobileCoin is one of those things. I was very interested in doing this, and I feel like the community that's working on it is interested in all these other dimensions of the impact it's going to have. So I'm very happy to be part of it.

[48:01] - Speaker 2
I feel like we've kind of backed into a couple of questions that we ask all our guests, one of which is when did you first realize that Privacy was important to you? I think we got a pretty good explanation of at least some of that.

[48:13] - Speaker 1
Yeah, I'll do the short version. I spent a lot of time alone as a kid. I grew up sick and had an autoimmune thing, and so I ended up reading a bunch. But also, as somebody who was aspiring to be a writer at that time, I spent my 20s doing literature, and the sense of aloneness and solitude, and the ability to be alone with your thoughts and also think things that were weird or embarrassing or not something that would be possible in social life, I think I realized that was a very important part of becoming who you are.

[48:53] - Speaker 1
And I feel like still, when I'm trying to write something that is actually interesting or original, I tend to do it in the middle of the night, with this feeling of solitude, that there's no judgment and there's no social world that exists anymore. And for me, that sort of came out of these feelings as a kid of being alone with books and having the freedom of not having social judgment. And so to me, that is where a lot of the Privacy concept comes from, which is the ability to depart from the understandable, well meaning, but also very constraining social world, and depart and explore and figure some things out, but then also sort of eventually return.

[49:36] - Speaker 1
I feel like Privacy is a very important part of that in terms of political movements and figuring out things that are not perfectly conventional, but then also for people artistically and personally, to just depart from being on stage and being in a surveilled social world and be in some other space.

[49:58] - Speaker 2
What do you think are the biggest threats to Privacy? And what do we need to be watching out for right now?

[50:06] - Speaker 1
I am very skeptical of anybody who would say that norms around Privacy are eroding, because I think folks will get up with very little empirical information, or maybe even lots of empirical information but plenty of motivation, to say, like, oh, the kids these days, they don't care about Privacy, or the future society we're heading towards is going to be communalist in X, Y and Z ways where Privacy as a concept won't be important or won't make the same sense. And I don't want to be resistant to those things.

[50:37] - Speaker 1
And I think figuring out, like, communal responsibility to each other in the US is important, but I'm very skeptical of the sort of historical narrative of, oh, we're reaching the end of an era of Privacy, this is a naive figment of the European Enlightenment imagination, you should let it go, and the kids understand this because the way they use Instagram shows that they don't care about it. I'm super skeptical of that, because I think that narrative isn't just unproven. It's so instrumental, and you can imagine there's lots of folks who would like that to be true.

[51:07] - Speaker 1
It seems like a perfectly normal thing to say, but I think it's very worth saying that, historically, there are ways to be a functioning, communal, interdependent society and still have this backstage space for people to develop. I think holding the line on that sort of stuff is super important.

[51:28] - Speaker 2
I'd like to end the show by asking you, what do you think of the notion of Privacy as the new celebrity? Do you agree? Do you disagree?

[51:38] - Speaker 1
I guess it depends on what you mean. If you mean in the sense that it's a thing that lots of people will want, or it will be very valuable and scarce, maybe. I was thinking about, does everybody get 15 minutes of Privacy in the future? I could imagine a world where it takes a bunch of work and a bunch of resources to ensure you have some Privacy. And I could see that being a very desirable and potentially scarce thing. In that sense, I get it. Maybe I'll flip it, though, and leave you with this: there's a quote from Rilke that the highest form of love is being a protector of somebody else's Privacy.

[52:20] - Speaker 1
And so I think, instead of the admiration that people get, like, the buzz of wanting to be known and being celebrated, which is what has been satisfying for some, I think the notion of knowing that other people were working to give you Privacy, knowing that other people are actually fighting for you to have that experience of being unselfconscious and unself-aware and private in yourself, I think that would be sort of a great feeling that is equivalent, in a weird way, to celebrity.

[53:00] - Speaker 2
We've been speaking with Alex Feerst, CEO of Murmuration Labs and a board member of the MobileCoin Foundation. Alex, thanks so much for coming on the show.

[53:09] - Speaker 1
Thank you, Henry.

[53:10] - Speaker 2
Take care. That's it for episode nine of Privacy is the New Celebrity. If you haven't already, please subscribe. You can find us on Spotify, Apple, all the podcast apps. And if you like what you hear, please leave us a review. In the meantime, you can find our complete archive of shows on mobilecoinradio.com.

[53:44] - Speaker 2 
That's also where you can find our radio show every Wednesday at 6:00 p.m. Pacific Time. And as we like to say at MobileCoin, Privacy is a choice we deserve.