Ctrl-Alt-Speech

Murthy, Reddit, and the Speech Deciders

March 22, 2024 Mike Masnick & Ben Whitelaw Season 1 Episode 2

In this week's online speech, content moderation and internet regulation round-up, Mike and Ben cover: 

  • Supreme Court Seems Skeptical Of The Claims That The Federal Government Coerced Social Media To Moderate (Techdirt)
  • Reddit’s I.P.O. Is a Content Moderation Success Story (New York Times)
  • Elon Musk's X Is Suspending Accounts That Reveal a Neo-Nazi Cartoonist's Alleged Identity (Wired)
  • The Risks of Internet Regulation (Foreign Affairs)
  • EU to impose election safeguards on Big Tech (Financial Times)
  • Canada’s Online Harms Act is revealing itself to be staggeringly reckless (Globe and Mail)

The episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor Block Party, which builds privacy and anti-harassment tools to increase user control, protection, and safety. In our Bonus Chat at the end of the episode, Block Party founder and CEO Tracy Chou discusses the impact of harassment on self-censorship and explains how she is making navigating privacy and safety settings on major platforms easier for users through her tool, Privacy Party. 

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

So Mike, to borrow from the famous X/Twitter post prompt: what is happening, question mark, exclamation mark?

Mike Masnick:

What is happening is that we have this new podcast out and it turns out that it's a lot of work to do a podcast. So I'm kind of exhausted.

Ben Whitelaw:

Me too.

Mike Masnick:

What, what is happening, question mark, exclamation point, with you?

Ben Whitelaw:

Yeah, I too am finding the world of a content creator, in audio form, pretty invigorating but also tiring. Um, but yeah, I think this week I'm super interested in how content moderation actually might be having a bit of a mini renaissance, and we'll talk about that in a bit.

Mike Masnick:

What? Incredible. How could that be?

Ben Whitelaw:

Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust and Safety Fund and by today's sponsor, Block Party. We've got a great chat with Tracy Chou in our bonus chat later on today, and she'll be talking about the impacts of harassment on users on platforms, how it can lead to self-censorship, and what she's doing about it, particularly around how she thinks about privacy and safety settings. She has a really interesting, fun little anecdote about her tool being a privacy-conscious techie best friend, which is something that I have been referred to as myself, but would always like one more of in my life. Which is a nice segue, I think, on to introducing my co-host for today's podcast, Mr. Mike Masnick from Techdirt. How are you doing today, Mike?

Mike Masnick:

I am doing very well. You say that as if I am only the co-host for today.

Ben Whitelaw:

And always, thankfully,

Mike Masnick:

Yes,

Ben Whitelaw:

please don't leave.

Mike Masnick:

I'm doing well. How are you?

Ben Whitelaw:

I'm good. I'm good. Yeah. For those who are listening for the first time today, I'm Ben Whitelaw. I'm the founder and editor of Everything in Moderation, which is a newsletter about trust and safety and content moderation. And, uh, yeah, Mike, we're in this together. Don't worry about it. I'm here for the, I'm here for the ride.

Mike Masnick:

Okay, good. But yeah, I mean, the first episode of the podcast came out last week. This is the second one that you're now listening to. If you are just listening to this and you didn't listen to last week's, I think you can still go back and listen to it. I think it's still relevant. We're, you know, we're keeping it newsy and specific to the week, but I don't think that it's gone rotten in just seven days.

Ben Whitelaw:

No, no, I think that we did well in that regard. The TikTok story is still alive and well. And, um, as we'll find out later on, you know, the EU is taking aim at other Chinese platforms which they deem to be of a significant size to regulate. So it's all good stuff still.

Mike Masnick:

Yes, yes, yes. And I think we really appreciate everyone who's been listening. We've had a really great response, and we've gotten some really great feedback, including that Ben was too quiet on the last podcast. So Ben, be louder.

Ben Whitelaw:

Lean in, to quote Sheryl Sandberg, lean in. And, uh, you gave me some great advice, Mike, which is: eat the mic. And so I'm, I'm full of mic right now, and hope people can hear us, and particularly me, in this episode today. So thanks for the feedback, everyone.

Mike Masnick:

Excellent. And if you are listening to this, I hate to do this, because I am just sort of naturally inclined not to be self-promotional, but as everybody tells you on every podcast: please go rate, review and subscribe to the podcast. It helps us. If you like the podcast, it does help get it more attention elsewhere. Depending on the platform that you're on, there are usually options to rate and review it. Apple Podcasts is the main one, and apparently the most important one, so people tell me. If you like the podcast, please go rate, review and obviously subscribe. It really helps us out.

Ben Whitelaw:

Definitely. And if you leave a particularly good review, whether it's bad or positive, we may even read it out on the podcast in the future. That's our promise to you: write a well-crafted, interesting review and it may even feature. How about that?

Mike Masnick:

Ooh, such promises.

Ben Whitelaw:

Great. Let's dive in there, Mike, enough of the preamble. We've got a couple of big stories today to talk through: one kind of US legal story, as was the case last week, and a big platform story for us as well. We're going to start, as really the week did, with the story you've been looking at: the Murthy oral arguments. I'm trying to keep tabs on the importance of this case, and I've been reading a bunch of analysis on it. But can you talk us through kind of what it is and why it's important, first of all, before you try to read the runes on what this is all going to mean?

Mike Masnick:

Yeah, this was a big case. It was heard in the Supreme Court on Monday, and when the case was initially filed, I didn't think it was that important. In fact, I thought it would just be tossed out of court really quickly, because the framing of the case was kind of silly. It was this idea, originally brought by two states, Missouri and Louisiana, against the Biden administration, claiming that they were censoring people online by convincing social media to censor conservatives, is the way it was framed. The whole setup of the lawsuit was weird too, in that there are questions about: are the states being injured here? Do they even have, you know, what's known as standing to actually bring this lawsuit? They sort of tried to fix that by throwing in a bunch of anti-vaxxers, um, as co-plaintiffs in the lawsuit, who did, you know, either have content pulled down or were suspended briefly from various social media platforms, to make the case acceptable under the law. Um, there were some weird things too, where, you know, part of the lawsuit also blamed the Biden administration for the very brief decision by Twitter to block links to a New York Post story about Hunter Biden's laptop, which occurred when the Biden administration did not exist, because it was the Trump administration at the time. So it felt like sort of an odd thing to claim in the lawsuit. And that was just sort of the beginning of the problems of the lawsuit, and its sort of loose understanding of what is truth and what is factual. But the underlying concept behind the lawsuit is actually a really interesting and somewhat thorny legal question, which is: especially in the U.S. context, where we have a First Amendment that says the government cannot do anything that is an attempt to suppress speech, where is the line between persuasion and coercion by the U.S. government? I think most people recognize that the government can speak out on its own opinion about something that they think is problematic or bad, that needs remedy of some sort. They can't take legal action if it is specifically to try and suppress speech. But, you know, there is this sort of questionable line of what happens when they threaten a private actor to take action. And the classic Supreme Court case on this is a case known as Bantam Books, where you had basically this commission on, you know, obscenity (I forget the exact name, and I haven't read the Bantam Books case in a while), but the basics of it were that this commission on obscenity would send these semi-threatening letters to booksellers about books that they had, and basically say: hey, this is potentially obscene, effectively pressuring the bookshops to pull those books. And the Supreme Court found that that was a violation of the First Amendment. But it was never quite clear, and there was never a very defined test, of where the line is between just going out and publicly saying, we don't like these books, versus going to the store and saying, if you don't take these books off the shelves, you're going to be in trouble. And there have been other cases that have dealt with this, but none of them have created a really, really clear test. And so it would be interesting, and in an ideal world, which we do not live in, it would be really nice if the Supreme Court showed up and said: here's the test to figure out what is persuasion, which is allowed.
The government is allowed to use what's called the bully pulpit to go and talk about these things and try and convince you. And what is not allowed is coercion, which is: we are threatening you, and you're going to be in trouble if you don't do what we say. This whole space is known as jawboning, you know, the government jawboning the companies to do what it cannot do under law. And the history of the term has sort of biblical references, which we don't need to get into. So it'd be really nice if the Supreme Court came down with a clear rule and a clear way to figure this out. The problem with this case is that it is a total mess. And I already got to some of why it was a mess, because of, like, you know, Hunter Biden laptop issues and anti-vaxxers and all this kind of stuff. But part of the big problem is that the plaintiffs just sort of throw a whole bunch of stuff into the pot and say, you know, the government was mad about, like, COVID misinformation, which they were, and different aspects of the government spoke out about it. Then also we know that different administration officials met with social media companies. In particular, the FBI had some regular meetings, which, as has come out, were generally about foreign influence campaigns, like, you know, Russian attempts to influence the election or whatever, where the FBI would have some information and they would share it with social media. And it was very clearly shared in the form of: we have identified these accounts as pushing Russian misinformation, do what you want with it. Like, very, very clearly not threatening them, but saying, do what you want with it, see if it violates your policies, basically. And then there was the fact that there were the anti-vaxxers who became plaintiffs in the case, who had content moderated in some form or another. And so the case sort of throws all that into a pot and says: because of A, B and C, because the administration was mad, because of meetings, and because of people getting suspended, therefore we can say that those suspensions were because the government told these companies to suspend them. Nowhere, and there are thousands and thousands of pages of evidence that was thrown into this case, nowhere could they find anything that was effectively: this account, you must take down. There were a couple of things where they took comments completely out of context, or in some cases deleted words from sentences, and in other cases added words to sentences, which seems really problematic, to try and make that case. One of the most egregious ones that I saw involved two government officials, Francis Collins and Anthony Fauci, who are both semi-famous, who were upset about a public declaration about COVID treatments and stuff that was, you know, questionable. And Collins emailed Fauci and said, we need a published takedown of this as soon as possible, clearly saying, read in context: somebody needs to respond to this, and we need to publish it and promote it. And in fact, Fauci responded with a link to a Wired article, like, this kind of does it.
But in the hands of the plaintiffs in the court, they removed the "published" part of "published takedown" and just said, we need a takedown of this, implying that they wanted it to be removed from social media, which was clearly not the case. There were a number of these kinds of examples. And so my fear going into the oral arguments at the Supreme Court was that they would not see this. And certainly, you know, the Supreme Court has become somewhat politicized. You may have heard, uh,

Ben Whitelaw:

just recently.

Mike Masnick:

Yeah, yeah. It's just a bit of an issue. Uh.

Ben Whitelaw:

So the oral arguments, then. So the case is not very good in the first place, but take us through kind of what happened in the oral arguments of Murthy, then.

Mike Masnick:

Yeah. And so, you know, my fear was that everyone was going to assume that these things had actually happened, in which case the case is pretty clear. Like, if the things that they claimed happened, happened, it feels like it probably should be a violation of the First Amendment. And thankfully, it felt like most of the justices actually did recognize that. And in fact, the deputy solicitor general who argued the case for the U.S. effectively said very early on: if we were suppressing speech, that would be a First Amendment violation. The problem is, we weren't. And so then there were a bunch of interesting discussions, and a lot of it gets really into the legal weeds, which we don't necessarily need to do in the time that we have here, but there was a lot of skepticism from a variety of justices across the political spectrum. So it didn't fit neatly along political lines. And in fact, you had Brett Kavanaugh and Elena Kagan, who, very interestingly, had both been White House lawyers in the past, so had experienced being a part of the executive branch, saying: wait, we would call up reporters all the time and, like, urge them not to run a story, or tell them, you wrote this story and you got this, this and this wrong, don't do that again. Saying that kind of thing should be allowed. We weren't threatening them; we were just telling them we thought they got the story wrong. And how is this any different than that? And, you know, the person who was defending the case from the states' side, who is Louisiana's solicitor general, who was only recently appointed to the job, and I think this sort of fell into his lap and maybe he wasn't all that prepared for it, really kind of struggled with those examples. And he also really struggled with the test question, because, as I said at the beginning, it would be really nice if the Supreme Court gave us a really clear test so we could tell where that difference is between persuasion and coercion. And multiple justices asked him, you know, what test are you giving us to use? And all he would say is, well, my test flows from the First Amendment, which is not an answer, because that's not a test. And so a bunch of hypotheticals were thrown around, and a bunch of different things, and he was really, I think, ill-prepared for the argument, and then was really ill-prepared for the fact that multiple justices called him out on factual errors. I mean, Sonia Sotomayor was the most vocal, who said, directly: I have problems with your brief, counsel. Like, you have things here that are wrong. You have people saying things that they didn't say. You don't show any, you know, real connection. At one point you complained that people were upset about this, but the actual moderation happened to that person's brother, not them. Like, it

Ben Whitelaw:

They do say that misinformation often comes from the top, and this is a really good example of that again, isn't it?

Mike Masnick:

Yeah, yeah. I mean, there's a lot of misinformation happening around this case. And so it was interesting to see the justices call that out and sort of be skeptical of the state's arguments.

Ben Whitelaw:

Okay, okay. So that's kind of your feeling as to where it all goes, that this one will be thrown out? Yeah.

Mike Masnick:

So I think the worst possible outcome, of them buying that all this really was happening, is unlikely to happen. That still doesn't mean, like, there's a whole bunch of nuance in here. And obviously when you get around the First Amendment, there's a lot of details and nuance that are really important. And there's all sorts of stray random things that could be said in an opinion that cause all sorts of damage down the road. And so I'm not convinced that the Supreme Court won't screw stuff up in that way, that there won't be some offhand line where they don't realize how much damage they're causing. But I came out of it comfortable that at least a large group of the justices sort of recognized the problems of the case and hopefully won't make the world worse in their final opinion.

Ben Whitelaw:

Okay. And just before we kind of move on: you wrote a piece on Techdirt about the case itself, and you had a line in there about how you generally think that government officials often get away with jawboning, and that they kind of cross the line a little bit on this. But you didn't really explain why you thought that. Can you give a little bit as to why you think that is the case?

Mike Masnick:

Yeah. You know, I am a general believer in the First Amendment, and I realize, like, we have a global audience, but in the U.S. context, I actually do think the First Amendment works very, very well, and that the government's job is not to be compelling or suppressing speech in any way. And, you know, it is that question of that line: persuasion, which they are free to do, but as soon as it crosses that line and gets to a point where it is threatening, or it is threatening retaliation for speech, I think that creates real problems. And some of those are just in, you know, silencing important speech that should be heard, or cracking down on dissent. And you can see how that kind of thing, you know, just pick whichever politician you trust the least and think of what happens when that power is in their hands, and you can see where it becomes really problematic. So even if I agree that certain websites didn't handle COVID misinformation or vaccine misinformation very well, I don't think that the government has any place in threatening to punish the companies for that, because I think that leads to really dangerous and problematic results. And just as one really quick example, and I know we have to move on: early on in the pandemic, there was a Chinese doctor who raised the issue of COVID before it was even named COVID, before all of this, who was really ringing the alarm bells and saying, we have this disease that is spreading like crazy, and it's very contagious and very dangerous. And Chinese officials were just like, no, you know, this is bad, and went to his house and forced him to make an apology. And he then, a few weeks later, caught COVID himself and died. If he had been allowed to speak out, we might have recognized the risks and dangers of COVID earlier. It's a counterfactual, we don't know exactly what would have happened, but he started speaking about three weeks before the rest of the world realized that COVID was this real threat. And, you know, that is an example of: you crack down too hard on certain speech, and it can lead to really, really dangerous results. The ability to speak is super important.

Ben Whitelaw:

And in a one-word answer: do you have a good test lined up? Have you cracked the test question?

Mike Masnick:

You know, I really do think it has to come down to this question of: is there any threat? It could be an implied threat or an explicit threat, but there has to be a clear threat. And when it's clear that you are just providing information and saying, hey, we found these things, do with it what you want, that doesn't seem to imply any threat. But, you know, there are many, many law review articles to be written on exactly how to tune that, and so I'm not going to do that in the five seconds we

Ben Whitelaw:

Yeah, okay, fair. Great, so that's a really helpful overview, and we will probably touch on issues of jawboning in the future. Let's switch gears now and talk about, I guess, the other kind of major story in terms of technology, at least, and to an extent platforms, this week, which was Reddit's IPO. A really, really big story in the context of Silicon Valley, and the first IPO of a major social media company since 2019. It kind of signals a whole bunch of things in terms of the appetite of investors for technology companies in 2024, a time where, you know, things aren't really going well economically. But I really want to zoom in on a story that was published by the New York Times by a journalist there called Kevin Roose, which was entitled "Reddit's I.P.O. Is a Content Moderation Success Story", which, if you've been covering content moderation and online speech for as long as we have, Mike, is something that's going to get you reeled in, right? I instantly pricked up my ears. I was like, there are very few sentences that combine the words content moderation and success story,

Mike Masnick:

Yes.

Ben Whitelaw:

uh, at least in my

Mike Masnick:

Exactly my reaction.

Ben Whitelaw:

Um, so it's a really good piece to read, we'll include it in the show notes, obviously. But basically Kevin kind of unpacks how the success of Reddit's IPO this week, where it was valued at $6.4 billion, at $34 a share, and ended the day 48 percent up, making a whole bunch of initial investors really, really happy and really successful, ties back to this long-standing battle on the platform for a kind of healthy and less toxic environment. He makes the case, interestingly, that Reddit's turnaround is down to a whole bunch of decisions made by the platform to clean itself up. And he kind of charts this era, from the days where there was a whole bunch of non-consensual content on the platform, a time where there were, you know, creepshots being posted, there was jailbait, there was all kinds of harm and issues on the platform, where it was really renowned as a place for kind of weirdos and geeks, not even probably 10 years ago, to being a cleaned-up, slightly more sanitized version of that. I wouldn't say it has been completely addressed, but it has got better in many regards. I use Reddit from time to time, I'm not a massive user, but I can see that even myself. And he makes the point that advertising revenue has soared, particularly in the last few years as well. It's trebled in the last three years, and increased 20 percent last year alone, according to The Information. So the fact that it's been cleaned up, the fact that some of the harms that we saw in the kind of early 2010s have been addressed, has meant that advertisers have really returned to the platform, and it's looking like that's really helped its IPO. And Kevin puts it down to three reasons, which I'll just briefly go into. He said that the decision to kind of get rid of subreddits back in the day has really proven successful. So the

Mike Masnick:

They were harmful, harmful subreddits.

Ben Whitelaw:

Subreddits, yeah. Ones that were causing an issue, of which there are kind of countless examples, including ones connected to former U.S. presidents. But there was, you know, anti-Muslim hate at various points, there was Russian misinformation at various points, and they took the decision to get rid of those, to kind of cut the monster off at the neck, basically. It also did a whole bunch of work around empowering moderators via tooling and allowing additional guidelines to be applied to subreddits, which I think made the moderators feel more like it was their space, not just the platform's. And then Kevin makes the point that it didn't try to be a kind of politically neutral online arena. So it didn't try to kind of appease Republicans and also appease Democrats when it got rid of subreddits; it didn't try to address that on the other side either. So he makes the point that these kind of three areas were basically the reason why it has been able to IPO in the first place. And, you know, I have a few kind of views on this. I don't know if the picture is as rosy as Kevin makes out. I think if you've been tracking Reddit, particularly the last five years and some of the stories that have been coming out, it has not looked like this at all. And I was somewhat surprised, and it seems like media, and the finance media in particular, was really surprised, about how well it performed on the first day. What did you think about Kevin's piece, Mike? Because I know you read it as well.

Mike Masnick:

Yeah. You know, it was interesting, and it sort of drew me in for the same reason, because it's so rare to see anything that says success and content moderation, as you said. But it's surprising because, yes, Reddit went public, it was over $6 billion, and it had the first-day pop that you always expect to see, even though, technically, a first-day pop is all money that is left on the table, because it's investors trading it, not the earlier equity holders. But also, the history of Reddit is so bizarre, and it's unlike many other companies in that it's been around for almost 20 years. It's not your standard IPO story. And there were all these other problems that the company had, including that, yes, it went public at over $6 billion, but the last time it had a valuation, it was $10 billion. So this is technically a down round, which is not good, and doesn't usually speak of a success story. Also, the company has never been profitable. In all of the years that it has been around, it has lost money every year, and their predictions on how much revenue they were going to make were not met. And its growth story has really flattened; like, it's not growing. There are all of these things that say it's not a great company, for a variety of reasons. And so I really didn't see the success story part of it. That's not to say that it's not an interesting content moderation story. And there are a whole bunch of really interesting things about how it is a different content moderation story than the one that is most commonly heard of, which is totally centralized systems. We're sort of used to the Facebook, Instagram, Twitter model, where a centralized trust and safety team has to handle this stuff, and Reddit really did present a different way that it can be done, where you have some element that is centralized, which is just sort of the most egregious cases: we're going to block these certain things. And then you have the other ones, where it's like: we're going to hand power off, we're going to push it to the edges, to these volunteer subreddit moderators. Though it's interesting, right? There's also the whole fight last year, in which they cut back API access, and one of the big complaints from many of the moderators was that that broke a bunch of the tools that they use to moderate these subreddits, and made it impossible, as volunteer moderators, to really moderate some of the larger, more rowdy subreddits that were out there. So, you know, I'm confused as to where the success is, other than the fact that they went public

Ben Whitelaw:

I think

Mike Masnick:

and also, it just feels like the content moderation aspect of it is kind of disconnected from the IPO. Like, I don't see how those stories are actually connected.

Ben Whitelaw:

Yeah, I mean, there were some interesting posts on X slash Twitter by a reporter called Scott Nova about how the kind of storyline around the moderation of Reddit was really going to depend on how well the IPO fared. And he made the point that if it went well, then the kind of community would be buoyed, and particularly the 75,000 moderators and core users that it opened up the ability to buy shares to in the course of the IPO would be really happy, right? And that was a really clever thing that it did, because it averted this idea of the volunteer moderators feeling like they were just a kind of cog in the big Reddit wheel, in the big machine. And so it kind of empowered the moderators to earn this success from the IPO, to kind of buy into what happened. And he made the point that, actually, if it went badly, who knows what could have happened? You know, we've had blackouts on Reddit before. In 2020, you might remember, there were like 800 subreddits which signed a letter in support of better policies around white supremacy and hate speech, and a whole bunch of them basically stopped moderating and closed, so that no one could see the content or join or leave comments. And that's the power that they hold. It's only because the IPO went well, and that pop was a pretty significant one, that you didn't have moderators losing money or becoming disenchanted with the fact that the IPO happened. And so, again, I agree with you to a degree that it's not quite the success story that I think Kevin's painting. Basically, Reddit did enough to IPO. It made itself a better version of itself, but I don't think it's something that we should be aiming for

Mike Masnick:

Yeah.

Ben Whitelaw:

as the kind of pinnacle of online speech and platforms. Like, I think that's something that we probably want to call out, right?

Mike Masnick:

Yeah. I mean, I think there are lessons that can be taken from what Reddit has done. And I actually think there's some stuff in the way Reddit is set up that is really interesting for the decentralized space, even though Reddit is a very centralized platform. I think, in the approaches that Mastodon and Bluesky are taking, there are actually some really interesting lessons from Reddit. But yeah, success story, I'm not sure I would go that far.

Ben Whitelaw:

Yeah, I agree. And I think, you know, Steve Huffman, the CEO, one of the co-founders, will get a lot of credit for this. He's in charge at this point in time, but previous CEOs have had a really rough time, you know, being chased out of the platform by its users.

Mike Masnick:

Yes, for some of these varied decisions, you know. And also, just the idea that advertisers came because of these decisions: I'm not convinced that's true. I mean, I think the company has always struggled to have a really useful and valuable advertising platform. And in fact, that's true today. That's why it doesn't have that much revenue, why it's not profitable. And so I think that story is not really connected to the content moderation story. Though, obviously, there are issues around: if you don't do content moderation well, advertisers will stay away from your platform. But I'm not sure it's clear that the advertisers that Reddit does have are there because of the decisions they made on the content moderation side.

Ben Whitelaw:

Yeah. And let's see if the pop crackles and dissipates over the coming weeks. I don't think this story will necessarily be concluded; you know, we might see a bit of a backlash if things do take a turn.

Mike Masnick:

Yeah. We'll see

Ben Whitelaw:

Great. So, as listeners all know from last week, Mike, we've done our two major stories of the week. We're now gonna do a bit of a quickfire roundup of other stories you've been reading, and a little bit

Mike Masnick:

quick as we can.

Ben Whitelaw:

a little bit about why they're interesting too. You had an interesting one. We were talking about it before we started, on Twitter and, uh, the shocking news that Twitter might be providing cover for Nazis.

Mike Masnick:

Well, it's not Twitter anymore, Ben.

Ben Whitelaw:

I can't, I can't stop it.

Mike Masnick:

Yeah, so there was this crazy story this week. There was a legitimately neo-Nazi cartoonist who goes by the name Stonetoss, who was publishing just absolutely awful, terrible, racist, horrible cartoons, and people were trying to figure out who he was. And somebody finally figured out who he was and where he worked, and published about it. And suddenly X decided to suspend or take down any posts that mentioned this person's real name, which is just weird, for a wide variety of reasons. Some people will recall that when Elon took over Twitter, after promising that he would not ban the ElonJet account, he eventually did, and then for a while was suspending anyone who even mentioned the existence of ElonJet. And then when attacked on that, especially given his promise not to suspend the account and his continual talk of, like, we believe in free speech, we're only going to take down stuff if it violates laws, he claimed that, well, you know, doxing is illegal. Which in certain places it is, but showing the public information of where your jet is does not violate any doxing law anywhere, and certainly giving someone's real name is not that either. But it seemed like an extension of that. And so they suspended a bunch of people. It's funny that this also happened the same week that the now-infamous Don Lemon interview with Elon Musk, which is a whole other story, came out, where Lemon sort of quizzes Musk on this whole commitment to free speech and taking stuff down, and Musk goes on this rant about how he believes in just following the laws. And then, literally at the same time, he is setting it up so that X is removing anyone who mentions this guy's name. And there's this funny bit in all of it, which is that, to justify this, they changed the privacy policy on X. They just added a line that says, for what is in violation of this policy: the identity of an anonymous user, such as their name. So that now violates their privacy policy. But if you scroll down on that same privacy policy, to where it says what is not a violation of this policy, which existed long before this past week, it says that not a violation of this policy includes sharing information that we don't consider to be private, including names. Which, you know, is Schrödinger's, uh, privacy policy, where it's both a violation and not a violation to share someone's name. And it really, as always, seems to come down to: what is it that Elon likes, and what is it that he doesn't? And the things that he doesn't like are violations of his policy. And in this case, what he doesn't like is people exposing a neo-Nazi, which says something about Elon Musk's view of the world at this time. But it also gets to some of these issues that I had tried to highlight right when he took over Twitter with my, you know, speedrunning the content moderation learning curve, where you say these grand things like: we're only going to follow the laws, and we believe in free speech, we're not going to take anything down unless we absolutely have to.
And then you discover that there are, like, really terrible people, and they'll do terrible things. And usually the way that works out is people figure out: oh, well, we have to do some trust and safety, we have to do some moderation, to make the platform safe and effective and to deal with these things. And here, like,

Ben Whitelaw:

he's going the opposite way to Reddit.

Mike Masnick:

He's going the opposite way, where he'll do those things, but only to protect the Nazis. It's just this bizarro world that we're living in, where he's sort of learning that you can't do the fully hands-off thing, but he's learning it in the worst possible, stupidest possible way.

Ben Whitelaw:

And God knows who his advisors are, because he's not taking counsel from anybody that would listen to this podcast, certainly. But it's almost the perfect kind of Twitter Elon Musk trust and safety story. So yeah, thanks for bringing that to us, Mike.

Mike Masnick:

It's incredible. So, and then I know you wanted to talk about... there's a story that both of us saw this week that we really liked, from David Kaye in Foreign Affairs. So do you want to take that?

Ben Whitelaw:

Yeah, I mean, this is just a really, really well-put-together long read by David Kaye, who, if you don't know him, is a University of California law professor, and a rapporteur for the UN, I believe, in various roles previously; he has done a lot of really good work to understand speech laws and the effects of speech. And he writes this great piece about basically the risks of internet regulation, and goes through the efforts made by the EU, the UK and the US to craft effective speech legislation, and basically kind of concludes that there's no one doing a great job right now. The EU is the best of kind of a bad bunch, in some senses. And he just really flags the challenges that are going to be ahead in terms of making speech regulation work. He does kind of cast ire and throw a bit of shade at the EU, which I wondered if that was him looking across the pond and being slightly jealous of the way that the EU have managed to implement the Digital Services Act in such a short space of time, I'm just saying. But, you know, he talks about Thierry Breton kind of being very forthright and creating a scene around the DSA, which, again, I think many people have said is not the best approach, and how civil society organizations have pushed back against that, and maybe that's creating the right conditions to regulate platforms. But it's just a great piece, and I think it really takes a nice long view as to where we've got to, really, in terms of speech regulation,

Mike Masnick:

So I'm going to defend David Kaye's honor here, because, I mean, the fact is, right, unlike certainly some people in the US who will take a very, very US-centric view of everything, David has always had a very global view. And as you mentioned, he's done a lot of work with the UN; he was the UN rapporteur on freedom of expression. So this is not just, like, an American viewpoint on global internet speech. But the piece is really, really thoughtful and really highlights the challenging trade-offs to regulating the internet and how it can come at the expense of speech. And I think it does a really good job. It talks about the EU approach, it talks about the UK approach and the US approach, and the different challenges between all of them. And I think it's just a really worthwhile piece overall. But it was interesting to me that, right after reading that piece, I saw this other piece in the Financial Times about the EU promulgating these new guidelines around election disinformation under the DSA, and basically making it very, very clear that they are going to enforce these rules under the DSA if different platforms, Twitter being a big one that they are targeting, do not do something to stop election misinformation from appearing on the platforms. Which, to me, highlights the real risk of this stuff, which is, you know, people in the EU swore to me that the DSA would not be used to suppress any speech.

Ben Whitelaw:

It was process driven, right?

Mike Masnick:

Yes, it's process, and it's all about best practices, and blah, blah, blah. But every time I would have that conversation, and I had that conversation with the EU official who opened up a nice little office here in Silicon Valley, you know, to watch over all of the companies here, I would have this discussion where they would say: well, we're not regulating speech, but if there's really problematic speech, we're going to think that's a problem. And I was like, so you are regulating speech. And they're like, no, no. And so this is another example of it. And it is raised in David's piece also, about how this tries to straddle the line, but is potentially really risky as a tool for suppressing speech. And then to see this stuff about, well, we're going to crack down if you don't get rid of election misinformation... some people will say, obviously, election misinformation is bad and problematic. In some cases it certainly can be, but where you draw that line, and how you decide what is election misinformation, and, as comes up in David's piece too: do you allow, like, Viktor Orbán to determine what is election misinformation? Because that presents a very different picture than other people determining what is election misinformation. As soon as you open that door, there are real risks involved. And so,

Ben Whitelaw:

I mean,

Mike Masnick:

you know, it, it worries

Ben Whitelaw:

Yeah, I would agree that this piece is a touch worrying, due to the fact that EU elections happen, I think, on the 6th of June, and these guidelines could come in as soon as next week and be enforced by that point. And if you're a platform, that's a lot of work to do in a very short space of time: to understand what it is that they mean for your operations, and then to make sure that you comply. So this does feel very hasty. It does feel very kind of reactionary, and it's clearly in response to the fact that Breton and the commission at large think that something bad is going to happen in the June elections. Which is not a surprise, you know, that's been in the diary for a long time. So even if you agree that they should be doing it, the way that they've done it is not ideal. And, again, David's piece and this story linked to another story, which we probably don't have time to go really into. I think we should maybe spend another episode talking about Canada, but their Online Harms Act has received a whole bunch of criticism over the last couple of weeks, and there's a really good story in the Globe and Mail about it, and how it's, as the piece calls it, staggeringly reckless, which is not a great look for a piece of speech regulation. And, yeah, we'll probably come back to that, because that's, that's

Mike Masnick:

Yeah. It's a big, big story, and there is a lot of nuance involved in the Canadian approach, and I think they are taking lessons from everybody else. But I do think that there are certainly some dangers. So, to wrap it up, do you want to give us a quick defense of the EU in a different context, what they're doing with the DSA and some companies?

Ben Whitelaw:

I think we probably aren't there just yet, but the way that it's bringing cases against some of the VLOPs, you know, the investigations that are being conducted against X slash Twitter and TikTok and, latterly, AliExpress, which was the story that came out this week: I think they're going to be really telling as to whether this is working, and the reaction, obviously, of those platforms to the judgments that are passed. So I don't think we're there yet. But, you know, at least there have been some cases brought against them, there have been these investigations set up, which, in a relatively short space of time, I think is something.

Mike Masnick:

Yeah, and we'll see. And obviously the investigation into AliExpress sort of follows on from what we talked about last week about Shein, and how that plays into the whole DSA thing. So it's certainly going to be a story that we'll have plenty of opportunities to follow up on in future podcasts. And with that, I think we are concluding this part of our second episode, but we will move on to something you do not want to miss, which is our bonus chat, where we invite on expert guests with experience in the trust and safety and online speech worlds, in an effort to go deeper and explore some particular topic. This week's bonus chat is brought to you by Block Party, whose new Privacy Party tool helps you better protect your own privacy across a wide range of sites. And so I got a chance to speak to Block Party founder and CEO Tracy Chou about how feeling unsafe online can actually lead to less speech through self-censorship, as people don't feel comfortable talking in different spaces, and how important that is if we're talking about online speech, and a little bit about how different companies might be able to deal with that. So it's a really interesting discussion, and certainly it has much wider implications for the entire internet and other services as well. So please take a listen. All right, so we talk a lot about online speech and how the internet enables lots of speech, but one, I think, critically underexplored area regarding online speech is how there are certain scenarios that actually lead to less speech, in the form of self-censorship. So can you talk a little bit about what you think leads to that in some cases?

Tracy Chou:

Yeah, there are a lot of these online attacks, whether it's harassment, doxing or other things, that occur under the guise of free speech but are actually intended to silence people. And we've seen pretty high-profile cases of folks who've withdrawn from social media or the public sphere saying: I can't handle the harassment, this is untenable, it's not okay. And there are a lot of other folks who just quietly back away. And in the course of user interviews and research for building Privacy Party, trying to build tools for everyday people around privacy, we have talked to a lot of people about how they think about these issues around safety and privacy. And even when people don't talk in these terms, these are very industry terms, like safety, privacy,

Mike Masnick:

right.

Tracy Chou:

often what they describe as their reactions to what's happening online is self-censorship. So a lot of folks have said things like: I just don't talk about particular topics anymore, because it's not worth it to me. There are particular categories of information that people don't share anymore. It could be things that you would have imagined to be fairly innocuous, like: oh, I'm in this city and I'm going to be giving this talk, and I want to meet people at those things. Some folks we've talked to have very deliberately pulled back from sharing, and it comes at a very clear trade-off to them. They know: I'm missing out on potential audience; there are some messages I want to get out there; there are some discussions I would be really interested in having; there are connections I might want to make, or potential collaborations; but I don't feel comfortable posting these things anymore because of the threats to my safety and privacy. Folks have all sorts of different rules that they use. A lot of the women I've talked to have said things like: do not post photos in front of the house; do not post photos from inside the house of the exterior, so no views outside the windows where you could then identify where I live; don't post anything, photos or videos, within five minutes of where I live or places that I frequent, like coffee shops or gyms; never mention these things, or landmarks close to where I live. And you understand why they're doing all these things, but it does come at that trade-off. And I personally have experienced it as well. When I used to be much freer about sharing all of these things online, it led to some really fun, serendipitous connections and really interesting insights from folks that I wouldn't necessarily have expected to hear from or known their perspectives. But when it doesn't feel safe anymore, whether it's from harassment attacks, so it could be dogpiling, or just a bunch of trolls from the internet descending upon your comments or making your life difficult in online ways, or people who start threatening things in the offline world, like stalking you, and physical safety becomes an issue, it becomes a little scary, and it feels like the best solution is just to stop talking.

Mike Masnick:

Yeah, yeah, no, I think it's really scary. And I mean, I've experienced some of that, but I know that it is significantly worse for women, certainly. But it is interesting to me to think about these different trade-offs, right? There are all these discussions about online speech and, like, enabling free speech, and there's this belief, that seems very common, that enabling free speech online means enabling all speech. But, as the interviews that you've done and the research that you've been involved with show, that can actually lead to less actual freedom of speech for lots of people. And so, are there things that you think different sites and services online can do to make people feel more comfortable, to be able to speak when they want to and to bring up these things the way that they want to?

Tracy Chou:

Yes, the first thing is building that safe environment. How to do so is a little bit harder. Um, one idea I want to call out, in understanding how technology functions and the impacts of technology, is that the question is often not just what is possible, but what is easy, and therefore what are people likely to do. And when we're talking about things like harassment attacks or abuse, this is true of the mechanisms of attack and also the mechanisms of defense. So when you think about consumer web-scale social platforms, where anyone can @ anyone else, or engage with anybody, from their basement contact anybody around the world, that does lead to the potential for widespread attacks. So in that case, perhaps more friction would be better in the user experience. But on the defense side, so people securing their own privacy and safety settings, oftentimes it's really difficult. When we talk about what the best practices are, industry experts, or people who are just familiar and tech-savvy, often have a long list of these quote-unquote simple and easy things to do, like rotate your passwords, or have strong passwords, turn off location sharing, set up 2FA. But this information is kind of spread out, the settings are hard to find, and it's just kind of a pain to do these things. So it makes it so that normal users are not going to do them to protect themselves. And so what's easy is for attackers to attack, and what's not easy is for people who don't want to be attacked to defend themselves.

Mike Masnick:

Yeah, I mean, it's funny, because I remember having a conversation with one of the creators of PGP, the email encryption software, who admitted that he doesn't use PGP because it was too complicated to actually use. So that brings us around to Privacy Party, which is the service that you guys are creating, obviously. And, you know, I think part of the goal of that is to help deal with that and make it easier for people to actually protect their privacy and feel safer on these services. So do you want to talk a little bit about how that works?

Tracy Chou:

What Privacy Party does is make it easy for users to protect themselves on social media, so the solution for safety and privacy doesn't just have to be self-censorship. It's a browser extension that provides expert recommendations on social media privacy settings, with automations to fix them so you don't have to go do that yourself, and it runs across platforms like Facebook, LinkedIn, Strava, Venmo. Some of these are obvious ones; others are obvious when you think about the fact that you don't want to share where you run every day, or all the financial transactions you have, potentially like paying your landlord. It's sort of like your tech-savvy, very security-conscious and patient best friend who's leaning over your shoulder while you're at your computer to help you click through and fix all your settings. That's the short of it; we try to make it easy. We're also working on things to clean up old content. This might be photos that you have shared previously, whether it's flipping all the privacy settings. Right now, if you want to manually change the privacy settings on a photo album on Facebook, it takes six clicks. Multiply that by how many albums you have; we can do that in one click for you. We just go through and flip all of them. Just trying to make it easy for you to keep being online without necessarily the overexposure of your digital footprint.

Mike Masnick:

Yeah. And are there ways that the different platforms, I mean, obviously you're doing this on a variety of different platforms that you mentioned, are there ways that those services and platforms can actually make it easier for you to provide this service and, you know, better help people feel safe on those services?

Tracy Chou:

The best way would be for them to provide standardized interfaces for us to build programmatic tools, which would be APIs, application programming interfaces. That might take a little while to get to, but also just not making the settings pages horrible is already an improvement. Some platforms, won't name names, maybe I will: Meta, um, sometimes have three different versions of the same privacy settings pages. So it's not even that one set of them is confusing; it's that there are three different sets of them and you don't know where to go. And so for us to build solutions on top of that is also cumbersome.

Mike Masnick:

Do you know, is that just because they're, like, A/B testing stuff, or is it that they just put stuff in weird places, or is it because they've been around so long that that's how it's ended up?

Tracy Chou:

I think it's all of those things. Um, sometimes I suspect it's that someone has the idea to improve things, and so they propose a new settings management system, and as they're trying to roll that out and trying to port people over, they end up with all of the different ones at the same time. Yeah.

Mike Masnick:

Okay. Well, I think that's great. Thank you so much for joining us and talking about this. And hopefully some of the folks at services who are listening to this decide that they'll make it easier for you to make the lives of their users easier, and help them feel more comfortable speaking on the various platforms. So if people want to try out Privacy Party, or follow you, or understand this more, where should they go?

Tracy Chou:

You can check us out at BlockPartyApp.com. Just to give a quick plug: if you are a trust and safety professional, or a semi-public figure of any form talking about topics that might get people mad, you may be at risk of people coming after you, so I do recommend trying Privacy Party to lock down your own social accounts and limit your attack surface area. If you're the type of person who's looking out for other people's safety and privacy, try it yourself and then you can recommend it. It will make your life easier than walking people through all the settings manually yourself.

Mike Masnick:

So thanks for joining us, Tracy.

Tracy Chou:

Thank you.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.