Ctrl-Alt-Speech

Red Pills & Blue Checks

Mike Masnick & Ben Whitelaw Season 1 Episode 56

In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

Mike, you would've no doubt seen the not-so-sad news about 4chan continuing to be offline following its massive hack last week. But we're using 4chan as the starting point for our podcast today. I, I can see, you know, there's a nervous laugh if ever I heard one. You might remember that the first board on 4chan was called /b/. It was a kind of random board, not suggesting that you or I used it. This is all research from Wikipedia, but there was.

Mike Masnick:

I'm familiar with /b/. I'm familiar with

Ben Whitelaw:

Okay. Okay, well let's leave it there. Um, the kind of tagline of /b/ was the following: the stories and information posted here are artistic works of fiction, and only a fool would take anything posted here as fact. Which I think kind of, you know, maybe is a subtle tagline for Ctrl-Alt-Speech. But anyway, to, to kind of celebrate or mourn 4chan's passing, I'd like you to start a new thread if, if you would.

Mike Masnick:

Yeah. Yeah. Well, I was thinking, this is a, a sort of open prompt, as the /b/ board on 4chan was known for being quite random and open to interpretation. So much craziness is going on, I keep thinking that I should stop writing about the news and start writing allegorical fiction. And so when this says everything here is artistic works of fiction, and only a fool would take anything posted here as fact, I think it's time for me to start writing fiction that only fools would take as fact.

Ben Whitelaw:

I mean, I, maybe, I, I thought that's what Techdirt had become over the last few months.

Mike Masnick:

Oh gosh. All right. What about you start a

Ben Whitelaw:

Well, I, I think threads are back. I think forums and message boards are back, because Mark Zuckerberg, according to the New Yorker, has said that social media is dead. It's no longer, so,

Mike Masnick:

It's dead. Then we have nothing to talk about.

Ben Whitelaw:

to talk about. Don't worry, listeners, we do. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's April the 24th, 2025. This week's episode is brought to you with financial support from the Future of Online Trust & Safety Fund. This week we're talking Australian masculinity, canceled research grants, and Bluesky blue checks. My name is Ben Whitelaw, and I'm the founder and editor of Everything in Moderation, and I'm back in the chair with Mike Masnick for another week of maybe news, depending on what Mark Zuckerberg thinks.

Mike Masnick:

Yeah.

Ben Whitelaw:

How are you doing,

Mike Masnick:

Uh, I'm surviving. You know, it's a, a constant, constant struggle to deal with the, um, the never-ending flow of bullshit, uh, that is completely unavoidable on an hourly basis. I mean, even as we sit here and talk, I'm wondering what craziness is happening out in the world that I will have to

Ben Whitelaw:

and you're not even talking about 4chan, are you, you're just talking about generally this,

Mike Masnick:

Just everything. Just craziness.

Ben Whitelaw:

what you've been writing about on Techdirt this week.

Mike Masnick:

I mean, a lot of it has actually been probably less relevant to Ctrl-Alt-Speech directly, but all of the due process, immigration, you know, human trafficking, rendition stuff, people accidentally, incorrectly sent to a Salvadoran gulag, and the sort of legal process of trying to get them back. And it's such a crazy story, but to me, one that is so important. And, you know, I had written about this, and when you were out and we had Kat Duffy on, we'd had the conversation about Techdirt being a democracy blog and all this kind of stuff. And it's like, you know, why am I writing about due process and immigration law? And again, like, to me it all comes back to: none of the, the other stuff matters if there's no rule of law and there's no due process. And all of it sort of plays back into, you know, the things about online speech that I think are important have to do with rights and freedoms and all of this stuff. And if the underlying substrate on which all of that is based is falling apart, the rest of it isn't gonna matter at all. And so it feels like a different subject, but to me it's all really connected.

Ben Whitelaw:

That's very true. And if you haven't, listeners, haven't read that piece about Techdirt being a democracy blog, it's very much worth a read. We'll include it in today's show notes. Still as relevant now as it was a couple of months ago when you wrote it. Mike, um, we were talking before the podcast started. I didn't know this about you. You're a bit of an expert in 4chan.

Mike Masnick:

Yeah, like a certified expert in 4chan.

Ben Whitelaw:

Explain, explain why.

Mike Masnick:

I was involved in two separate court cases as an expert witness, to try and sort of explain 4chan. Uh, not entirely. One of which was, I did actually have to testify as an expert witness. It was about a troll who was arrested wrongly, I believe, for, the claim was that he had threatened an FBI agent. The reality was very different. He was just sort of being a trollish asshole, which is, you know, not a great thing, but certainly should be in some sense protected by free speech. He didn't do anything that was really a threat. He just said a bunch of nonsense. And the FBI, the FBI actually went and visited the guy and internally had written an email saying everything he said was protected by the First Amendment, and we should leave him alone. And then he got very angry that the FBI came and visited him and posted some really angry, stupid, nonsensical screeds that didn't make any sense. Then they went and arrested him and then locked him up in jail for 13 months before he got a trial, with no bail, no ability to get out. And then I was an expert witness at the trial, sort of testifying about, like, the way that people troll online, and referred a lot to 4chan and just sort of, like, the sort of jokes and angry screeds that people will post, in terms of, like, why they're doing it and how people sort of start to try and outdo each other. I mean, this is going way back, before there was as much interest and focus in, in trolls. And that guy, even though it was, it was literally a federal case, you know, it was not a local prosecution, it was the Justice Department involved. And I was on the stand for, I think, two days, two and a half days. And he, he was acquitted eventually, which was kind of crazy. And then based on that, in a local case, in a state case, that did involve a 4chan troll who was arrested for going to some sort of protest and, and potentially trying to cause trouble.
The defense attorneys, who were public defenders, were representing the guy and didn't understand how 4chan worked at all. And so they actually hired me as an expert to explain 4chan to them. And that was, I didn't have to do anything for the actual case itself, other than explain to the lawyers, like, this is what this means when you see this on 4chan. Like, this is how it works. So that was kind of an interesting experience. But this was all many years ago, about a decade

Ben Whitelaw:

Translating the internet for,

Mike Masnick:

Yes. Translating trolls

Ben Whitelaw:

but

Mike Masnick:

to, to lawyers. Yeah.

Ben Whitelaw:

protecting the right of trolls to say, to say what they

Mike Masnick:

Oh gosh. And when you put it that way, it sounds really bad, but you know, again, like due process and first amendment stuff, free speech stuff does matter

Ben Whitelaw:

Indeed, indeed. Well, you are very, very, uh, eligible to comment on that story. We, we've actually got a whole range of stories that does kind of veer away from 4chan, although that is a story we'll include in today's notes. Before we jump into those, you'll be pleased to know, Mike, we've had our first literary review. This is super exciting. I think three or four weeks ago we invited our listeners, as we always do, to rate and review the podcast wherever they listen to it, and we challenged them to use a literary reference in the review, 'cause we were talking at the time about a Techdirt user writing a novel in the comments of an article, which I still haven't found, by the way. Um, but this week

Mike Masnick:

I can't find it either. I don't know where it is.

Ben Whitelaw:

there's another challenge for our listeners. But this week's review, from a guy called Alexander, I won't share his full name or his full, you know, screen name, includes a literary reference. So I'm gonna read it to you. You have to kind of find it. Pretty easy, but I'll, you have to guess

Mike Masnick:

It is not that difficult to pick out where the literary reference comes in, but here it's a little

Ben Whitelaw:

Exactly. Yeah. So Alexander writes: very happy to hear Mike and Ben engage with the complexity of trust and safety and their ability to cut through the pride and prejudice that so infects tech companies and the supposed deep thinkers who whine about social media. A very good show if you want to see technology and free expression be approached with nuance and humility. Thank you very much, Alexander. You guessed it, Mike. You're a Jane Austen man, aren't you? I can tell.

Mike Masnick:

I, I, I've read some Jane Austen in my time. But yeah, and we got a few other reviews as well. All very nice. And we, as always, really, really appreciate the reviews. It's, you know, it lets us know that someone is actually listening out there and appreciates what we do. And so that is food for our

Ben Whitelaw:

Indeed, indeed. It keeps us going. Great. So let's dive in then. Yeah, listeners, leave your reviews, make them literary themed if you wish, but any reviews, or good reviews, we really appreciate it. So we talked about 4chan, Mike, you know, the balance of speech and harms. It's a great example of that. And another organization that I think knows this very well, which is where we're gonna start today's episode, is the oversight board. Now, I don't think the oversight board has ever thought of itself as a similar organization to 4chan or a similar entity, but I've made the link. So for listeners who dunno what the oversight board is, though most will: it's a kind of independent, but Facebook-funded, quasi Supreme Court, which was set up four or five years ago now to make judgments on content moderation decisions that Facebook had made and Instagram had made. It's a fascinating experiment, I'd say, in the way that kind of speech is governed. And they put out a series of judgments this week, Mike, a series of cases and decisions that I think very much merits being one of the big stories of today's podcast.

Mike Masnick:

Yeah. And I think the context here really matters. Because when Mark Zuckerberg announced at the beginning of January that, sort of unilaterally, they were changing their content moderation policies, and made these bold proclamations about ending fact checking and how they had supposedly gone too far with censorship, and then obviously went on the Joe Rogan podcast and did whatever, three hours of utter nonsense and lies and misleading things about what they were doing, it was interesting to see how different people reacted, right? And the oversight board sort of initially put out a statement that was somewhat supportive of it, like, we're happy to see Meta doing this. And then, as the details rolled out, including leaks from Meta about what these new policies meant, and how it was basically like, our new policies are that we allow slurs and bigotry and hatred and lies, then people were like, hey, wait a second, like, that seems maybe not as great as we originally thought. And then the oversight board was like, you know, maybe our initial response was a bit hasty, and, and now we're a little bit more concerned about this. But, like, they didn't have any vehicle to sort of directly respond to it. And then there was this sort of, again, sort of semi-vague announcement that, well, we have a bunch of these cases from Meta that we're reviewing, and we're going to use the response to those cases to address the changes from January 7th. And then nothing.

Ben Whitelaw:

Yeah, crickets.

Mike Masnick:

Yeah. And so a few months went by, and then now, a few days ago, or yesterday when we're recording this, I think, they released a bunch of cases and a bunch of decisions. And the decisions themselves are, they're sort of what you expect from the oversight board. Some of the stuff, they're like, Meta made mistakes here, we think they should change their decision. A bunch of 'em, they're like, we can defend this. But they use the whole process to sort of comment on the decisions and sort of throw a lot of shade at Meta, basically saying, like, hey, did you think through any of the consequences of this? And did you think through how this would impact things on a human rights level, on a global scale? And have you planned for these sort of contingencies and things like that? And in particular, they really call out, because Zuckerberg decided to replace the fact checking with Community Notes, they said, you know, are you planning to actually study the effectiveness of Community Notes, and all this kind of stuff. And it's sort of, it's this weird thing, because you have this weird situation where the oversight board exists effectively at the pleasure of Meta, and yet they're designed to sort of be a critique of Meta's decisions. Which works in a world where everyone is sort of acting in a way that believes in these kinds of norms, and then when they go away, all hell breaks loose. And so, to me, it's a microcosm of what we're discovering with the US government right now, where there was a bunch of things about the way the US government worked, that people believed it worked and felt that they were bound by the way it works, which were all really based on norms. And what we're discovering with the Trump administration is that they don't care about norms. They don't wanna know about the norms. And if something gets in the way of what they wanna do, they're just gonna do it anyways. And so the world is sort of discovering that, much to its horror in many cases.
And sort of saying, like, oh my gosh, you know, all these things that we thought were the rule of law and due process and all the stuff I talked about at the top of the podcast, they were all norms. And if you just ignore them, like, what happens? And the oversight board and Meta's relationship, though newer and not as well established, and lots of people have criticized it, you know, it didn't feel as locked in cement as the US government process. It was sort of the same thing. It was based on norms. And some of the decisions are technically binding, but a bunch of them are purely advisory and all this kinda stuff. And there are limits as to what the oversight board can do and say in certain things. And so they're sort of trying to act within that framework and those norms at a time when it feels like Zuck is taking the Trumpian approach of, like, nobody gets to tell me what to do. I do what I wanna do, and I'm not gonna do the investigation to understand why this matters or the wider impact. I don't really care. What a pain. I just don't want to deal with it anymore. So I'm just gonna do this, and I'm gonna make bold pronouncements about why it's the right thing to do, and anyone who is against me is against freedom and apple pie, or, or whatever. And here the oversight board is, is trying to, you know, they're these experts in, uh, a lot of them are academics, and they're experts in human rights law and all this important stuff, and they're treating it as if it's like a convention on human rights, where you have all these big important discussions with big, important, weighty things. But it's directed at a body that probably doesn't really care. And so there's a little bit in here that, as you read it, it's sort of like this passive-aggressive, slightly whiny, like, hey, listen to me. Like, you know, you guys are doing these crazy things. And I'm not, you know, I don't think anyone at Meta really cares.
And, and so, you know, it was interesting also, Meta's response to all this, which, you know, it was funny, at one point they, like, thanked the oversight board, only for the decisions that said Meta made the right decision, and they refused to comment on all the other decisions that said Meta was wrong. It's just, like, it's a very Trumpian response, where it's like, take the praise and totally ignore anything that is even remotely critical, and assume that everything was purely 100% backing you. And then for the rest it's like, well, you know, we have, whatever, 60 days to respond to the recommendations, and we'll see what Meta eventually says. But if you're reading the tea leaves on the future of the oversight board, I'm not sure this bodes well for its future, in terms of its relationship with Meta at least.

Ben Whitelaw:

Yeah, I mean, it's not that surprising, I guess, that an organization that was made to sound like the kind of internet Supreme Court is struggling at a time when the actual Supreme Court and the, and the judicial system and the political system that surrounds it is creaking and crumbling to some degree. So I think it does make sense that it's reflective of that. And I agree with your kind of summary about maybe what it says. Are there any particular parts, I've got a few thoughts, Mike, but are there any particular parts of, like, the 11 cases that stood out to you? Because, again, they've been bundled together. They've been held off. You might remember we talked last year about the oversight board trying to get through more cases and more decisions and deliver them more frequently, because one of the criticisms was that it didn't produce enough. And we've gone basically, I don't know, three and a half months without a decision, which is obviously because of what happened in January with, with the Zuckerberg announcement. Any of those that stood out to you in particular?

Mike Masnick:

I'm not sure that any of them stood out to me directly. I mean, I think the thing that I saw in sort of looking through the different specific decisions was just, again, a reminder of, like, content moderation being impossible to do well at scale, because each of them involves such a depth of context. Where it's like, I think if you skim the decisions in terms of, like, which ones the oversight board says Meta did the right thing on and which ones it says it didn't, I don't know that you come out of it with any, like, clear, principled understanding of what they should do in the future. Like, each one is so context specific. And so it's easy to sort of look at certain cases and say, but why do they say it's okay to take down content here but not okay to take down content there? Like, how are those cases different? And it all comes down to, like, the detailed context. Which is the nice thing that the oversight board is able to do: spend months looking into the details and researching the context and understanding the wider world in which these things are happening, which I think is valuable. But it's hard then to sort of come out of that with a more generalized rule or policy of: this is how any company, let alone Meta, should be treating similar cases in the future. Because part of it is that there aren't necessarily similar cases. Each case is unique in its own special

Ben Whitelaw:

Yeah, and it's notable, I think, that the cases that have been selected and that have been decided on are things we were talking about months ago. You know, the UK riots that happened last year, a story we covered in July and August time. Only now is there obviously a decision being made on this, which, as you say, can call into question the kind of helpfulness or the usefulness of that, and whether the company is going to do anything differently if, for example, this summer the same thing happens. I would note here that this chimes neatly with a piece of research that has come out about the role of expertise in, in moderating harmful social media content, which was also on our list of things to talk about today, about how content was kind of moderated in Ethiopia between 2020 and 2022. Again, a story you've talked a lot about, but essentially how posts targeting Tigrayans, a kind of group of Ethiopians, led to genocide in the country. There are several court cases happening as a result of that, and, and the effect of Facebook and social media companies on that is ongoing. But there's a kind of idea in that piece of research that we just need to give content moderators and, and experts more time to make these decisions. We need to get 'em to make fewer post decisions. We need to give them more time to kind of figure stuff out. And Alice Hunsberger, who writes for Everything in Moderation, she wrote this week, actually, that's nice in principle, but when you get to the number of posts

Mike Masnick:

yeah.

Ben Whitelaw:

you know are on Facebook, it's a different story.

Mike Masnick:

I mean, it's funny, because specifically in the case with the UK riots and the oversight board, again, like we're talking about, that happened last summer. We're eight, nine months out from that, and here finally the oversight board is ruling on it, and part of their ruling is that Meta was too slow to respond to the riots. And, and it's like, yeah, they probably were, but we gotta put this in context. Like, think of the scale, and the fact that the oversight board now took till the end of April to tell Meta that they were too slow last July. You know, it's like, there are reasons why these things take time, if you're talking about expertise and, like, actually understanding the situation and understanding the seriousness of it. And so, there are, obviously, we're not saying, like, you have two choices, which is act immediately or take forever. There are all sorts of areas along the spectrum. And you can do things like, okay, well, something's blowing up, take some steps now to sort of try and limit the damage, and fix things later, and sort of fix things as you go along. But then you lead to situations like the Hunter Biden laptop story in 2020, where it's like, well, Twitter was like, well, this is an emergency, we have to act now. And Meta did the same thing. We have to act now, because we don't know, we don't have enough information, we don't have the context, therefore we need to act. And now, five years later, we're still talking about those cases. And, you know, the Trump-world GOP believes that these are, like, the biggest crimes against humanity, the fact that Twitter for 24 hours decided to, like, let's hold off and understand: is this story real, or is it Russian disinformation? And so, no matter what you do, you're going to get criticized for it.
And in some cases, in a big way. You know, where here the oversight board is saying that they should have reacted faster, but we've already seen that leads to damage as well, because if you react too fast and you get it wrong, then you have a whole class of people who are gonna blame you forever, for years, claiming that you did something that was horrible and biased and you were trying to change the course of history. And I'm sure the same thing would've happened as well if Meta had reacted very quickly and strongly to the UK riots, which I think is a defensible position that, I think, a lot of people would've supported. You would've had a whole crowd of other people who were claiming, like, oh, you know, they're trying to censor us, and taking a political side, and they're, you know, whatever nonsense would come out of it. So it's not as simple as, like, well, yes, you have to act quickly, or it's better to act slow. There's no good answers here. And this is trust and safety. There are no good answers. You're always trying to find the least worst answer and

Ben Whitelaw:

Yeah.

Mike Masnick:

you know, because they're all bad, you only have bad choices. That's because humanity is messy. If humanity were perfect, we wouldn't have these things, but humanity is not. It's not the fault of any particular company directly. It's not the fault of the technology. It's humanity. We, we suck.

Ben Whitelaw:

Yeah, it's true. If you came to this podcast, listeners, thinking that you were gonna feel good about yourself, think again. Um, no, I, I can't, I can't. Obviously, I, I agree with a lot of what you're saying, Mike, but it, it makes me think of a conversation we had last week on the podcast about the FTC antitrust trial as well. You know, because in an ideal world, you wouldn't have platforms, I think, that were so large and so ubiquitous in a country that you can, you know, cause genocide or you can cause these kinds of issues. And so I just wanted to kind of, I guess, link those two stories together, because I think that's what makes that ongoing trial so important.

Mike Masnick:

Yeah, I mean, there are structural things. There are huge structural things that, again, are partly technology issues and then partly societal issues, around creating better incentives and, and structures. And so, yeah, I mean, I agree. Like, when everything is locked up in one particular platform controlled by one particular billionaire, that's probably not great for a whole bunch of things. And it leads to really bad situations. And certainly I've written plenty on that, and I support more alternative offerings and decentralized systems for that very reason. I think it's structurally bad for society to have so much power in one particular entity. But, you know, these are tricky problems, and they're not solved by telling Meta, you have to change your policy this way.

Ben Whitelaw:

Yeah. It's a significant number of decisions there to read through. We've kind of given a bit of an overview, and I think it's definitely worth diving into if listeners have time. You talked about kind of humanity being, you know, messy, Mike, and, uh, you know, that we just make bad decisions. I think that kind of leads neatly onto our next story, which is,

Mike Masnick:

which talks about how humanity is perfect. We, we fixed humanity. Have you figured out a way,

Ben Whitelaw:

I'm still searching for that particular story to, to talk about on this podcast.

Mike Masnick:

disappointed again?

Ben Whitelaw:

but this, this is an interesting piece of research that has been published with ABC News. The original research was done by Movember, a charity that many people will know for its mustache charity initiative. They do research as

Mike Masnick:

Just one quick clarification: ABC, the Australian Broadcasting Corporation, not ABC, the American Broadcasting Company. But go,

Ben Whitelaw:

Indeed. Um, so this is a piece of research that looks at men and masculinity influencers, which is a topic we've covered in various guises over the past few months, most notably when we discussed Adolescence, the Netflix hit show. Have you watched the final two episodes, by the way?

Mike Masnick:

I am three episodes in, uh, and so I still have one more episode to watch, and so I can't discuss how this all wraps up. I have no idea how it's going to wrap up, but gosh, those are three crazy episodes so far.

Ben Whitelaw:

Well, I look forward to hearing your review. We won't reveal anything here, I promise.

Mike Masnick:

I did hear, it was funny, I heard somebody else talking about Adolescence recently, and they said the first two episodes suck you in, and you're just like, that's the craziest thing. Like, these are crazy, I can't believe it. Like, that is the most heart-wrenching anything. And then you get to episode three and you're like, oh, those first two episodes were nothing.

Ben Whitelaw:

Yeah.

Mike Masnick:

so I've gotten there. I haven't found out what, what happens in the

Ben Whitelaw:

Yeah. I don't blame you for taking a bit of a break between

Mike Masnick:

yeah.

Ben Whitelaw:

four. Um, but you know, the issues within that story, I think, are very much within the research that has been published by Movember. Uh, it looks at basically 3000 study participants, some of whom engage in what it calls men and masculinity influencer content, and some of whom don't. And these participants are between 16 and 25 years old, so a little bit older than Jamie in Adolescence, but they give a really interesting, quite complex picture, Mike, of the kind of push and pull of this type of content and the wider discussion that's being had right now about the lost generation of boys and, and the disenfranchisement of men, which many people think comes from Andrew Tate and his like. Now, you know, this study has flaws and has caveats, which it makes very clear. It's based on a series of interviews and a survey with closed questions, and it doesn't look at whether the views that I'm about to share are pre-held views or a result of kind of exposure to content online. But there are really interesting kind of top lines that I'll just share with you. So of the participants, about 63% regularly engage with men and masculinity influencer content. And there are fascinating kind of results from, like, segmenting by those two groups. So there are things that are expected, right? Things that we talk about, primarily a negative kind of mental health effect. There's a four or five percentage point difference between those who don't engage and those who do engage with this influencer content around feelings of worthlessness, around feelings of being nervous all the time and anxious, and of being sad. Which I think is really interesting. So the group who do engage with the content more tend to be more sad, more nervous, and often feel more worthless. However, they also report, we don't have the split between these two groups in the research, but they also report being much more motivated, 75%. So they feel much more motivated in their day-to-day lives.
They feel much more hopeful generally, and 58% say they feel happier, more generally. So again, that's a quite surprising outcome of this research, and perhaps a bit counterintuitive. And I think it also speaks to this idea, which the research was designed to do, of kind of asking men what they think about this content. You know, there's no denying that the kind of views espoused by Andrew Tate and his like, the kind of red pill influencers, the ones that say you have to have a six pack and a million-pound business and a girlfriend, and to follow their, you know, 12-step plan, are toxic. But maybe there's a slightly more complex picture here that this research paints than we've been used to before. So yeah, I've got more to share, Mike, but that's a kind of fascinating piece of research, I think, that has lots more kind of questions to answer, but is a bit counterintuitive and probably slightly different to the Stephen Graham, Adolescence view that we've been sharing on the podcast and we've been reading a lot about.

Mike Masnick:

Yeah, I mean, I think this actually does go back to similar other research that we've talked about in other areas of how children and social media and, and people and social media interact. And it gets back to the, the simple fact that humanity is complicated, right? And so, this is where, a year on or so from The Anxious Generation book being released by Jonathan Haidt, and his discussion of how, like, social media was damaging to adolescent girls in particular was the focus of his. And researchers, actual researchers, came out with data that showed, well, you know, the story is a lot more complicated than that. Humanity is a lot messier than that. And in fact, you know, like Candice Odgers, who's the leading researcher in this space, her research showed that the causal factor, like, you can show correlation, as this particular study does, but the causation appeared to be a reverse causation. So she was looking at young women who use social media a lot and had more mental health issues, and found that it wasn't that the social media resulted in more mental health issues, but rather those with more mental health issues would then engage more with social media because they didn't have other resources. So it was sort of a coping mechanism. They weren't getting the help that they needed, and therefore they needed to find some sort of outlet, and therefore they went to social media. And then blaming social media for that is sort of getting it backwards, right? You're not solving the root of the problem. And so I see a lot of that in this kind of study as well, where it's like the mental health component and the correlation between those with higher mental health issues and spending time in the manosphere feels like the exact same story. Like, I think there's probably an explanation that is the same, which is: these people are struggling with other societal issues, which happens to many, many people, and is part of what
Lots and lots of people, a huge percentage of the population, deal with in one way or another. And if they don't have the resources to deal with that, they sort of go searching. And you have that sort of manosphere red pill community out there trying to pull those people in and catch them and, and give them an explanation, right? And when you have people who are struggling, one of the things that happens often is that they will get pulled into people who claim to have the answer, because you're searching, right? And that's one of the things, the wonders of the internet, is it allows you to sort of search, and if you aren't getting the results that you want, you keep searching. And if you have people telling you, I've got the answer, I can help

Ben Whitelaw:

Mm.

Mike Masnick:

then you get there. And that can lead to all different things. So, like, the part about motivation, that doesn't surprise me that much, right? I mean, a lot of the sort of manosphere stuff, as full of nonsense and bullshit as it is, has these elements of, like, you have to take command of your own life, right? I mean, Jordan Peterson, who is just utterly batshit crazy, one of his first things is, like, clean your room. Like, that's the most important thing. Like, you're not a real man until you clean your room. And it's like, okay, that's not bad advice. All the other stuff that you surround it with, like the nonsense about the evolution of lobsters or whatever, is complete garbage. But giving people this answer that starts with something that makes sense, like, oh yeah, I should clean my room, that's a good way to get organized and then be motivated to do other things and take the next step, and the next step. So you have these elements of reasonable things, and you have people who are searching for it. To me, that doesn't surprise me. What it then says to me is, what we really need is better, legitimate tools and resources for those who are searching for help and those who

Ben Whitelaw:

And that chimes with the case studies, actually, in the report. There's a really interesting 10, 12 minute video of one of the research participants, a guy called Will, and he basically vocalizes everything you just said, Mike. He explains that he was isolated, that he had family issues, that he was completely detached from his dad, you know, the Adolescence link there. And he felt like people were saying to him that he was benefiting from male privilege, and as a kind of white working class guy from a town in the UK, he didn't feel that, he didn't associate with that. And so I felt that was a really interesting element to it, and that led him to turn online, to communities that would provide the support that he lacked offline, in the real world. He's actually gone on to become a facilitator for young men, in terms of discussing their feelings and issues related to online safety. And one of the things he talked about was finding a way to facilitate conversations with people in structured ways. So again, something we've talked about on the podcast previously: being able to have conversations preemptively about the issues that people are facing is just the thing that we keep coming back to, to a large degree. There was one thing where there was a big difference between those who engaged with this kind of influencer content and those who didn't, Mike, and that was around attitudes towards women. And this was a quite significant discrepancy of about 20 percentage points, again with the caveats of the research. But around two thirds, actually 70%, of men who engaged with the content felt that women were holding men back. They felt that women have it easier. They felt that feminism was kind of pushing men down. And that was a significant gap between them and those who didn't engage with this content at all.
And I wonder whether the issues that we think are online safety issues, the ones that come out in adolescence, are actually gender issues. They're actually related to how men think about women, which is obviously something that happens both on and offline.

Mike Masnick:

Yeah, I think that's absolutely true. I mean, I think a lot of it is gendered, in terms of the sort of root issues here. And again, it gets back to this idea of these people who've stepped into this space to try and provide answers. Often they present things in very gendered, gender-specific ways, and part of what they're trying to do, to tell people that they have the answers and to build a following, is often to find someone to blame. And so, you know, when you're trying to attract young men who are frustrated and angry, it's not surprising that often what they're doing is blaming the women for it, right? And that's horrible and horrific and problematic in all sorts of ways, and is ridiculous too, if you know anything about history, in terms of who has more power and, you know, who is more likely to be victimized. But for people who are struggling and looking for answers and feel lost, it becomes an easy target. And again, it gets back to how much we need to think about how you deal with these things. Some of it is educational, and I know that people hate when I say that, but a lot of this goes back to early education and digital literacy. And part of that is respect: learning to respect all people, and going into scenarios in which you recognize that other people are human beings, like you, and have struggles, like you. And it's not that everybody else in the world has things figured out, and therefore, if you're feeling helpless and behind, it's because everybody else is against you. And with that also comes the piece that you were just talking about, the communication, right? Like, a lot of people fall into these holes because they don't feel that they can talk to anyone. They don't have a friend or a trusted person in their life that they can open up to, and somehow opening up is seen as, you know, problematic.
And in fact, these kinds of manosphere influencer jackasses are almost against that, right? I mean, they sort of push this idea that being open and emotional and sharing your feelings is somehow unmasculine, which is a problem, and it sort of pulls people further and further away from actual helpful solutions to these things. And so, yeah, I think all of this is a problem, but it's a societal problem, and it's one that we go through in all different forms. And, you know, for years, obviously, men in society had much more power and privilege and women did not, and we went through a whole revolution in feminism to just try and get people to realize that. And we certainly haven't equalized things, right? There's all sorts of studies and stuff about the differences in gender and pay and other things that are problematic. We haven't figured those out on a societal level. And so some of this is obviously just backlash to feminism and reacting that way. You have all of these sort of larger societal issues, but many of these issues started way before the internet, and it's just that the internet has become the medium through which the sort of current lens on it is playing out. And that's where it kind of leads us, right?

Ben Whitelaw:

Yeah, and I guess I'm thinking about it a lot more just because of my own personal circumstances. Like, my wife is pregnant. You know this, Mike, we've talked a bit about it, but I haven't mentioned it to listeners. I'll be taking a few weeks off the podcast in May because, yeah, I wanna spend time with my boy, and, all being well, you know, that's an exciting thing. But also I've been thinking a lot about the responsibility that all parents have, and, you know, this is something that I'm gonna be new to and figuring out. So yeah, there's a personal component to it, but I know that many people, including listeners, think about this a lot.

Mike Masnick:

Yeah, no, it's challenging, right? And as a father of children who are older and are going through some of this stuff themselves right now, it's interesting to figure out: what kinds of things are they being influenced by? What kind of friends do they have? What sorts of things are they dealing with? It's an ongoing challenge, and again, there's no perfect answer and there's no right answer, and lots of people struggle. But yeah, I mean, understanding this stuff I think is important, but recognizing that just blaming stuff directly is not particularly helpful. And I think that actually leads into the other study I wanted to mention that's related to this, which is from Pew Research, which, you know, regularly comes out with really good research and just came out with its latest report on teens, social media and mental health. I've looked at this report in the past and found it really interesting and pretty thorough, in that, you know, in the past I had pointed out how it noted that a lot of teenagers really felt that social media was helpful to them, a lot felt it was neutral, and a small percentage felt it was negative. There's a lot more in the latest update of the report. One of the most notable things is that in the two years since the last report came out, the number of teens that said social media is negative jumped significantly: up 16 percentage points, from 32% to 48%. That's still less than half, and you still have a lot of teens who say it's really helpful. But as you go through the specifics, and they drill down on the questions, I really think, and this might be me coping, you know, you can blame my own bias on this, but reading through this, it really feels like teens are now being strongly influenced by their parents and the media telling them that social media is bad for them.

And so now, when they're asked about it, they repeat that back, like, yes, it's bad for them. But then, when you go into the details, you find that they don't seem to really act that way, as if they actually thought it was harmful. And in fact, my favorite chart in this, you know, there's a bunch of interesting charts, but in one of my favorite ones, they ask both parents and teenagers what was the biggest threat to teen mental health. And the parents are, like, overwhelmingly: social media. Social media is to blame. And everything else is barely on the chart at all. Whereas with teens, it's a mix. It does include social media, and that is the highest rating, but almost as high is bullying, and almost as high as that is pressures and expectations. And if you then look at the bullying, the bullying comes from social media often, not always, but often, and the pressures and expectations often come from social media too. So I think the real story is, as always, the bullying, which predates social media, certainly predates everything, right? Kid bullying is something that has happened forever. And pressures and expectations, that's a different thing altogether. Yes, it's intertwined with social media, and social media can change the impact, style, duration and reach of the bullying and the expectations, and I think that's worth understanding. But it's really the bullying and the pressures and expectations underlying it that are important. And it's notable to me that the teens realize that and point it out, and the parents are, like, no, it's all social media. As if, if we only deal with social media, if we only ban social media in schools or ban phones in schools, or whatever it is along those lines, that will get rid of these other problems. And I think the teens recognize, like, no, that's actually not gonna get rid of the real underlying problems.

But the parents are looking for that easy answer and saying, well, it's social media. Just blame social media.

Ben Whitelaw:

That's really interesting. I hadn't got around to looking at the Pew research, but that sounds fascinating, and we'll definitely dig into that. This actually leads us neatly on, Mike, to our next story. We've talked about the importance of good research for making data-driven decisions. The irony is that, in the US at least, that just got a lot less likely for a whole bunch of academics. Talk us through what's happening with the National Science Foundation's research grants and what that means.

Mike Masnick:

Yeah. I mean, we know that obviously Elon Musk and DOGE and the government have been cutting back on all sorts of grants and funding and all sorts of things, and in the last few days it has hit the sort of misinformation and disinformation world through the National Science Foundation, which is a huge, massive, major funder of academic research in the US. It's hard to fathom how much basic, important academic research is funded by the NSF. And so they alerted a bunch of academics who had ongoing grants from the government to study things that vaguely touched on mis- or disinformation that their grants were being canceled in the middle and they wouldn't get the rest of the money. Now, one of the people who was either affected by this or knows someone who was affected by it, I'm not even sure which, sent me a spreadsheet of a bunch of the canceled awards.

Ben Whitelaw:

What, what are they, what are they titled? What do they say?

Mike Masnick:

I mean, there are all these titles, like, it's anything that mentions misinformation, and there's a whole bunch of them. You know, some of them you could see why an angry Elon Musk would not want people to be researching them. But some of it is just, you know, crazy. There's all sorts of things about, like, social responses to misinformation, right? There was one on market responses to misinformation. You would think, in theory, Republicans like market responses to things. Some of 'em are really, really crazy. Sorry, it's a big list, and I'm skimming through it now to try and find the ones that I thought were interesting, I should have marked them down beforehand. But the one that's craziest to me is this one by a professor who's working on combating censorship from within the network. And I was looking at his research, and it's all building tools, literal technology tools, for getting around government censorship. You would think the US government would support that kind of thing. But no, I'm sure they just probably put in a keyword, censorship, and assumed that this is pro-censorship when it's anti-censorship. But there's just a whole bunch of these research projects that are about, you know, designing technologies for marginalized communities, can't have that, all these different approaches to dealing with misinformation.

Ben Whitelaw:

so these, this research won't ever come to fruition now, for a lot of

Mike Masnick:

I mean, we'll see what happens, right? Some of these grants have been partially paid out; in some cases, most of the grant has already been paid out, so it might just end them a bit earlier. I'm sure a lot of these involve other grants from private philanthropies to back up the research, so it may just limit what they can do, and maybe other private philanthropies will step up and fill in the gaps. But it's a real hit on all sorts of important research around mis- and disinformation, which we all know is really important to be studying right now. And to have the rug just pulled out from under these grants is really, really problematic.

Ben Whitelaw:

And if any listeners have been affected by the cuts to the NSF grants, let us know, drop us an email. We're obviously very sorry to hear that, but if you wanna talk to us about it and explain the effect of the change in situation on your research project, drop us a note: podcast@ctrlaltspeech.com. That's C-T-R-L-A-L-T speech.com. Yeah, we're really interested to hear how this story plays out. Thanks for that, Mike. I actually ordered a bell for this next story, Mike, no word of a lie, but it hasn't arrived yet,

Mike Masnick:

Oh no.

Ben Whitelaw:

you won't be able to use it.

Mike Masnick:

We can rely on my favorite: ding, ding, ding,

Ben Whitelaw:

Yeah, this is a Blue Sky story. Mike is a board member of Blue Sky, as many listeners will know. A couple of little Blue Sky stories; I'm not sure if you'll actually be able to talk about this one, Mike, we'll see. The one that I was initially wanting to bring to you and talk about was the addition of blue checks for trusted accounts on Blue Sky. Many listeners will know that there's been no official blue check system to date. Many users on Blue Sky have used domain handles as a way of verifying who they are, a slightly technical approach, critics have said. And now Blue Sky will be verifying based upon what it calls authenticity and notability, and will be giving certain organizations permission to verify their own employees, or other people that they deem as verifiable. So, for example, the blog post talks about the New York Times verifying its journalists with a scalloped check mark, which is a nice little design touch, kudos to the Blue Sky design team for that. But this is essentially Blue Sky's move into being a slightly more verified, kind of trusted platform. We know that verification is a content moderation tool in many respects, Mike, and we've talked about how X slash Twitter very unceremoniously removed its blue check system after Elon Musk took over the company. What can you say about this? Were you, you know, secretly pulling the strings? Did you know about this before it launched?
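For listeners curious about the "slightly technical approach" Ben mentions: in the AT Protocol that Blue Sky is built on, a domain handle verifies an account because the domain itself publishes the account's identifier (its DID), either in a DNS TXT record at `_atproto.<domain>` or as plain text at the HTTPS path `/.well-known/atproto-did`. Below is a minimal, illustrative sketch of the HTTPS variant, with the network fetch injected as a function so the matching logic is visible on its own; the handle and DID values are made up for the example and are not real accounts.

```python
# Sketch of AT Protocol-style domain-handle checking (HTTPS well-known variant).
# The handle and DID values here are hypothetical examples, not real accounts.

from typing import Callable

def handle_matches_did(handle: str, expected_did: str,
                       fetch: Callable[[str], str]) -> bool:
    """A handle verifies if the domain serves the account's DID at the
    well-known path. `fetch` is injected so the logic can be run offline;
    in practice it would be an HTTPS GET returning the response body."""
    url = f"https://{handle}/.well-known/atproto-did"
    try:
        served = fetch(url).strip()
    except Exception:
        # Nothing published at that path: the handle cannot be verified.
        return False
    return served == expected_did

# Offline stand-in for the network: pretend this domain publishes this DID.
def fake_fetch(url: str) -> str:
    published = {
        "https://alice.example.com/.well-known/atproto-did":
            "did:plc:abc123fakeexample",
    }
    return published[url]  # KeyError (caught above) means "nothing published"
```

The point of the design, and why critics call it technical, is that control of the domain is the proof: anyone who can serve that file (or the equivalent DNS record) can claim the handle, and nobody else can.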

Mike Masnick:

I did know about this before it launched, and I was involved in some of the early discussions about the reasoning and the thinking behind it. There were many months of internal discussions, in which I was more of an observer, and occasionally I would add in some comments and thoughts. You know, I didn't make any particular decisions, and I hadn't actually spoken to the team about it in a few months now, so I didn't even know it was launching until it did. I think there are a lot of interesting things in here. Originally, the team really did think that the domain name based verification was a good system, and it has its benefits, but as you noted, it's sort of technical and confusing for some people. And it is also open to gaming. Like, there was an issue where someone registered a bunch of people's names, popular people, and then tried to sell them to them on Blue Sky. So there are risks associated with that, and there are challenges associated with that. The challenge for Blue Sky was: could you come up with a verification system that stayed true to its decentralized ethos and mission? And this is sort of where they came down on it. I think, you know, everyone is focused on the fact that Blue Sky is starting to verify some people. To some extent, that became a necessary thing, and there were strong demands from a lot of people, well-known celebrities, folks who were saying they wouldn't be able to join the platform unless there was some sort of verification, 'cause they didn't feel comfortable

Ben Whitelaw:

Right

Mike Masnick:

And there were other hacks beyond just the domain name thing, right? The labeling system. Hunter Walker, the journalist, built his own labeler that was verifying journalists, and he was doing a really amazing job of it. So there were things like that, but even the labeling system was a little bit confusing for some people. So this was sort of a compromise offering. I think the important part of it is not so much the fact that Blue Sky is verifying some people, but the fact that it's allowing third party verifiers. And so, yes, it starts out with, like, the New York Times and Wired, and they can verify their own journalists, and it's kind of cool: if you click on the verification, a pop-up shows you who verified them. But you can think of where that then goes. This is sort of the first release. You could see where that can then expand, and you could have all different people, and you build on what's known as the web of trust. Eventually it can, and should, I think, get to this world where you decide who you want to trust as the verifier, and it could be lots of people. So it could get to the point where you or I could verify people, like, I believe this is so and so, and if other people trust you, then the verification sticks. But you could start to pick stuff. And we're already seeing, like, there's an alternative interface for Blue Sky called Deer Social, which is actually very cool, and just last night they were toying around with this idea of choosing your verifiers, so you could pick and choose which verifiers you actually wanted to trust and which you didn't. And so I think we start to get to that world where, again, you go back to this really decentralized aspect, even if verification feels like a centralized system, where you have different parties, and it's about who you trust, and who do they trust, and you build this what's known as a web of trust. And I think that is actually how social communities build up in the first place.
You know, I vouch for you, if you wanna introduce someone to me, you know, we vouch for each other, whatever. We're sort of moving that into the digital world through the verification thing, where it's not all about Blue Sky verifying this person or that person. It doesn't become this winners-and-losers sort of setup that some people frame it as. It becomes a system where you pick who you trust, and they will point to the people that they trust are legitimate and real, which is the point of
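The web-of-trust model Mike describes, where you pick root verifiers, those verifiers attest to accounts, and trust can chain through whoever they in turn vouch for, can be sketched as a simple graph search. To be clear, this is an illustrative toy, not Blue Sky's actual implementation or the AT Protocol's verification records; the names in the `attestations` table are hypothetical.

```python
# Toy web-of-trust lookup: an account counts as verified if any verifier
# you chose to trust attests to it, directly or via a short chain.
# All names here are hypothetical; this is not Bluesky's real data model.

from collections import deque

# Who each verifier vouches for (verifier -> set of attested accounts/verifiers).
attestations = {
    "bsky.app": {"nytimes.com", "wired.com"},
    "nytimes.com": {"reporter-a", "reporter-b"},
    "wired.com": {"reporter-c"},
}

def is_verified(account: str, trusted_roots: set[str], max_hops: int = 3) -> bool:
    """Breadth-first search outward from the verifiers the user trusts,
    following attestation links up to `max_hops` steps."""
    frontier = deque((root, 0) for root in trusted_roots)
    seen = set(trusted_roots)
    while frontier:
        verifier, hops = frontier.popleft()
        vouched = attestations.get(verifier, set())
        if account in vouched:
            return True
        if hops < max_hops:
            for nxt in vouched - seen:
                seen.add(nxt)
                frontier.append((nxt, hops + 1))
    return False
```

With only `"bsky.app"` as a trusted root, `"reporter-a"` comes back verified through the chain bsky.app to nytimes.com to reporter-a, while an account nobody attests to does not. Swapping out `trusted_roots` is exactly the "choose your verifiers" idea Deer Social was toying with.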

Ben Whitelaw:

Right. So in theory, could you have multiple verifications from a number of different verifiers?

Mike Masnick:

Yes. And certainly that was part of the discussion: how to do that. The tricky part is, like, how do you do that without getting a screen that's all cluttered, right? So there are some visual elements that make it tricky. But yeah, in theory, part of it is that you can have multiple verifiers verifying, and then you can layer on different things based on that, which is, you know, always part of the point of Blue Sky: having these different layers and different ways of working things out. What they've released now is a partial version of that, with a few verifiers, where they're picking who the verifiers are. And that opens up some interesting things. But, you know, this is to sort of put toes in the water and test out the whole verification system. I think there's a lot more that they can do with it, and I hope they'll get there. Again, it's the team's decision; I give my input, I weigh in on it. But I think there's a lot more coming on this. Some people are framing this as, like, a step back from decentralization. I don't think it is, and I don't think it should be viewed that way.

Ben Whitelaw:

Okay, that's interesting. Definitely a media play from Blue Sky, based upon who it started with. I can't help but ask you about the other Blue Sky story this week, Mike, which many people have sent us and asked us to comment on. We only have a few minutes left today, but there's a story that TechCrunch and others have published about Blue Sky restricting access to some accounts in Turkey at the request of the Turkish government. I dunno how much you can say on this, and we don't have a lot of time, but this is something that is obviously interesting both to people who've come to Blue Sky from other platforms and to those who are interested in the topics we cover on Ctrl-Alt-Speech.

Mike Masnick:

Yeah, it's tricky. I can't talk about it in too much depth. You know, the company hasn't said anything publicly about it. There are discussions internally, some of which I've been very loosely involved in, basically sharing my thoughts on what the company should do. They haven't spoken publicly about it yet, other than, last year, they did put out a statement about how they were going to deal with demands from countries. And that included location based labelers that then, you know, say who can see stuff in which country. That was basically a way to deal with the geographic restrictions which every company deals with at some point or another. And it is important to note, as is noted in the TechCrunch story about what happened with Turkey, that alternative platforms for viewing Blue Sky are not necessarily bound by those particular restrictions, which again gets to the benefit of a decentralized system. And so I think I'll kind of leave it there. Decentralized systems lead to some interesting challenges, and ways of dealing with these

Ben Whitelaw:

Yeah. Okay. Well, one to return to, lots more to dig into there, but that basically takes us to the end of today's podcast. Mike, we've covered a couple of really big, chunky stories, and a selection of others as well that we came across this week. I just wanna thank you for your time and your insights. Great to have you comment on, particularly, the Blue Sky stuff. Also, a shout out to the news outlets that we talked about today: ABC News (the Australian version), Reuters, Nieman Lab, France 24 and TechCrunch, all producing great stories that we were able to comment on in the podcast. Go and read them, go and subscribe to 'em if you can. And

Mike Masnick:

go to 4chan.

Ben Whitelaw:

Don't go to 4chan, unless you're gonna be called as an expert witness. Um, thanks everyone, appreciate you joining this week, and we'll see you next week.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L-alt-speech.com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.
