Ctrl-Alt-Speech

Are Platforms Ready for Elections?

April 05, 2024 | Season 1, Episode 4 | Mike Masnick & Ben Whitelaw

In this week's round-up of the latest news in online speech, content moderation, and internet regulation, Mike and Ben cover: 

  • Elon Musk's X appoints new safety chiefs as it seeks to rebuild ads business (Fortune)
  • X’s ‘complimentary’ Premium push gives people blue checks they didn’t ask for (The Verge)
  • Mozilla Research: Platforms’ Election Interventions in the Global Majority Are Ineffective (Mozilla)
  • Implementing the Online Safety Act: Additional duties for ‘categorised’ online services (Ofcom)
  • Social networks could quit Britain under online safety laws, Reddit claims (The Telegraph)
  • Patreon is taking Reddit’s approach to content moderation (The Verge)
  • Facebook has blocked Kansas Reflector (Kansas Reflector)

The episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our launch sponsor Modulate, the prosocial voice technology company making online spaces safer and more inclusive. In our Bonus Chat at the end of the episode, Modulate's Director of Marketing Mark Nolan chats with Mike about the recent launch of the Gaming Safety Coalition, why it's important for Modulate to work with other companies to create stronger gaming environments, and the importance of a hybrid approach to T&S.

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.


Ben Whitelaw:

So, borrowing a line from Discord's status box: what's cooking, Mike?

Mike Masnick:

Well, Ben, uh, I was just thinking about how I want to write a book called "Everything I've Learned About Verification, I've Learned from Elon Musk," and it will be an entire lesson in what not to do. A manual, yes, a manual of things not to do. What's cooking with you?

Ben Whitelaw:

Well, having exhausted most of the social platforms' post prompts as the opening gambit of our podcast over the last three weeks, and having kind of reached the end of that line, I'm not really sure what we're going to do next week.

Mike Masnick:

There are more.

Ben Whitelaw:

We've run out.

Mike Masnick:

I guarantee you, I am going to find more, different kinds of website prompts that we are going to be using. We are not running out. And also, Ben, we will be able to reuse them. It is okay. I do not think that anyone will mind too much.

Ben Whitelaw:

Okay, well, there's a reason to come back and listen to next week's podcast for that alone. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. This week's episode is brought to you with financial support from the Future of Online Trust and Safety Fund, and courtesy of our launch sponsor, Modulate, the prosocial voice technology company, who is doing some great work to make online spaces safer and more inclusive. We've got a great bonus chat at the end of today's show with Mark Nolan, Modulate's Director of Marketing. We'll be talking about a new initiative that they've launched called the Gaming Safety Coalition, why it's partnering with other companies to create stronger online gaming environments, and the importance of a hybrid approach to trust and safety with both AI and humans. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation, a newsletter that comes out every Friday with your roundup of trust and safety news. I'm joined by somebody who produces many, many more articles and many more fantastic stories than I do each week: Mike Masnick.

Mike Masnick:

Well, hey, uh, that's... I'm not sure how to respond to that, Ben.

Ben Whitelaw:

I'd like to, um... it's the deferential Briton in me,

Mike Masnick:

I see, I,

Ben Whitelaw:

I, you know, I have to do that, I'm afraid.

Mike Masnick:

Well, things are good. I was about to say, I keep getting accused by people, and this was before things like ChatGPT existed, of using an AI to do all my writing for me. So my prolificness is sometimes a curse more than something worth praising, but, uh, I am still, I'm still

Ben Whitelaw:

you haven't.

Mike Masnick:

writing this stuff myself. It's not an AI.

Ben Whitelaw:

Okay, okay. Oh, I'll believe you on that. And I'm waiting for you to share your productivity tips.

Mike Masnick:

I am actually working on a post, which I don't know if it's going to go up next week or in a few weeks, about how I am using some AI tools, not for writing, but for a bit of productivity. So,

Ben Whitelaw:

Interesting.

Mike Masnick:

I will be, I will be revealing some of my secrets very soon.

Ben Whitelaw:

Oh, is it likely that I'm going to show up to a recording sometime soon and find an avatar

Mike Masnick:

Absolutely. Absolutely. If I can outsource this part of it, having to actually talk to you, my goodness.

Ben Whitelaw:

If I had a pound for every time somebody said that, believe me... Well, it's great to see you anyway. We should dive straight into today's stories. We will start with X slash Twitter and a story we both read and noted, which is both a kind of personnel story but also potentially a signal of a change in approach, or at least a doubling down on an approach to trust and safety: the appointment of a new head of trust and safety at the company.

Mike Masnick:

Yeah. And so, you know, obviously there's been a lot of attention paid to who, if anyone, is running trust and safety at X slash Twitter. There was the famous removal of Yoel Roth, who was held over from the existing company and really only lasted a few weeks, and then was replaced with Ella Irwin, who was there for a few months and then apparently quit. And then there was nothing, and we had no word on anything related to who was actually running trust and safety. So finally we got this announcement of someone actually being appointed to run what is no longer trust and safety at X; it is now only safety. Elon had put out a, I don't know if they're still called tweets, but he put out a tweet not too long ago with some, you know, pompous, meaningless nonsense about how they just call it safety now, because trust must be earned, and if they're calling it trust, it's a euphemism for censorship, which...

Ben Whitelaw:

it sounds like my mum, my mum says stuff like that.

Mike Masnick:

I mean, all of which is just garbage and is meaningless. And anyway, if trust needs to be earned, who cares whether or not... well, does that mean that since you've taken it away, you're not even trying to earn our trust anymore? You know, I don't know. None of it makes any sense. It also suggests a deep, deep misunderstanding of the nature of trust and safety and why companies have trust and safety, why we even have the term trust and safety. But eventually there was this announcement this week that they've now hired this person, Kylie McRoberts, who has been working at X for some time and who apparently is the person who was in charge of setting up, I don't know if you remember, maybe a couple of months ago the company announced they were setting up this, like, center of excellence in Texas, and they were going to hire like a hundred people. It's like, oh, wow, you fired thousands and now you're going to hire a hundred people to do trust and safety work. And so this is the person who apparently set up that center in Texas, and she is going to run this new approach to safety, which appears to be very product- and technology-focused as opposed to human-focused. Like, they are hiring employees, but they're really trying to take... and again, this sort of fits with Elon's ethos. You know, he had come in early on and suggested that trust and safety didn't need all these people, that there should be technological approaches to solving the greatest human societal problems of all time, the ones we've never been able to solve. And so,

Ben Whitelaw:

What do you make of that? Is that kind of approach sustainable? Is it logical? Is it the right way forward, do you think, for any platform, let alone X slash Twitter?

Mike Masnick:

So there are a few different things to say there, right? There is the idea that if they can come up with some technological approach that is helpful, that is different, and that's not a bad thing. It's always good for people to be thinking about different approaches and where technology might help. But the reality is a lot of people have been thinking about this, a lot of very smart people. Hopefully a lot of the people who are listening to this podcast are among those people who've been working on these issues for a very long time and who realize that, while technology is a tool, and a useful tool for doing certain things, this is inherently a human problem. The issues that you are dealing with in the trust and safety world are human issues, and you need humans who understand humans as a part of that process in some form or another. Taking an approach that says these things can be solved entirely with technology seems likely to run into some pretty significant struggles pretty quickly. You know, that's not to say that everything has been figured out and that there's a right approach or a wrong approach. I mean, there are wrong approaches. So I'm interested to see what comes out of this, but I would be surprised if there's anything particularly insightful or breakthrough from this particular approach. And honestly, this announcement, and the hiring, or promotion, I guess, of McRoberts, feels more like the company realizing it needed to say something publicly, because the questions keep coming up about who is actually running your safety efforts. And notably, at the same time that this announcement was made, there was also the announcement that they had hired someone else to handle brand safety. And this is more clearly, like, working with advertisers. This was clearly a Linda Yaccarino hire. It's somebody who has experience in the advertising world, and advertisers have left in droves, and even the ones that have come back appear to be spending a lot less than they were before. So it's basically someone to go and soothe the advertisers and say, I'm here; if you come across any problems, or you find out that your IBM ads are showing up next to neo-Nazis again, let me know and I will take care of it. And so...

Ben Whitelaw:

Quite a job.

Mike Masnick:

It strikes me as more of an ombudsperson kind of role, to make the advertisers feel more comfortable, which, I don't know if that works. It's somebody who comes from the advertising world. And so there are different things going on here, and I think a lot of it is really for show as the company is getting more desperate for advertisers, as opposed to a particularly serious rethinking of how the company sees trust and safety.

Ben Whitelaw:

Yeah. And where does this fit on Musk's kind of learning curve around trust and safety, do you think? Because you've written about this before; he is essentially doing a kind of caretaker role with Linda Yaccarino over the last, whatever, eight, nine months since Ella Irwin left. What do you think he knows now that he didn't before?

Mike Masnick:

I don't know exactly. You know, it's very difficult to get into the mind of Elon Musk, because it is a weird, weird place. That is one tough one to figure out. The thing that everybody does realize when running a platform that has lots of humans is that humans are messy, and eventually you have to try and figure out ways to deal with it. And I do find it interesting that, the same week that this is happening, there are two other things that happened that I think are related to the trust and safety aspect. One of which is that they have started gifting premium subscriptions, or what used to be Twitter Blue, or what some people refer to as the blue checks, to people who are, effectively, anyone popular on the platform. I mean, it's so confusing, right? Because the history here is that verification used to be for people who were notable in some form or another, where there was a fear or risk of impersonation. And so Twitter would go in and verify, like, this is really who this person is, and it was to protect those individuals. It then turned into something of a status symbol, because there were questions about who got chosen, and Musk seemed to interpret it as the status symbol being the value, as opposed to the protecting people from impersonation or fakeness, and therefore decided that you get rid of it for all of the people who actually needed it, and you make it something that you can charge for, and you merge it with the premium program, which actually then undermined the value of it. And you had all of this, you know, garbage where basically everybody with a blue check turned out to be kind of an idiot. And then, on top of that, you add in that they would rank the replies on popular posts so the blue check ones would show up first, and so where it came down was that you tended to have the dumbest possible responses to the popular posts all up top,

Ben Whitelaw:

So bad. One of the worst product features I've ever seen in my life.

Mike Masnick:

Which just seemed not to work well. And so now what's happened, the revising of that policy, is that if you have 2,500 people who are paying for the blue check who follow you, you get the blue check for free. And in practice, that seems to mean that anyone who is big enough to have somewhere in the range of 70,000 followers probably has 2,500 people who are stupid enough to pay for a blue check, and therefore they get the blue check too. And in theory, through this convoluted, very, very silly system, this might improve the quality of commentary under popular tweets, because some of those people who might be a little bit more thoughtful and a little bit less crazy will have some of their replies show up higher, and I guess some of their comments will get more attention, and the algorithm will weight them higher because they have a blue check. So in some weird, convoluted way, this brings back a little of the original version of verification, what Elon once referred to as lords and peasants. It brings back the lords and peasants, which he has yet to acknowledge. He seems to think that this is a great thing, when a year and a half ago it was this evil lords-and-peasants system; really just a year ago, honestly. So it's bringing some element of that back, but in the strangest possible, least useful way. But it might, almost by accident, improve some of the quality of conversation on the platform.
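A minimal, purely illustrative sketch in Python of the threshold Mike describes. The function and variable names, and the auto-gifting behaviour, are assumptions for illustration only, not X's actual implementation; the 2,500 figure is simply the number mentioned above.

# Illustrative only: X's real eligibility logic is not public.
PREMIUM_FOLLOWER_THRESHOLD = 2_500  # the cutoff mentioned in the episode

def gets_free_blue_check(premium_follower_count: int) -> bool:
    # An account is gifted Premium (the blue check) for free if at least
    # 2,500 of its followers are themselves paying Premium subscribers.
    return premium_follower_count >= PREMIUM_FOLLOWER_THRESHOLD

print(gets_free_blue_check(3_100))  # True: enough paying followers
print(gets_free_blue_check(400))    # False: below the threshold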

Ben Whitelaw:

Yeah, it's interesting, isn't it? I thought he was going to leave verification in its old guise to kind of die, and the people who had verification, of which you were one, I believe... right, but you're not anymore. And you haven't been given a blue tick, as far as I can see, since this change, which I'm guessing is because your followers are too smart

Mike Masnick:

Ha

Ben Whitelaw:

for Twitter blue.

Mike Masnick:

Yeah.

Ben Whitelaw:

badge of honor is to not have the new Twitter blue, the

Mike Masnick:

To not have so many idiots following you, yeah. I mean, there are a few things there. So I did have the old one, but I never asked for it; it just showed up one day, which was a very weird thing. Like, I had deliberately chosen not to ask for verification on the old Twitter, and one day I just had it, and I don't know why.

Ben Whitelaw:

No choice.

Mike Masnick:

It was never explained to me. But yeah, so now, also, I just haven't been using Twitter either, and I locked my account, and so I've had no new followers in over a year at this point. But yes, I think I do not have enough, uh, idiot followers.

Ben Whitelaw:

It's interesting, because when I worked in newsrooms, I worked at a couple of national papers in the UK, and the way that people got verified was that a Twitter partner manager would come to us and ask for the names and email addresses of journalists who were on staff, right, or who were regular contributors. It was a very manual process, which obviously was not scalable, but it kind of gave the platform credibility in its early stages, and that's why lots of journalists use Twitter, or used to use Twitter. So it is interesting to see, you're right, this kind of return, to some extent, of Twitter towards verification. And I think in the wider context of other platforms it's also interesting, right? We talked before we started the show about how LinkedIn is making a massive push into authenticating users. You know, I can't go on that app without being prompted to upload some documents or my email address in order to,

Mike Masnick:

Right.

Ben Whitelaw:

to say who I am, right, and all the benefits that that supposedly brings.

Mike Masnick:

Though, as I was saying, LinkedIn keeps rejecting my attempts to verify myself, but that's another story. But, you know, the other thing I did want to note really quickly: the other story this week on Twitter is that Elon claimed that he was finally, and he's claimed this before, getting rid of the bots and spam and stuff. And so there is this sense that there's something going on at Twitter, or X, where he feels he needs to clean up the platform in some way, and so he is making a concerted effort to actually do so. It is sort of weird and ham-fisted and, in typical Elon Musk fashion, just not the best way of going about this stuff, but there's a sense, I think, between these three stories combined, that he realizes the platform has to be cleaned up.

Ben Whitelaw:

Yeah. And we should move on, but it's probably the US election, right?

Mike Masnick:

I don't know if I believe that. That would make sense for anyone else, but given his particular political views right now, I'm not sure that that is what he's thinking. You know, it is possible that Linda Yaccarino is whispering in his ear and saying, like, advertisers are getting really kind of pissed off about all this and we need some advertising revenue, so you've got to clean up this stuff, and these are sort of just table stakes to do so. But I don't know if that's true or not.

Ben Whitelaw:

Okay. Well, elections nonetheless, as a neat segue into our next story, which is something I've been reading this week. It's not particularly new, but I wanted to bring it to you and to our listeners because it's very current and relevant now, as platforms start to announce their interventions ahead of the US and UK elections, as well as India and the EU, which are happening in the next few months. So you might have seen, I certainly have, lots of platforms saying: we're doing great things ahead of the election; here's all the work we're doing to make sure that our platform is safe. And I've always wondered what that actually looks like on a wider scale, when you zoom out, and a report by Mozilla gives us an indication of that. Published about a month ago, it's an interesting analysis of all of the interventions from the last seven years around elections across 27 countries, looking not just at Meta or Twitter but across six platforms. The researcher, Odanga Madung, has created a kind of spreadsheet of all of the announcements and categorized them, to give us a real sense of what it is, where it is people are focusing, and how they are trying to preempt the problems that are more likely to happen during elections. And it's really fascinating to see these kind of mini themes come out. There are about 200 interventions overall, which, you know, I was kind of surprised by; it's a relatively big number. And then he breaks it down into a few different categories. So, unsurprisingly, I think, Meta has the largest number of interventions; it's responsible for almost half, and I think that's because, and he says this in the report, of the heat that the platform has got related to elections, going back further than seven years. Then he looks at the regions or countries where the interventions are targeted, and he finds that actually it's mostly the US and the EU, which again is not super surprising. But he notes, as a Kenyan researcher and somebody who's done a lot of work looking at trust and safety policies over the last few years, that the companies have basically completely missed out the rest of the world in many senses. They have not at all focused their interventions on massive democracies and countries which have gone to the polls in that time. And then, lastly, he looks at the different types of interventions, and he finds that it's a mixture of fact checking, digital literacy initiatives, and content moderation policy changes as well, which are kind of evenly split across the different interventions that you find. So basically, looking in aggregate at these new ideas and this work that platforms have introduced gives us a whole new perspective, I think, on these individual blog posts that we read and share, I share on the newsletter, you share via Techdirt, and it gives us a new view, I think, of the very EU- and US-centric nature of these interventions. And Odanga makes it very clear that, you know, this is not great, this is not good enough. He calls it a rinse-and-repeat approach to policies, because he finds that there are lots of different policies and interventions that are essentially copy and pasted across countries.

Mike Masnick:

Yeah. In some cases,

Ben Whitelaw:

A mad example, yeah. Did you see the example in Kenya, Nigeria, and there was another country, I think, where basically the same text was pushed out across those three countries, despite them having completely different contexts, different elections, different polling processes?

Mike Masnick:

I mean,

Ben Whitelaw:

I was quite shocked by that.

Mike Masnick:

Yeah, I mean, the other example of that that I thought was really eye-opening was the policy in Brazil regarding elections, where they talk about mail-in ballots, even though there aren't mail-in ballots in Brazil. So they just sort of copied that from elsewhere, probably the US, and it just sort of shows: elections are obviously very local topics, and it does feel like the social media platforms... and again, it should be noted, these are announced interventions, and you have to assume that there are other interventions happening that aren't announced and aren't publicly discussed, which raises its own questions, if the announced ones are so poorly done and poorly designed. And it's not surprising that the interventions are more focused on the US and the EU, in that these companies tend to be much more heavily based in those places, and those are the places that get more attention in general from the media. But you would hope, especially this far along, and especially for the biggest companies, Meta for example, that they would recognize that the different countries and the different contexts around the globe, and especially in the global majority, really matter, and the specific context too, and that you can't just phone it in. You can't just say, we're going to copy this and do the same thing. Now, the other thing that struck me in the report was that the approaches outside of the US and Europe were notably different as well. So, I mean, you talked about the digital literacy and the fact checking and the content moderation, and what they noted was that those first two things tended to be more common in the West, and the last one tended to be more common in the Global South. And so there is this element, one way to read it, of effectively: in the West, we're going to try and deal with the underlying problems of people not understanding these things and being tricked by disinformation or whatever, by trying to educate people and help people understand these things; and then, in the rest of the world, in the global majority, we're just going to pull stuff down because we don't want to deal with it. And it has this feeling, whether it's true or not, of just wiping your hands of the global majority and saying, this stuff is too complicated for us to deal with, so we're just not going to do it. And, like, I recognize the challenges, right? You have however many countries out there, you know, 170-some-odd countries or so, depending on how you count. It is tricky to find expertise in every one of those, and obviously not all of them have elections and everything. But how do you deal with these things? There certainly appears to be more that could be done that is also more respectful of the local context, the local cultures, the local languages, and that, it appears, is still not being done. Maybe you can understand that with smaller platforms, but for the largest platforms in the world, Meta in particular, it feels like they should be a lot more on top of this stuff by now.

Ben Whitelaw:

And that leads Odanga to basically posit the idea that there is a diminishing focus on election integrity across all platforms, because he draws a line between the cuts to trust and safety teams and these efforts, and says that, you know, basically platforms can't stretch themselves, with the resources they have, across all of the elections and all of the countries. One thought I had, and I'm playing devil's advocate a little bit here, Mike: there were 30 elections last year across the world, obviously there are something like 60-plus, I think, this year, and half of the world's population is going to the polls. Have we seen the kind of fallout that we might expect from the lack of preparation that platforms have put in place? Like, have we seen it go as badly as we expected? And again, I don't agree with this personally, but are the platforms justified in their approach and the amount of time they spend on elections? Because we, as far as I can see, haven't seen the kinds of huge, seismic, Facebook-inspired or otherwise-inspired election results that we maybe thought we would, or that maybe have happened in the past. And I know you have views on this generally.

Mike Masnick:

You know, it depends on who you talk to, right? So there are people who are very ready and willing to blame any sort of election result that they disagree with, or that they think is problematic, on social media companies, and I think that's silly and not realistic. I think that social media is often a reflection of what is actually happening in society, and when things happen that people disagree with and they blame it on social media, they're sort of missing the point. So I do think it is a little overblown to blame these things on social media. There's always a question of what is feeding into what, and it is always a combination of different factors. And I do, separately, agree to some extent that the idea that somebody is going to put some sort of deepfake on Facebook and it's going to magically convince a whole bunch of people to vote in a way that they weren't planning to vote otherwise, I think that is an unlikely scenario. But I do think that there are still very significant concerns at the margins of how these tools are used, and how campaigns and foreign state actors are trying to influence, like, get-out-the-vote efforts and things of that nature, and, you know, how much strong support versus weak support there is for certain candidates. There are a number of things at the margins that do matter, and where there are malicious actors trying to game systems, you need thoughtful approaches to election integrity.

Ben Whitelaw:

Yeah. I mean, I asked that question, but obviously the incidents in Myanmar and in Ethiopia, you know, they are, I think, indications, if not in an election context, of the potential

Mike Masnick:

Yes,

Ben Whitelaw:

of platforms, right? But I think, if we're being very specific here about the election results, particularly over the last year, I wonder if we've yet to see the fruits of these... I mean, I wonder if basically the platforms are doing just about enough to kind of get by

Mike Masnick:

I mean, the issue

Ben Whitelaw:

and how you even quantify

Mike Masnick:

There is no way to really quantify that. And it is a moving target, right? I mean, malicious actors are a moving target, and they're always seeking the next advantage and the next edge. And so, yes, I don't think there are any platforms that don't take this seriously at all, with the possible exception of Twitter slash X. There are definitely a lot of people who are thinking about these things and trying to deal with the different approaches and looking at it. And part of the issue is that some of this is just the nature of how elections work. Like, dirty-trickster politics is not a new concept and did not suddenly spring forth with social media. It has existed; you can go back as far as you want, to any sort of democracy, and see efforts to trick the public and to convince them of things that aren't true and to get them to vote in a certain way. And so I don't think any of that has changed. The question is, how do you deal with it? And that is something that people are still studying and still figuring out; there's a huge amount of academic research looking into these things as well. So, as with anything, it's complicated, and there is a mix of different factors here. And nobody is saying that the companies aren't taking it seriously at all, because they all are; it's just a question of how much effort they're putting into it and what areas they're still lacking in, where they could be better. So it's one thing to say, oh, Meta doesn't care about elections. That's obviously wrong; clearly they do, and clearly they have lots of people working on it. What I think this report highlights is that they clearly care more about certain countries, certain areas, and they're taking a more contextually relevant approach in those areas than in other countries. And that's a shame, and it would be nice if they were willing to recognize that and do a little more in other places as well.

Ben Whitelaw:

Yeah, agreed. So, for listeners, if you see announcements over the coming weeks and months related to elections, remember this podcast, and do return to the Mozilla report we referred to, which we'll include in today's show notes. It is worth taking a look; it's an interesting piece. And on that note, we should move swiftly on to our best of the rest. Mike, we've done our two in-depth stories, and at the end of every episode we flash through other things that are worth our listeners' attention.

Mike Masnick:

as fast as we can, which is not always that fast, but yes.

Ben Whitelaw:

No, let's give it a go. So I'll start this week. I think the most interesting other story that I've read this week is some new information coming out of Ofcom, who are obviously working very hard behind the scenes on the Online Safety Act, the UK's legislation to regulate online speech. It has put out some information about the categorization of the largest companies and the different duties that those companies are going to have to fulfill. It's seeking evidence to inform its approach, so if you're listening to this and you want to get involved in that, there's a six-week window to get in touch and inform this approach. But basically it lays out the three categories, category 1, category 2A, and category 2B, with different kinds of designations on those, and unpacks what those different platforms are going to have to do if they are categorized in one of those buckets. So category 1 has a number of duties, ranging from things like transparency reporting, which is in lots of different legislation nowadays, and also includes things like protection for publishers and journalistic content, which is a whole minefield that I know different jurisdictions have struggled with in different forms and is super hard to do; and then this quite odd line, which I noted, about providing "user empowerment features." What that means on one level is anybody's guess, but if you dig a little bit deeper, it is, I think, a product feature where it wants users to be able to find verified users and filter out non-verified users on the platform, which is obviously something we've talked about today. So basically it's starting to outline its approach to what platforms are going to have to do. The timeline is a long one; there's a formal consultation that's going to follow in 2025, so this is a kind of staggered process. But it's interesting to start to think about which platforms will fall into these different buckets. You know, are they large enough? Do they do certain things? Do they recommend content? Do they represent a percentage of the UK population? Those are the ways that Ofcom is thinking about how to designate different platforms. So yeah, this picture is emerging as to how the UK regulatory regime will unfold.

Mike Masnick:

Yeah. And it's interesting, and certainly reminiscent of the DSA and the concept of VLOPs, with larger duties and larger requirements. There's also always the element of: this Online Safety Act is now going into effect, and then we're figuring out what it actually means, which always bothers me a little bit, but it's sort of the nature of the regulatory state around the world. But yeah, there are elements in here that seem a little bit like, what do "user empowerment features" really mean? And I always get a little bit allergic to the idea of government agencies effectively making product decisions, which is what this does. You know, you can look at it and say there is a set of best practices around user empowerment or identity verification, which is another part of this that could be really interesting, but when it starts requiring it, you begin to wonder: are we blocking out certain kinds of services that don't fit into that kind of model? And so things like that worry me. And I know that Ofcom will present themselves as taking a very thoughtful and careful approach to this, just trying to hear from people and understand the risks and challenges, and, you know, good for them on that front. But it does worry me where this ends up, and do we lock in a particular approach in a world that is still fairly dynamic?

Ben Whitelaw:

Yeah, it's true. And I think that is something that some of the platforms have flagged themselves in the evidence that has informed this new development. One story has made it into a couple of different outlets, where it's been reported that Reddit has said, we feel like we're going to be disproportionately affected by these regulations as a quote-unquote smaller platform. Which is fascinating to me, because Reddit, for me, is one of the largest; you know, it's in that bucket of kind of five or six big platforms, it's one of the originals in many senses. I know it's not on the same scale as the Meta platforms, but it feels like the thresholds are going to have an unfair economic and operational effect on its work. And so you can really think about how other platforms are going to be affected, if not Reddit, right? You know, what about the new startups coming through; how are they going to adhere to these regulations, if even Reddit is struggling?

Mike Masnick:

Well, yeah, there are a few things there. Obviously, with Reddit having just gone public, we have a little bit of visibility into their economic situation. They're really not making that much money, so they don't have that much money to spend. But I also think what's notable is that Reddit has a very different approach to trust and safety and content moderation, in that you effectively have two different levels. You have big, important decisions that are taken at the central level by the company and the admins there, but then most of the rest of it is pushed out to the various subreddits, the various communities themselves. They have their own moderators who, for the vast majority, are volunteers who receive no payment but are managing their own communities and setting the rules for those communities, and those rules can be very, very different for each community. So rather than having this top-down approach, what does that mean in a regulatory environment where there are certain things that are expected? If you have something like required user verification, that often does not make sense in the context of Reddit. And I often point to Reddit as an example when people are talking about policies for regulating online speech, where they are thinking about Meta and YouTube and Twitter and Instagram, and say, how does this apply to Reddit, because of that very different nature? Most of the users on Reddit are not using their real names, don't want to use their real names, and would be hesitant to use the service if they were required to have their real names and be verified and authenticated in some way. And the fact that the moderation decisions are very context-specific and very much per community: if you're laying down rules about how those things have to be handled, does that create a wider impact? And then you begin to think, well, what other types of startups... there may be other communities where those things happen as well, where each time you're looking at these different services, they have different approaches for different reasons, because they're creating different types of conversations, different types of communities. So I think Reddit speaking up, even going so far as to say, hey, we might not be able to operate in the UK under these rules, is really important to think about as Ofcom is moving forward with these rules.

Ben Whitelaw:

Yeah. And I'm sure it's really difficult to create these buckets in such a way that they apply to who you want them to apply to, you know, in a sense. It will be interesting to see how the consultation period shapes that.

Mike Masnick:

Yeah. And I think, you know, the other interesting related story that we saw was Patreon this week announcing that they're effectively trying to move more towards a Reddit approach to moderation. Patreon is a crowdfunding, supporting thing, but what they're trying to do is create communities. And so one of the things that they're doing now is enabling people who run Patreon communities for certain types of content to empower some of the users in their community to be moderators of those communities. It'll be interesting to see how big of a story this is. Like, normally you would think of this as somebody who's working for you could moderate a community; now it's just somebody in the community with a little bit more moderation power. But it's interesting to see their approach move towards the same sort of two-tier system of moderation, where there is some central moderation but then each community is allowed to decide for itself. It also then goes back to these exact same questions, like how does Patreon handle requirements of the Online Safety Act? Because it's going to hit them too. And so, as they're looking to empower users, or empower community owners to empower their users, it creates a different sort of space as well.

Ben Whitelaw:

And our final story of our mini roundup, and of this week, is my favorite category of story, actually, in some senses, which is: local news organization gets shadowbanned, or banned, by big corporation and, you know, is furious about it. Tell us about that.

Mike Masnick:

So there's a local newspaper, or news organization, called the Kansas Reflector. They had published a story earlier this week that was critical of Facebook, basically talking about how, when Facebook causes problems for local media, then local media dies, and that is really bad for democracy and for a variety of other things. And what happened was, the day after that story came out, suddenly the Kansas Reflector discovered that all links to the Kansas Reflector, in any form, were banned from all Meta properties. You couldn't post them on Facebook or Threads or wherever. And it is very easy to then immediately leap to the conspiracy theory of, like, Facebook saw this one story that was critical of Facebook and said, that's it, these guys are dead to us and to everyone. You know, that does not appear to be the case; as always with these things, that's generally not how it works. It is not like Mark Zuckerberg is sitting there saying, those folks in Kansas, man, they messed with the wrong guy. And so, as the story started to spread, of course, you sort of get a Streisand effect thing, where the story started to spread and everybody was playing around with it. And so on Thursday, after the story became bigger and bigger, a few hours went by and then links were turned back on to everything except, for a while, that one story; I don't know if that's still true. But I do note that Facebook's PR and comms head, Andy Stone, has come out and said it was just an error, had nothing to do with the reporting, and it has been fixed. And it is true that these kinds of errors happen. It is notable that when these errors happen right after a negative story is being pushed, almost everyone is going to leap to the conclusion that it had to have something to do with it, even if that was not true at all.

Ben Whitelaw:

Yeah, this is about how things are perceived, and the importance of how things are perceived, right? Rather than the actual act or the event itself. People jump to conclusions, as they are liable to do.

Mike Masnick:

And I guarantee you that a year from now, two years from now, we are still going to hear about this story, and it is going to be presented by someone as factual that Facebook decided to ban the Kansas Reflector because of the negative reporting, which is almost certainly not true, but the perception is going to live on.

Ben Whitelaw:

Yeah, agreed. And in a year or two, maybe it will be us, if this podcast still persists, talking about this story,

Mike Masnick:

It is definitely going to persist that long. Come on.

Ben Whitelaw:

We'll be talking about it with the kind of nuance that you've just explained there, Mike, I hope, all being well. So yeah, thank you, listeners, for taking time this week to listen to the podcast and for your ongoing support of Ctrl-Alt-Speech.

Mike Masnick:

Speaking of which, please go rate and review the podcast. Everyone tells us it is super, super important for getting other people to listen to the podcast. We got some very nice reviews this week; we would love to see some more. So please, whichever platform you use that takes reviews and ratings, please go and do that. It really does help us out.

Ben Whitelaw:

Agreed. We did promise a few weeks ago that we would read out some of those reviews. We didn't; we haven't got quite enough, I think, to make a mini feature of it. So if a few more come in over the next seven days, we promise to do it in next week's episode. That's a commitment from us. So before everyone disappears, I just want to flag that we have a great bonus chat coming next, again from our launch sponsor Modulate, which has a suite of tools that help make online spaces safer and more inclusive, particularly for gamers. Gaming is something that's really fascinated both Mike and I since we've been covering this topic, and it is often at the bleeding edge of online harms, by the nature of the immersive gameplay that people engage in. Mark Nolan, the Director of Marketing at Modulate, has done a lot of work over the last few years to bridge the gap between other companies who are thinking about this and has created this coalition that we talked about at the top of the episode. He has produced a white paper as part of that process, and it's got some really, I think, sensible views about how AI and humans should come together to make gaming safer for everyone. So stick around to listen to that. Thanks again to you, Mike, for your wisdom this week. Thanks to all the listeners, and we'll speak to you soon.

Mike Masnick:

Thanks. So, Mark, you recently announced the new Gaming Safety Coalition between you guys, which is Modulate along with Keywords Studios, ActiveFence, and Take This, to deal with gaming environments. So what is the thinking behind this coalition?

Mark Nolan:

Yeah, we had been chatting with those partners for a few months about a gap that we saw in the games industry. We had each been having conversations with various trust and safety teams, or player experience teams, or any other sort of department within game studios of all sizes that is responsible for player safety, player privacy, or wellbeing, and the common thread that we kept uncovering was that there was no set of best practices for the games industry specifically. Trust and safety is kind of new as an independent function in itself, and especially within the games industry it's pretty nascent. And there wasn't really any resource for studios to go to, to say: hey, I have this problem in the trust and safety space, I'm trying to protect my players, reduce toxicity, make sure that our users have a, you know, safe experience, but I don't know where to turn for either tools to use, or processes to follow, or ways to structure a trust and safety team. So we partnered, yeah, Modulate, ActiveFence, Take This, and Keywords Studios, because we each have a unique perspective on that problem and on providing those answers. Modulate provides voice moderation for game studios; ActiveFence provides, among other things, text and image moderation; Keywords Studios provides, also among other things, moderation teams and staffing; and Take This does some really great work in the mental health space and does a lot of research into harassment and hate and toxicity in the game space. So with each of our little areas of expertise, we were able to come together and say that we have, we think, the best background to start setting out those guidelines and start pointing the industry in the right direction.

Mike Masnick:

And so what do you expect the sort of product to be? Is it going to be things like releasing best practices, or holding events, or everything? You know, what is it that you plan to be doing with the coalition?

Mark Nolan:

Yeah, we have big plans, and I think it will eventually be a little bit of everything. We launched the coalition with a new white paper on how studios can combine human intelligence and AI tools to build a successful trust and safety motion within their communities and within their games. So that was a net-new research project that those four organizations, as founding members, came together to publish. We want to continue that as a function, doing research and publishing those guidelines in sort of an official capacity. But we also definitely want to be hosting events and making sure that we're having conversations in the right places about those best practices, and having more casual conversations with studios, making sure that we're also listening to what they need and what questions they have, and that we're flexible enough to answer those. So it'll be multifaceted, but hopefully lots more to come.

Mike Masnick:

Cool. Why do you think that gaming is such a leading-edge indicator for the world of online speech? Like, I find this really interesting, that you're creating this coalition specifically for the gaming space, where all of these different companies work, and certainly you guys at Modulate have always focused there. But it's interesting to me that, as the whole world of trust and safety and these questions around online speech are developing, it feels like the conversation always goes back to games. It's sort of like the area where the most experimentation is happening. Why do you think that is?

Mark Nolan:

Yeah, that's an interesting question that we've pondered as well. And I think, this is definitely not the only reason, but a huge factor of it is just that the nature of multiplayer games means that people are having active conversations, whether it's text chat with keyboards, or voice through headphones and microphones, where they're having cooperative experiences, competitive experiences, sharing spaces with other people in active use of that space. And I think that opens the door to, first of all, I'll start with the positive: it's a great way to connect with people, it's a great way to share time with friends, with family, with people you've never met before. But it also opens the door to a lot of potential for hate, harassment, just in general bad behavior, and because games can be such active spaces that people are so passionate about, it can spiral into a big problem pretty quickly. And I'll emphasize the "big" there, because there are so many players of multiplayer games today; the market just continues to grow, and the sort of traditional label of "gamer" doesn't really apply to any one specific demographic or age group or anything anymore. Kind of everyone has a game they might play, and a lot of those experiences are multiplayer games where you are sharing either virtual space or sort of mental space with other people. And it just opens up opportunity for folks to behave in a way that makes other people feel uncomfortable. And the scale of that can make it hard to moderate and hard to keep an eye on what's happening in your own community if you're a game studio. So I think the question of what we do about online speech is important to a lot of games, but what to do about it has been hard to answer until recently.

Mike Masnick:

Yeah, no, that makes a lot of sense. And with the announcement of the coalition, you released the white paper, which you mentioned already, and the white paper is really interesting. It talks about the intersection that I think a lot of different online communities, and trust and safety folks, are thinking about, which is human-based moderation and AI-based moderation. Everybody sort of recognizes that you need elements of both of those, and I think the paper does a really nice job of laying out the sort of hybrid approach that you think makes sense. Do you want to talk a little bit about that? Why do you think that hybrid approach is the most sensible?

Mark Nolan:

Yeah. So I was just mentioning the scale, and the problem of scale in moderating spaces, and I think it is particularly a problem in games, because the spaces are not as neatly defined as on some other online platforms that might have more specific, I don't know, guardrails around what a community is, or who's talking to each other, or who might be in charge of this space or that space. So the scale is really a problem, and AI is really the only way that you can effectively reach that scale and make sure that you are addressing all of the potential bad behavior that might be happening in online speech in games. AI is not infallible, and if an AI is incorrect in one situation or another, in one particular instance, it might be okay. But if it is incorrect at a wide scale, in a whole category that you're trying to monitor for, or it can't catch this particular type of hate speech or something like that, then that's a big problem. And that's where humans come in and can do data validation and make sure that what is getting moderated, or even getting flagged as potentially to be moderated, is actually accurate and, in the case of game studios, is lining up with what might be a violation of their code of conduct. So the human element is a necessary component in using AI tools, but also in the building of those tools. In our case, we at Modulate are training our machine learning models on voice chat data from games, so that the models understand how people interact in certain games and in different genres, what types of language they use, what types of, you know, gamer-specific speech might be happening that otherwise would confuse an AI system or not be covered by a specific model. So the humans need to be involved both in the actual use of the tool, but also in the development of the model, making sure that it is trained and tuned to that specific context. And so, together, with humans and AI, you can achieve the scale that you need in games that have up to hundreds of millions of players, with that many people maybe talking to each other at the same time. But you still have the stopgap of a real person who is trained on your code of conduct and on how your specific studio or that specific game treats different types of behavior, and that gives you the confidence that the way that you're moderating that community is actually accurate and reflects the expected behavior that your players have.
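To make the hybrid flow Mark describes a little more concrete, here is a minimal illustrative sketch in Python of an AI-first, human-backstopped moderation queue. The names, the confidence threshold, and the category labels are all hypothetical; this is only a sketch of the general pattern, not Modulate's or any studio's actual system.

from dataclasses import dataclass

HUMAN_REVIEW_CONFIDENCE = 0.90  # below this, a person double-checks the AI flag

@dataclass
class Flag:
    chat_id: str
    category: str      # illustrative labels, e.g. "harassment", "hate_speech"
    confidence: float  # the AI model's confidence in its own flag

def route(flag: Flag) -> str:
    # The AI handles scale by screening everything; flags that are
    # low-confidence, or in categories the studio treats as high-severity,
    # go to a human moderator trained on the studio's code of conduct.
    if flag.category == "hate_speech" or flag.confidence < HUMAN_REVIEW_CONFIDENCE:
        return "human_review"
    return "auto_action"

print(route(Flag("chat-1", "harassment", 0.97)))   # auto_action
print(route(Flag("chat-2", "harassment", 0.55)))   # human_review
print(route(Flag("chat-3", "hate_speech", 0.99)))  # human_review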

Mike Masnick:

Great. And so, for people who are interested in learning more or downloading the white paper, which is an excellent read, where should they go?

Mark Nolan:

Yeah, gamingsafetycoalition.com is our website for the coalition, and you can download the white paper there, and also contact the coalition as a whole. Especially at this early stage, we're looking for input, not only from game studios, but from the industry at large, on the types of things folks might be interested in seeing coming out of the coalition, whether that's research into a specific topic, or some sort of event that someone might feel would be worth our while, or even just crossing bridges to other industries and applying what we've learned with the Gaming Safety Coalition to another similar organization. We're open to hearing all of that and definitely welcome any comments and feedback, yeah, on gamingsafetycoalition.com.

Mike Masnick:

Great. Well, thanks for joining us today.

Mark Nolan:

Thanks for having me, Mike.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L Alt Speech dot com. This podcast is produced with financial support from the Future of Online Trust and Safety Fund, a fiscally sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive trust and safety ecosystem.