Ctrl-Alt-Speech

Rated R for Ridiculous

Mike Masnick & Ben Whitelaw Season 1 Episode 75
Ben Whitelaw:

Mike, it's great to have you back on the podcast. It's really nice to see you again, and we're gonna kind of christen the moment, we're gonna celebrate it, by using a prompt from somebody who we often talk about on the podcast: Elon Musk. His,

Mike Masnick:

my favorite.

Ben Whitelaw:

Your bestie in so many ways. This is a tool in a story we're gonna talk about later, but Grok Imagine, the X/Twitter AI video generator, which you may or may not have used, has the prompt, when you log in (which I had the displeasure of doing in order to find out these words), "Type to imagine". So now you've been given permission by Elon to imagine, because your brain can do that anyway. How would you do so?

Mike Masnick:

I mean, as someone who does plenty of typing, my preferred mode of communicating seems to be typing. I would like to type to imagine that I could forget the things that people are using Grok Imagine for, as we are going to talk about in a few minutes. I would prefer not to know some of this stuff, but, uh, it is now imprinted on my brain. Uh, what about you, Ben? What would you like to type to imagine?

Ben Whitelaw:

I would like to type to imagine a world in which I haven't been kicked out of my usual podcast recording spot because my small infant son has moved out of our bedroom and into our spare bedroom, and now I'm stuck at the kitchen table. So I'm imagining a world where I have enough space to not move and to record this.

Mike Masnick:

The kids are our future. They're taking over our homes.

Ben Whitelaw:

Yeah. But as we'll find out today, they're also in trouble because of these social media companies that we'll talk about. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It is October the 16th, 2025, and this week we're talking about Sam Altman getting spicy, TikTok workers, and whether film ratings will work for social media platforms. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation, and I'm freshly accompanied by Mike Masnick, back from his whirlwind tour of the US, having been bestowed with the Protector of the Internet award that we talked about. It must've been a couple of months ago now, right? You've picked up the famous Protector of the Internet shield, or, I guess, sword?

Mike Masnick:

Well, I had forgotten. Yeah. So when they had announced it, I had made a joke about it; I think I had said I wanted a shield to go with it. Um, I forgot exactly what it was, but when I went to pick it up, I hadn't really prepared a speech, which was my bad. And so on the spot I sort of improvised. I forgot that it was a shield, and I said a sword. I said I was promised a sword and I didn't get one. And I will note that the child of the organizer who was there then drew me a very nice sword on a piece of paper and said, uh, "a sword for Mike Masnick, because my dad didn't give you one", which was very nice. But yeah, it's good to be back, because last week I was traveling the US, the week before you were away, the week before that both of us were away. So it's been a good month or so since we've both been in the chair. So it's good to see you, Ben.

Ben Whitelaw:

Yeah. Luckily not much has changed.

Mike Masnick:

Well, you're in a totally different spot in your house. What are you talking about?

Ben Whitelaw:

Yeah. And there are more stories than we can get through today, I'll tell you that. Yeah, no, it is nice to be back. It's nice to have you. We managed to commandeer two very good co-hosts while we were away, Dave Willner and Thomas Hughes. Listeners should go back and have a listen to those episodes, very rich with experience, from both the kind of platform side and, in Thomas's case, from the kind of nonprofit regulator side. Yeah, I loved hearing about you and Dave jesting over the speech that he gave, the presentation that he gave. How did you find Thomas's episode?

Mike Masnick:

Yeah, I thought it was really good. Our producer had warned me; he said, oh, you're gonna listen to this episode, and there's gonna be a whole bunch of points where you would've wanted to have jumped in to talk about stuff. But no, it's always fascinating. You know, Thomas has a really interesting perspective on things, and I thought the discussion was really good and touched on a bunch of really interesting points. So yeah, I really enjoyed it.

Ben Whitelaw:

Yeah, no, definitely. There are so many experts out there that we want to have featured on the podcast. Um, if you are one of them, if you'd like to put yourself forward to be a co-host, get in touch with us: podcast@ctrlaltspeech.com. We love to hear from people who want to take the mic when we are not around. It's always really good to have kind of new and emerging voices as well. Mike, I regret to inform you that we haven't had any ratings or reviews on any platform

Mike Masnick:

Oh,

Ben Whitelaw:

for the best part of six…

Mike Masnick:

for shame.

Ben Whitelaw:

This is a crying shame. It's not on, frankly. And so we are appealing to listeners, as ever: if you enjoy the podcast, or you have sympathy for Mike and I, one of the two, then please do go wherever you get your podcasts and leave a rating and review. I'm gonna challenge listeners, Mike: seeing as it's mid-October and we're approaching spooky season, reviews that have a Halloween theme in them will get read out in future episodes of the podcast. So there's a little challenge there. We know our listeners are a creative bunch, and, uh, this should, I think, put them to the test.

Mike Masnick:

Okay, I like it. And I will say we are looking at the numbers, and our listenership continues to rise. That means there are some of you who were not listening before, and therefore you should go and write reviews. So, you know, I know some people are like, well, I've been listening from the beginning, why should I write a review now? Those people, you can write a review also, but I think let's get some new listeners to write a review. And if you, I guess, make it Halloween themed, then we'll pay attention to it.

Ben Whitelaw:

Yeah, exactly. And definitely the numbers have increased, I think, because people have been slowly but surely reviewing us. You know, that's the way that the platforms work, and so it's all in service of helping Ctrl-Alt-Speech reach more people around the world. So thanks in advance to the people who do that. We've got, as I said, Mike, lots of stories to get into today, and so it makes sense for us to jump straight in. We have on the podcast talked a lot about platform CEOs and the way that they think about trust and safety. It's kind of a pet topic of ours to chart the comments and the public declarations of how platform CEOs think about content moderation or speech. And we probably have a continuum, I would say, of platform CEOs, all the way from the thoughtful ones that kind of understand how speech and safety are balanced on one side, probably the Evan Spiegels of the world, who we've featured on the podcast before, all the way to, you know, perhaps Neal Mohan at YouTube, who we've discussed in the past, and then further beyond that, almost beyond where we can see, Elon Musk.

Mike Masnick:

I was gonna say, on that spectrum, the most extreme one has to be Elon. You can't make a chart that doesn't include him at the farthest end.

Ben Whitelaw:

He is a dot on the horizon of the Ctrl-Alt-Speech CEO continuum. Um, but we're starting to hear more about kind of Sam Altman's worldview about speech, as ChatGPT acquires more users and OpenAI obviously gets kind of more funding. And I think this week brought one of the clearest expressions of how he's thinking about speech and content moderation, particularly through OpenAI's tools. Is that right?

Mike Masnick:

Yeah. Well, there's a few different elements to this. One of which is, you know, something that we did discuss a couple weeks ago, which is that OpenAI is sort of dipping its toe more into social, for one, with Sora 2. But also there are questions about, you know, what kind of guardrails there are on ChatGPT and other things. And so Sam Altman came out this week and basically talked about how they were going to open up ChatGPT for more adult content. Which is interesting, because two months ago he had kind of said that they wouldn't do that, that he knew there was a huge marketing opportunity for more adult-oriented chats, but that they had chosen to stay out of that because of the challenges of it. And then this week he suddenly has sort of changed his view on it, and starts saying that OpenAI was not elected the moral police of the world, and therefore they're going to enable it. Though, you know, he sort of tried to couch it in: if we verify that you are over 18, then we're going to relax the guardrails to allow for more adult-oriented chats and conversations. And he referred to it as, in the same way that society differentiates other appropriate boundaries, R-rated movies for example, we want to do a similar thing here. And that has gotten a lot of attention. And so there were a bunch of different reactions; I mean, people are upset about this or happy about this, and we're seeing all different responses to it. A couple of things that sort of struck me were, one, that line that we're not the moral police strikes me as the exact same comment, in a slightly different context, as Mark Zuckerberg saying we're not here to be the arbiters of truth. It's this, you know, platform CEO effectively trying to brush off the responsibility that they're taking on with their product, and

Ben Whitelaw:

I would say that it does say a lot about how Sam Altman thinks about both kind of politics and also law enforcement, that he thinks you can be elected to the police. I know it's a small point, but, you know, at least Zuckerberg kind of got the framing

Mike Masnick:

Right.

Ben Whitelaw:

right, with "arbiters of the truth"; that actually kind of makes sense. Um, but being elected the moral police, I'm not on board with. I'm not sure Altman's that in touch with reality, if that's how he thinks society works.

Mike Masnick:

fair enough.

Ben Whitelaw:

That point aside,

Mike Masnick:

Yeah. But it is this sort of statement of, like, you know, we're seeing this a lot, and I understand where it comes from. You know, people who are building these kinds of tools are like: look, we just made this tool that we think enables cool stuff, just use it wisely, right? Like, the root of all trust and safety issues is: why can't you just be good? It's just, people are like, we built this tool, it has all these cool uses. Yes, you could misuse it. Could you please not misuse it? Like, stop making it a pain for us. That would be wonderful. And that's like the root of all trust and safety: just stop being jerks. Um, and I think there's some element of that frustration that Sam Altman is sort of displaying here. But it just, you know, it's like: you guys have billions of dollars, you've taken on this responsibility, understand the responsibility that you're actually taking on, and then be more responsible about it. And I'm not saying that OpenAI shouldn't do this. I think actually this makes sense. I think there is a reasonable concern that, you know, there's a push to sort of Disney-fy the entire internet, right? And make sure that there's no content anywhere that is inappropriate for children. And we'll talk a little bit more about some of this later as well. But I think it's nice to see platforms saying, no, there are places where there should be content that is age appropriate for age-appropriate people. And so I'm not against that, but it felt very dismissive in the way that he handled it, and it didn't feel like he necessarily had a full grasp of the different trade-offs and the different approaches to how you could do those things.

Ben Whitelaw:

Yeah. What did you make of his… he talked about giving those experiences to "verified adults".

Mike Masnick:

Yeah.

Ben Whitelaw:

What did you make of that kind of term and phrase? What's your expectation? How do you see ChatGPT or Sora restricting certain types of content to only being for "verified adults", in inverted commas?

Mike Masnick:

You know, there's a question of how they're gonna do the verification and how that works. I'm not against the idea of age verification entirely as a concept. My concern tends to be when there are mandates for it, when there's a legal requirement for it. There are certain situations where I think there should be some effort to make sure that accessing certain services or content is done in an age-appropriate way, and I think it should be the services that are determining the best way to do that. And so in this case, where it's basically like, you know, we're building effectively an adult side of this and we're going to take some precautions to try to make sure that people who are age appropriate are the ones making use of the tools that way, I find that to be less problematic than when it's the government coming in and saying you have to block everyone under 16, or under 18, or something like that. So I'm not necessarily against platforms taking steps to say, we're gonna try and do some sort of age estimation or something along those lines, if it's done in a reasonable way, and it's not because they're being ordered to, with other sorts of restrictions that cause problems along with it.

Ben Whitelaw:

Doesn't that approach… I mean, that's interesting, 'cause doesn't that approach of self-imposed age verification come with the same data privacy issues that a mandated government approach would? Like, what…

Mike Masnick:

It,

Ben Whitelaw:

why do you kind of

Mike Masnick:

Yeah, it's a good question. I think it depends, because the issue when you have the mandate, often, is that because there's some sort of legal requirement for it, you then have to be able to prove that you're doing things, which often means keeping the data, or having access to be able to go back and prove it in some form or another. And there is more risk associated with getting it wrong, which also puts you in a position where you have to go much deeper, in terms of, like, confirming with a government-issued ID or something of that nature. Which means that there's more risk associated with it and more points of failure. Whereas I think, and again, it depends on how OpenAI determines this or sets up the actual age verification tools that they're using, I think when it's not mandated and there's no legal liability if you get it wrong, you can do it in a way that is a little bit more fuzzy, a little bit more like, you know, we think this is age appropriate or this is not. And that could involve collecting a lot less data, and certainly keeping track of a lot less data. You know, we've seen that with other services; I forget who and where, but there are cases where certain apps have said, if you've had an account on our service for over 18 years, we're going to assume you're over 18 without you having to prove any identity, or something like that. Now, OpenAI obviously hasn't been around that long, but I'm saying there are ways to say, okay, we think we have enough evidence to believe that you're over 18. That probably won't qualify under any law or mandate. And so, you know, there are a lot of gray areas in this, in terms of determining what is right or what is going to cause more privacy issues. But I think there are ways it can be done, when it's not mandated, that make it less problematic.
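
To make that concrete, here is a minimal sketch, in Python, of the kind of fuzzy, non-mandated age assurance Mike describes: soft signals combined into a confidence score instead of a hard government-ID check. Every signal name, weight, and threshold here is hypothetical, not any platform's real system.

```python
# A sketch of "fuzzy" age assurance under the assumptions above: no identity
# documents are collected or stored, just signals the service already has.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    account_age_years: float      # how long the account has existed
    self_declared_adult: bool     # user ticked an "I am 18+" box
    payment_method_on_file: bool  # a working card loosely implies adulthood
    estimated_age: float | None   # optional output of an age-estimation step

def is_probably_adult(s: AccountSignals) -> bool:
    """Return True when the combined signals clear a confidence bar."""
    if s.account_age_years >= 18:
        return True  # the "account older than 18 years" shortcut Mike mentions
    score = 0.0
    if s.self_declared_adult:
        score += 0.3
    if s.payment_method_on_file:
        score += 0.4
    if s.estimated_age is not None and s.estimated_age >= 20:
        score += 0.4  # margin above 18 absorbs estimator error
    return score >= 0.7  # tune to the service's risk tolerance

# Example: a self-declared adult with a card on file clears the bar.
print(is_probably_adult(AccountSignals(2.0, True, True, None)))  # True
```

As Mike notes, a score like this almost certainly would not satisfy a legal mandate; the point is that, absent a mandate, a service can trade a softer answer for far less data collection and retention.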

Ben Whitelaw:

Yeah, okay, fair enough. I'm sure one of the stories we talked about on last week's podcast, which would've made you say "I told you so", was the Discord

Mike Masnick:

Yeah.

Ben Whitelaw:

leak.

Mike Masnick:

Yes.

Ben Whitelaw:

I'm sure when you listened back you were shouting, "I told you so, this is exactly what I said." But, you know, there is nuance, I think, in…

Mike Masnick:

Yeah. And I think, you know, there are other elements too, which is that there is this whole discussion, when you're talking about age-appropriate material and things of that nature: part of the appropriateness of the age-related stuff is also related to the interests of the people who are looking at it, right? And I know this is not 100% true, but there is this element of, kids are often not interested in adult-related content. I'm not talking specifically about pornographic content, but other kinds of content that is more adult-focused. Kids, they're not thinking about it, they're not interested in it, it's not engaging for them anyways. And so there is this sort of natural walling-off of different kinds of content, even without having to go through the process of verifying someone's age. You know? And one of the other stories that we wanted to tie into this one is that, as Sam Altman is talking about allowing more adult content on ChatGPT, at the same time there are other services doing this as well. And there was a Rolling Stone article this week about Grok Imagine, our opening prompt,

Ben Whitelaw:

Yeah.

Mike Masnick:

and how apparently it's getting really good at pornography, uh, which was not, uh, you know, the headline was not something I would've expected to have read, but it said Grok is learning genitalia really fast, which

Ben Whitelaw:

Good

Mike Masnick:

gives you a sense of where machine learning is at and what people are using Grok Imagine for. And it's been said before, you know, sometimes jokingly, that the internet is for porn, or whatever. But what is happening with adult content is often a leading indicator of where the rest of the internet is gonna go. You know, the people who are experimenting with adult content often discover different services or different apps or different ways of consuming content or sharing content, and that then spreads out to the rest of the internet. Whether that's for good or bad, you know, different people can feel different ways, but that is where a lot of the experimentation happens. So you have OpenAI allowing for more adult content, and Grok allowing for it, and I imagine Grok is not doing very much, if any, age verification. You…

Ben Whitelaw:

I couldn't tell in my, uh, brief dabble with it, as I logged in to find out what the prompt was. I certainly wasn't verified. I wasn't…

Mike Masnick:

You weren't, you weren't. Yeah, I've never used Grok, so I couldn't tell you. But I think it'll be interesting. I'm sure that there will then be some sort of moral panic related to all this. Already, in that article about Sam Altman, they mentioned that NCOSE, the, uh… sort of famously, how do I wanna phrase this? They're an activist group that just hates the internet and believes that all adult content should be removed, all encryption should be banned, that Section 230 is evil. They're sort of the anti-me on everything. They sort of spun out of, they were originally called Morality in Media, and it was a very religiously based group that was just focused on, like, banning sexy ads in magazines. You know, that's like their reason for being. And so they are already protesting Sam Altman saying this stuff. And I imagine that they will work very hard, as they normally do, to sort of drum up a moral panic about, oh my gosh, there is sexy stuff on the internet, and that's bad. And so we'll sort of see what happens. And I'm sure that there will be a case somewhere, someone will find something where someone who is young comes across inappropriate content, as they do on any particular service, and it'll become a story, and then Sam Altman will have to deal with it. And, you know, one of the other interesting things: as I mentioned in the opening of this part, he talked about R-rated content, you know, there's such a thing as R-rated movies. We also saw this story this week about Instagram saying that they're now going to have rules for PG-13 content: that if they think the content is rated PG-13, they'll restrict it to people who are over that age. And so it's interesting to me, at least, that both Sam Altman and, whether it's Mark Zuckerberg or whoever, Adam Mosseri, are using the movie rating system to say, oh, we can set up different standards for different kinds of content. And I will note, as a side note, at least for Instagram saying we're gonna use PG-13 as the rating: the MPA, the Motion Picture Association, who created the movie rating standards as a voluntary system to avoid the government trying to, uh, yell at them about a previous moral panic, they came out with a very strongly worded statement saying, hey, we didn't say you could use this standard, this PG-13 rating, that's ours. Uh, and they got very, uh, territorial about their

Ben Whitelaw:

Yeah.

Mike Masnick:

rating system.

Ben Whitelaw:

I mean, I can imagine, to be honest. You know, it's very, very interesting that these CEOs have kind of fallen back on the movie rating system as a way to essentially justify their strategy, right? To justify their approach to speech. You know, it kind of makes sense in some ways, because it's widely accepted that films have ratings, and there is this kind of discussion about whether a film is PG-13 or 18, and that's part of the decision-making process for children and for adults about where your tolerances are. And so it kind of makes sense that they would do that. But it is, it's a bit cheeky,

Mike Masnick:

Yeah.

Ben Whitelaw:

and, you know, it's a bit of a cop-out in loads of ways as well, like…

Mike Masnick:

It is. And also it sort of, almost, uh, doesn't really recognize both the history of the movie rating system, in the US at least, as well as the critiques of it, right? And there have been somewhat famous critiques. Again, the movie rating system is voluntary, but it's made up of a panel of people, and there have been huge protests in the past about sort of the prudishness of that panel. And there's certainly been talk, like, movie makers always want their films to be rated PG-13 instead of R, because the box office difference is massive, and they fight over it. But there's also been all this talk about how, basically, PG-13 allows for all sorts of violence, but no nudity, right? Like, if you show a single female-presenting breast, you're going to get an R rating. But if you show, like, two hours straight of violence, as long as there's not too much blood, you're probably gonna get a PG-13 rating. And it feels a little bit arbitrary, because it is a fairly arbitrary system. There was a famous documentary, This Film Is Not Yet Rated, that sort of took apart the movie rating system. You know, so that system is already problematic in its own ways. It's voluntary, it's for movies, and films were rated on sort of the whole of a 90-minute to two-hour piece of content, with this whole system that goes back and forth to figure out the rating. I don't see how that really applies to millions, to hundreds of millions, to billions of pieces of content that are short-form, that are generated, you know, either by individuals just making their own content or by AI, which is generating anything. You know, it feels like, if we wanna think through a rating kind of system for appropriateness related to age, why don't we start from scratch and think through this a little more carefully, rather than picking a system that the movie studios came up with in the 1960s under pressure from a government that was mad about, you know, movies becoming sexier, you know?

Ben Whitelaw:

Yeah. And also, you know, the way that Instagram is doing this is by having large groups of parents decide what content constitutes kind of 13-and-above appropriate. And naturally, there's gonna be some sort of selection bias in there, right? And not only that, but the nature of these kinds of initiatives is that other platforms tend to follow suit and take a lead, particularly from Meta, which has the most resources and is able to lead in these kinds of ways. So there's this kind of potential for the PG-13 approach to actually appear elsewhere as well. And obviously, as you mentioned, there are loads of downsides in that. So it is one to keep an eye on. It's a little bit concerning. Um,

Mike Masnick:

Yeah.

Ben Whitelaw:

once again, a sign of CEOs not really getting it, or fudging it in order…

Mike Masnick:

And the thing is, too, I really think, you know, where this has to move eventually is more personalization on the user side, as opposed to at the corporate level, right? I mean, rather than Sam Altman determining what is R-rated, or Mark Zuckerberg determining what is PG-13 rated, give me options to tell a system what kinds of content I am okay with seeing and what kinds of content I am not okay with seeing. And if you wanna do it at the kid level, then put it in a sort of parental control thing, where you allow parents to have presets but also be able to change them. And again, you know, to me all this stuff always comes back to user controls. If you can set it so people can designate upfront, like, I don't wanna see any nudity or whatever, you can set different levels and you can set different standards, because different people in different communities will have different standards themselves. The idea that there's one single standard, like PG-13 or R, for all kinds of content, I think, is probably wrong, and just creates larger problems. So I would think that eventually where this has to go is giving more control to the end user in terms of which content they wanna enable as being viewable or visible or recommended or whatever it is. Put the control there. And you can start with defaults, and you can start with the defaults at whatever level you want, but allow the users to have more say over it.
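
A minimal sketch of the user-side controls Mike is arguing for: the platform ships defaults (stricter ones for child accounts), and each user, or parent, overrides them per category. The category names and levels here are hypothetical, chosen just to illustrate the shape of the idea.

```python
# Platform-chosen defaults; the user (or parent) gets the final say.
ADULT_DEFAULTS = {"nudity": "hide", "violence": "warn", "profanity": "show"}
CHILD_DEFAULTS = {"nudity": "hide", "violence": "hide", "profanity": "hide"}

def effective_policy(is_child: bool, overrides: dict[str, str]) -> dict[str, str]:
    """Start from the appropriate defaults, then apply per-user overrides."""
    policy = dict(CHILD_DEFAULTS if is_child else ADULT_DEFAULTS)
    policy.update(overrides)
    return policy

def treatment(policy: dict[str, str], labels: set[str]) -> str:
    """Pick the strictest treatment across all labels on a piece of content."""
    order = {"show": 0, "warn": 1, "hide": 2}
    return max((policy.get(label, "warn") for label in labels),
               key=order.__getitem__)

# An adult who opts out of seeing violence entirely:
policy = effective_policy(is_child=False, overrides={"violence": "hide"})
print(treatment(policy, {"violence", "profanity"}))  # -> "hide"
```

The design choice is the one Mike describes: the platform only sets the starting point, so there is no single PG-13-style line drawn for everyone.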

Ben Whitelaw:

Yeah, a decentralized approach. Some might say.

Mike Masnick:

What an idea. Someone should…

Ben Whitelaw:

The thing is, there are white papers that talk a bit about that, I've heard. I feel like one of our listeners is gonna say, hey, you didn't mention the fact that Mike is a board member of Bluesky at this point. So I've said it. But also, I don't have the bell, because…

Mike Masnick:

you're in a different spot. Yeah. Ding, ding, ding. We'll go back to the verbal bell.

Ben Whitelaw:

Yeah, yeah, yeah. Um, sounds good. So, AI video: we've talked a bit about the problems with that. We're almost at kind of ground zero in terms of how we think about trust and safety when it comes to these kinds of hyper-personalized content producers like Grok and, you know, Sora as well. And AI video is a component of this next story. But as ever with online speech and trust and safety, it's actually not that simple, and there are many aspects to this. I want to start by unpacking the TikTok workers who have been laid off in the UK. This is a story that we've covered in bits and pieces over probably six or so months, and we've talked about labor relations of trust and safety workers in countries like Kenya and the Philippines. But this is new, because it's the UK. You know, the UK doesn't necessarily have a large trust and safety workforce by number, but TikTok probably has, at least by my understanding, one of the largest workforces of trust and safety workers on staff there. Now, as of this week, they're facing a bit of political pressure following their cutting of about 430 jobs from their trust and safety team a few months ago. There have been protests outside their offices in London in the last few weeks, and that has led to a letter from a couple of large technology unions, of which these workers were part, also signed by some trust and safety experts who we've talked a bit about on the podcast. And it's essentially culminated in one of the Labour ministers, Labour the party, not labour as in work, indicating that she is significantly concerned about the fact that TikTok has cut these workers at a time when obviously trust and safety and child safety is such a big topic, and Ofcom in the UK is making such a big effort to try and make sure that platforms have safety elements in place. First of all, this is kind of interesting in my mind, 'cause what we have here is a minister in the UK government talking about trust and safety job cuts, which I think is in and of itself quite a big thing. But what's more important and more interesting to me, I think, is that these job cuts actually came eight days before workers were due to vote on union recognition. And this is a small part in this Guardian story, which we'll share in the show notes. This, again, is something we've heard about in Kenya and the Philippines and Ghana, but to have it in the UK, where obviously workers' rights are typically much stronger, there are laws in place that mean that people who are involved in the creation of unions should not be disadvantaged, and certainly should not face sackings in this way. And it also comes at a time when, in the UK, there is this big push for workers' rights. There's a Fair Work Agency that's just been announced by Starmer, the Prime Minister, and there is this kind of emphasis on ensuring that British workers are compensated well and have the conditions to do their job properly. That's a big part of their manifesto. So in the context of that, while this story is something that we've heard in other contexts before, I think this is kind of interesting and new, and I'm wondering how TikTok is gonna think about the eyes that they're getting on them for this.

Mike Masnick:

Yeah, I mean, I think it's interesting on a number of different levels. TikTok's response to it was effectively claiming that they're really trying to centralize trust and safety functions in different places, including Kenya and the Philippines. Which could be true; it wouldn't surprise me. Though there have been plenty of discussions, certainly, of the need for sort of local expertise in trust and safety, and how you sometimes run into problems when you centralize all the trust and safety people, because they don't have local cultural knowledge, which is actually fairly important, often, in determining what is happening within the trust and safety context. You know, there's also this element that feels like a pretty typical union-busting kind of move. You know, someone's gonna vote to have a union recognized, and management wants to scare them off. And so they make a big show of: hey, if we don't like what you're doing, we're just gonna shut down this whole plant and we're gonna offshore it to some other country, because you're becoming too expensive as unionized labor. So there probably is an element of that as well in this. You know, we're also at this moment where the entire trust and safety field, I think, is in the midst of a change that lots of industries are facing, as the rise of AI tools becomes a bigger and bigger deal, and how is that going to impact jobs? And I don't think it's the kind of thing where, as some people believe, oh, it's gonna get rid of all of these jobs; I think it's gonna change the nature of those jobs, and sort of how a human and the AI interact and work on these things together. And so I think all of that is playing into this. One thing, though: I do think it is really interesting to see a sort of trust and safety workforce trying, and getting very close, at least, to being recognized as unionized within a workplace. And directly within TikTok is certainly an interesting place for that to happen. I imagine we'll start to see more of that as well, and then we'll see different kinds of reactions to it, you know, because the general sense of things is, companies try to resist unionization and then go through all different kinds of tactics to try and resist it. And so I think this is just that sort of playing out in a slightly more modern context.

Ben Whitelaw:

Yeah, no, I think you're right. And I'd forgotten that you were a, a labor…

Mike Masnick:

Well, I wouldn't exactly say that, but I do have… my undergraduate degree is in industrial and labor relations. And that

Ben Whitelaw:

Yeah.

Mike Masnick:

included plenty of classes on labor history, collective bargaining, uh, negotiations, arbitration, all that kind of stuff. So this is reaching deep back into my history, but I do have some familiarity with it. And I think I finally, just recently, within the last year, got rid of all my old labor textbooks, which were a bit outta date, because I was trying to clear off some shelf space behind me. But, uh, I do remember some of the union history stuff.

Ben Whitelaw:

Yeah, it stuck with you. There was probably no mention of AI in those…

Mike Masnick:

No, no, no. This was pre-AI. We learned a lot about, you know, what was happening a hundred years ago.

Ben Whitelaw:

Yeah. Yeah. I mean, AI is obviously the kind of justification for TikTok's cutting of this workforce. But, from talking to some of the people involved, the people who have been cut, it was very telling from the way that they explained the processes that they used to moderate that, actually, it feels quite soon to be making this shift from human to AI moderators. This idea that it's ready as a technology to replace the kind of expertise and understanding of context and meaning certainly felt a bit premature. You know, these workers even told me about things like TikTok using Google Sheets for particular tasks. They're not the processes of a really slick moderation operation that can do without humans, essentially.

Mike Masnick:

I mean, it's kind of interesting. You know, this was what Dave Willner's keynote speech three weeks ago at the Stanford Trust and Safety Research Conference was all about. The framing of it was that, effectively, AI could make Masnick's impossibility theorem obsolete, or, as he put it, "make Masnick wrong again". And the funny thing was, the one thing I kept thinking through that presentation was, yeah, but, you know, AI still does not do context particularly well. And then he got off stage, and you could go to different rooms; I stayed in that same room, and the first three research presentations after that in that room, 'cause it's mostly academic research presentations, were all about teaching AI context in trust and safety examples. And I'm blanking on the specifics, but there were a couple of studies that were really eye-opening about how much better AI can get if you just give it a little bit of context. That really sort of woke me up to the idea that my original thought, that AI still couldn't do context particularly well, might be obsolete, and that the technology is really getting better. I still don't think it replaces humans. I still think you need humans in the loop, but it changes the nature of their job, what they're doing and how they're doing it, and presents some really interesting opportunities, I think, for the trust and safety space.
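
A minimal sketch of what "giving the model a little bit of context" can look like in practice: the same post is classified once bare and once with community and conversational context folded into the prompt. The classify() call mentioned in the comments is a stand-in for whichever model API you use; nothing here reflects a specific vendor's interface or the actual methods of the studies Mike mentions.

```python
# Build a moderation prompt with optional context, per the assumption above.
def build_prompt(post: str, context: dict[str, str] | None = None) -> str:
    prompt = ("Label this post ALLOW, REVIEW, or REMOVE under a standard "
              "harassment policy.\n")
    if context:
        # These extra lines are the whole trick: community norms and
        # what came just before in the conversation.
        prompt += f"Community: {context['community']}\n"
        prompt += f"Preceding message: {context['parent_post']}\n"
    return prompt + f"Post: {post}\n"

post = "get rekt, you absolute potato"
bare = build_prompt(post)
contextual = build_prompt(post, {
    "community": "competitive gaming server where trash talk is the norm",
    "parent_post": "gg, rematch?",
})
# classify(bare) might plausibly say REVIEW; classify(contextual) can see
# this is banter between players, not harassment.
print(contextual)
```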

Ben Whitelaw:

Yeah. I mean, we've got a couple of other stories that mention TikTok, um, and they are your classic kind of harm-on-platform stories, right? You know, these are some good investigations done by the Bureau of Investigative Journalism and by Global Witness, published by The Guardian. And they talk about discovering harm: in one case, pornographic content; in the other case, AI video content that is racist and violates TikTok's policies. And you can't help but kind of draw a line between a human workforce being reduced and some of these harms. I know that's not necessarily causal, but there are a lot of issues that TikTok has in terms of managing its ever-increasing user base and the many, many millions of hours of interactions and posts that it has, which means, I think, we'll see more stories like these in the future.

Mike Masnick:

Well, I mean, the other thing, too, is that, in theory, if the AI technology is working better, you also have to expose fewer humans to the really harmful versions of that content; they don't have to review it and expose themselves to it. The AI can take care of some level of the really, really harmful content, and you can bring humans in for the more context-specific stuff where it's more helpful to have human eyes on it. But yeah, I mean, it's something we'll have to see. I'm always a little bit skeptical. I understand the other stories that you're talking about, which I thought were very interesting, but how people define the different kinds of content, and the determination that, oh, we think this violates the policies, whether or not it actually does violate the policies, is not always clear. Again, these are all stories that involve a lot of nuance and trade-offs, and it's always difficult to sort of tell from the outside.

Ben Whitelaw:

This is where I would blow my trade-off tuba, you know. You know I talked about having a sound effect for when we mention trade-offs. I'm thinking of going with a tuba.

Mike Masnick:

I like, I like the trade-off tuba. But what I really want is for you to actually get a physical tuba that you have to pick up and put on, and only I would see it, 'cause we don't do video yet, right? So, like, I would love to see… I would relish mentioning trade-offs just to see you reach over, have to pick up a tuba, put it over your head, and blow the trade-off tuba. We gotta do that, Ben.

Ben Whitelaw:

Yeah. If anybody has a spare tuba that they can lend me, I will come and fetch it from wherever you are. How about that? Um, the trade-off tuba is probably a good segue onto our set of smaller stories. You know, we've touched quite a lot on adult content, what is defined as adult content, and how do you ensure that the right people are seeing that content? And these small stories, Mike, of which I'm gonna let you pick which one you start with, are all about how children's experience, primarily on social media platforms, is kind of shaped by the experience and the design and all of the policies. So, where do you wanna begin?

Mike Masnick:

Yeah, I'm sort of thinking that all of these stories we picked out sort of blend together, so I think they're all kind of one story that we're sort of mixing up. I'll start with the New York City story, which is that the New York City school system sued all the platforms, basically for effectively being a public nuisance, is the way they put it. This is the second time they've sued. They sued last year, and the lawsuit sort of got consolidated. There's this crazy lawsuit that consolidated all of these; there were a few different law firms that convinced a lot of school districts to sue social media for harming the mental health of children, and they all got consolidated into one. And New York City pulled out of that one and has now filed a new lawsuit, sort of trying to get around that multi-district one. And it's really bad. I mean, the lawsuit is huge. It was 300-and-something pages. I didn't read the whole thing, but, you know, just going through it, you're like, what even is this? Because the whole point of schools should be to teach children how to think through things and how to deal with the real world. And I think that includes how to use technology wisely, and how to have media literacy and critical thinking skills, and all of this kind of stuff. And yet in the lawsuit, the one thing I picked out was that they complain about Instagram filters, and that kids' cognitive ability to recognize that something is a filter, as opposed to real, doesn't exist. And I was like, that's what you're supposed to teach. That's the job of a school. And to see a lawsuit like this is, to me, the school system admitting: we can't teach your children, we don't know how to teach your children. Which, you know, strikes me as the school system giving up. That's your role. Your job is to teach kids how to live in the real world, how to understand things, how to process things and think through things and be critical consumers of stuff. And yet they're saying that they're defeated by an Instagram filter. Like, come on.

Ben Whitelaw:

We've all been defeated by a filter at some point. Um, but I enjoyed your Techdirt post on this, and I thought it was really interesting. I hadn't seen this suit, and I didn't know that this was the second time that they'd kind of gone after Instagram. But the issue I have with the idea that this is the schools' problem is that schools are fundamentally designed to test students' knowledge of a topic, right? That's kind of what most education systems around the world are about. And the idea of kind of digital literacy or media literacy being tested is something that we haven't got round to figuring out, as far as I understand. Like, how do you see that being tested? You know, how do you validate that a student knows something that has been taught in schools?

Mike Masnick:

I mean, I think there are certainly ways to do it, and, to me at least, the testing aspect of education is not the purpose of education. Obviously, gaining knowledge is a part of it, but really what schools should be doing is teaching kids how to learn, how to understand content, how to experience the world, how to think about things. I had just seen, literally this morning, and so I didn't explore it deeply, that somebody had posted something about how, you know, universities are obsolete because all the knowledge in the world is at your fingertips with ChatGPT. So why should anyone pay huge amounts of money to attend a prestigious university to hear from professors, when you could just type the same question into ChatGPT, and…

Ben Whitelaw:

Was that Sam Altman by any chance?

Mike Masnick:

I don't think it was. I don't think it was. Oh, maybe. Uh, and there was a professor who responded to it, and I'm suddenly blanking on who it was, and I feel bad, who was saying: the point of a professor at a school is not just to be a repository of knowledge to dump on the kids, it's to teach them how to learn themselves, how to understand this stuff, how to think critically about the world. The test is just one designation of how well they're doing in getting to that point. But the point is not just to load up the kids with a whole bunch of knowledge; it's to teach them how to think critically. And, you know, that I think is an important role of schools, and specifically teaching digital literacy, or, you know, how to think through these things. There are ways to do that, but that is the whole point of schools, to me: getting kids ready for the real world. And you can't deny the fact that, you know, social media and the internet and AI tools exist in the real world. And we should be teaching kids how to use them responsibly.

Ben Whitelaw:

I think that's fair. I mean, I obviously didn't start with what the main barriers are here, which is that curriculum takes ages to evolve, you know, and funding for these kinds of topics and teacher training is the issue. But I think people kind of get on board with the idea of excelling at a topic. And so I found it interesting to think about what excelling at this topic would look like, and how you would show your proficiency. You know, because that's not how education should work, necessarily, but it's how a lot of it does work.

Mike Masnick:

I mean, I'm sure there are challenges there, and I'm not an expert on how to create curriculum, you know, for these things. But I'm sure that there are ways to do it. And in fact, I've spoken in the past to people… in fact, I know that Khan Academy, which is a very successful, you know, nonprofit education tool, was started by my former roommate, Sal Khan, which is a whole other story which we're not gonna go into; I lived with Sal for a little over a year after we both graduated college, when we moved out to Silicon Valley. And he's put a lot of thought into this stuff, and he's talked about how he wants to sort of flip the education system on its head. He's saying, sort of the same argument in a different direction, there are all sorts of ways to get facts and information into your head, and the time in school shouldn't be about just drilling you with information. It should be helping students really better understand it, better have a grasp of it. The teaching time should be more personalized, more one-on-one, more, you know, exploring what are they getting, what are they not getting, getting kids to the right level. As opposed to how it's often done now, which is, the teacher's going to give a lesson: here's what happened in 1492, blah, blah, blah, we're gonna test you on that next week; here's what happened in 1653, we're gonna test you on that next week, you know,

Ben Whitelaw:

Techdirt was founded.

Mike Masnick:

Yeah, exactly. Uh, you know, but there are better ways to do education. And yeah, it's a big lift, and no, we're not going to switch that on a dime. But there are ways to start thinking about it, and I don't think suing Instagram is the way that you get there. Now, that's not to say, and I'll use this to transition into one of the other stories, that's not to say that there aren't all these other challenges out there. I mean, one of the other stories we saw was this new study, which unfortunately I could only read the NPR reporting on, because the actual study is behind a paywall, which frustrates me, because I would very much like to look at the actual methodology and the details. But it's a very interesting study that looked at what looks like a fairly large database of young people and their habits and practices, showing that they tended to do less well on reading and memory tests if they used social media more. And there's obviously been a huge debate, and I've been a part of it, in terms of the impact of social media on kids and what it means. And I think this study is very interesting, even if a lot of it might challenge some of the assumptions that I've certainly had in the past. But part of what was really interesting to me was actually how few kids in the study used social media at all, or used it a lot. Because the impression that we've gotten in the media is that, oh, social media is everywhere and the kids are completely consumed by it, and you have to have the New York City school system suing over it. And yet they found that basically 60% of the kids used little or no social media at all, which is not what you would think if you heard the general sense of things. And then another 37% used very little social media, though it does increase as they get further into teenagerdom. And there were only 6% of kids who were in the high, increasing social media group, and in that case they were spending three or more hours a day by age 13. I do have some questions about some of the other methodology, including what is counted as social media, because one of the things that we've certainly learned is that kids can turn anything into social media if there's any way to communicate. There was a famous story years ago of kids in schools using Google Docs comments as a form of social media, because they could actually communicate that way. Um, and so, does Discord count? Does Fortnite, the game, count? You know, there are all sorts of things where you're like, what actually is social media here? And does it have to be interactive, or is it just consumption? If they're watching YouTube or Twitch streams, is that social media? And if so, is that different from TV, where you're just sitting there sort of passively watching? I don't know. There's a whole bunch of questions I have about this. But the thing that struck me as really interesting was what this study showed: it's a very small group of kids that seem to be using social media quite a lot, which is different from the story that you hear in the media, but it also sort of ties in with the few studies that have shown negative impact, say, to mental health.
As we've seen, it's always been a very small segment of the population. And that's where the question is: what is the cause? Is it that the social media is causing the mental health problems, or is it that they're having mental health issues that are not being treated well, and therefore they're turning to social media? And this could still align with that. You know, it would be nice to sort of lay all of the details out among the different things. But it's interesting to see more research on this. And, you know, if you're spending three hours a day doing anything that is unrelated to schoolwork, I could see where that might harm your school performance.

Ben Whitelaw:

Chores, chores. I used to do a lot of chores, Mike, and I think if I did three hours of chores…

Mike Masnick:

It would've, would've harmed your, your…

Ben Whitelaw:

That probably didn't help, did it? Let's ban chores.

Mike Masnick:

Yes, ban chores. You know, I mean, there's all sorts of stuff, but it is interesting. I think the research is interesting. I wish I could read the full report and really get at the methodology of it. Unfortunately, it doesn't seem like there's an open-access version yet. If somebody does have access to this report, I would love to see it and go through the actual details. Um,

Ben Whitelaw:

It is really interesting. And I think, if those two stories paint a kind of bleak picture of what it looks like to be a child in school with social media, the last story that we'll touch on, which is an op-ed in Time, gives a certainly more realistic view of what it's like to be a child in 2025, having to contend with the internet. So this is a really interesting op-ed by a guy called Jonathan Reed, who leads a nonprofit called Next Gen Men. And I dunno if you remember, Mike, when we talked about Adolescence, that Netflix multi-parter about a boy who basically gets jailed because he kills a fellow classmate, and, you know, it's alluded that this is because he has been radicalized online. We don't necessarily know that for sure, but that's what the documentary suggests. And there's this big story…

Mike Masnick:

Not a documentary. It's a, it's…

Ben Whitelaw:

Not a documentary, sorry. Yeah, it's fiction. Sometimes the lines get blurred. And there was a big political story in the UK, and the Prime Minister was commenting on it, and it garnered a lot of press attention. And this op-ed from Jonathan Reed is an interesting counterpoint to that. He runs a Discord server for boys in middle and high school; he's been doing that for five years. And his point is that, actually, it's not about what the platforms allow or ban or prevent from being seen. There's a broader point about working out what boys of a certain age need from their family, from their friends: what are they struggling with, and how do you support that? The platforms, he points out, are the kind of gateway to addressing those needs when they're not addressed by the people around them. It kind of reminded me of that conversation we had where we said, actually, Adolescence isn't about technology, it's about men and their relationships with other men. And this kind of speaks to that. Actually, coincidentally, Stephen Graham, who wrote the Netflix series, has this week announced that he's doing a book, not about why technology should never be put in the hands of children, but about masculinity. He's doing a book called Letters to Our Sons. And, uh, it's a clear indication, I think, that from his perspective this is a broader societal issue around gender roles as much as it is about technology. And I think that's really, really notable.

Mike Masnick:

Yeah. And I think the Time article is really good, because it sort of demonstrates that the underlying issue is the content itself and the communities that are formed, right? Everyone wants to jump to blaming the technology, but if you use the technology well, to create communities that allow people to express themselves, to have cultural meaning, to find their communities online, it can be a very, very positive force. Which is the point that we keep coming across when all of the studies show that for certain groups, in certain communities, it's great. What it comes down to is the actual communities. If those communities exist, and the technology can enable that, we can have really good results for kids as well. And so finding more of those communities is important, and I think that's part of what the Time magazine piece gets at.

Ben Whitelaw:

Exactly. And that brings us neatly, I think, to the end of today's podcast. Thanks, Mike, for your thoughts, and for giving us a kind of backgrounder on film ratings in the US. Always good to catch up. And, you know, all of the outlets we've covered: The Guardian, the Bureau of Investigative Journalism, CNBC, Business Insider. Go and read their pieces; the links are in the show notes. Thanks, everyone, for listening. We'll be back as ever next week. Take care. See you soon.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L alt speech dot com.
