Ctrl-Alt-Speech

Have Yourself a Very Meta Christmas

Mike Masnick & Ben Whitelaw Season 1 Episode 84

In the last Ctrl-Alt-Speech of the year, Mike and Ben round up the latest news in online speech, content moderation and internet regulation with the following stories:

Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.

Ben Whitelaw:

So, as regular listeners know, Mike, the user prompt is the way we start all our episodes of Ctrl-Alt-Speech. It has a special place in our hearts for the fact that it's so key for getting users of the internet to say what they think, to write in, you know, little boxes online. And so we do it every single week, but because it's the festive season, we've decided to use a Christmas-y app, and we've gone for the Portable North Pole app. Which, yeah, is not one that I use regularly, I must say. But I want you to respond to its prompt, which is: talk to Santa, ask your questions.

Mike Masnick:

Oh, well, that is an interesting prompt. As a Jew, I don't often talk to Santa.

Ben Whitelaw:

Not a big, not a big user of the uh, portable North Pole app.

Mike Masnick:

No, no. But, you know, if I have a chance, given this opportunity... but you usually don't ask questions of Santa, right? You ask for things. So I think what I would like is some time this holiday season to just recoup and regather and catch up on sleep, and have nothing crazy happen in the world at large, let alone the world of online speech and trust and safety. I could use a little break. Ben, I'm exhausted.

Ben Whitelaw:

Yeah, I feel you. I feel you. I think if I was to ask Santa for something, it would be: what is the best way to get rid of man flu quickly? 'Cause I've come down with something and, you know, I've dragged myself out of bed to come and record Ctrl-Alt-Speech, and, you know, I sound like Pat Butcher, which is a very niche UK reference for, for our...

Mike Masnick:

No idea.

Ben Whitelaw:

...for British-based listeners. But it hasn't done well for my vocal cords. So we'll see how this goes. This could go any direction.

Mike Masnick:

This, between the two of us... I'm exhausted, you're sick. This is gonna be a great episode.

Ben Whitelaw:

Buckle up everyone. Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's December the 18th, 2025, and this week we're talking about why Meta seemingly hates women, why Meta seemingly hates the internet at large, and, for something a little bit different, why Meta is doing what Meta is doing to keep children safe online. I promise you there are other stories as well. My name is Ben Whitelaw. I'm the founder and editor of Everything in Moderation. I'm desperately, desperately sick, but I'm here to support my co-host Mike Masnick, founder and editor of Techdirt, who sounds like you'd rather be somewhere else doing other things.

Mike Masnick:

No,

Ben Whitelaw:

Why is it that this time of year is always such a chaotic, kind of crazy time?

Mike Masnick:

I mean, there's always just the, like, everybody trying to get certain things done by the end of the year. And then I think this year is just overwhelming in general. There's obviously a lot going on in the world, and then, yeah, lots of people are sick. I think I have a very low-level sickness, maybe, like this sort of small lingering headache in the back of my head, but nothing more serious than that. It's just a lot's going on. But no, I am happy I am here, 'cause this is definitely a highlight of the week, always, to get to talk to you, and I appreciate you dragging yourself out of your sickbed.

Ben Whitelaw:

Look at us, bringing the Christmas cheer.

Mike Masnick:

Yes. Yes. Yeah. Yeah.

Ben Whitelaw:

Doing our best. Um, we have got a few Meta stories to talk about this week, Mike, which I think is an appropriate way to end 2025. This is our last episode of Ctrl-Alt-Speech. We're gonna take a few weeks' break, I think, understandably, from what we've both shared. But back in January, you'll remember that we spent the first few episodes, at least, talking about Mark Zuckerberg's big-gold-chain, big-watch announcement about how, you know, Meta was moving into a kind of world of free expression, getting rid of censorship. So the fact we're covering them in such depth today, I think, is an appropriate bookend to this year. Wouldn't you say?

Mike Masnick:

Oh yeah, absolutely. 'Cause it's kind of a chance for us to check in on how much Mark Zuckerberg is living up to the ideals that he laid out in January. Very exciting. New Facebook, new Meta, new Mark Zuckerberg. How's that going? We're about to find out.

Ben Whitelaw:

Can't wait to see. We will be back in the new year with, actually, a really interesting episode, I think, that will set out the year ahead. We're gonna do a bit of a forward-looking episode where we try and preempt what's happening. It's not a prediction episode. We don't really do predictions.

Mike Masnick:

we have no crystal balls.

Ben Whitelaw:

Yeah.

Mike Masnick:

We, just, we have some insights perhaps.

Ben Whitelaw:

Yes. Yeah, exactly. So, yeah, listeners will have to use this episode and drag it out for the course of the holiday season. But they can also spend their much-needed downtime by leaving a review or a rating for the podcast wherever they get their podcasts. Apple, Spotify, you know, the niche platforms that I know you listen on, Mike. They're all places where we need ratings and reviews to help us get discovered going into 2026. I think it's gonna be a big year for online speech and content moderation, and so your support is very, very valued. Very, very welcome. Thank you to everyone who's been listening this year. But yeah, make sure you leave a rating or review when you can. So let's move into our first story this week, Mike. This is Meta's first strike, I'd say, in the Ctrl-Alt-Speech podcast this week. As you know, in content moderation enforcement systems you get a number of strikes if you're a user who doesn't follow policies, and I would class this as its first strike today. It's a story that I think went a bit under the radar, didn't get a lot of traction as far as I saw, but a really interesting one. It could be from any year in the past 10 years, Mike. This is a story that could have come in 2024, when Instagram users got restricted from sharing content about LGBTQ rights issues. It could have been a story that came in 2022, when Meta removed posts related to abortion pills on the back of Roe v. Wade. And you could almost trace it back to the Free the Nipple protests in 2013. It feels like we've been talking about this kind of story forever and ever. This latest version is Meta restricting and removing hundreds of accounts, related to over 50 organizations across the world, providing help, support, and information regarding abortion, LGBTQ issues, and women's health.
As of October, a lot of these organizations have seen their accounts shadow-banned, their reach restricted, and in some cases been taken off the platform completely. It is a piece of work done by a nonprofit organization called Repro Uncensored, and they have essentially got a process by which these organizations can notify them when something like this happens. And so they've produced this report of all of the different reports that they've received over the course of the last few months, and it's a pretty shocking document, Mike. It's been covered by the Guardian, who often write up some of these stories about Meta's most egregious, I guess, policy fails. And it's a pretty shocking thing. Basically, a lot of these organizations are very small. They're not just organizations you maybe haven't heard of, or based in parts of the world where this is more likely to happen. Many of these organizations are from Europe, from the UK, some in the US as well. And again, it's quite a shocking state of affairs. Repro Uncensored are saying that this is kind of part of what Meta has been undergoing in the last year, which is its Trump-era approach to women's health and LGBTQ issues. And they've also flagged the fact that many of these organizations haven't been able to get in touch with Meta, or had their appeals or questions answered appropriately, and in some cases have been totally blanked. But it's left lots of women, LGBTQ+ people, and marginalized groups essentially unable to get the information they need, because of sudden decisions made in the background to take accounts down. Is this the kind of story that will never, never die? Are we gonna have to contend with this for the rest of time?

Mike Masnick:

Well, let me remind you, Ben, that content moderation at scale is impossible to do well.

Ben Whitelaw:

I wonder who came up with that.

Mike Masnick:

It's a phrase I've heard somewhere. Um, so this is, I mean, it's interesting in a variety of ways. Obviously there's a story here, and I think the Guardian covers it well, and Repro Uncensored has some data to back up what they're claiming about these things being taken down. It's always difficult to evaluate some of these stories, because Meta's response to all of this is: well, in most cases we are just enforcing our rules. If organizations violate our rules, they can face temporary suspensions or bans, and that's all this is. And in a few cases, they said, we made a mistake that we've since corrected. And that is the same story that we have always heard about content moderation, going back forever. And some of that is probably true. I don't think that they're being disingenuous in saying that, because there's always some story behind this. One of the reasons why content moderation at scale is impossible to do well, unless you believe in the new Dave Willner-approved approach that he's going to fix that, is that something happens that we don't know about. With each of these accounts, it's easy to present them in a way that sounds bad; you don't know what they posted or specifically what rule was violated, and that creates a problem. Now, in the article itself, some of the organizations complain, well, they don't tell us exactly what we did wrong, so we don't even know what we did wrong. And that is also one of the big challenges of content moderation that happens all the time, which is that transparency is its own struggle. How transparent do you want to be? In one sense, you want organizations to be transparent and say: you did this, this violated a rule, and therefore that's why we had to ban you.
But we also know that that becomes really problematic when you have bad actors who come back and sort of rules-lawyer you, or use that information to figure out how they can tiptoe around the rules and still do bad things without actually tripping the wire. And so a lot of companies get to this point where they give you a general sense of, you violated this rule or this part of our terms of service, and we're not gonna say more than that. Which sucks if you're a good actor who gets caught in that, but if you're a bad actor, you don't necessarily want to give them too much information. Now, the other part of this is that for all the talk, especially over the last, let's say, eight years, of anti-conservative bias in content moderation, the reality has always been, in some sense, the opposite. The reality has always been that marginalized groups are much more likely to get caught in various bans for violating rules, whether legitimately or not. And a lot of that just comes down to the determination of how you interpret and how you enforce these rules. Often, things around abortion, for example: there's a bunch of ways that discussions around abortion can trip various rules, and they can be perfectly benign-sounding rules, like advertising medicine online, right? That can trip a bunch of different rules. There are a whole bunch of areas where rules can get broken even by well-meaning groups, and that has to be dealt with. And how do you balance those things, and how do you figure those things out? But the reality is that historically, rules get enforced against marginalized groups much more often. The whole "oh, they're just targeting conservatives" thing was always nonsense.
In fact, what we found out was that, especially with both Meta and Twitter for years, they sort of had two sets of rules and gave better treatment to high-profile political conservative accounts to try and avoid the appearance of bias, even though the reality was that they much more frequently broke the actual rules. But the marginalized groups, you know, groups for LGBTQ+ people, et cetera, weren't as high profile, would get banned all the time, and didn't have the same recourse. And some of it is just, like, yes, they may actually legitimately be violating the rules. Some of it may be accidents. But it is really tough to tell from the outside what is actually happening. Now, the one other element in all of this, tying it all together and bringing it back around to Mark Zuckerberg at the beginning of this year, saying: we were too quick to be banning all these folks, and that was a mistake, and we're taking this new approach, and we're really gonna lean away from doing bans, and we're going to allow much more speech, and if people have problems we're gonna have this wonderful new community notes, and blah, blah, blah. And the fact is that we are now still here, and it's still marginalized groups, LGBTQ groups, all of that, and they're still getting banned. It makes me question how honest and genuine Mark Zuckerberg was being at the beginning of the year. Now, I have lots of other reasons to question how earnest he was in those comments, and how much of it was really targeted for an audience of one, the same guy who three months earlier had said he wanted to put him in jail for the rest of his life. But I think this highlights, even if we say this is unfortunately sort of standard fare for how trust and safety works, that these groups often face excessive targeting and excessive rules enforcement, for whatever reason.
The fact that Zuckerberg went out there and claimed that they were changing their ways to make sure that there were fewer accidental bans, I mean, that was part of the point. He's like, oh, we were too aggressive, we're gonna roll back those rules so that we're banning way fewer people. But these groups are not seeing that. And so I think the real story here is calling out how much nonsense Zuckerberg's statement was. Because, again, there's evidence going back basically a decade now that they bent over backwards to have a separate set of rules for conservative, high-profile political accounts, and the real announcement from Zuckerberg was that they were taking that even further and would do less to ban those accounts, but everybody else still gets screwed.

Ben Whitelaw:

Yeah, it does feel very familiar, doesn't it? and as you say, it's a kind of weird deja vu back to that moment at the start of the year, which I thought we'd been done talking about.

Mike Masnick:

Oh, that, that, that image haunts my dreams, Ben.

Ben Whitelaw:

Yeah. And so the question is, like, is Meta kind of doing this on purpose? Is this baked into the product and the policies? Is it part of the way that the platform works now? Because you could argue this is kind of systematic, you know. I get that doing it at scale is hard. It is a...

Mike Masnick:

Impossible at scale, Ben?

Ben Whitelaw:

Impossible, not just hard. I get that it's impossible, but it is not really following its own rules here. Some of the accounts, some of the organizations who had their accounts taken down, were reinstated, as the report from Repro Uncensored makes clear. And EFF has done some really good work in the past 18 months or so documenting cases where Meta hasn't followed its own rules. And so when that is the case, that suggests that there is something that is not written down in the policies, or is not necessarily about enforcement, because some of the enforcement is wrong and has been reversed. That's what makes people think that it's systematic and baked in, and that's where I think a lot of people end up in their views about Meta's content moderation policies.

Mike Masnick:

Yeah, on that one point, I tend to give Meta the benefit of the doubt. I very much doubt that this is systematic and done on purpose. I don't think this was a case where the company in any way, shape, or form said, like, we're targeting these groups. And some of the argument that is being made, as mentioned in the Guardian article, is this idea that this is sort of, you know, going to the other extreme, pro-Trumpian: we're going to take down these groups that tend to be against Trump and against MAGA. I don't think there's a conscious decision of, we're going to suppress these groups. I do think that, as an organization, they are trying to enforce their rules. It's just that they're not very good at it. They've never been all that good at it, and at scale you're going to have a bunch of these problems and mistakes. And it's possible, too, that part of the process that was talked about in Zuckerberg's announcement was effectively saying that they were going to lessen the need for trust and safety. There have been lots of trust and safety layoffs that we know of in the last year. It is possible they just have fewer people there who can review these things and catch the mistakes. I don't think it's an ideological attack. I think it's, you know, a combination of general incompetence and maybe lower staffing levels.

Ben Whitelaw:

Yeah. Too big to get it right, not interested in trying to get it right, I think, is how I'd sum it up.

Mike Masnick:

Yeah, I think that's fair. Yeah.

Ben Whitelaw:

It made me think about the importance of user appeals, and we've talked about this quite a lot in relation to the Digital Services Act in Europe. But there is this almost magic article, Article 21, which is about user appeals. There are these user appeal bodies being set up now in Europe, where organizations like the ones affected in this case, if based in Europe, would be able to go to these user appeal bodies and have them essentially kind of mediate with the platforms in order to get a response. This is why I got excited about user appeals in the first place. They're only relevant to Europe, which is, I think, one of the issues. But do stories like this make you think that that is the way forward? And does it give you hope, in a way, that some of these organizations will be able to get some kind of... there'll be some sort of adjudication happening, if they decide to go down the user appeal route?

Mike Masnick:

Yeah, I mean, conceptually it's interesting, and the Article 21 stuff has always been interesting in the EU. Not in Europe entirely, because, Ben, I believe you are in Europe, but no longer a member of the EU.

Ben Whitelaw:

Don't, don't remind me. Don't remind me.

Mike Masnick:

But, um, yeah, I mean, part of what comes across here, and that I do think is important, is that a lot of these groups said they reached out to people at Meta and basically didn't get any answers, or didn't get any response or any clarity, and the appeals seemed to go nowhere. And that often happens. That is also not a new thing. That is sort of a system-wide problem across almost all of social media, in that internal appeals processes are a complete mess, and every single company is completely overwhelmed, and it is often very difficult to do appeals in any reasonable manner. And that means that even the appeals that do reach some sort of adjudication are often done badly. You know, it's a mess. And so the idea that is now in place in the EU, of a set of third-party adjudication bodies, is actually really exciting as a different kind of approach that could be useful. But I think, as we've seen, and we may have talked about it on the podcast, I can't remember last week let alone everything we've talked about this year, it doesn't feel that that many people are actually making use of these bodies. They're there, and they're staffed up, and they're ready to go, and the number of people actually trying to make use of these appeals bodies feels very low. I wonder if that will change. I don't know if that is just a knowledge thing, that more people need to understand that these things are there. Or, you know, because there are options, people don't know which one to go with, and so you have the sort of paralysis of choice in terms of deciding, well, do I appeal to this body or that body? I don't know. I still think it's an interesting thing, and if it does lead to more thoughtful appeals processes, or even if it just forces the companies to take appeals more seriously, I think that could be valuable.
But I don't think we've seen it as successful yet, in part because there just isn't the usage there to, have the data to know whether it's successful or not.

Ben Whitelaw:

Yeah, I think it was one of the few times you were away this year, Mike, that I got Thomas Hughes, from one of the appeals bodies, to come in and co-host the podcast. And he had just published a report about the number of people who'd used Appeals Centre Europe, or ACE as it's known. And it was in the low tens of thousands, probably just above 10,000, if I remember rightly. And a good chunk of those were not eligible to be processed by ACE. So if you multiply that across the, I think, probably 10 or 11 appeals bodies there are now, it's still a relatively small number, and yeah, the awareness is not there.

Mike Masnick:

I do hope that more people will start to take advantage of them, and then we can get a real sense of: is this a way forward? Because conceptually it's a really interesting idea, and having third-party adjudication systems, I think, is really compelling when done well. It can fall down really badly, as we have discussed a few times on this podcast. My educational background was in industrial and labor relations, and one of my favorite classes was on arbitration, which is this concept of a third-party adjudication system. Learning how that works, and also where it fails, was a really, really interesting process. Now, my education on that is way too old. I'm not about to do math in my head. I'm a bit...

Ben Whitelaw:

Go on, tell them how old you are.

Mike Masnick:

...a bit out of date. But this idea of setting up third-party adjudication processes can be really compelling, and it can be a useful system. But it has to be done well and people have to use it, and we just haven't gotten there yet, at the numbers that are necessary, to have it make a real difference.

Ben Whitelaw:

Yeah, yeah. One of the big themes next year, I think, will be how this appeals process expands. I'm looking forward to it, even though I'm not in the EU and I can't, I'm not, I'm not...

Mike Masnick:

You, you're, oh boy.

Ben Whitelaw:

...and I can't take advantage of it. So, lucky you, EU users. Um, let's go on to another story now, Mike, which I would say also questions Meta's supposed pro-free-expression stance. This feels like a kind of get-rich-quick scheme that Meta decided to run. Um, talk us through what it is and why it's so mad.

Mike Masnick:

Yeah. This is a bizarre one. It was discovered by this guy, Matt Navarra, and Meta has sort of said: this is a test, this is nothing more than a test, do not read too much into this. But it affects a segment of Facebook users who have professional accounts. So this is not, you're just on Facebook to send pictures to your grandmother or whatever.

Ben Whitelaw:

Not that anybody does that now, Mike, let's be honest.

Mike Masnick:

Yeah, perhaps. I have no idea. I haven't used Facebook in years, so I have no idea what people use it for anymore. But if you have a professional account, and therefore, you know, are doing something sort of business-related, for some users Meta appears to be testing a limit on how many external links you can post on your page. And that test seems to be taking a number of different forms, because it is a test. But it appears that if you post more than a certain number of links, they don't appear as links, but rather as plain text. There are a few different variations on this that have been reported. Matt Navarra, who first posted about this, has been getting more and more examples from people, so he keeps updating it and showing different versions of what he's seeing. And so some people were able to not post links in their post, but then in a comment they would do it, which has become quite common across social media, as Facebook, Threads, LinkedIn, and Twitter/X have all down-ranked posts that have links to external sites, because they want to keep you on the site. So this seems to be a sort of cranking up of that. Where it used to be: if you put a link, we're going to down-rank you and your posts are not gonna show up here, it's taking it even further and saying: you can only post two links before we block you, unless you pay. And so this then ties into another story that we have talked about in the past, which is paid-for verification, right? So this was something that Elon Musk "innovated" on, let's say, with Twitter, where he turned verification, which had been a process for helping users understand who they were communicating with and whether they were legitimate, into what he thought was a cash cow. The business model does not seem to have worked out that way.

Ben Whitelaw:

Also, it's incurred a rather large 130-million-euro fine from the EU, so...

Mike Masnick:

which,

Ben Whitelaw:

doubly bad for business.

Mike Masnick:

Yes, very, very bad for business. But, you know, it changed the very nature of what verification is into: can you pay X amount of money, and then we'll give you a blue check on your account. And Mark Zuckerberg actually fairly quickly followed suit and said, oh hey, here's something we can do. And so they added a paid-for verification system as well. And the way this appears to work is that if you are in the paid-for verification system, you have unlimited links that you can add to your posts, which basically means: hey, you need to pay to link outside of Facebook. Which is quite a move, right? I mean, there's always been this issue of Facebook, going back a decade or more, sort of trying to convince the world that the entire internet is contained within the walls of Facebook, that you never should need to go elsewhere, and that every app or service or piece of content you should need is there. Right? I mean, they had their Free Basics program for a while, where they were providing internet service to people in various countries, and you only got access to Facebook. You could get on not the internet, but Facebook's internet. And this feels like a continuation of that kind of thinking: we want to keep everybody in our walled garden, it is a very siloed place that we control, and if you want to do anything external, you need to pay us more money. This just seems so counter to everything that the open web is supposed to be about. It feels incredibly cynical. I understand that it's just a test and they might not continue it, but it just feels like yet another marker, one of many that we're seeing, against what used to be the open web and the idea of: link to anything, link to anyone, travel around. Just last week on the podcast we talked about the link taxes that are now showing up around the world. That was in our bonus chat.

Ben Whitelaw:

Yeah.

Mike Masnick:

You know, that is becoming a bigger and bigger thing around the world as well. And all of these are, one by one, bit by bit, attacks on the concept of the open web. The freedom to link, and the freedom to click and go from one website to another website, is something that should be core to our concept of an open web, and we're just seeing more and more examples of it being shut down, this being one of them.

Ben Whitelaw:

Yeah, if I was a small business user, if I had accounts on all of the platforms that Meta has, and I used them to generate business, as Meta have encouraged for the last 15 years plus, I would be so pissed at this. I'd be so pissed at this. You know, if I had, like, I don't know, let's say a tiny bell-making company...

Mike Masnick:

Oh boy.

Ben Whitelaw:

...you know, obviously you'd buy one. Um, and yeah, for use on the podcast. But if I had a tiny bell-making company and I had an e-commerce store and I linked to that. Obviously on Instagram they don't really care about links, they've always kind of shied away from links, and on Facebook, increasingly, as you say, they make you put the links in comments. But WhatsApp links are not downgraded; there is less of a penalty for posting links there than on those other platforms. And it's an increasingly big way that small businesses communicate with their customers, so this would be such a major change to the way that works. I very much expect this to be just a test and never to roll out any further, because I can't see anybody really paying. £15 is the lowest you can pay for Verified, but it goes up to, guess what?

Mike Masnick:

I'm afraid to ask.

Ben Whitelaw:

$350 a month for some businesses. So businesses that want more seats, that want more accounts that are verified, need to pay up to $350. So it's very much, I guess, targeting those big businesses that have already got a kind of presence on the platforms. But yeah, like you say, it's stuffing everyone else as a...

Mike Masnick:

It's, yeah, it's value extraction, right? Like, this is the nature of Cory Doctorow's concept of enshittification, which is that early on, these platforms are all about providing value, and you get more value out of them than you put in. That's the whole reason why everyone flocks to these platforms. But then over time they have the demands of Wall Street or whatever, and they begin to twist the knobs. And the first knob is, like, value extraction from you, the user: now that you're on there, we're gonna try and extract some value, and that can be ads and data and all this kind of stuff. But then the next stage, as Cory described it, is: then we start to extract value from our business partners. And this seems like an example of that, where it's just like, we have you in the system, we have you locked in, you can't leave easily, you rely on us, you're dependent on us. Now we can start twisting the knob and making more money. In theory, if there was, like, an FTC that functioned, this is the kind of thing that they might normally look into, where you're sort of trying to abuse your position in the market in order to extract more value from users. You know, we will see. I think you're right that the backlash to this, if it became more common, would be pretty loud. But I don't know, man, I thought that there would be more backlash to paid verification, and it's sort of become pretty widely accepted. And since this is under the banner of their verification system, they're sort of trying to figure out, can we goose the numbers? You know, there's a chance, I have no idea, but there's a chance that there's some product manager who has profit-and-loss goals that he needs to make on how much verification is making, and he's like, how can we juice those numbers so that I can show that number going up and meet my annual, uh, targets?
And one thing to do is, like, well, we have to make it more valuable for people to do that. And there's two ways to make something more valuable: one is to actually add more value to it, and the other is to restrict you if you don't pay.

Ben Whitelaw:

Yeah,

Mike Masnick:

And so they may have gone with that path, um. I, you know, maybe we'll find out at some point. But yeah, it seems this is just a, a bad idea. I understand where it comes from, but I think it's a really bad idea.

Ben Whitelaw:

Yeah, no, exactly. And also, the announcement makes it clear that publishers are not part of this group of pages who would be restricted in this test. But I don't fully believe in Meta's ability to judge who is a publisher and who isn't a publisher. Is it just the traditional guys? What about the kind of local

Mike Masnick:

That raises a lot of questions, and I think what that gets to, honestly, is like, they know they're currently already fighting the publishers on paying for links in the other direction, and doing this would sort of send that into overdrive, and they probably don't wanna fight that fight right now.

Ben Whitelaw:

Yeah, for sure. Okay, well, that's a fairly good amount of time that we've spent beating up on Meta. We've got another story that I think is worth pointing out. This is, we don't have anything kind of necessarily negative per se to say about this yet.

Mike Masnick:

Oh, oh. Give me a chance.

Ben Whitelaw:

Yeah, let's see how you do. So this is the news, um, reported by the FT, that Meta has adopted a new age check system as part of its efforts to meet the kind of plethora of age verification mandates that are built into a lot of the global regulation around safety this year. We've seen lots of platforms announce who they're partnering with. I've been making a list and, this is

Mike Masnick:

Are you checking it twice?

Ben Whitelaw:

Gotta know who's naughty and who's nice, Mike.

Mike Masnick:

go.

Ben Whitelaw:

Um, and it's interesting, you know, because Meta have such a big share of the market. It's Facebook, it's Instagram, it's WhatsApp. And so it's very notable that a company called k-ID have been given the kind of, I guess, the pleasure of being in charge of the new age verification process across all its apps. Certain parts of Meta did use Yoti, a kind of British-based age verification tool, for a while, primarily on Instagram, as far as I know. So this is, I think, an effort to kind of consolidate their efforts and make it so that users only have to kind of log in or verify themselves once and then be able to use all of the different platforms. I'm not very familiar with the kind of passkeys approach, Mike, but you were quite interested in the kind of technical element of what Meta have used k-ID for. Talk us through why that is.

Mike Masnick:

Yeah, so they're using something called AgeKey, which is a relatively new initiative — it was just announced last month — and it's an open initiative built on the passkey standard. And passkeys are something that's been around for a little while, and it sort of was an outgrowth of... right, so let's take a few more steps back. For years you logged in with a password, and we all realized that, like, password-based authentication has its limits. There've been all these attempts over the years to sort of broaden us beyond password-based authentication, because it was never the most secure. Everybody would just reuse the same passwords. Passwords would leak. And then, you know, certain companies were not good about storing passwords. There were all sorts of problems with it. Everybody knows they've had accounts hacked, all this kind of stuff. So we've leaned into a variety of different solutions, and two-factor or multi-factor authentication became more and more popular. And sometimes that was like, we'll send you an email and then you click, or we'll send you a text message with a number. And then we've had things where there's, like, authenticator apps where, you know, when you first get an account, you scan a QR code and it'll give you a new six- or seven-digit number every 30 seconds. And then there was a whole process that became a standard, by FIDO, where you can, like, use a fingerprint. You stick a USB key into your device and you put your finger on it. There are all of these different attempts at sort of multi-factor authentication to make things more secure. Many of them are really good, some of them are too complicated, and only us crazy people who go overboard start playing around with them. More recently, that has evolved into passkeys, and it's also a standard, and it comes from the same people who are doing the little USB keys that you could stick into your computer and put your fingerprint on.
And it is a somewhat clever, but somewhat confusing, standard to try and make things more secure but still keep them easy, where you are storing a passkey on a device or in a password manager or some sort of secondary factor that will sort of vouch for you. It's technically very clever and very interesting, and they're becoming more and more common, but because a lot of people are confused by them, they don't actually use them. But I do think that they're actually very interesting. Now, this is something called AgeKey, and they're basically building on the passkey standard, which is very interesting because passkeys are becoming more of a standard and are being more widely used. And basically it appears they're trying to build this system that uses the same thing, but instead of giving you just the pure "this is me, this is who I am," it also throws in an "and I am of age, and I'm over 16 or 18" or whatever. And so that is technically very interesting. Now, part of this, though, and where I'm still somewhat concerned about it, is that in some ways this is also an effort to kind of separate out different layers of the age verification stack, to push further and further away the problematic parts. Right? So the AgeKey itself — like, oh, it's fascinating. From a technological level, it's really interesting. It's an approach that is an attempt at a standard. k-ID, who put it together, actually purchased another organization that was building it and sort of brought it in-house. But it is an open standard that anyone can use. But there's still, at the end of the line, there still has to be that verification part that creates this token that goes into the AgeKey thing. And this is, in some ways, kind of a way to obfuscate that and hide the fact that, yeah, you know, you still have to scan your ID or scan your face or do something to prove how old you are. That is always a risk. And that is where there's always the concerns about who's doing that.
How is that being kept? Is that being kept private? What other concerns does that lead to? Are you locking out people who don't have an ID? You know, there are all sorts of potential problems with this, where you are limiting the ability of certain people to access certain content. So I'm still concerned about all of that. But from a technological standpoint of how they're approaching it, it's interesting that it's this open standard. It's done in a way that strikes me as fairly clever. But I think much of that is done in service of obfuscating the problematic parts and kind of handing them off to somebody else and saying, you deal with this — we're just gonna be this nice little token provider in the middle. And then we can sort of whitewash the fact that there's still bad stuff happening at the point of verification itself.
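[Editor's note: to make the "token provider in the middle" idea concrete — the rough shape is that one party checks your age once, then issues a signed claim carrying no identity, which apps can later verify. This is a hypothetical sketch, not the actual AgeKey protocol (the real standard builds on passkeys and public-key cryptography); every name here is made up, and the shared HMAC secret stands in for a real issuer keypair just to keep the example self-contained:]

```python
import base64
import hashlib
import hmac
import json
import time

ISSUER_KEY = b"demo-issuer-secret"  # hypothetical signing key; real systems use asymmetric keys


def issue_age_token(age_threshold: int) -> str:
    """Issuer side: after verifying a user once (ID scan, face estimate, etc.),
    sign a minimal claim that carries no identity, only an age threshold."""
    claim = base64.urlsafe_b64encode(
        json.dumps({"over": age_threshold, "iat": int(time.time())}).encode()
    ).decode()
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return f"{claim}.{sig}"


def app_accepts(token: str, required_age: int) -> bool:
    """App side: check the issuer's signature, then the threshold.
    The app never learns who the user is, only that the issuer vouched."""
    claim, sig = token.split(".")
    expected = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    return json.loads(base64.urlsafe_b64decode(claim))["over"] >= required_age


token = issue_age_token(18)
print(app_accepts(token, 16))  # True: an over-18 token satisfies an over-16 gate
print(app_accepts(token, 21))  # False: but not an over-21 gate
```

The privacy concern Mike raises lives entirely in `issue_age_token`: however clean the token layer is, somebody still had to see the ID or the face scan at issuance.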

Ben Whitelaw:

Yeah. So, so essentially, kind of, rather than Meta having the data, the personal data, it will be k-ID

Mike Masnick:

Or a third party itself who's doing, like, the initial verification, that can then send the passkey in some form that says, you know... Well, to be clear, like, the AgeKey itself will be device-specific, so you will store it on your device, but they're sort of providing the token in the first place. We don't need to get into the weeds here, but it's not that they're storing anything — k-ID doesn't need to store anything, as I understand this setup. It is a way of sort of demonstrating that you are of age. But there's still, I think it's hiding a bunch of the other problems behind it.

Ben Whitelaw:

Yeah. Okay. And, but interesting as well, I thought, because there is an attempt to make the kind of AgeKey somewhat the way that users of all apps verify their age, right? So there's this kind of effort to consolidate all of the age verification efforts, or obligations, and to have it all in one place, which I think is obviously beneficial for platforms.

Mike Masnick:

Yes. And, and,

Ben Whitelaw:

Also probably better for better for

Mike Masnick:

better for users than having to give their identification to every single app and every single service that they use. Certainly having one single one is probably fundamentally better, though it also then becomes a sort of single risk point, a single target, uh, you know, of potential problems as well. But conceptually, yes, probably better.

Ben Whitelaw:

Yeah. And that digital ID space is something we don't really talk about. We're kind of increasingly doing so, but there's a lot of really interesting stuff happening in loads of different countries, particularly in the UK, which I'm most familiar

Mike Masnick:

Yes. Yes.

Ben Whitelaw:

Um, which again we won't get into today, but I think we'll probably touch on in that first episode in January. but yeah, nothing bad to say, per se about meta at this stage. Not making any judgements.

Mike Masnick:

Oh, I'm making judgments, Ben.

Ben Whitelaw:

you actually seem quite impressed. So, you know, I think we're

Mike Masnick:

I, I, I am, I am. I am impressed by the technology. I am. I'm still not happy with the age verification.

Ben Whitelaw:

Yeah. Yes. To be clear. To be clear. Um, cool. So, a few other stories we wanna mention in the course of today's episode, Mike. A country that doesn't get a lot of airtime on Ctrl-Alt-Speech, mainly because their approach to online speech is fairly restrictive, let's say.

Mike Masnick:

"You don't have any" is their approach to online speech.

Ben Whitelaw:

Exactly. But that's why you felt this story's worth a mention.

Mike Masnick:

Yeah, this is a non-Meta story — we are moving off of Meta for now. Uh, this is about Roblox, who we've talked about in other contexts, but this is about Roblox in Russia. So Russia recently banned Roblox. Now, as has been sort of suggested, Russia famously bans lots of apps and lots of speech. They are not a free-speech-friendly country, and they are not shy about letting you know that they don't believe in free speech and they don't care about your free speech. But the interesting thing here was that they banned Roblox and it has led to protests in Russia, which is also very rare, because the country does not believe in free speech and sort of has a fairly long and somewhat famous track record of cracking down on free speech and doing all sorts of bad things to people who speak out. So, the fact that a group of people — it's described as several dozen people in the Reuters report on this — in a city in Siberia, so way out there, middle of nowhere (Siberia being the term that people use generally for the middle of nowhere, but this is actually in Siberia), you had a few dozen people who went out with signs protesting the fact that Roblox was banned and calling out how ridiculous they thought this was. And that just struck me as notable, right? Just the fact that people are protesting any kind of internet ban in Russia at all is a statement. The fact that it's because the ban is of Roblox is kind of interesting, and just a sign of how frustrated people get when certain services that they like are being banned, and, you know, where they don't feel that there's a good reason for the ban. You have to be pretty pissed off to go protest in Russia, uh, and especially about Roblox. It's kind of an interesting statement that you would get people willing to go out in the snow and speak up about this.

Ben Whitelaw:

Yeah, it does look awfully cold.

Mike Masnick:

Yes, it does.

Ben Whitelaw:

God, that was not my main thought, but it was one thought. Um, are you surprised by this? I mean, I wasn't aware that Roblox had such a large user base in Russia. I think it's the second country in terms of users for the platform. It's got a really large base, but I was still surprised to see this actually turn into a kind of in-the-streets, in-the-snow protest. Will we see more of this, do you think, in future?

Mike Masnick:

Yeah, I mean, I, I do kind of feel that like, as governments get more and more aggressive about cracking down on different aspects of internet speech, that, people are, are getting more and more angry and there is this sort of general sense of like, who are you really protecting here and what are you doing? And, you know, this is gonna bubble up in all sorts of interesting and different ways. And this one, it struck me as worth commenting on just because the oddity of actually seeing protests in Russia and, you know, it's not protesting about Putin or the fact that they're, sending everybody off to war, uh, uh, in Ukraine and, and, and whatnot. But about Roblox being banned. It just, it's a sign of something of, of people getting mad, like really mad, about the government stepping in and, blocking an internet service they liked.

Ben Whitelaw:

Yeah. In some ways I like the fact that citizens are getting more clued up about their digital rights and what censorship is in real terms, and maybe, you know, how they can kind of push back against that. I think, in some ways, that's not a bad thing.

Mike Masnick:

Yeah, absolutely.

Ben Whitelaw:

Talking of kind of the blurry lines between politics and technology, the next story I think speaks to that. This is a story that I couldn't help laugh at, to be honest. And this is the news that OpenAI has hired George Osborne, the former Chancellor of the Exchequer, the Conservative politician, to spearhead its global Stargate expansion. This is a story in the FT this week that I could only shake my head at ruefully. Um, and I think the reason for that, before we go into it, is just the kind of conveyor belt of UK politicians, but particularly Conservative Party politicians, who have buddied up with, cozied up with, and are now kind of being paid by big tech companies. The most famous of which is Nick Clegg, who recently stepped down at the start of the year, but who worked in a global public policy role at Meta for a long time. Rishi Sunak, the Prime Minister for a relatively short space of time during the kind of pandemic era, has become an advisor for Anthropic. And so George Osborne is the third in a dirty triumvirate of conservative politicians to take the sweet, sweet tech dollar.

Mike Masnick:

You know the system better than I do, but Nick Clegg was not technically a Conservative, right? He was a Lib Dem.

Ben Whitelaw:

he was, yeah, you're right, you're right. He, he's, he was a

Mike Masnick:

You should know that better than I do.

Ben Whitelaw:

know, but he's widely kind of thought of, I think, as a Tory in sheep's clothing — in Lib Dem clothing. So, uh, you're right. Good pick up. Good pick up. Um, so, I think my main takeaway — there's much more we could say about the global Stargate expansion, which feels a bit like the Free Basics program that you talked about a second ago, Mike, for the AI explosion. Stargate is this kind of data infrastructure project that has been rubber-stamped by Donald Trump. It's been funded by Oracle and SoftBank, and is, you know, going to help the US become a kind of leader in global AI. And the expansion of that Stargate program, which Osborne is gonna lead, is about helping other countries create that same data infrastructure in their own countries. The bit I kind of take umbrage with is the idea that this global Stargate, which I think is called OpenAI for Countries now, is about spreading "democratic AI." Which, again, just makes me feel all icky. You know, I'm all for democracy, but why is OpenAI taking that as their job?

Mike Masnick:

Yeah, it's, uh — we don't release video of us, but Ben has just been shaking his head this entire time as he's been talking. But, you know, this is all political, right? I mean, it's interesting because in the article announcing the hiring of George Osborne, there's a quote from — I'm not even sure how to pronounce his name — Chris Lehane, I think, who's a famous political operator in the US, has been involved in a whole bunch of US political campaigns, and is sort of famous as a somewhat, like, dirty operator, you know, opposition research stuff. The thing that strikes me about this is just how political all this is becoming, right? This is not actually about technology or innovation. This is now all about politics

Ben Whitelaw:

Yeah.

Mike Masnick:

and hiring former politicians is kind of a sign of that. And I don't think that's where — you know, that's not an area you go to if you want good innovation. Uh, you know, once it becomes political, it probably becomes stupid, and it becomes about horse trading and deals, and certainly not what is best for innovation, and absolutely not what is best for the public. And so putting it under the banner of democracy or "democratic AI" or whatever is pure marketing political speech and is not actually useful. I think that there are all sorts of ways that AI is actually really important, and can be really important to democracy and freedom and empowerment and all of these things, if done right. But I have very little faith that this announcement is a step in that direction.

Ben Whitelaw:

Yeah, no, I agree. And it just makes me think, you know, I wish we paid our politicians more when they were in power, 'cause they probably wouldn't have to go and do this afterwards, would they?

Mike Masnick:

Wouldn't have to cash in. Yeah.

Ben Whitelaw:

Um, but yeah, good on you, George, for another cushty corporate job. We'll round up now, Mike, on a story that's a slight kind of departure from our usual type of stories. But I think it concerns one of the big platforms that we often talk about, and I think it's a nice milestone, a nice moment to reflect on, as we end the year.

Mike Masnick:

Yeah, this was — the story just came out yesterday that the Oscars, which for many decades, I think since the 1970s, have been presented on ABC, which is the channel that is owned by Disney, are moving off of ABC and are moving to YouTube. They're not moving to a different competing network TV station, but rather to YouTube. And this is a huge statement for the way that the cultural landscape has shifted. It will be streamed free on YouTube, and if you are a paying YouTube TV subscriber, you'll have it and everything, but it will just be streamed through YouTube. That is — it's incredible. Like, I don't think anyone could have predicted this. Now, there's an interesting bit of timing, which is that this week is also exactly 20 years since Saturday Night Live uploaded the video "Lazy Sunday" to YouTube, which is generally considered the first viral video on YouTube that sort of showed the power of putting professional content on YouTube. There were a few sort of amateur bits of content that maybe went a little bit viral, but that was the first really big, professionally produced — for NBC — bit of content that went on YouTube and went completely viral and had millions of views. And so, 20 years from that to now, YouTube taking over the Oscars. And it's a five-year contract, I think — they'll be broadcasting it for at least five years, and then we'll see what happens. But it's a big statement about how much of the world of entertainment has moved to the internet first of all, streaming services second, but YouTube in particular. You know, it wouldn't have surprised me quite as much if it had gone to, like, Netflix or something like that. But going to YouTube — that's a statement. And I think it's worth noting, about the power of the internet and online services, and services that cater to user-generated content, becoming so central to our cultural moment, that I thought it was worth calling out and mentioning.

Ben Whitelaw:

Yeah, you almost think that the internet can't become any more central than it is to our lives, and then something like this happens and you realize that, no, there is still some way to go. And if nothing else, Mike, it means that we'll likely have a job to do next year, and the year after that, and the year after that, assessing and analyzing and critiquing how these platforms are making decisions as they become increasingly more culturally relevant and integral to people's lives. So, yeah, useful to note, and a good reason to go back and watch that "Lazy Sunday" video, which is always good fun. That brings us to the end of this week, Mike, and to the end of the year. Thank you, as ever, for your fascinating analysis, breaking down the different ways of verifying passwords — I had no idea, actually. That was a very helpful little segment. And yeah, I hope you have a really restful Christmas period, festive period, and come back refreshed and ready to go next year.

Mike Masnick:

Yes, same to you as well. And, uh, rest up and get better, so we don't have to keep dragging you out of your sick bed.

Ben Whitelaw:

Exactly. And, uh, for all listeners who tuned in, thank you for following us this year, for subscribing, for sharing, for getting in touch. It's really made this year very worthwhile. We're approaching some big milestones on Ctrl-Alt-Speech, we've got a lot of great plans for next year, and, uh, we hope you'll be there for it. So take care, and see you next year.

Announcer:

Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L alt speech dot com.