Ctrl-Alt-Speech
Ctrl-Alt-Speech is a weekly news podcast co-created by Techdirt’s Mike Masnick and Everything in Moderation’s Ben Whitelaw. Each episode looks at the latest news in online speech, covering issues regarding trust & safety, content moderation, regulation, court rulings, new services & technology, and more.
The podcast regularly features expert guests with experience in the trust & safety/online speech worlds, discussing the ins and outs of the news that week and what it may mean for the industry. Each episode takes a deep dive into one or two key stories, and includes a quicker roundup of other important news. It's a must-listen for trust & safety professionals, and anyone interested in issues surrounding online speech.
If your company or organization is interested in sponsoring Ctrl-Alt-Speech and joining us for a sponsored interview, visit ctrlaltspeech.com for more information.
Ctrl-Alt-Speech is produced with financial support from the Future of Online Trust & Safety Fund, a fiscally-sponsored multi-donor fund at Global Impact that supports charitable activities to build a more robust, capable, and inclusive Trust and Safety ecosystem and field.
Ctrl-Alt-Speech
Stuck in the Middleware with Youth
In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Ben is joined by Vaishnavi J, former head of youth policy at Meta and founder and principal of Vyanams Strategies, a product advisory firm that helps companies, civil society, and governments build safer age-appropriate experiences. Prior to founding Vys, she led video policy at Twitter, built its safety team in APAC and was Google's child safety policy lead in APAC. Together Ben and Vaishnavi discuss:
- House overhauls KOSA in a new kids online safety package (The Verge)
- A nationwide internet age verification plan is sweeping Congress (The Verge)
- Grindr supports app store age-verification bill despite censorship concerns (Pink News)
- A summary of the technology sector’s response to the UK’s new online safety rules (Ofcom)
- Age Assurance Implementation Handbook (Vyanams)
- Interoperable Age Assurance (Age Verification Providers Association)
- EU's non-binding resolution around revamping child safety rules (European Parliament)
- ‘We’ll be watching’: Social media companies warned about complying with ban as teens flock to alternative apps (Crikey)
- The Salesforce of safety: Software vendors as infrastructural/professional nodes in the field of online trust and safety (Sage, Platforms & Society)
- It's their job to keep AI from destroying everything (The Verge)
Ctrl-Alt-Speech is a weekly podcast from Techdirt and Everything in Moderation. Send us your feedback at podcast@ctrlaltspeech.com and sponsorship enquiries to sponsorship@ctrlaltspeech.com. Thanks for listening.
Ben Whitelaw:So, Vaishnavi, I don't wanna start on the wrong foot with you, but I'm being shamed by my running app, and this is a bit of a bone of contention. I haven't been for a run for a long time. I've traditionally used Strava, the running app, to track my runs and, perhaps show, I know, how slow I'm getting over time as I get older. And this week I got a kind of further prompt to remind me of how inactive I'm being. So I thought actually we'd use that as the start of today's Ctrl-Alt-Speech. And the prompt that I get when I open the app, begrudgingly obviously, and disappointingly, was the following. It said, start your streak by logging an activity. So I want you to log an activity. What have you been up to recently, before coming here on Ctrl-Alt-Speech?
Vaishnavi J:Wow. Uh, so I've spent the last six weeks, with a couple of exceptions, on the road. So if I were to log an activity, it would probably be checking in at the airport over and over and over again, in a variety of different places. Very excited to share that I will have no more check-ins for the rest of the month, and that makes me very happy. But that'd probably be like the biggest activity of the last six weeks.
Ben Whitelaw:Nice. Okay. Anywhere? Anywhere fun. Anywhere you'd recommend to listeners?
Vaishnavi J:Oh, uh, let's see, but most recently, two weeks ago, I was in Malaysia for the A ICT forum, which was held in Kuala Lumpur, and it was fantastic. I mean, really great event, organized in partnership with UNICEF. We did a bunch of activations there. It was really great to talk to people about kind of the state of child safety in the Asia Pacific region. Would highly recommend KL if you haven't been. It's a fantastic city.
Ben Whitelaw:Amazing. And we're gonna talk a bit about that today. I suppose if I was to log any activity over the last few weeks, if it wasn't running, it would be sleep training. I've been, um, sleep training my young son, because I feel like sleep is an essential skill for anybody to have. But yeah, it has meant that my running activity has somewhat declined. But we are here and I'm looking forward to talking to you today.
Vaishnavi J:Yeah. Awesome. Thanks for having me. This is exciting.
Ben Whitelaw:Hello and welcome to Ctrl-Alt-Speech, your weekly roundup of the major stories about online speech, content moderation, and internet regulation. It's December the fourth, 2025, and this week we're talking about KOSA 2.0, the rise of trust and safety vendors, and the imminent arrival of the Australian social media ban. I'm Ben Whitelaw. I'm the founder and editor of Everything in Moderation. We have no Mike Masnick this week. He's probably avoiding another podcast of us trying to understand child safety legislation. But I'm joined by somebody who really is the best person to have in his absence, and that's Vaishnavi J, the former head of youth policy at Meta and founder and principal of Vyanams Strategies. Welcome, Vaishnavi, to the podcast.
Vaishnavi J:Thanks for having me. Really glad to be here and long time fan of this podcast.
Ben Whitelaw:Amazing. We actually tried to get you on the podcast a few weeks ago. This is kind of letting the listeners know how it works under the hood. And you were in Malaysia, as previously explained, and you very kindly let us know 24 hours before that your internet connection, your internet speed, wasn't very good and you were worried about coming on the podcast, right?
Vaishnavi J:Yeah, so I, you know, just before Malaysia, I had taken a little vacation to Bali, and I think the last time I had internet speeds that slow was like back in the two thousands. And so I just had no idea that 9.5 megabytes uploads was still like a thing. And so I remember testing the internet connection and being like, I should probably tell you that this is not going to go well if we record while I'm sitting here in this, you know, beautiful, beautiful island of Bali. Unfortunately, terrible for podcast recording.
Ben Whitelaw:Well, and I don't blame you for doing that. Um, thank you for even considering recording whilst you were on a holiday. I wanna give listeners a bit of an intro to you before we get started and get your kind of take on this week's news. Vys is a kind of product advisory firm that helps companies, civil society organizations, and governments build age-appropriate experiences. And you are able to do that work because you've got this extensive experience, prior to setting up your own thing, in some of the biggest platforms around. Right? So you led video policy at Twitter, you built out their safety team in APAC, you were Google's child safety policy lead in APAC as well. So there's basically not much, you might not say this yourself, but there's not much that you don't know about how big companies create the rules for making children safe. How has it been in the kind of decade-plus years working in these companies? Like, can you give us a sense of how the child safety discussion has changed in the time from when you started working there to now?
Vaishnavi J:Yeah, I mean, I think when I started, child safety was an incredibly important topic, but it just didn't have the same like crisis, urgent kind of tenor that it has today. Uh, I remember, you know, working on child safety back in 2011, uh, sorry, 2012 actually. Gosh. Thinking about like, oh, what an interesting topic to work on. I don't know why a tech company is thinking about child safety, but I suppose it's something for me to dig into. And you know, you fast forward to today, um, all these years later, and the amount of urgency, the amount of heightened emotions around it, the very firm camps that have emerged around what is the one, indisputably right way to protect and support children online. It's just a very different scene now. And you know, I was in-house until about two years ago when I decided to start, uh, Vys. It's been such an interesting experience working with a variety of companies, large and small, as opposed to being in-house at one company, working on that company's suite of products. So, a good time to be doing this work, but, um, it doesn't slow down very often.
Ben Whitelaw:I am sure. We're constantly talking about child safety stories on the podcast, and so to think that you are there having conversations with platforms in the background is super interesting. How do you tread that line of emotion that often kind of emerges when talking about child safety? There are these camps that have kind of sprung up of, you know, I guess, ban children from using all social media, and sometimes by extension from smartphones generally, to obviously the other end of the spectrum, which is we need to be kind of careful about how we remove access to these platforms and to these devices for children; we don't know the unintended consequences, the research isn't always clear. How do you navigate that when you go into these companies, but also when you go to events like the one you mentioned in Malaysia?
Vaishnavi J:Very delicately. You know, we are in a very messy middle. I think sometimes wistfully that it would be nice to be in one of those camps. It feels like it would be a nice, safe, cozy place in which I wouldn't have to contend with a variety of other opinions. But because our final end user, our end client, is, you know, a platform that needs to implement and scale these product features or policies, we have to sit in the messy middle. So we do have to think about what the government perspective is, what academic and civil society perspectives are, what company business and engagement incentives, as well as internal resourcing trade-offs, might look like. We have to balance all of that when we're coming up with a final solution set for a client. So I think it's a tough question. Like, there's no one good way to do it. What we try to do is, you know, really position ourselves as the translator between what are regulatory expectations, user expectations, what's academia and research saying? How do you translate that into product and policy? So we're not researchers ourselves, you know, I'm not, uh, an academic, but I know how to write policy. I know how to build product. And so that's where we can be really helpful. But I think in a way that's also our superpower, because it lets us truly be very objective about the trade-offs in either direction. You know, if we're talking to platforms and they're concerned about what government is asking them to do, we can help them understand how to build with that guidance, but in an innovative way that's still business forward. And when we're talking to government, we're able to very clearly outline what the trade-offs are of what they're proposing for businesses, to say, hey, this is the very practical consequence of what you're going to be doing to businesses if you, say, pass this or consider that approach. So in a way, being in that messy middle is kind of our superpower too, I suppose.
Ben Whitelaw:Yeah, really fascinating, and we're actually gonna talk a lot about how experts like yourself who've worked in some of the big platforms are increasingly interpreting the rules that governments are laying down and having a really important role, I guess, in defining how not only platforms work, but kind of broader, you know, safety markets. So we'll come onto that. I want to just get you, before we kind of run into today's stories, on something that comes up when we talk about child safety, and that's parental controls. Your team at Instagram was, I think, one of the first that built parental controls, and a lot of platforms followed. How do you see the kind of parental control ecosystem now? Who do you see as the leaders in this feature, which is often where platforms kind of return back to when they think about child safety? They say, we make parental controls available. It helps parents to kinda moderate their children's usage and who they're speaking to. How has that changed over time, do you think, and who's doing it?
Vaishnavi J:I think parental controls are really important. I think we see that in the companies that are rolling out parental controls, but even the third-party vendors that now offer suites of parental controls overlaid over the products or devices that you're using. So I think it's definitely an important part of the equation. What's challenging about parental controls today is that there are just way too many of them, there are way too many apps. We live in a very federated online ecosystem that's very different from the, you know, one or two app world that you and I may have grown up with. And so when you've got a child that's using seven to nine different apps across education, across socializing, across gaming, you then have to navigate parental controls across all of them. And that is just very unfeasible for a parent to do. I think what we also lose in that conversation is the importance of agency when it comes to children and teens, that there is some degree of independent exploration they need to be able to do without constantly being surveilled or monitored by their parents. I think you and I really benefited from that. I think a lot of our online experiences were done independently and we've grown as a result. It would be a real pity if we couldn't extend that to the next generation. So I think parental controls definitely have a role to play. But, you know, I'm more interested in, I think the more interesting design tech question is how can we build the better scaffolding within our products so that kids and teens can explore them pretty independently without having to have a parent constantly supervising them. And that's, by the way, before we even talk about any questions of privilege, or which children actually have parents who have the time and resources to do this. I mean, that's a whole other discussion. So, certainly important, but I think a really small piece of the larger child safety puzzle.
Ben Whitelaw:Yeah. And that scaffolding you talk about, what would you give as an example of good scaffolding within some of the apps that you've seen or worked with?
Vaishnavi J:Well, you know, we've seen some of this in action already. Um, it wasn't too long ago that most teen accounts were not private by default. You signed up for an account online and immediately your content, your bio was available to anybody who was searching for you. And sometimes I don't think we give ourselves enough credit for the amount of great change that has taken place over just, frankly, the last five years. We now have private-by-default as a generally accepted best practice across industry for most young users. That's a great example of scaffolding right there. You don't have to worry, either as a child or as a parent, that the moment you start engaging online you're gonna start getting bombarded with creepy, unwanted interactions, or that your online footprint is gonna be visible to everyone. So that's a really good example.
Ben Whitelaw:Yeah. Yeah. Really interesting. So I think one thing to mention for listeners is that you have a really good newsletter that comes out, rounding up the interesting, mostly interesting, child safety stories, and Everything in Moderation often includes the links that you have shared. So I want to thank you for that. Where can people sign up? And, you know, give a bit of a kind of sales pitch for the newsletter.
Vaishnavi J:Oh, thank you. That's very kind of you to say, especially since I'm a big fan of Everything in Moderation's newsletter. We put out a youth and tech policy newsletter every two weeks called Quire. It's on Substack; it's like 'enquire', but without the 'en'. And we typically round up the big news articles, what we think both our clients as well as anybody interested in youth safety should be paying attention to. And then, on a less frequent basis, but still throughout the year, we'll try and share snippets from work that we've done, so that we can kind of democratize access to that information in the wider ecosystem. We were actually one of the first companies to come out with an age-appropriate AI design framework back in May of 2024, so more than a year and a half ago. And we were very grateful to our client for letting us share even a sliver of that work publicly. But that's been the foundation for a lot of other great AI design codes and foundational work that's been done since then, so I'm really proud of that for the team.
Ben Whitelaw:Great. And we'll certainly include a link in the show notes for listeners to get that. Before we dive into the stories, just a reminder for listeners that you should, if you enjoy the podcast, rate and review it wherever you get your podcasts. It only takes a couple of minutes to leave some favorable or, you know, critical words; they are all important for improving what we do on Ctrl-Alt-Speech. If you are a company that wants to sponsor Ctrl-Alt-Speech and help support its continual production, we're also looking for sponsors for next year, you can get in touch with us at podcast@ctrlaltspeech.com. We will definitely get into stories. I've teased it enough, Vaishnavi. So we're going to talk now about the big news of the week, I would say, which is news coming out of the US Congress, um, which is always a worry, and in some ways I'm kind of disappointed that Mike isn't here to have a rant on this one. But there's been a whole raft of bills this week that have been talked about in Congress, that have been announced, and of particular interest to Ctrl-Alt-Speech listeners will be the fact that KOSA, the Kids Online Safety Act, is back. KOSA 2.0 has been revealed, and this is a bill that has been criticized for a long time now for the potential unintended consequences that you mentioned around challenges to privacy and anonymity, all in service of obviously increasing child safety. I haven't been following it as closely as I should, Vaishnavi, and you are the expert. I think you can help unpack it. Explain to me and to listeners what has happened this week, what you found most interesting, and why it's important.
Vaishnavi J:So the House introduced its version of KOSA this past week, and honestly, like a whole package of other child safety related bills alongside it. But the biggest shift, I think, that most people are talking about is that the duty of care provision in the House version of KOSA has been dropped. Duty of care was always the stress fracture for KOSA. In the Senate version that passed 91 to three earlier this year, it was the part that a lot of child safety folks saw as essential, because they saw it as the requirement that would actually compel platforms to identify and reduce risks for minors. But it, you know, it was a pretty controversial provision, because civil liberties groups, a number of House members, certainly I think voices from industry, were really alarmed about the First Amendment implications. And I think the argument that they made was pretty simple: if you are gonna be creating liability around exposing kids to quote unquote harmful content, you're going to incentivize us as industry to over-remove on a massive scale. Now, I think it's worth mentioning that we've actually seen some precedent for this, so this was not a theoretical concern. When Ofcom's age checks kicked in earlier this year in the UK, we actually saw a number of platforms starting to block access to completely legal subreddits and forums around mental health communities, sexuality, education, because they thought it might be risky. So not because it was illegal, but because it was easier to block them than to figure out where exactly the regulatory line was going to fall. So it was a real concern that was raised about what the duty of care might do in the US. And I'll pause there 'cause I could go on about this for a while.
Ben Whitelaw:Yeah, no, and that does make sense. Incidentally, Ofcom have just today put out a summary of how industry has responded to the Online Safety Act, which I haven't had a chance to read yet, but we'll include it in the show notes and we can, you know, maybe talk about it in next week's podcast. So the kind of reception to KOSA 2.0, is it better than it was? Are there still limitations to it? Are we likely to see it kind of progress further than previous versions of it?
Vaishnavi J:Yes. I think what's been replaced, uh, duty of care, that language has now been replaced with a call for a 'reasonable policies and practices' standard. And this was something that the House intentionally swapped out. That said, 'reasonable policies', as you can imagine, is pretty elastic as a term. A lot of the effectiveness of that will depend on enforcement, and what the FTC or what public expectations determine are considered reasonable policies and standards. But that said, I think, you know, KOSA is certainly a big part of it, but there were a number of other bills that were introduced that are pretty significant, because I think it shows that the House is really trying to reframe the overall youth safety conversation. So they brought in COPPA 2.0, which would expand privacy protections to everybody under 17 and limit behavioral advertising to teens. They introduced an App Store Accountability Act, which would push stores like Apple and Google to handle age verification at the store level. And then there were other bills that were also folded in around transparency and platform reporting requirements. So this was a whole package of bills that was introduced. KOSA of course grabbed the headline, and for good reason, but it's one piece of a larger puzzle.
Ben Whitelaw:Interesting. I want to kind of zoom in on the App Store Accountability Act, because this is this idea of app stores playing a greater role in the kind of trust and safety stack, as they would say. It's something that's been around for a while, and I've spoken to a few significant trust and safety heads of different platforms, and they have the view that this is kind of where we were gonna end up in many ways. It's the kinda least-worst situation for platforms and for users in many ways. And we're seeing a raft of platforms give support to this idea. Meta, Snap, X have previously said they'd support the idea. This week we've had Pinterest and also Grindr throw their weight behind it in many ways. What's your sense of, yeah, this idea and the traction that's building behind it? Is this a kind of more logical endpoint for keeping children safe?
Vaishnavi J:I really love the idea of interventions that are deeper in the stack, because it also just democratizes the process of age assurance. It means that it's not just the big platforms that are gonna be getting this age signal; any app that is on these stores now has access to these signals. So I think that's really promising. But I think, you know, age assurance is not a problem that we're going to just solve; we're gonna have to really continue being creative about it. So this doesn't really account for shared accounts or shared devices, which is so common with young people, and that's the demographic that this is supposed to be targeting. It also doesn't really address the question, which hasn't come up yet but will eventually, around who's responsible if a bad age signal causes a harmful experience on the product. So if, you know, the age signal isn't accurate and a child experiences harm on a product as a result, is it the app store's responsibility because they provided a wrong age signal? Is it the product's responsibility? We haven't had that scenario come up yet, but I do think we'll need to address that in great detail, particularly in the US legal system, before we can definitively say, yes, this is the right approach or the wrong approach. But overall I think it's great that there are more options for age assurance in the ecosystem. We put out an Age Assurance Implementation Handbook back in July. That was a very practical, uh, how-to implementation guide for companies. And it fit in with our prediction from earlier this year where we said, regardless of what you think philosophically about age verification, it's here to stay. So as a company, you're gonna have to figure out what your strategy is. The strategy will vary for every company, but you're gonna have to have one. And we got a lot of pushback back in January, I will say, we got a lot of pushback when that piece came out. But I think, you know, here we are, and whether it's app store or individual platform-based verification, it's here to stay. We're gonna have to figure out how to deal with it.
Ben Whitelaw:Yeah. And to be clear, the kind of pushback was around the kind of principles behind age verification. You know, people were saying that actually we shouldn't be going down this path at all.
Vaishnavi J:Yeah, I think the principles, but also a pushback against the supposed inevitability, the idea being that no, this is still an open conversation. And it's really not. I mean, you know, at the A ICT forum two weeks ago, Malaysia announced that it's planning to introduce under-16 requirements for social media, the EU has released a non-binding resolution to similar effect, Australia's social media age restriction comes into effect in six days. Uh, so this is not something that can be avoided anymore. And I think the folks who understood that earlier have had more time to prepare and get themselves ready for this moment.
Ben Whitelaw:Yeah, I want to come onto the Australian social media ban specifically, and some interesting kind of stories we're seeing in the run-up to that deadline next week. But just kind of lay out for listeners why the likes of the App Store and the Google Play Store are kind of better placed to verify users' ages. You know, listeners might think, well, they have a lot of my information already. What additional information might I be asked for? What does that enable or potentially prevent? Can you just explain that in real clear terms for folks?
Vaishnavi J:So, I think one of the most common arguments about app store verification is that it's a centralized source of verification that only has to happen once, or at least it has to happen far fewer times than if you had to verify your age across a number of different apps. There's also a perception, I think, of improved privacy, because you don't have 20 different apps knowing your age; it's just one provider that knows your age and is distributing that signal to a variety of different products. That said, I think the privacy risk and security risk for those app stores is significant. When you centralize a source of data, you really are creating a honeypot that becomes very attractive for adversarial actors to try and hack and break. So it'll be interesting to see how that issue is addressed. And then of course, I think the issue around liability that we talked about is still not ironed out. No one really has a clear answer as to what it would mean for Google and Apple if that scenario we discussed were to play out. But I think, um, the other argument that's made in favor of app store verification is that it reduces the burden on smaller companies. It also creates a more interoperable ecosystem. And actually the Age Verification Providers Association just put out a great comparison of the different interoperability systems that are currently emerging in this space, including if app store verification were to become more widespread. You know, we can share that in the notes if you think it'll be helpful. Uh, but it's a really great resource.
Ben Whitelaw:Brilliant. My only kind of thought on this, again, was just the potential lock-in that it creates for users. You know? Yes, there is an element of encouraging kind of competition for smaller providers who maybe can't have trust and safety teams in-house or don't have the resources to do that. But as a user, once you've committed to an operating system or device and a particular play store, you know, you're even more unlikely, I think, to change or to shift once you've already given further ID or personal details that obviously kinda then connect to all of the apps you have. Do you think that's a risk, or do you think that's something, again, that we're just having to suck up and it's part of the situation now?
Vaishnavi J:I think the starting premise of significant switching between operating systems, I think we would need to really look into whether that's the case for young people as it stands. Uh, generally speaking, you know, folks tend to stick with the operating systems that they have been using. So perhaps I think that could happen, but we would need to really look at the data around what is the delta of inelasticity now compared to previously, when, you know, maybe they weren't switching around as much anyway. But I think the other question here is that we're talking so much about mobile and app store verification. We're not really talking about any other form of internet access, whether that's web browser access, whether that's console access. We know, of course, for example, console is a big part of how people are accessing their games and even their entertainment now. It kind of comes back to a point I made earlier, which is that everyone's looking for that silver bullet to fix age verification when really, like, we are gonna have to adopt a Swiss cheese method if you really want a good, robust, privacy-preserving mechanism. So app store verification is great. You do have to think about non-app-store verification options. Um, you do have to think about interoperability, you know, in different forms of interoperability. All that needs to come together for, I think, like a really robust strategy.
Ben Whitelaw:Okay, great. Well, Swiss cheese isn't my favorite, but I can see that being a way forward. Um.
Vaishnavi J:Me neither. You know, I don't understand. Sorry, I don't mean to have any Swiss cheese slander on this podcast, but, like, I do not get the appeal of Swiss cheese.
Ben Whitelaw:No, maybe that's why it's a good analogy. Um, we'll go on to talk a bit about the social media ban in Australia. As of next Wednesday, 10 platforms designated by the eSafety Commission will need to take 'reasonable steps', inverted commas, to stop children having accounts. This is a story that Mike and I have talked about for over a year now. It's amazing that the kind of deadline is finally here. I kind of saw a story this week, published by Crikey, an Australian outlet, that talked about how some of the platforms are not very happy about the fact that other apps that have started to see increasing numbers of users are not obviously bound by the social media ban, smaller apps that are potentially more risky and have trust and safety systems and processes that are not as well established. And it's an interesting dilemma here. You know, it's a bit like the kind of whack-a-mole situation that we often talk about on the podcast. I was looking at one of the apps in particular, called Yo, never heard of it before. Quite a strange beast in the fact that it's based in the US, got a London founder, doesn't seem to have any connection with Australia. But I found in the terms and conditions no mention of Australia, no mention of the kind of under-16 limit that obviously is about to come into play. Maybe that will change on Wednesday. I would love your thoughts on this as somebody who worked in platforms before, but the way that you kind of highlight or report any violation of terms on the app is to email a contact@ email address. You're making a face now, which I feel like listeners would benefit from seeing, but it doesn't feel like the best kind of trust and safety approach for an app that has got to number one in the app store this week. Is the eSafety Commission, in solving one problem, creating another?
Vaishnavi J:I mean, email. Yeah, I think my face said everything right there. Um, email tickets to resolve concerns, it's just, you know... But I think this is exactly the point: when it comes to age verification and age assurance, you need to have a really good strategy that's not going to just push young people to darker, unmoderated, more ephemeral apps that are not going to have the same safety provisions in there as, you know, some of the more established players like Instagram or Snap or Twitch, which has just come into the remit of the ban. And I think it's an ongoing struggle, because you're effectively always having to play catch-up and really trying to understand, well, where are teens and children going to next? What's the next platform? Um, how prevalent are they on those platforms? What measures are those platforms taking? It's a tough problem to solve. I don't envy the job of a regulator that has to address it. But it is going to be an ongoing monitoring regime, and I believe now, if I'm not mistaken, Yo actually has to do a self-assessment and is most likely to, you know, come under this regime as well.
Ben Whitelaw:Right, and as you said earlier, what's really interesting is that other countries are also looking at a similar ban for under-sixteens, despite actually not knowing the impact or the effect of it. It hasn't gone into place in Australia yet, and yet there is this seemingly kind of clamour to, I guess, follow suit, because presumably there are political reasons to do so. You know, there is a push, as you say, to protect children, because of all the kind of societal issues that we've seen in the news. So I think that's definitely one to follow. We'll see how it pans out on Wednesday and beyond. But I wanted to kind of switch gears now and talk a bit about a paper that we both read in the last couple of weeks. It actually came out prior to last week's episode, prior to last Friday, where we didn't have an episode because of Thanksgiving. And that's a piece of research about the role of trust and safety software vendors in platform governance. Now, we talked about KOSA, we've talked about the social media ban in Australia. These are kind of part of a wave of regulation that has created a shift to platforms being more compliance-focused, and where there is a kind of focus on putting processes in place that mean harm can be measured and reported on to regulators. And so for kind of listeners who don't know, there is this period when legislation passes where there is a process of interpreting what needs to be done, how it needs to be done, right? And you've been part of that process before in companies. This paper kind of unpacks that a bit more and zooms in on the companies who are often responsible for doing that measuring and that reporting. And trust and safety vendors are a kind of growing group of software providers who include everything from age verification and age assurance technology to AI classifiers for hate speech or particular harms. And it's a growing area, a growing industry, and not a lot of focus is put on it, right? Just wondered what you thought about the kind of growing ecosystem that we've seen over the last, let's say, five years.
Vaishnavi J:Yeah. You know, we do, um, this annual forecasting of kind of what are the big trends we anticipate for the next year, and we're working on the 2026 trends right now. And just as a sneak peek, I think the role of these vendors and the vendor ecosystem is going to be a really big one. So much attention has been placed directly on the platforms, but platforms increasingly are using and engaging with vendors to be in compliance with a lot of these regulatory requirements. So then who moderates the moderators, I suppose, is the big question. What are the standards that need to be in place for vendors now? One of the things that we do as part of Vys is help companies with identifying the right vendor for their needs, helping them integrate that vendor into their overall product development process. And a recurring challenge that we face is finding exactly the right metrics by which to measure the efficacy of vendors. There are so many certification programs and badges that folks can acquire; it's not clear which ones are truly the gold standard, or regarded as the gold standard by policymakers. And in the absence of more thoughtfulness around the vendor ecosystem, I worry that what we'll see is a race to the bottom, where we're simply going with the cheapest option rather than the option that truly meets the requirements in both the letter and spirit of the law. I really liked this paper. I thought it was a really good overview. It focused primarily on content moderation vendors, which I think is a complex enough space, but actually the vendor ecosystem is so much broader than that. You've got vendors that are doing age verification, risk scoring, that are doing behavioral threat detection, that are providing red teaming services, and even now doing a lot of AI evaluations. So it's actually a much wider ecosystem of vendors that we're talking about. And if it's this complex from this paper, when talking about content moderation vendors, just imagine what that means once you expand to all those other categories of vendors.
Ben Whitelaw:Yeah, totally. That's exactly my thought as well. Lucas Wright, who wrote the paper, he's a PhD student at Cornell University in the States, and he gets to this position of, I think, seeing the importance of trust and safety vendors and seeing them as a really important part of this network of safety actors by doing interviews with the companies himself. As you mentioned, he speaks to, uh, 12, I think. He also goes to TrustCon, which is the big event in San Francisco, and he observes and speaks to lots of people there. And then he also analyzes the way that these companies have kind of marketed themselves and how they talk about safety, and he observes a few really interesting things that I hadn't really thought about and which I think are vital going forward, and actually might be part of your predictions, Vaishnavi, as well. But one of them is the fact that there is a kind of undue influence of these trust and safety vendors because of who has founded them and who works at them. So a lot of the vendors have been founded by people who worked at Google, who worked at Meta, who worked at platforms, who've done trust and safety for a long time, and they therefore kind of carry what he calls a unique legitimacy, which is something that I hadn't really thought through. It's that social capital that you get as somebody who is at the kind of very early stages of a trust and safety company, you know, much like yourself, who's got vast experience in this and therefore can have an influence on how kind of the wider industry works. And that risk scoring element you talked about is part of that, you know, in trying to help platforms comply with regulation. He says there are lots of vendors who are adopting this risk score system, which isn't something that's actually called out in legislation. It's not something that's kinda mandated, but is the way that they, as experienced professionals, have interpreted that and sought to help companies do so. And so I guess when you are in this industry, and you, you know, you speak to people as often as I do, you kind of come to see that actually you take for granted that lots of people are very, uh, experienced and they're, you know, doing things for the right reason. I guess this is a slightly more kind of stepped-back analysis of these vendors, pointing out some dynamics that I hadn't really kind of considered, which I think is really good.
Vaishnavi J:Oh, absolutely. You know, the risk scoring example is a great one. We've had to score the scores, you know, as consultants, when we're trying to find the right vendors for a particular project. And then when we go into kind of understanding the methodology, understanding the factors that went into creating those different scores, we realize there are no consistent frameworks across industry. You know, this really is a pretty nascent space. One thing that Lucas points out in the paper is that it's unclear how many of these vendors will really survive in the long run, how much budget or market capital there really is for them in the industry. And so we're kind of gonna have to watch it play out. I think, I'm really fascinated, we have already seen, um, in the last few years some consolidation in the vendor ecosystem, you know, folks buying up one another or merging to form sort of these bigger, more comprehensive practices. It remains to be seen if that's gonna continue being the trajectory, because ultimately you've got, you know, a couple of large players who can kind of support the market with some really significant contracts. But then, as Lucas points out, what happens to the vendors that are looking to all the mid and smaller sized companies with ever-shrinking T&S budgets? How are they going to really be able to sustain one another? So I'm really curious to see where that goes. Um, right now, we're, you know, we're only a consultancy. We don't provide any tooling or proprietary product. But it's something I think about a lot, if we ever decided to go down the vendor route.
Ben Whitelaw:Yeah, indeed. And I suppose one of the things that also kind of stood out for me is the fact that in 2024, when Lucas was doing this research and doing these interviews, there was a sense that demand hadn't really started, 'cause I guess the Online Safety Act and the Digital Services Act, those hadn't necessarily trickled down to the platforms that were going to need these vendors. I wonder, based upon the conversation we've had today, whether that's actually changed quite a lot in the last 18 months or so, and whether that demand will be much higher. I've certainly seen many of these trust and safety vendors be much more aggressive in some of their marketing, be much more kind of open in terms of what they do. As you say, they've expanded the harms they're able to help platforms mitigate, with AI being a really big example there. And, you know, also, they all have some sort of consultancy element to them as well, so helping people, helping platforms think through what they need to do. So fascinating kind of actors within the space.
Vaishnavi J:And I think what's really exciting for me, you know, having been in-house and then now sort of in this consulting advisory space, um, so in between proper in-house work and a full-on vendor, I think it's really great to see the ecosystem expanding, because it's really valuable for platforms to be working with specialized teams that have done this over and over again. You know, we are a pure advisory firm, but it's really valuable when folks get to work with us because they get to benefit from all the work we've done across a range of different companies and we can move really fast. And I think that kind of value is only scaled when you're looking at a vendor or a tooling solution that has to be, you know, effective for a variety of different companies. So it's a really good development. I'm curious to see what the ecosystem looks like over the next two to three years.
Ben Whitelaw:Yeah, indeed. And I very much recommend listeners going to have a read of that. It's very, very readable, not too long, a fascinating look into a new area of the trust and safety world. Let's finish, Vaishnavi, on a story that's, I think, a bit more upbeat, something that is a story we don't get to see a lot of on Ctrl-Alt-Speech, and that's a safety team being profiled and held up as maybe something that we can be hopeful about. This is a story from the Verge about a nine-person team at Anthropic that has been profiled and, I guess, lauded in some ways for the work it's done within the company over the last three to four years. The team is called the Societal Impacts team, and they do a range of kind of research and reporting across the work that Anthropic does, essentially looking at problems that may crop up in the use of AI. And so one of the jobs that the team has is to tell the company, the broader company of 2,000 staff, some inconvenient truths about AI that perhaps they don't want to hear, and they go around kind of digging into data, user data, uh, primarily about what Claude, its primary foundational model, is used for, and then share that back with the Anthropic leadership. And I like this primarily because it's a very kind of human profile, a very detailed profile about a team that is obviously doing some very important work within a big company at a time where there is a lot of pressure for safety within these kind of big AI companies particularly. And it just made me kind of think about how there is scope within the media for profiles and coverage like this. I wondered if, you know, when you were at Meta, did you ever have any profiles like this? Were you ever hoping to be covered in the media like this? Wondered what you thought about this piece as well.
Vaishnavi J:I thought it was a really great piece to read. I love human interest stories. I love the story about one of them apparently having a tungsten cube at their desk. And I was like, that's a great resource to have just to help you ideate better. I love this. Um, so yeah, I thought it was a really, um, it also reminded me, not to date myself too much, but it reminded me of the very optimistic mindset that we all had about, like, the frankly like magic and potential of technology in the 2010s. So I really loved reading that story. The article, you know, itself mentions this is a team of nine people sitting within an org of 3,000. And I think what really occurred to me was, well, who are your cross-functional stakeholders? How do you need to liaise with them? Where does your voice play a role in this much bigger matrixed organization, and with leadership? So, you know, I think they're doing great work, but we can't look, I think, and if anything, if the last 10 years have taught us anything, it is that we can't look to any one team within an organization to save us all. It is going to have to be, like, an organization-wide cultural commitment to safety, to safeguards. Um, so I was really excited to read the story, and now I just wanna know more about all the other teams at Anthropic.
Ben Whitelaw:Yeah, definitely. And, you know, the fact that this team works closely with the safety team at Anthropic, the fact that it's kind of empowered to go digging, is a really interesting, I suppose, narrative in the context of other tech companies who perhaps have been criticized for burying research or, you know, pushing back against insights that have come up within the company, and who have, you know, then had that information leaked and seen the kind of consequences of that. So I think it's a really interesting piece. As you say, big companies like Anthropic have had responsible AI teams or safety teams in the past and then got rid of them, and so who knows how long these guys will last, but it's a great insight into how they work right now.
Vaishnavi J:I really enjoyed it. It was a great story.
Ben Whitelaw:Well, I think that brings us to the end of today's episode, Vaishnavi. I'm so, so grateful for having you on the podcast. You brought a wealth of expertise and insights about the child safety world, and I'm really glad we finally got you on here. Um, it's been great to have you.
Vaishnavi J:Thank you so much. I'm such a fan. I'm such such a nerdy fan of this podcast, so it's such a delight to be here. I'm so glad we could finally make this happen. Thank you for having me.
Ben Whitelaw:I hope it won't be the last time. And yeah, thanks to all the listeners for tuning in. Thanks to all the publications that we've used today: Platforms & Society, which published Lucas's research, Crikey, the Verge, and other news outlets as well. We couldn't do this podcast without them. Go and read them, as well as listening to us too. And we'll be back next week. Mike will be in the chair and we'll look forward to being back on your feeds. Thanks very much everyone. Take care.
Announcer:Thanks for listening to Ctrl-Alt-Speech. Subscribe now to get our weekly episodes as soon as they're released. If your company or organization is interested in sponsoring the podcast, contact us by visiting ctrlaltspeech.com. That's C-T-R-L alt speech dot com.