What's Up with Tech?

Designing Trust: How Age Verification Protects Kids And Platforms

Evan Kirstel


Interested in being a guest? Email us at admin@evankirstel.com

How do you protect teenagers online without turning every app into an ID checkpoint?

That's the question governments, platforms, and parents are all wrestling with right now — and most of the answers so far have been blunt, binary, and broken.

We sat down with our guest from TELUS Digital to go beyond the headlines and into the actual design challenge: what age verification gets right, what it gets dangerously wrong, and how to build systems that protect young people without making privacy feel like a casualty.

Here's what we unpacked:

The smartest approach isn't one-size-fits-all. It's layered and proportionate, as the sketch after this list illustrates:

→ **Low-risk spaces** (forums, general content) can rely on self-declaration and behavioral signals
→ **Medium-risk spaces** use facial age estimation — a quick confidence range, image deleted immediately, no data stored
→ **High-risk spaces** (adult content, dating, gambling) justify stronger verification with human-in-the-loop review
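As a rough illustration of this proportionate routing, here is a minimal sketch; the tier names and method labels are hypothetical, and real signal sets and policies would be platform-specific:

```python
from enum import Enum

class Risk(Enum):
    LOW = 1      # forums, general content
    MEDIUM = 2   # social features, messaging
    HIGH = 3     # adult content, dating, gambling

def required_assurance(risk: Risk) -> str:
    """Map a context's risk tier to a proportionate age-assurance method."""
    if risk is Risk.LOW:
        return "self_declaration_plus_behavioral_signals"
    if risk is Risk.MEDIUM:
        return "facial_age_estimation"  # image deleted right after the check
    return "strong_verification_with_human_in_the_loop"

print(required_assurance(Risk.MEDIUM))  # facial_age_estimation
```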

The architecture matters as much as the intent. Poor design is how safety becomes surveillance.

Transparency is the trust engine. Users need to know *why* their data is requested, *how* it's processed, and *what* they can do when the system gets it wrong. Appeals aren't a nice-to-have — they're the difference between a system people accept and one they route around.

We also got into the real trade-offs nobody talks about enough: accuracy, privacy, inclusion, and the very real risk that blanket bans — like those emerging in Australia, Spain, and across the EU — backfire without safer defaults, stronger parental tools, and genuine digital literacy investment.

Our guest walks through how TELUS Digital supports clients across the full stack: content moderation, fraud prevention, bias testing, account security, age estimation models, and verification systems built to correct mistakes at scale.

And we close on where this is all heading — zero-knowledge proofs, privacy-preserving credentials, and portable age attestations that raise protections while *reducing* data exposure. The technology is ahead of the policy. The question is whether platforms will lead or wait to be forced.
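To make that closing idea concrete: a privacy-preserving age credential lets a platform check a predicate like "over 18" without ever seeing a birthdate or a name. The sketch below is a deliberate simplification: it uses a plain Ed25519 signature from the `cryptography` package rather than an actual zero-knowledge proof, and every claim field is made up. It only illustrates the data-minimization shape of such a scheme.

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

issuer_key = Ed25519PrivateKey.generate()   # held by a trusted issuer
issuer_public = issuer_key.public_key()     # distributed to platforms

# The signed claim carries only the predicate, not a birthdate or identity.
claim = b"subject=anon-7f3a;predicate=age>=18"
attestation = issuer_key.sign(claim)        # portable across platforms

def verify(claim: bytes, attestation: bytes) -> bool:
    """Platform-side check: accept the predicate without learning anything else."""
    try:
        issuer_public.verify(attestation, claim)
        return True
    except InvalidSignature:
        return False

print(verify(claim, attestation))  # True
```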

If you're building products that touch teenagers, this conversation is for you.


More at https://linktr.ee/EvanKirstel

SPEAKER_00

Hey everyone, really excited for this chat today. The digital world is changing fast, and age verification is becoming a major focus for platforms, parents, brands, and regulators. And we have a true expert on this topic from TELUS Digital. Labisha, how are you?

SPEAKER_01

Hi, Evan. I'm very good. How are you today?

SPEAKER_00

I'm doing great. Thanks for joining. Of course, many of us know TELUS as a global communications giant. But what about TELUS Digital? How do you describe the team and its role, and what are you up to these days?

SPEAKER_01

Thanks for having me. TELUS Digital is the customer experience transformation partner for our clients; our aim is to help them win their customers' trust in the moments that matter. Customer experience management is a big part of our business, but the part I lead, trust and safety, is also a significant pillar, and we have an AI transformation business and digital solutions as well. Our trust and safety work includes content moderation, fraud prevention, and account security; depending on the type of client we work with, all of those solutions may apply, or just parts of them. Ultimately it's about helping keep our clients' platforms safe, and their users and communities safe, and we're really proud to work in this field.

SPEAKER_00

Yeah, brilliant mission. One thing that's top of the news every day, it seems, is age verification in one way or another. Beyond the obvious issue of screen time and reducing it, what is changing online that's making age verification so critical around the world?

SPEAKER_01

I think there are a few factors, really. You mentioned it yourself: kids are spending a lot more time in front of screens. But the most significant change is that the risk profile of the internet has changed. It's not just the number of hours. Kids are not just consuming content; they are participating in highly interactive systems that respond to them in real time: social feeds, live streaming, chats, social gaming, and more recently, of course, AI companions. These systems are optimized for engagement and personalization, so they can shape what a young person sees, and they are always encouraging you to do something next. That's one main shift I see. A second is the blending of content, commerce, and community. You might be watching a clip, then get nudged to join a server, and you end up in a paid creator-economy environment. So age is not just a demographic detail in this situation; it affects what protections are appropriate and what kinds of interactions should be permitted, which makes age data a really significant issue in this space. And the third, which we're seeing a lot more of right now, is regulatory expectations. There's a clear move toward duty-of-care thinking and safety by design: platforms are expected to anticipate risk and build protections into their products. Laws are coming in many countries; the EU Digital Services Act was one of the big examples, but we're seeing this in other countries as well. To put it simply, if a platform cannot reasonably determine whether a user is a child, a teen, or an adult, it's very hard to apply the right safety controls and protections.

SPEAKER_00

Yeah, there's a lot to unpack there. So when it comes to protecting kids, what's the right way to approach age verification? How does the technology work in practice?

SPEAKER_01

Yeah, it's become a hot topic: how much of this is a technology-driven solution, and how much is a society-driven solution? The best way to think about it, I believe, is as a trust and safety or online safety design decision rather than just a compliance hurdle. From there you can apply a proportionality principle: different risks at different levels require different levels of assurance. High-risk contexts like adult content, dating, or gambling really justify very strong verification processes. In a lower-risk context, say some sort of educational service, you can avoid collecting overly sensitive data. In practice, what this means for most platforms is a layered approach: you use low-friction methods like self-declaration and account behavior patterns for the low-risk situations, you escalate when the risk crosses certain thresholds, and then you provide a fallback. Transparency is a big thing here. You want to give people ways to appeal, because age systems, especially age estimation systems, are not always going to get things right. Transparency, in my mind, is the one thing that really makes or breaks trust in this space. People need to understand what's being asked of them, what happens with their data, and what recourse they have, such as appeals if they are wrongly blocked. In a way, being wrongly blocked from communicating with someone online can be as harmful for some people as not having access to the service in the first place. One thing we see through many of our clients is that the human escalation step is very much necessary: when systems make mistakes, humans can step in and verify. Humans also help build the models that work at scale for age estimation and verification. But ultimately, for situations that require human review, you need that expertise.
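A sketch of the escalate-and-fall-back logic described above, with made-up confidence thresholds and labels; the key property is that a low-confidence estimate routes to human review rather than forcing a hard decision, and any block comes with an appeal path:

```python
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    low: int           # lower bound of the estimated age range
    high: int          # upper bound
    confidence: float  # model confidence in [0, 1]

MIN_AGE = 18            # illustrative cutoff
CONFIDENCE_FLOOR = 0.9  # hypothetical escalation threshold

def decide(est: AgeEstimate) -> str:
    if est.confidence < CONFIDENCE_FLOOR:
        return "escalate_to_human_review"  # fallback, not a hard decision
    if est.low >= MIN_AGE:
        return "allow"
    if est.high < MIN_AGE:
        return "block_with_appeal_path"    # user can contest a wrong call
    return "escalate_to_human_review"      # range straddles the cutoff

print(decide(AgeEstimate(low=16, high=21, confidence=0.95)))  # escalate_to_human_review
```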

SPEAKER_00

Interesting. Tell us about the move Australia made to ban under-16s from social media. That's kind of a blockbuster example, and many countries are looking at it. Can you tell us how it works, and what it tells us about where things are headed?

SPEAKER_01

Yeah, and there was another example just a couple of days ago: Spain has moved to do something very similar, and Spain is actually introducing a kind of personal responsibility for the people who run platforms as well. Ireland has had discussions around this recently, too. In general, many countries in the EU were debating whether this should be decided at the EU level rather than country by country, but it looks like some countries don't want to wait. Moves like Australia's show how seriously governments are thinking about child safety online at this point; it reflects a global trend. These measures also highlight trade-offs. Enforcement at scale requires robust age assurance, and, as I mentioned earlier, proportionality then raises questions around accuracy, privacy, inclusion, and what it all means for user experience, before you even get to the social questions. Online communication platforms, which many teens use, have become a major part of our lives, of how people communicate and how they learn, and hard cutoffs sometimes end up with unintended consequences; we've seen some of those stories in Australia. So from my perspective, the direction of travel is toward a blended model. There's no single silver bullet: it's a combination of age assurance where needed, safer defaults, product design solutions in certain situations, and stronger parental tooling; parental controls, in my mind, will become a lot more prominent. What's probably lagging at this point is digital literacy and education, and that will be a big part of it. So the responsibility really is shared. And I think regulators increasingly expect platforms to demonstrate that they've thought through foreseeable risks and implemented solutions in a proportionate way, consistent with their laws, including some of the more prominent ones like the Australian law you mentioned and the UK's online safety approach.

SPEAKER_00

Interesting. For someone new to this space like myself (my children are adults, so it's not something I've had to worry about), what are some common misunderstandings or myths you hear again and again about age verification, in the industry or generally?

SPEAKER_01

I think people sometimes think about it in a zero-sum way. One example: age verification doesn't necessarily mean ID verification. Age verification isn't really about who someone is; it's about knowing how to protect them in a certain context. For the most part, platforms don't need to care who someone is, but they do need to know approximately what age group they belong to, so they know what types of protections to implement. The goal is not maximum certainty but appropriate certainty for the risk involved in that situation. Some of the misconceptions really come down to transparency: do I know, does my parent know, what data is collected, and so on. From a technical standpoint, age estimation most often requires a selfie check, but those images are almost instantly deleted once the check is completed; the technology estimates based on certain features, and in those solutions the images are deleted. It's only the more complex ID verification solutions where there's a bigger privacy concern. And sometimes the framing is that it's either safety or privacy. In my mind it shouldn't be that way, and it isn't: privacy and safety are not opposites. We end up in those discussions because badly designed systems make them feel like opposites, and that's where people sometimes get misconceptions. Again, there is a shared duty of care here. The honest answer is that no single group can own the entire problem, because no single group controls the entire system. We can talk a little more about that aspect if you'd like.
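A minimal sketch of the estimate-then-delete flow described here; `estimate_age_range` is a dummy stand-in for a real vendor model, and the point is simply that only the range and confidence survive the check, never the image or an identity:

```python
def estimate_age_range(image: bytes) -> dict:
    """Placeholder for a vendor age-estimation model (dummy output)."""
    return {"low": 19, "high": 24, "confidence": 0.93}

def check_age_with_selfie(selfie: bytes) -> dict:
    """Run facial age estimation, then discard the image immediately."""
    result = estimate_age_range(selfie)
    del selfie  # drop our reference; a real service would also scrub buffers/storage
    return {
        "age_low": result["low"],
        "age_high": result["high"],
        "confidence": result["confidence"],
        # no image, no identity: just "what age range do you appear to be?"
    }

print(check_age_with_selfie(b"...raw image bytes..."))
```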

SPEAKER_00

Yeah. So obviously platforms, parents, and governments all share responsibility and need to work together. What's the role of TELUS Digital in all of this? What exactly do you do with your clients and partners?

SPEAKER_01

Our role in the social media or gaming ecosystem cuts across the user experience, if you will. We may moderate the content users are seeing, or in many situations are not seeing because they shouldn't see it. We may investigate accounts that are reported or flagged for certain types of behavior, or misbehavior. And in verification specifically, we may be that last step that needs human intervention to ensure somebody is of a certain age to access a platform. So it works across the ecosystem. We also have a data solutions part of the business where we help clients build models across many different industries. Specifically for safety, those could be detection models that prevent harmful behaviors, or models in the age verification and age estimation space, which we help build and fine-tune so that their accuracy rises over time. But as I said, and as you'll hear leaders from major social media platforms often say themselves, no system is 100% accurate. The scale at which they operate means that even the small sliver that isn't accurate can end up in front of our community safety representatives working on a particular client's platform, where they might review that piece of content, review that account, or, in this case, help confirm the age of a person.

SPEAKER_00

Interesting. So AI, I guess, is a double-edged sword in this area. You have AI bots increasingly targeting children, sometimes with quite distressing outcomes. On the other hand, behind the scenes, AI is an amazing tool for protecting children. What's the cutting edge in what we can do with AI, and where do you see things going over the next year or two in protecting kids, given all the latest models and advancements?

SPEAKER_01

Right. I think there are a couple of ways to think about it. One is that a lot of the technology used in this space isn't necessarily AI-based, but the behaviors it's trying to protect children from may be AI-powered, like personalization. There is no single age verification technology; it's an ecosystem, with many methods and different trade-offs, and AI-powered models are one of them. One category is behavioral and technical signals, sometimes called gray data. That can include patterns in how somebody navigates a service, device and account signals, the velocity of actions, and so on, and it can help a platform decide whether somebody is potentially underage. It's low-friction and less invasive than submitting documents. Then age estimation is the next layer, most commonly facial age estimation. The key distinction, as I mentioned, is that it isn't "who are you?" but "what age range do you appear to be?" The image is briefly processed and, in best practice, never stored, and the result is a confidence level, not an identity. But this must be paired with bias testing, and I'll touch on that in a second, because it's an important piece of where AI plays a role here. Then you have the higher-assurance methods: verified credentials or ID-based checks. These are appropriate when accuracy is critical or the law requires it, and the privacy and security stakes are higher there. The best practice in the industry, and not just in social and gaming but in financial services too, is to minimize retention and minimize reuse, because people are understandably concerned about how their data is stored and who has access to it. So there's increasing interest in privacy-preserving options. You may have heard the phrase "zero-knowledge proof," which essentially means the underlying data is never shared onward or preserved. Reusable age credentials are also emerging, almost like a digital wallet that's transferable across platforms; in the EU there has been discussion of an EU digital wallet usable across platforms and held by a third party rather than the platform itself. And to come back to bias in these systems, this is where we often help our clients. You need to make sure the technology is fair and accurate across different users. It shouldn't be the case that a teen in France or Germany has their age estimated much more easily than a teen in Bangladesh, for example, or somewhere in Africa. So we're talking about introducing high-accuracy datasets that help with age estimation and remove these biases. The way to address this is really end-to-end governance: representative training data covering diverse ages, skin tones, geographies, and all the relevant conditions.
Then there's ongoing testing: you break error rates down into really specific user groups so you can refine the model and make sure accuracy is rising in those specific groups, not just on average. Then there's thresholding and confidence design: when confidence is low, you don't force a hard decision; you can trigger a fallback. That's where we might come in, with a process where a person's case ends up in front of a representative who confirms who they are, or that they are a certain age, depending on the process. Appeals are incredibly important, and the human-in-the-loop element is there. This is where, over the past couple of decades, we've built strong expertise to help our clients maintain a high level of accuracy where systems might not be at that point yet.
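A small sketch of the per-group error-rate breakdown described above, with an invented record format; the point is to surface accuracy per demographic slice instead of one average number:

```python
from collections import defaultdict

# Each record: (group label, model got the age band right?) - illustrative data only.
results = [
    ("FR_teen", True), ("FR_teen", True), ("FR_teen", False),
    ("BD_teen", True), ("BD_teen", False), ("BD_teen", False),
]

def error_rate_by_group(records):
    """Break error rates down by user group rather than averaging overall."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        errors[group] += 0 if correct else 1
    return {g: errors[g] / totals[g] for g in totals}

print(error_rate_by_group(results))
# e.g. {'FR_teen': 0.33, 'BD_teen': 0.67} -> the gap is the bias to close
```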

SPEAKER_00

Brilliant. Wow, so much hard work behind the scenes to make this "easy and seamless." That's amazing. So I'm headed out to Mobile World Congress shortly, and this will be one of many hot topics. Where can people meet TELUS Digital? And what are you excited about over the next few weeks and months? There's a lot going on.

SPEAKER_01

There is, yeah. You mentioned Mobile World Congress; we'll have a strong presence there with many of our folks, so you can stop by our booth. Also, next month there's a trust and safety summit in London on the 24th and 25th of March. We'll be there, speaking about AI safety and security and about how we've built an automated red-teaming solution for enterprises, to ensure the AI-powered bots they're putting into production are safe. You can come say hi there. We also participate in many gaming conferences in the US, and we'll be at TrustCon in July in San Francisco as well. And you can find me on LinkedIn; reach out with any questions, and I'm more than happy to talk through what solutions we have and how we might be able to help.

SPEAKER_00

Brilliant. Well, you're an amazing spokesperson for this important topic. And here's to success; I think we're all rooting for you.

SPEAKER_01

Thank you. Thanks, Evan, I really appreciate you having me here. It's an important topic. And if I may close with this: I'm really passionate about the trust and safety space. I've worked in it for more than 15 years, on platforms and now at TELUS Digital, where we help large platforms. I think we're moving toward a world where age awareness is part of responsible digital design, just like security or accessibility. The challenge now is doing it in a way that protects kids without normalizing bad practices. That balance is where trust will be won or lost, and it's where we work hard to help our clients build and maintain trust with their online communities.

SPEAKER_00

Brilliant. Well, they're lucky to have you. Very exciting, meaningful work, and thanks for joining. Thank you. And thanks, everyone, for listening, watching, and sharing the episode. Also check out our TV show, techimpact.tv, on Bloomberg and Fox Business. Thanks, everyone. Take care.