aiEDU Studios
aiEDU Studios is a podcast from the team at The AI Education Project.
Each week, a new guest joins us for a deep-dive discussion about the ever-changing world of AI, technology, K-12 education, and other topics that will impact the next generation of the American workforce and social fabric.
Learn more about aiEDU at https://www.aiEDU.org
Teaching kids how to use AI responsibly
Are you worried your teen is spending more time with a chatbot than with real friends?
On this episode, we spoke with child/adolescent psychiatrists Dr. Jeremy Chapman and Dr. Ashvin Sood to learn how AI shows up in teens’ social lives and schoolwork, and how parents can respond with clarity instead of panic. Together we map out a simple framework: curiosity first, judgment last, and functionality as the North Star for family decisions.
We explored why adolescence depends on honest, sometimes uncomfortable feedback — the kind you don’t get when a bot always agrees. You’ll hear practical ways to ask better questions about prompts, privacy, and purpose:
- "What do you ask your chatbot?"
- "How does it make you feel afterward?"
- "Did it help you prepare for a real conversation, or replace one?"
Both clinicians outlined red flags of AI overuse (falling grades, dropped activities, hostility when access is limited, and late‑night screen time pushing sleep off a cliff) and offered calm, early interventions that rebuild routines without power struggles.
We also got specific about safety in a world where parental controls lag behind fast‑moving features. You’ll learn why young people should avoid using AI for companionship, how to set clear boundaries on data-sharing, and how to implement reasonable guardrails like teaching teens to verify information and keep real relationships at the center.
By the end of the episode, you’ll have conversation scripts, monitoring cues, and a balanced mindset to make AI a helpful coach rather than a stand‑in for human connection.
My overall advice for parents about social and emotional well-being and AI in your kids is to be curious and try not to be judgmental. And if there are prompts that they're putting in that really are kind of questionable or make you concerned, that's when you get an expert involved and say, hey, it seems like you're trying to use that for help and assistance. Maybe we can get a pair of human eyes on this and they can help you a little bit more.
SPEAKER_03:But this is really your area. Like, what are you hearing in your practice? Are you seeing similar questions to the ones you received, you know, around things like smartphones and social media? Like, what are parents talking to you about?
SPEAKER_00:Yeah, I mean, every worry you can think about and the worries that existed with smartphones and social media, I think are tenfold with AI, but similar concerns. I don't want my kids on their phones at dinner. I don't want to miss real life connections, figuring out whether they have strong social relationships so they can handle this. Does it boost them? Does it actually undermine their learning? Does it undermine their connections?
SPEAKER_03:Yeah. And, you know, we tell folks all the time that in a lot of ways, the old lessons apply, you know, that we've already learned from other technology, you know, whether it's being smart and safe about what you share and, you know, what personal information you put out onto the internet. Um, and also thinking about how much time you're actually spending with technology, and also how you feel when you don't have it. And so that's what we're talking about today: how to stay safe and healthy when using AI tools for learning and exploration, on this episode of Raising Kids in the Age of AI, a podcast from aiEDU Studios created in collaboration with Google. I'm Alex Kotran, founder and CEO of aiEDU, a nonprofit helping students thrive in a world where AI is everywhere.
SPEAKER_00:And I'm Dr. Aliza Pressman, developmental psychologist and host of the podcast Raising Good Humans. On this episode, we're focusing on best practices for using AI tools for school, for hobbies, in career prep, all the things we've been talking about so far in good health and as safely as possible. This is really so important. So I'm excited to have some tips to share with parents, families, and educators today.
SPEAKER_03:Exactly. And that's why I'm so excited to talk to our guests for this episode. It's two colleagues, child psychiatrists, Dr. Jeremy Chapman and Dr. Ashvin Sood.
SPEAKER_02:I am Jeremy Chapman. I'm a child and adolescent psychiatrist and medical director at the SSM Health Treffert Center and Treffert Studios in Wisconsin.
SPEAKER_04:Hi everyone, my name is Dr. Ashvin Sood. I'm one of the board-certified child and adolescent psychiatrists who work closely with Dr. Jeremy Chapman at the SSM Health Treffert Studios. I focus on digital media and the intersection of mental health and all the digital media that reaches our kids today.
SPEAKER_00:Jeremy started a program called Psych Child while in residency because he noticed his patients and their parents were using tech, but they seemed unaware of the potential mental health risks that come with tech use. Currently, aside from seeing patients, he also makes videos for social media, showing parents and caregivers how to make their kids' digital lives part of the conversation about health and wellness. He and Ashvin connected over a shared interest in the intersection of psychiatry, technology, and education, and they've been working together ever since.
SPEAKER_03:They're going to share what they're seeing in their practices, how they recommend parents embrace the world their kids are living in, and how to make conversation about their digital lives. Jeremy started out by pointing out some of the ways that AI is like other big movements we've seen in technology, like ultra-popular video game worlds and social media.
SPEAKER_02:You know, what is now known as AI is making its way into our offices. It's already in our patients' lives, but it's not the type of thing that's even on the radar of the parents or the kids unless you were to directly ask about it. And that's how I think social media was at some point. Parents can see very evidently how it affects their kids' lives. They're on it from 6 p.m. till midnight. You know, you can't stop them from being on it. Like they can't avoid it. And I think that's starting to happen with things like chatbots and AI interactions with kids, but it's not even at the point where anyone's thinking about it. And Osh and myself and some of our colleagues have published some guidelines for how clinicians can ask children about screen time activities. Here are the standard things you should ask. You should know to ask what Fortnite is. And depending on what they say, you should know to ask them if they play on battle royale mode, if they play duos, what their skins are. These are meaningful bits of conversation that are relevant to our ability to do our job and extract meaningful information from the kids who we talk to. And it's the same with AI: we're going to have to have a shared dialect, a basic working understanding of the different elements of this as clinicians, so that we can, A, get meaningful answers out of the kids who we talk to, and B, connect with them, develop rapport with them, show them that we know some of what they do. And then additionally, just better understand, like intuitively, what kinds of questions to ask about it. That goes for the clinicians and for parents and teachers as well.
SPEAKER_04:When you think about teenagers and adolescents, the first thing you have to think about is developmental stages. Teens or adolescents are usually in the developmental stage of identity versus role confusion. How do you belong to a group? And if you don't belong to a group, are you a loner? And they want to connect with each other. Teenagers are using AI as a source of connection. We as adults might use it for information or research, right? But outside of homework, which is what we kind of assume teens use it for, it actually is being used for chatbots and supportive tools and validation. 70% of teens in the United States have used a chatbot. And 50% of that population, so 20 million teens, use it regularly. And what are they using it for? To connect. The issue with using a chatbot as your main social connection is that it gets rid of the challenge that helps you grow. So what they're going to most likely get is a chatbot that supports and validates them, but doesn't challenge them, doesn't establish boundaries, doesn't establish that it has other chatbots it has to talk to and can't talk to you all the time. And if we say it's all good and this is how relationships should be, kids will then avoid difficult relationships or avoid relationships where a conflict might occur. And that is what relationships are. Relationships and friendships change, right? We are talking about some of the difficult stuff, the growing pains of being an adolescent. I want them to recognize what are red flags in a relationship and what are green flags. That's the point of having these types of dynamic relationships in real life: to have the pushes and pulls and be able to abandon things that are really bad, right? And to also hold on to things where you're like, I want that, right? That's a good thing to have. Those are the intangibles that we're not at yet with chatbots.
SPEAKER_00:I want to really emphasize what they were talking about when it comes to the challenges of social relationships, and how you figure out how to be in this world with other people when you experience and have exposure to those challenges, the small bumps in the road, so that when a big bump happens, it's not the end of days. And those are really important times for skill development. So if we could spotlight that part of this conversation, I think it could really help parents understand why this is a problem. Because even if you could imagine having a chatbot as a friend, you've still lived your whole life understanding what relationships feel like before you got there. And so imagining that at the time of individuation, that teenagers are learning relationships this way, it could be a real problem.
SPEAKER_03:Right. And one of the other elements of AI that's still in progress is what the parental controls are going to look like. And so, you know, we've seen in the same vein, you know, for technologies like smartphones and social media, um, parents having opportunities to control screen time. And, you know, these are really important tools that families like to use when they help their kids enter digital spaces. But while AI companies are evolving to meet parents and kids where they are, kids are always going to be testing the boundaries of AI. And that includes seeing how far they can push the AI using clever prompts. And, you know, we don't always know exactly what kind of information they're going to get in return. And that experimenting with the boundaries of LLMs is part of the reason why parent involvement and moderation is even more important.
SPEAKER_02:It's kind of a wild west out there with AI and chatbots still. And so it's not like you just have to outsmart one kid. You have to outsmart every single kid, which is impossible. And that gets back to why we tell parents all the time: stop acting like you can push away and eradicate technology that you don't like. It's not going to work. It is here. Accept that fact. Now, how can we make it a healthy experience? How can we train kids to be wise about it and make good decisions around it? And frankly, the most direct way to do that is to let them use it and to have conversations with them about it instead of pretending like you can create a world in which it doesn't exist.
SPEAKER_03:Jeremy, you know, he clearly is very passionate about this, but ultimately he's articulating this idea of AI readiness, which is that it doesn't really matter whether you're excited about or afraid of AI, you still need to talk to your kids about it.
SPEAKER_04:We're writing this paper right now on how to talk to teens about their chatbot experience. And the basic thing is, number one, parents need to be curious and non-judgmental. Don't eye roll, right? Empathize. And if they aren't opening up with you, be like, I'm really curious, what are your friends using their chatbots for? Make it relational, right? And then you can have an open and honest conversation about how do you keep them safe, right? So prompt generation is probably the best place to start. Like, what do you ask your chatbot? Um, is your chatbot supportive? Is it validating? Is it nice to talk to? You need to kind of then take a step into the uncomfortable with the kid. Sometimes people use it for stuff that is kind of hard to talk about. That could be like getting out of a risky situation. Has the chatbot offered you or your friends any advice on that type of stuff? Just curious how it's helping you there, or is it hurting?
SPEAKER_02:How does it make you feel when you talk to it? It comes down to the trust that your child has in you and the respect that they have for you and their communication with you. I'd say, look, you are going to use chatbots to ask for social advice. I know you are, I assume you are, and people around you will be too. Let me just remind you that the chatbot does not actually know you or the context of your life or your day-to-day relationships. It's also not a real person. It does not have feelings or personal experiences. So basically, all I can do is at least educate my kid.
SPEAKER_03:As with anything, kids can end up becoming dependent on chatbots. And that's where we're going to start to see the negative effects from overuse. Ashvin shares some signs to look out for and how to identify when overreliance has become a problem.
SPEAKER_04:When we think about dependency and emotional dependency on AI, you want to think about functionality, right? So everyone uses it, right? But are you scrolling for an hour a day or 13 hours a day? Right. So with kids, what parents need to be aware of, and this is what we guide them on in clinic, is: is it getting in the way of grades? Are your A-B students turning into C-D students? Second, does the kid get super aggressive when you pull it away? Right? Like punch a hole in the wall, get frustrated, yell expletives, we see all of it, right? If that thing has been taken away. Third, are they no longer doing extracurriculars, right? Sports, robotics, it doesn't matter, if they've just stopped that and they're using their phone all the time. Fourth, is their screen time going up so much that it's getting in the way of sleep, right? We see teens, and they typically are going to push their sleep back later and wake up later. This is a normal developmental stage. The issue is that tech is almost a catalyst to push your sleep even further back, right? So all of this comes back to functionality. So my big thing about all of these chatbots and everything is: great tool, but moderation is key. Absolutely key.
SPEAKER_00:And maybe we have a longer runway to work with. So I say that almost to help everybody take a deep breath and feel a little bit less out of control in this incredibly intimidating space. But what I do think keeps coming up is we have to know our kids, and we have to know what their particular signs of poor functioning are, what their particular signs of maladaptive behavior are. Like he was saying, if an A-B student starts to turn into a C-D student, that's a flag. But if you already were a C-D student, you have to look for other flags. It just reminds you to understand your child's temperament and have a close relationship with them.
SPEAKER_03:Yeah, and I think, you know, for parents who do understand their kids, I think the missing ingredient is understanding these tools, not just what they're capable of, but kind of how they work. Um, and one thing that I want to share with you, and legitimately, I'm so curious for your feedback: I was playing around with one of the language models, and it was just telling really bad jokes. And I said, okay, I have a joke, it's super funny. Okay, so why did the parrot cross the road? To get to the other branch. And the language model is like, in all capitals, oh my god, laughing emoji, that's so good. I didn't see the punchline coming. The other branch?? Genius, 10 out of 10. The parrot's out here doing stand-up now. And, you know, I was a bit of a class clown when I was in school. And I'm not a very funny person, but I had enough experience, like, you know, telling bad jokes and not getting a response to kind of condition myself to make sure that I have something that's actually funny before I'm going to go and just blurt it out. And I just don't understand how you're going to get to that feedback loop of, you know, developing wit, if the only feedback you're getting is always just positive. And for me, that's a way of illustrating this point about why it's actually not necessarily a good thing for these models to be, you know, so accommodating and so accessible, which is something you hear a lot. It's like, oh, I feel like I'm really being listened to.
SPEAKER_00:I mean, I think the most important thing you said... by the way, you are funny. Um, and I'm not a chatbot saying that. But I think that, first of all, of course, that's how you figured it out growing up. And so that's a really good point, and it highlights all of these things that we're talking about, about how we develop, we need feedback. And it can't just be positive feedback. And another thing that I wanted to mention, picking up from Ashvin and Jeremy talking about some of the riskier things that kids are asking about and hearing back from AI chatbots, is no matter how much we think we're talking to our kids about this, you just don't know what you don't know. And so it's important to just find out what is out there, because if you think that you've figured out ChatGPT or other large language models, which like I've just learned about, we really just need to hear these stories, have our jaws drop in private, figure out how we feel about it. And then we can talk to our kids in that more curious, less judgmental way, because you want to come in being able to handle whatever it is that's being said to you so that your kids are less nervous to tell you things.
SPEAKER_03:You're providing some really helpful general advice, which is just, like, pay attention, be really mindful about, you know, any changes that you see. For parents who are asking, like, oh, what is the exact list of AI tools that are bad and what are the ones that are good? You know, I think it's just so nuanced. Um, there are going to be a lot of situations where I could imagine kids really doing beneficial and positive things, even for their relationship development. You know, you could actually engage with a chatbot to coach you ahead of a really tough conversation. I feel like that's the kind of thing that feels okay, right? Like if you're someone who's really nervous to have a discussion with your friend about how you're not getting invited to the movies, practicing with AI actually seems like a really clever, you know, use of the technology. I think it's different if you're not going to the movies so that you can stay at home and talk to your chatbot. And you might not know the difference unless you ask.
SPEAKER_00:Yeah, I mean, I think this keeps going back to we need to be curious and we need to check in a lot. And actually, I think asking young people to teach us how to get smarter about this would really benefit everyone. I'm also thinking about the kids, to your point, like the ones who aren't going to the movies, maybe I'm a little bit more concerned about how they might be using it than I am with one who says how they're using it and can show me some of the ways that they tried out a conversation.
SPEAKER_03:You talked about this idea of being a little bit under the radar, like secretive. I mean, is that the kind of thing parents should be looking out for?
SPEAKER_00:I mean, look, we're supposed to develop into beings that want a little more privacy as we get older, but secrets are different and they can be insidious and they can be dangerous. And so I think there's a difference with telling our kids like, I respect your privacy, but I do need to kind of know the lay of the land. And this is the same thing we say about social media and screens in general, is that like if it's on a digital device, it's not private. So we need to be able to have access to help keep our kids safe as they're developing into humans that are going to be off in the world on their own. Whereas if it were a diary and you're handwriting it and it's like no one else can see it, then that is privacy that's totally appropriate.
SPEAKER_03:Yeah. It's like if the diary was talking back to you, suddenly it's a little different.
SPEAKER_00:Yeah. And you know, I'm just thinking about an adolescent brain and how there does come a point in adolescence where they do become more skeptical. And that's to me a better time than a younger brain that isn't at the ready to sort of question what is in front of them. And so I think we have to start these conversations early, but then just imagining how much you can expect of a young person in terms of remembering this is technology, not something that cares about you. And understanding that something that's agreeing with you and validating everything you're saying is not necessarily like a good friend or a good companion. And I would be remiss if I did not say that the research is a hundred percent clear, which is, you know, unusual, that young people should not be using AI for companionship. So I think we can give our kids basic guidance. And when I say kids, I really mean adolescents. So I think we have a lot more control if we're paying close attention and if we're, you know, keeping devices. But I'm curious what beyond like, don't give personal information about physical features or you know, address or social security, but are there other things that need to be protected when we're talking to AI? Or is that one of those things where the technology can't keep up with the young mind?
SPEAKER_03:I think protection is really hard. I was on the other end of the debate around video games and just, you know, browsing online. Um, you know, my parents were the ones trying to rein it in. It's very hard to prevent kids from accessing the internet. And there are guardrails that really should be put in place, but I think you also can't totally sterilize their online experience. And I think the same is probably going to apply with AI. And so there's a balance here: it's really important to push for, you know, establishing those safeguards and those guardrails, um, but not to focus exclusively on that, because, you know, parents are always going to have a role, whether it's just ensuring that kids are being safe or sort of building the resilience and the mentality that kids are going to need. And that's why I keep coming back to, you know, staying curious and continuing to have the conversation with your kids, and maybe with, you know, other parents, um, so that you're, you know, at least not too many steps behind, even if you're always a few steps behind. Thank you so much to everybody listening. Join us next week for the final episode of this season. We're going to hear from real teenagers, and they're going to let us in on their stories of how they really use AI. And there are some pretty surprising stories in there, but overall, I think the kids might just be alright.
SPEAKER_01:So I inserted the text message to the AI and I asked it if it sounds passive aggressive or how I can fix my tone and make it seem more like I wanted to resolve the conflict instead of like fueling it further.
SPEAKER_00:Tune in next week to hear how the kids are really using AI right now.
SPEAKER_03:Find out where AI will take us and future generations next on raising kids in the age of AI. Until then, don't forget to follow the podcast on Spotify, Apple Podcasts, YouTube, or wherever you listen so you don't miss an episode.
SPEAKER_00:And we want to hear from you. Take a minute to leave us a rating and review on your podcast player of choice. Your feedback is important to us. Raising Kids in the Age of AI is a podcast from aiEDU Studios in collaboration with Google. It's produced by Kaleidoscope. For Kaleidoscope, the executive producers are Kate Osborne and Lizzie Jacobs. Our lead producer is Molly Sosha, with production assistance from Irene Bantiguay and additional production from Louisa Tucker. Our video editor is Ilya Magazanen, and our theme song and music were composed by Kyle Murdoch, who also mixed the episode for us. See you next time.