Raising Kids in the Age of AI
A new podcast from The AI Education Project (aiEDU) created in collaboration with Google that explores how AI is shaping the future of learning, hosted by aiEDU's Alex Kotran and Dr. Aliza Pressman, a developmental psychologist and bestselling author.
Answering parents' questions about AI
Are you worried your kids might let AI do their thinking for them?
On this episode, Alex and Dr. Aliza dig into the questions parents ask most and share a practical roadmap for raising curious, confident, and discerning kids who can use AI without losing their edge. Whether it's developing everyday habits to build critical thinking or setting clear boundaries for schoolwork, we show how to help your kids become AI ready — fluent with AI tools, appropriately skeptical, and proud of their human advantage.
We start by unpacking what AI readiness looks like at home and in class:
- Using AI as a tutor, not a shortcut
- Asking for hints and feedback instead of final answers
- Testing understanding by explaining concepts in their own words
From there, the conversation shifts to the ethics of AI and cheating, and why expectations should be set by teachers up front. Cheating isn't new, but trust matters, and class assignments should make clear when AI is and isn't welcome.
We also look at AI-generated misinformation and deepfakes. For this, we offer simple, repeatable checks that kids can use right away:
- Pause and ask what would make this true.
- Verify the info through a second source.
- Look for who benefits if you believe it.
Finally, we talk timing and development: when to introduce AI, how to avoid leapfrogging core skills, and why creative success still depends on taste and craft. You can’t speed-run taste — hours of practice, feedback, and iteration teach judgment that AI can’t replace.
aiEDU: The AI Education Project
Dr. Aliza Pressman
As a parent of kids ages 12 and 13, how can I encourage my kids to think critically?
SPEAKER_04:I would like to ensure that my children don't become overreliant on AI.
SPEAKER_06:I do have a concern that AI could improve so much it becomes difficult for a child to discern what is real and what is not.
SPEAKER_03:Today we're going to address some of those AI quandaries you have. We'll hear from a few parents who shared their burning questions about their kids using AI with us.
SPEAKER_08:We're going to share our answers to these questions, but we'll also pull in some of this season's guests for help on a question or two. So keep listening. You might have your own questions answered on this episode of Raising Kids in the Age of AI, a podcast from aiEDU Studios created in collaboration with Google. I'm Alex Kotran, founder and CEO of aiEDU, a nonprofit helping to make sure that students thrive in a world where AI is everywhere.
SPEAKER_03:And I'm Dr. Aliza Pressman, developmental psychologist and host of the podcast Raising Good Humans. On this episode, we're answering some of the most popular questions and common concerns parents have around their kids using AI. So let's dig right in. I am ready, I think.
SPEAKER_08:Okay, here's the first question.
SPEAKER_01:I'm worried about them becoming overly dependent on AI for critical thinking, problem solving, and creativity. How can I encourage my kids to think critically, problem solve, and not constantly rely on AI?
SPEAKER_08:The first thing I have to say in answering this question is: we don't really know. There's one school of thought that says technology has always replaced aspects of the way we learn. We used to write with pen and paper, and when we were switching to computers, a lot of people worried that losing the skill of handwriting would get in the way of students' ability to remember things, because you remember things better if you write them down. Then there's another school of thinking, which goes something like: AI is so helpful that it can become a crutch, and rather than work through complicated problems, students will reflexively turn to the AI the minute they run into any kind of challenge. It's probably going to be a mix of both. The inverse of this question is: what do you look for to make sure that your kid isn't becoming too reliant on AI? And what we're talking about here are the characteristics of what we call AI readiness. It's something that we're really focused on at aiEDU, and that's because being AI ready is more than just using AI tools. Being AI ready means being able to use AI tools, yes, but also being appropriately skeptical of what AI produces and being able to talk about what you've learned in the process of using AI. That means understanding that AI can be biased, and that it can make mistakes, sometimes really big mistakes. And most importantly, being AI ready means understanding your human advantage and making full use of your own firepower. So it's important that you stop and consider your perspective, the critical and creative thinking that makes you uniquely human and uniquely you. Critical thinking is obviously a huge part of this.
And Aliza, I know this is something that you've been talking about a lot with parents. So what's your advice on how we help kids build those critical thinking skills and make sure they're keeping those muscles flexed and strong?
SPEAKER_03:Well, I think there are ways to develop critical thinking skills outside the context of using AI, so that by the time you're facing AI, your brain is already primed to ask those questions. Basically, critical thinking allows you to step back, look at a problem from different dimensions, and evaluate. And those are all things you can do every single day from when kids are young: doing pretend play activities, reading stories and pausing to ask questions, naming what's going on in different circumstances, or reflecting back when your kids tell you stories about their day or challenges they've had, coming back to them, naming it, asking them questions about it, and asking them to come up with different solutions. All of that can be done in playful ways, or around the dinner table. And it helps when you finally do end up sitting there facing these questions with AI; it's already part of your thinking skills.
SPEAKER_08:All right, let's hear the next parent question.
SPEAKER_06:I'm very concerned about the ethics or morality of using AI in schoolwork. I would never hide it from them or encourage them not to use it, but I'm definitely being very thoughtful about how I would encourage them to use it.
SPEAKER_03:I mean, the first thought that I have around that is: ask your teacher. Every teacher is going to have a different idea about what they expect when they give you homework or assignments. So be open about it. And you can say this to them: if you're asking yourself, is this cheating? Does this feel too easy? Does this feel like I'm skipping a step? Go ask your teacher. There's nothing wrong with keeping in good, close communication with your educators. Different teachers are going to have different ideas, so each class is an opportunity to figure that out.
SPEAKER_08:Yeah, this is still a work in progress. At the end of the day, my mom was a math teacher, and I say this a lot: when I went home from school and needed help on my math homework, I had mom GPT. Not every kid had my mom, or a dad who could help with their algebra homework. So I think in many cases it's quite powerful that kids are going to have this outlet to turn to when they do need help moving along. And the key is, my mom would never do my math homework for me. She walked me through it and gave me just enough hints to help me push past some of the challenges I was having. That's a good way of thinking about how to use AI ethically for schoolwork: not as something to just give you the answers, but as a tool for getting unstuck. So you could ask yourself: am I using AI in a Socratic way, the same way a parent or a teacher or a tutor might support me? Is it asking me questions and pointing out where I need to keep working? Did I come to the final answers myself? And most importantly, could I do another problem like this one on my own, or am I pretty much where I started in terms of understanding? Answering these questions honestly will give you a good idea of whether you're making good use of the tool or whether it's becoming a crutch.
SPEAKER_03:Another question we hear a lot about when it comes to AI and cheating is group projects. This happens constantly: if you're in a group project and some of the kids want to use AI and other kids don't feel comfortable with it, how do you communicate that? There are just so many things that get opened up with all of this. And I actually don't know if, for quite a while, we'll have an understanding of whether or not cheating has increased in the era of AI. It's certainly easier; we know that. But Alex, how do you address this with the students, teachers, and school administrators who work with you and aiEDU? Is using AI to write an essay or study for a test cheating? Of course, it is cheating if it writes the essay. But if you're having an iterative conversation like we've talked about, or if you're using it to spit back questions to help you study for a test, is that cheating? The question is where you draw the line. And I will be very curious to see: is there ultimately more cheating? Or is there more available cheating because the lines are still too blurry, so the adults don't really know how to define it for the kids anymore?
SPEAKER_08:I think it's also really helpful to hear directly from educators who are tackling this in real time. So why don't we listen to Nick Mate's take? He's been not just asking this question, but trying to figure out how to answer it.
SPEAKER_07:This cheating, or I guess if we want to call it cheating, the use of LLMs, this technology, to kind of get past assignments. I don't think I'd say it's worse. I'd say they're using the tool a lot for what it's being marketed for. There are a lot of students that use it to make flashcards, to ask them questions; they'll feed it all the notes for a class and say, give me possible test questions on these. There are students that use it, air quotes, the right way. And there are students that are going to use it to just get the answers out. But look, I was taking classes in 2002, and we had our calculators in our engineering classes, and people would program the formulas into their calculators that they were supposed to memorize. There's nothing new under the sun on some of this stuff. At the end of the day, if you put a barrier in front of people, which is kind of how a lot of folks see education, then folks are going to figure out how to overcome it.
SPEAKER_08:Nick is sort of of the school that the future is here and we kind of have to get with the program, because we can't really resist this tide. He also works with college students, so it's a slightly different demographic. I think as you go earlier and earlier in school, the dynamics change a little bit. But the sentiment is there, and Aliza, I've heard this from you time and again: this is a conversation about trust. Students are going to reflect back the way that we approach this challenge, and if we approach it with an assumption of bad intent, that's a really demoralizing place to start the conversation. So I think it's helpful to take the approach that Nick explained previously in the podcast: don't spend time focusing on creating a hack-proof assignment. Instead, be really clear about how you want students to work with AI and when it's appropriate, and also build assignments where AI isn't present at all or just couldn't be useful.
SPEAKER_03:The majority of the time, even if it's easy to cheat, kids aren't going to cheat. That's not their natural go-to place, because they've learned over time that there is a moral code and it doesn't feel comfortable for them. And particularly when they don't have sophisticated thinking, it's actually harder to cheat. So we do need to think bigger picture: what's going on at home, what's going on for them personally, what's going on in the relationship with the teacher, in addition to, yes, we've got to design classrooms given this new world. Teaching your children how to be savvy about AI, and to understand that it can hallucinate and provide misinformation, seems to be on a lot of parents' minds. One parent writes: How can I teach my child to question what AI tells them, to spot misinformation or bias?
SPEAKER_08:I had this experience with my own parents, actually. My dad and I were sitting at the dinner table, and he said, yeah, I've been getting this ad, and it's Tom Hanks trying to sell some diabetes medication. It's so weird; I don't understand why Tom Hanks is trying to sell this random medication. And I used it as an opportunity to have this exact conversation. I said, oh, that's interesting. When you were watching it, did you stop to consider whether it was real? And my dad said, I mean, it must have been real. And I said, oh no, let me show you. So I showed him some examples of deepfakes, and he looked at the deepfakes, then went back to the ad and looked at it again, and he had this moment where everything came down to earth all at once for him. He said something to the effect of, wow, I guess I can't really trust anything I see on the internet anymore. And I said, yeah, I don't know if it's that you can't trust it, but I think you should always ask the question: why would Tom Hanks be advertising a diabetes medication? Does that even make sense? There are going to be these opportunities: when you encounter a deepfake, treat it as a moment to have a dialogue with your kids.
SPEAKER_03:Right. And just to underline what you're saying, we can remind our young people to stop, think, and ask themselves: does this check out with everything else I know about this topic or this person? It may also warrant double-checking anything that seems dubious or surprising, or even just something that feels interesting enough to be worth repeating. As a rule, that's when you do a second search or ask another chatbot: did Tom Hanks really promote a diabetes medication?
SPEAKER_08:And also be aware that in many cases our kiddos are a lot more aware of deepfakes than adults are. So I think sometimes parents also need to wrap their heads around what these capabilities even look like, because we're almost at the point where AI-generated images, and now video too, are indistinguishable.
SPEAKER_03:It's so funny, because I often go to my kids when I can't tell if something is real. And that's happened with grandparents as well in our house. So I'm actually less worried about the kids. If you've had just a couple of conversations, they're pretty much better at it than we are. But of course, we have to just keep talking with them about it.
SPEAKER_08:I was attending a webinar where someone was giving an AI 101, and she basically said, well, because of deepfakes, you just can't trust anything you see on the internet anymore, full stop. And I thought, hold on a second, that's actually kind of scary. What does it mean to tell a child, you can't trust anything? I think it has to be more nuanced. It's more about starting with questions and curiosity. At the end of the day, there are going to be reliable sources; it's just a matter of making sure you're not taking everything at face value. But what's your take on that? How do we prevent students from feeling like, if I can't trust anything, what is real anymore? That feels like a really heady thing to grapple with as a kid.
SPEAKER_03:Yeah, that's a pretty big, scary thing to have to reckon with. I think it's the difference between cynicism and skepticism. We want them to be skeptics. We want them to question, but we don't want them to think there's no hope and there's no point; that's more where cynicism comes in. So I'd rather help them figure out: who do you trust, and why do you trust them? That way they have a number of sources they feel safe around, and questions at the ready, so they know how to quickly assess whether or not they want to look into something.
SPEAKER_08:Yeah. That's such a beautiful way to frame it: cynicism versus skepticism. I've been in conversations with people who are understandably and rightfully concerned about some of the ethical risks of AI, things like algorithmic bias. There's a bunch of things to be legitimately concerned about. But there's sometimes a sentiment of, well, AI is just bad, so we shouldn't be talking to kids about it, because look at all these bad things AI is capable of. I worry about that, because at the end of the day, I think the genie is out of the bottle. Understanding the risks is one thing, but we have to find a way to give students agency, and I think agency really requires a posture of curiosity. All right, let's go to our final question for this episode.
SPEAKER_02:Now, at what point do you implement it in a curriculum? Because I'm already thinking, if my child wants to go into editing or something, we already know AI can speed up this process, right? So I guess I would ask, when is the appropriate time? What is the plan, and how do you introduce it? I think that's where a lot of parents struggle.
SPEAKER_03:So you want to think about your individual child: whether they're more curious or a little too trusting and literal, whether they're eager to try new things, how their impulse control is. And there are some developmental principles to keep in mind. Is your child at the point where they can interact with AI and ask questions of it while being skeptical of the answers and pushing back a little? That's super important. And also, you don't want to jump over the productive struggle of trying to figure out something new. So I wouldn't want to hand a powerful AI-backed tool to a young person as a way of leapfrogging learning, so that they can learn how to do something too quickly. We want them to learn the slow way. You want kids to be able to do a task for themselves before you give them a tool that makes it effortless. It's just like what we do with calculators. We could give them to seven-year-olds when they're learning addition, but we don't. We want them to do a lot of work first so that they understand what's going on. Once you learn the concept, then the mechanics can become more automated. Alex, what do you think about the part of the question that's more specific to working in a creative field like editing, where AI tools are becoming quite commonplace?
SPEAKER_08:This is such a good question, and it's kind of hard. Alex Moulton, who did the brand design for aiEDU and aiEDU Studios, I was asking him this question because I'd gotten questions from parents whose kids are really artistic and want to become artists. I said, what's your advice to them? And I started with: at your agency, do they look for AI skills, or what are the skills they're looking for? And he said, we'll teach them how to use the tools. The hardest thing is: do you have taste? Literally, do you have taste? And then I asked him, well, how do you achieve that? And he said, I don't think you can speed-run taste. You literally have to go through the process of learning to draw, learning to paint, studying art, going to museums, spending hours on a painting or a drawing, and then it sucks, and you have to go and do it all over again. I think that's an example of: if you have a kid who's interested in art, and all they're doing is typing prompts in, pressing enter, and then pasting their AI art on the walls, that would raise some red flags for me. I'd have a conversation: that's cool that you're using AI to generate images; can it be inspiration for something that you do? It comes back to what you were saying, Aliza. We want to learn the concepts behind things. We want to learn how to learn. And that means even if we have tools that let us go beyond specific, maybe mundane aspects of learning, it's still valuable to go through that process. Shantanu talked about this with me: the value of learning for its own sake.
SPEAKER_05:You know, when I was in college, one of my professors was Professor Bose, the person who founded the speaker company, Bose Speakers. And I remember one of the first things he said in class in the first week. He was teaching this acoustics class, and people asked, well, when am I going to use this? Am I going to do Fourier transforms? What's the point of this content? And he gave a really good lecture where he said: this class, and engineering generally, and the content that you're learning here, it teaches you how to think. It teaches you how to see a problem you've never seen before and know that you can be confident and be persistent and make your way through that problem.
SPEAKER_08:The TL;DR is, there's this question of: are you learning to learn? If you feel like your kids are actually learning, that should give you a lot of confidence. And if your spidey senses are saying, I'm not sure there's actually learning happening here, they might just be going through the motions, that's an opening for you to have a conversation. Again, approach it with curiosity, to repeat something Aliza said in an earlier episode about the way parents can engage in this conversation.
SPEAKER_03:Also, when I think about what it means to be a human, that's what we're talking about. Our kids are wired to learn, to be little scientists, and to really want to understand how the world works. And our job is to help them remain excited about that and not get in the way of it. Thank you so much for listening. Join us again next week as we discuss what it means to be citizens of an AI world. We'll be joined by Philip Colligan, CEO of the Raspberry Pi Foundation.
SPEAKER_00:In the future, we're looking at finance decisions, healthcare decisions, law and order, criminal justice decisions being automated more and more. And so it's a fundamental issue of rights that all young people have the literacy they need to be able to interrogate those systems.
SPEAKER_08:We're going to go international and get a glimpse of what AI readiness looks like globally. Find out where AI will take us and future generations next on Raising Kids in the Age of AI, a podcast from aiEDU Studios created in collaboration with Google. Until then, don't forget to follow the podcast on Spotify, Apple Podcasts, YouTube, or wherever you listen so you don't miss an episode.
SPEAKER_03:Take a minute to leave us a rating and review on your podcast player of choice. Your feedback is important to us. Raising Kids in the Age of AI is a podcast by aiEDU Studios created in collaboration with Google. It's produced by Kaleidoscope. For Kaleidoscope, the executive producers are Kate Osborne and Lizzie Jacobs. Our lead producer is Molly Sosha, with production assistance by Irene Bantiguay. Our video editor is Ilya Magazanen, and our theme song and music were composed by Kyle Murdoch, who also mixed the episode for us. See you next time.