Tuesday Talks!

Raising Thinkers In The Age Of AI: A Practical Guide For Parents And Teachers

Dr. Tiffany | Season 3, Episode 27


What if the real danger isn’t AI itself, but raising kids who can’t think without it? We dive straight into that tension and map out a clear, humane path for parents and teachers who want powerful tools without sacrificing grit, curiosity, or integrity. Rather than banning technology and fueling shortcuts in the shadows, we show how to build AI literacy—knowing when to use it, when not to, how to question it, and how to fold its output into genuine, student-owned learning.

By the end, you’ll have a shared language for digital responsibility at home and at school, along with concrete steps to protect effort, stamina, and voice. AI should amplify a child’s thinking, not replace it. If you’re ready to swap fear for a framework and raise thinkers who can thrive in the world that is, press play, share this conversation with a friend, and leave a quick review to tell us what guardrails you’ll try next.

Tuesday Talks: Real conversations sparking real change in education.

Links to all episodes ➡️ https://linktr.ee/drtiffanyslp
Like, comment, subscribe & share!

Connect with us via email at info@ourwordsmatterllc.com!

Tuesday Talks is hosted by Dr. Tiffany. She has been a Speech/Language Pathologist for 20 years. She's also a speaker and educational consultant. Dr. Tiffany hosts webinars and in-person workshops for teachers and parents.

Book Dr. Tiffany as a speaker for your teachers, parent groups and professional development sessions! Visit: www.OurWordsMatterConsulting.com

SPEAKER_00

Welcome to Tuesday Talks, your educational podcast helping parents become strong advocates for their kids and teachers make big impacts in the classroom. Here we go. Hey, hey, hey. Welcome back for another Tuesday Talks. Thank you so much for taking time out to join me today for another episode. I want to talk about a big fear today. When we talk about our kids using AI, I propose that the real fear is not AI itself. The real fear is raising a kid who cannot think without it. Just think about that for a moment. The real fear is not AI. The real fear is raising a child who cannot think without it. And so it got me thinking: how do we create kids who use AI responsibly? How do we prevent dependence? How do we protect critical thinking, creativity, and integrity while still embracing powerful tools like AI? When I broke down this worry, I really thought about it in these ways. As a parent, and I speak for myself too, I'm really worried about mental laziness, reduced problem-solving stamina, emotional discomfort with struggle, and the shortcut culture our kids are a part of right now. They want to take the fastest route to the end result possible, especially when it's a non-preferred activity or subject area, right? So AI doesn't create dependency, but a lack of structure around AI does. If you caught part one of our AI series, we talked about how parents feel about AI, using it in study mode, and not giving kids answers. In part two today, we're gonna move from feelings to framework. And I want to introduce what I think of as a powerful concept: AI is a literacy issue. Think about that. Just like internet literacy, social media literacy, and research literacy, we need AI literacy.
And what that means is knowing how to use it, knowing when to use it, knowing when not to use it, questioning its accuracy, understanding its bias, and using it to enhance thinking rather than replace thinking. I also teach an evening graduate school course. In my graduate course, I don't ban AI, because I think it's a valid tool, but I do teach my graduate students how to cite it, how to critique it, how to refine it. Don't just be lazy and copy and paste it into an assignment that you're turning in. Don't do that, right? Have some integrity when you use it. Use it to spark ideas, use it to corral all of your thoughts about a particular topic, but don't just copy and paste it. I've had some interesting experiences with graduate students turning in work where they have not edited or refined anything that AI gave them. They just turned in what AI said and went with it. When you as the educator can teach your students how to cite AI, critique it, and refine it, you are positioning yourself as a forward thinker, not a gatekeeper who says, no, you can't use AI, it's bad, don't use it. That gatekeeper stance sets you up to be reactive. And really, we know the more we ban things from students and even our own kids, the more they want to use them. So I think about it like this when I talk to my son: think first, type second. Think first, type second. Before my son can use AI, and he typically uses it for math because that is a non-preferred subject area for him, I make sure that he attempts the math problem, he identifies what the confusing part is for him, and he asks AI a very specific question. And I allow him to use AI in study mode so that he can get walked through the process of arriving at the answer to the math question.
So after he has done that pre-work, he's attempted the problem, he's identified what's confusing, he's formulated a specific question for AI, I allow him to use AI. And then once he has the answer, he has to explain it back out loud to me. He does another problem, just like the one AI walked him through, independently. And then together we check for understanding. So it's not just a free-for-all, right? There's a framework to it. It's not just go use it however you want. I have instilled in him the way that I think it's appropriate for him to use it. And of course, in your home, in your classroom, you do the same, right? You set these boundaries for them so that they are not just using it all willy-nilly, but using it in a more refined way. And this is what I expressed to him in those moments: AI doesn't get to replace your brain. It can support it, but AI does not get to replace your brain. I don't want to raise a kid who is dependent on AI. I don't want to raise a child who can't think for themselves, who needs to put every little thing into a prompt to get guidance on the next step. I want to raise a kid who knows how to use this tool wisely and properly. So even in refining this process with him, there are lots of reminders, right? Like with anything, kids are gonna need reminders for you to be like, hey, remember we said you're gonna try the problem first. You're not gonna ask AI first, you're gonna try it. And you also need to tell me what is confusing about it. What can't you figure out on your own? Go back through your notes, through examples that you had in class, whatever the case may be, but you need to be able to explain to me what's confusing about it. And then I want to know what you're gonna ask AI, because we cannot just hand the whole thing over to AI.
We cannot just ask AI what the answer is. We need a specific question to ask. And so we refine that process. Sometimes he can come up with a super specific question, sometimes not, because he's just so confused by it. And then that's where I come in as mom to walk him through the steps that he's trying, and we arrive at the point where he's stuck or confused, and aha, that becomes the question that you ask AI: I've done this, this, and this. This is my math problem, and I'm stuck on this part. What should I do next? But he has to be able to explain that. He has to be able to think first and then type second. When we talked about productive struggle back when we discussed the learning pit, you know, productive struggle is something that kids need. And I'm gonna be honest with you. Kids today struggle less, but they tolerate less discomfort too. Think about that. Most kids struggle less. Their main concern is the Wi-Fi not going out so that they can still watch YouTube Shorts or play PS5 or Roblox or whatever it is. They struggle less, but they tolerate less discomfort too. So think back to when dial-up was a thing. If you wanted to get on the internet, it was not as easy as opening up your laptop or turning on your computer and boom, you're on the internet in seconds. You had to wait for that dial-up. I know if you were a dial-up kid, you remember that noise. If you needed to look something up, it meant a physical encyclopedia, or taking the time to go to the library, digging through the card catalog, figuring out where the book is, going to get it, and thumbing through the pages until you found exactly what you were looking for. That was the productive struggle that kids in my generation had, but now answers are instant.
And the danger isn't AI, the real danger is losing that productive struggle, and that is essential. That productive struggle is the space where the brain grows. How am I working through this? How am I going to problem solve? How am I going to keep pushing even though I'm ready to call it quits for the day? AI should shorten confusion, not eliminate effort. Think about that for a minute. AI and the way kids use it should shorten confusion, not eliminate their effort. We still want them to have to put in that effort to experience that productive struggle, to help their brain grow, create new experiences of pushing past something that's challenging, pushing past something that's giving them some discomfort to arrive at a product that they could be proud of, not something that they just copy and pasted and turned in, right? So I thought about it in this way: some guardrails for kids. Like as a family, guardrails that me and my son have, there's again, no AI use before you attempt whatever it is, whether it's spelling a word, creating a paragraph. Like I said, my son uses it primarily for math. You don't get to use AI before you actually attempt it. AI explains it doesn't answer. So teaching him that just because this is what AI gave you, it is a computer-generated system. Yes, it's bringing information from all parts of the internet. But at the end of the day, this is technology. And we all know that technology can be friend or foe. Really just depends on the mood of the tech at the moment. So AI explains, it doesn't answer. It it needs to be checked. And then when you get that answer, you need to be able to explain it back. I remember he used it to look up a not look up a word, but to spell a word. I said, okay, fine. How do you think you spell that word? He gave me his attempt. He wanted to show me something in Chat GPT, so he put it in to Chat GPT and chat gave him the correct spelling for it. And so now I need you not to just copy that word down on your paper. 
I want you to read the letters out loud. I want you to write it on your paper. I want you to read what you wrote on the paper so that it matches what's on the screen. We know that transference is a whole other skill, transferring what you see on a screen or in a book onto the paper. I want you to make sure that you wrote it down correctly. And then I want you to spell it letter by letter based on what you wrote on your paper. And now I want you to spell it back to me, because I need to know that you can explain back the response that AI gave you. I also don't allow any copy-and-paste submissions. And let me tell you, he has tried many times to get over on me on some language arts type assignment. This was early on, back when AI was just first coming out. But I know how my kid writes. I know what his writing looks like, what it sounds like. And he wrote this magnificent paragraph. I mean, magnificent. They gave him a free-write assignment and he could write about whatever he wanted. And I mean, this paragraph was phenomenal. I also knew it was not his work. So the way I approached it was to say, hey, you know, this actually sounds really good. What inspired you? He answered my question. Then I said, I noticed there are some big words in this paragraph. I circled one. Can you tell me what that word is? He read it. What does that mean? Like the way that you used it in that sentence. It is, I mean, excellent. What does it mean? Now we started to see the breakdown. Well, you really shouldn't use words in sentences if you don't know what they mean, because a word could mean something different, and now your sentence's meaning has changed completely. So I went through and circled a few other words, and then we had the discussion about copying and pasting from AI. This isn't your work.
We talked about what plagiarism is and why it's important for him to be able to use his brain to come up with his thought and ask AI for some input, but he can't ask AI for the thought, right? The thought needs to be his, the concept needs to be his. Maybe you ask AI to expand it, but not to think for you. So we had some moments like that, which I'm sure, if you're a parent out there listening, totally resonates with you. But that was one of the guardrails that we talked about in the beginning, and that I put in place for him. And like I said, we have to revisit it. Kids are going to be tempted to take the shortcut. Remember, this is a shortcut culture that they're living in. They're going to be tempted to take the shortcut. There's a lot of work to be done, a short time to do it in, and they want to get back to something that's preferred. Typically, it's social media, YouTube, video games, friends, sports, whatever. And then the last guardrail was me periodically going in to review the prompts that he was putting into ChatGPT. ChatGPT opens up in Google Chrome, so I just go to my son's Google Chrome profile, open it up, and I can see what he's put into the prompt. That gave me ways to reframe how I'm talking to him about what he's putting into AI. Of course, we never want any personal information in there. He's using it mainly to get some guidance on math, usually. But that gave me some insight into how he was actually using it, because he can tell me one thing and use it for something totally different. So, you know, trust but verify. I'm gonna trust my kid, but I'm gonna verify what he says, right? So those are the five AI guardrails that I put in place. No AI before you attempt it. AI explains, but it doesn't answer. You must explain it back. You cannot copy and paste and submit that work as your own.
And then I periodically review the prompts that he has put into ChatGPT. Having kids show you the exact prompt that they used will give you so much clarity into how they view AI. Do they see it as a 100% truthful machine? Do they use it just to copy word for word? Do they ask it questions back, like, well, you said this earlier, but now you're saying this? ChatGPT will give you multiple responses, and so you do have to go back and make sure that you're synthesizing the information that AI is giving you. So going back to look at those prompts will really be helpful. And I think it builds transparency, it builds digital responsibility, it builds metacognitive skills. And again, remember, like we talked about at the top of the episode, we need AI literacy. It's building that up as well. So if we zoom out a little bit and think about what schools are struggling with: you have 25, 30 kids in a classroom, and that's a whole different ballgame from just monitoring your one, two, three, maybe four or five kids at home, right? Schools are struggling with academic integrity policies, outdated assessment models, and teacher training gaps as well. And if assignments can be fully completed by AI, it's my position that the assignment design needs revision. If a kid can go to AI and complete a whole assignment using it, we gotta think about redesigning and revising that assignment. It is very bold to say that, but maybe we need more oral assignments. Maybe we need more process-based grading. Maybe we need more student explanation in their own writing versus typed out on the computer. And maybe we need AI-aware rubrics that we use when we're grading assignments. These are some things that I've implemented in the graduate course I teach: having students explain their answers, and making them aware of how I am looking for AI use in their work.
And we have started to integrate more oral assessments as well. But thinking about that in the classroom: if an assignment can be completed fully by AI, the assignment design needs revision. I think that's just a plain and simple truth. We are raising kids in the digital age, and so we cannot keep using outdated assessment models on kids who are being raised in the digital age. It just is never gonna line up. And we, as educators, are always gonna be a few steps behind our students, and to just ban it is, I think, making the problem even worse than giving them guidance. Yes, banning might be easier, right? To just say, you can't use it, and if I find out that you're using it, this will be the consequence. That's the easy way out. But kids are savvy. Whether they are elementary school age or college age, they are savvy. And if they're looking for a shortcut, by any means necessary, they will find one. So instead of going with the ban model, why don't we go with the refined framework model? Maybe as a teacher, or as a parent at home, you think about what guardrails you want to put in place, like the five guardrails that my son and I have come to an agreement on: no AI before you attempt; AI explains, but it doesn't answer; you must explain it back; no copy-paste submissions; and I check the prompts that he's put into AI. Take those. And as a matter of fact, I'll put together a little handout and post it on social media for you all to use. Come up with a family agreement, have them sign it, and have them understand the power of their signature. You're signing this; you are coming into agreement that these are the guardrails that you have to stay within when you're using AI. We're not gonna ban it, but we are gonna use it within this context. Does that approach take more effort from you as the parent? For sure. For sure. But the outcome builds the student up instead of pushing them towards sneaking and using it anyway.
We really, really have to consider that AI literacy, making sure that we are not taking away the effort, but maybe just shortening some of the confusion a little. And like I mentioned in last week's episode, sometimes AI can rephrase how you've already explained something to your child to get past the confusion. AI can rephrase it, and then it might click with your child the way that it's been rephrased. Because sometimes, I know for myself with my kid, I explain it one way, and sometimes that's the only way I know how to explain it. I might try a different way to explain it, and he's still not getting it, and I'm at a loss. Like, okay, I don't know how else to say this. And sometimes AI can come in and be that additional explanation that we need. So thinking about it in those spaces, at home and at school, I would ask you: are you preparing kids for the world that was, or for the world that is? Because AI is already integrated into medicine, into law, into engineering, into writing, into business. In all of these areas, the future workforce will use AI. So the real skill is critical thinking plus discernment. Think about that. The real skill is critical thinking plus discernment. Because our job isn't to protect kids from tools; it's to teach them how to use that power responsibly. And AI should amplify your child's thinking. It shouldn't replace it. And if we can start there with our framework, working with our own kids, working with our own students, I think that we'll see a generation of kids who use AI with integrity, not just trusting a machine to do whatever they want it to do, but actually using discernment: did it do what they asked to their standard? Did it do it with accuracy? And then analyzing what the machine has fed back to them. So let me know in the comments how you have been using AI, or framing the use of AI, in your classroom or at home. Let me know how you've been managing that. I really want to know.
It takes a village, right? You might share something that another parent could fully be like, yes, that is what I need to do. So look for that AI guidelines handout and the parent agreement that you can sign with your kid, and see what comes of that. Because I think that the more we try to ban things, the more we're pushing them towards them. And if the future is using AI, we want our kids to have a place in the future workforce, right? If they grow up not knowing how to use this tool responsibly and all that it can do to help, we're gonna raise kids who can't function in that future. Thank you so much for joining me for another Tuesday Talks. Be sure to share this episode with friends, colleagues, and other family members, because we really need to get this conversation started and heightened so that everybody is working towards a common goal, which is to have our kids know how to use AI, but use it responsibly and with integrity. So, with that, I will see you next week for another episode. Bye. Be sure to share this episode and join me next week for our brand new Tuesday Talks. See ya.