Screen Deep

Does AI Help Students Learn? With Adam Dubé, PhD

Children and Screens Season 1 Episode 26


Schools are increasingly deploying AI-powered technologies in classrooms that promise to revolutionize education, yet there is growing concern over the risks these tools pose to children’s and adolescents’ learning. What do children, adolescents, and teachers think about how AI is being used in the classroom, and do these technologies actually impact learning and key cognitive skill development? In this episode of Screen Deep, host Kris Perry explores these questions with educational technology expert Dr. Adam Dubé, Associate Professor of Learning Sciences at McGill University. Dr. Dubé describes his research on how youth and educators view AI, key pitfalls to avoid when using AI for learning, and how past lessons from introducing technologies can inform the way we think about educational AI. He shares what parents need to know about selecting a good educational app for their children, and how to make informed decisions about utilizing technology to enhance, not replace, the learning process.

In this episode, you will learn:

  • How relying on AI for task completion can interfere with the development of fundamental skills like creativity and detailed reading.
  • Simple questions educators and students can ask to determine if AI use is offloading thinking or supporting learning.
  • How teachers’ opinions of AI use in the classroom have changed over time and what’s causing growing concern. 
  • What is needed for more thoughtful integration of AI into classrooms.
  • How to identify a quality educational app and why parents often overlook those features when selecting digital apps for children.


For more resources and research on this topic visit the Learn and Explore section of the Children and Screens website (https://www.childrenandscreens.org)

--------------

Follow Children and Screens on:

Facebook: Children and Screens: Institute of Digital Media and Child Development
Instagram: @childrenandscreens
LinkedIn: Children and Screens: Institute of Digital Media and Child Development
X: @childrenscreens
Bluesky: @childrenandscreens.bsky.social

---------------

Music: 'Life in Silico' by Scott Buckley - released under CC-BY 4.0. www.scottbuckley.com.au

[Kris Perry]: Welcome to the Screen Deep Podcast, where we go on deep dives with experts in the field to decode young brains and behavior in a digital world. I'm Kris Perry, Executive Director of Children and Screens. 


Today I'm joined by Dr. Adam Dubé, Associate Professor of Learning Sciences at McGill University, Director of the Technology, Learning and Cognition Lab, and former Director of McGill's Office of Educational Technology. Adam is also a joint fellow of the American Educational Research Association and the Society for Research in Child Development. Adam brings combined expertise at the intersection of cognition, development, educational psychology, and EdTech design. His work explores how children learn math, how teachers adapt to technology, and how AI tools, from classroom software to voice assistants, are reshaping learning environments. His research has examined everything from the quality of educational apps to how teachers can meaningfully integrate games and AI into their instruction. 


Adam, we're delighted to have you on Screen Deep.


[Dr. Adam Dubé]: Well, it's great to be here, Kris. Thanks for speaking with me.


[Kris Perry]: The rapid development and deployment of GenAI tools in the last year has been absolutely dizzying. I've heard many examples of how these tools are being used in classrooms, how valuable EdTech companies are becoming, and that policy makers are even racing towards policies to limit smartphone access in schools, just to name a few of the many ways this is being deployed. 


To help our listeners unpack all these layers of GenAI tools and their impact, let's start with how children understand and interact with AI in general. Your lab has done significant work investigating that very question.


[Dr. Adam Dubé]: Yeah, so when it comes to children and students understanding artificial intelligence, we've asked the question of, “Well, how do children think AI actually works?” And we start with this of, “How do children think that AI thinks? How do they believe it produces answers? Do they think that it works like a calculator? Or do they think it works and thinks more like a human being?” Because when they talk with these systems, they respond back like a person does. 


And so, we conduct experiments with kids from 4 to 8 years of age. We have them interact with smart speakers, the AI systems that are in their homes every single day. We give them experimental tasks, and then we ask the kids about their interactions with these smart speakers. And we ask them, well, “What do you think of the smart speaker? How does it know stuff? Why does it say that? Why does it get things wrong?” And so we do research with children, asking those types of questions, with the idea being that, well, once we understand how kids think AI thinks, then we're much better situated to understand how they learn from artificial intelligence. Because we know it really matters how children think their teacher is thinking when they take lessons from a teacher. And so how is it that they think an AI thinks, and then how does that influence the way they use and learn from, say, Alexa or Siri or Google or ChatGPT, once it's in their classroom, for example?


So that's one of the pieces that we're looking at with studying how children are interacting with these systems and learning from them. How do they believe it thinks? And then that research is just starting. It's in its infancy.


[Kris Perry]: I just heard a funny anecdote recently about asking children, particularly young children, to draw a picture of what they thought was inside a smart speaker, Alexa or Siri. And if they're three or five years old, of course, they're drawing very different things. Have you heard about those experiments or those anecdotes? And if so, why is it that kids are drawing what they picture is going on inside? It sort of ties back to your point earlier about, like, how do kids think this works?


[Dr. Adam Dubé]: I am familiar with some of those studies. We actually debated how we should go about asking kids what they think of AI and doing the type of study where you ask them to draw the AI is one way to go about it. And you can get some really interesting results from that. Some kids will draw sort of like technology looking systems, they'll draw pictures of computers, but other ones will draw pictures of human beings. And we thought that was really interesting, but we wanted to, in our work, just ask them and have a conversation with them as well as have them interact with a speaker and then ask them questions afterwards and then just have them explain to us what they think about these AIs. 


And so, whether it's the drawing or just having a conversation with a child, we tend to get two different types of responses. Children, when they're younger, say, between four and six, are more likely to talk about AI systems as if they're like a person. They use language that explains these things like, “It has a belief. It thinks something. It knows information like a person does.” But as they get older, towards seven and eight years of age, they start bringing in more technical language. They start talking about these things like they would talk about Google or a computer, say, for example. 


So early on, children are thinking of these things as thinking like human beings. And later on, they're more likely to explain their thinking like technical systems. Now, it doesn't mean it's sophisticated, but they're using language that talks about them as if it's a machine. 


But what's interesting is we also ask them a question of, like, “Where would you put these devices? Do you think it's more like a person, more like a machine? Do you think that it's unintelligent, intelligent? Is it unfriendly or friendly?” And kids don't place them, whether they're young or old, clearly as a machine or a person. They're putting them kind of in this weird middle spot. 


It doesn't quite work like a calculator or like your computer. It's clearly not a person, but it's somewhat alive to kids. It's definitely intelligent, and they tend to think of it as friendly. And so when we think of young children and how they perceive these AI systems that they're interacting with, it's this smart, somewhat active or agentic, alive system that has sort of its own ability to initiate actions, and that they tend to find friendly. We see that quite commonly with young children. So that's how they're perceiving AI.


[Kris Perry]: One of the things you're helping me think about is the different stages of development, particularly the first three, four, five years of life and how rapidly the child is learning to distinguish between themselves and the caregiver. And there is a point, right, when the child's not yet three or four, where that's very blurred; they're still having a hard time understanding the difference between themselves and the rest of the world. As they get older, they can distinguish not only between themselves and their caregiver, but also between themselves and objects, and what those objects can and cannot do, and whether they are real or not. And so, I always want to come back to these fundamental concepts of child development, because the way technology is purposely, or maybe accidentally, exploiting these really delicate developmental stages is of concern to me. With AI being deployed so rapidly, the confusion it can cause, something that's like a person, that has a name and talks, when the child is still differentiating itself from everyone else, is worrisome. 


And you've noted that some AI technologies can be compared to calculators or other tools, but I've heard recently about another problem that's coming from these tools: something called cognitive offloading. Simply put, this is when kids use GenAI for schoolwork and don't exercise or learn certain cognitive skills because they're letting AI take care of it. And I've seen research on younger children that showed weaker within-network connectivity in attention and memory systems in the brain when engaging with these tools over time. What's your sense of how much of this cognitive offloading is happening, and whether it's a concern?


[Dr. Adam Dubé]: It is a concern. Now, importantly, what we need to set the stage with here is that we actually don't know how much of a concern this is for younger learners. We have information from research with adolescents and university students, but there's almost no research about how young children are learning with these systems and cognitively offloading their learning to, say, ChatGPT, for example. All the research that's been done up to this point is with adults or adolescents, because those are the people that are easier to do studies with. That's who we first turn to when we actually do this work. 


But with adolescents, for example, there has been research about how they're using AI for their schoolwork. Even just seven months after generative AI launched, over 70% of university students had reported using AI for schoolwork. And the numbers are similar in high school; now, upwards of 90% of high school students say that they use some sort of generative AI for their schoolwork. And how they're using it really matters for whether or not we should be concerned about cognitive offloading. 


A lot of people, when they're concerned about students using AI, are worried about cheating. But there's been research by Victor Lee, who surveyed 4,000 high school students across the United States and found that, consistently, as we've seen in the past, only about 10% of students use AI for cheating. Just like in the past, only about 10% of students would consistently cheat on exams, have someone else write their homework, and those types of things. It doesn't mean that students don't cheat here and there, but it's only about 10% that do this pretty consistently, where we're really worried about it. 


But every other student is using generative AI for some purpose. 80% say they use it to explain ideas to them because they're confused and they want an extra explanation. It's fewer, around 70%, say that they use it to generate ideas. About 60% of them say that they use it to summarize a text. So instead of reading something, they're putting it into a generative AI and they're saying, “Okay, give me the Coles Notes version of this.” And about 50%, just under that, are using it to give them feedback on what they write. 


Now, those things aren't cheating, but for every one of those things, we have to ask ourselves: when the student's getting this extra help from the system, are they still learning the fundamental skills that they need, or are they just having the system do it for them? And what we've seen, both in recent research on LLMs with adults and in previous research where we designed technologies that were supposed to be helpful learning aids for students, is that when we have these types of systems available to us, they tend to help us learn better in the moment. They tend to help us write better in the moment. But once they're gone, we actually didn't learn the underlying fundamental skills. We didn't learn how to generate ideas. We didn't learn how to summarize and deeply read. We didn't learn how to explain concepts to ourselves. We let these systems do it for us, and they didn't give us the opportunity to practice doing that ourselves. And that's what we mean by cognitive offloading: the actual effortful practice doesn't happen, because we have a system doing it for us, and that results in us being less skilled in the long run. So that's the concern when it comes to cognitive offloading with these systems, that we're not putting in the hard practice work because the system does it on our behalf.


[Kris Perry]: Earlier this year at one of our Ask the Experts webinars, you talked about questions teachers and students ask themselves to determine whether using GenAI for a specific task is helping them think or if they're offloading thinking. Can you share some of that with us now?


[Dr. Adam Dubé]: Certainly. So any technology you use in a classroom can be thought of as being used for only four purposes. You can use it as a resource, which is like using it as a book to get information. And there you want to ask yourself, “Okay, if I'm using generative AI as a resource, should I do this?” Well, a lot of the research right now is telling us that generative AI makes mistakes. It hallucinates. It's got inaccuracies. Now, would you use a textbook if you knew that it had significant mistakes in it? Well, maybe it's not a good resource as a result. Maybe these systems will get better, but the research is actually showing that hallucinations are increasing as these systems get more sophisticated. So there's a real question of whether or not they will get better in the long run. I don't believe they should be used right now as a resource of information. 


We can also use technology as a tutor, where the technology teaches you things. It's similar to a book, but there's more of a back and forth. But a good tutor is somebody who knows a lot of information, is available, and is accurate. Now, these systems are going to be available to students, and they're being used across North America in a bunch of different classrooms, because generative AI tutors have been made freely available to teachers through Microsoft and OpenAI, for example. But they're inaccurate. So should we be using them as tutors? Well, probably not right now. It's probably not a good idea. 


Then there's using AI as a tool to get stuff done. Okay, so maybe you're a high school student and the teacher has said, “We want you to write a book report, but we also want you to shoot a video where you explain your book report.” That way you've got this other representation. Okay, well, maybe you can use generative AI to help you edit the video. There, it's helping you get stuff done. Do you care if the student knows how to edit video? Do you care that they know how to use the software? If you do, then don't use the generative AI version; it's not teaching you all the decisions and all the steps. But if you don't care, if what you care about is just the book report, well, then fine, use generative AI to do stuff for you. It's skipping steps. I think that kind of makes sense from that standpoint. 


Now, the final one is that you can use technology as what's called a mind tool, which is where it gets you to reflect and think deeper on subjects. And right now people are saying that generative AI could be used for this purpose. It speaks to what we were just talking about. You could have a student that writes an essay or a paragraph, puts it into generative AI, and then asks it to give them feedback. And the question here, and this is where research needs to happen, is: when we use these systems and they give us feedback, what makes it so that we actually listen to the feedback and then do the hard work of reflecting, thinking, and asking ourselves, “Okay, what makes me a good writer? Why was the sentence not well written?” Instead of just doing what we currently do with the grammar check in Word: accept change, accept change, accept change, right? There's not really a lot of thinking when you're using a spell check system right now. You just kind of accept all the recommendations. And we want to make sure that if there's a generative AI tool that's supposed to help students think, it's not just telling them what to think while they mindlessly accept it. It should be prompting them with questions about how to think and making them think deeper. That's where generative AI could be used as a mind tool. Now, I have not seen any commercial system that actually does that at this point, right? So I would say that's an idealized version. It doesn't necessarily exist right now.


[Kris Perry]: Well, what you just described is what teachers do. So, you have a pulse on how teachers view AI use in the classroom. Do they support some of the uses that you just outlined or not?


[Dr. Adam Dubé]: So the pulse of it is that teachers have become increasingly negative towards AI in schools. When generative AI first launched, in initial studies just seven months after it came out, the majority of K-12 teachers had tried generative AI, and 80% of them thought it was going to have an overall positive effect on education. In fact, in their early feedback on these systems, teachers were more positive than students were. 


But now, it's gotten much worse. In fact, only 7% of teachers in a recent Pew study believe that AI is going to have more benefits than harms. High school teachers are the least positive: 80% of them think that AI is going to have more harms than, or at least as many harms as, good outcomes. 64% of middle school teachers have a negative attitude towards generative AI as well. And elementary teachers are the ones that are most unsure; 47%, the largest group of them, are unsure about the impact of AI on education. 


So what this shows us is that teachers at higher levels of schooling, like high school, have more negative attitudes. Why? Well, it's likely because they're being faced with the reality of their students using these systems, and they're seeing the consequences and the difficulties that have arisen from both them and their students using these systems in an unregulated, unfettered way. But when you get down to elementary teachers, it's very unlikely that an elementary child is using generative AI. So there the teacher's like, “Well, I don't know what the future is gonna be. It's not really affecting my classroom at this point in time.” 


Now, we also did some recent research on elementary teachers, because we were curious: we know that kids aren't using generative AI, like ChatGPT, but a lot of the educational apps being deployed in classrooms for elementary students, like math games and reading games, are going to have generative AI in the future. And we did an experiment with 200 elementary teachers, and we asked them, “Okay, which of these apps would you download for your elementary students?” Half of the apps had generative AI powering them and half didn't. Now, what we actually found was that the largest group of teachers, just over 50%, didn't care whether generative AI was present or not. If it seemed like a good app overall, they were fine with using it in their class. 


Now, what that means to me is that, going forward at the elementary level, we're going to have teachers choosing apps for their students that have generative AI running in the background. So these things are going to make their way into classrooms: even though the kids aren't choosing the apps and aren't using generative AI for learning, teachers will be choosing apps powered by generative AI because they're somewhat indifferent.


[Kris Perry]: Well, I mean, I'm not going to take us on a side journey here about policies and schools, about apps and technology, because that's a whole other podcast. But I do want to go a little deeper into teachers and training and whether or not additional training would improve their view of AI in the classroom. And do you know of examples where teachers are getting trained on the best ways to use AI in the classroom?


[Dr. Adam Dubé]: Yeah, so when we talk about training teachers for AI, I don't know if it's about improving their attitudes towards AI. I think teachers' negative attitudes towards AI might actually be an accurate reflection of how it's impacting learning in the classroom, right? So them being techno-skeptical, them being like, “We're not sure if this is actually having good outcomes,” might be an accurate representation of what's happening in schools. So we shouldn't be putting in training to try to improve their attitudes. We should be giving them training to improve how critically and selectively they use these systems, if at all, and then empowering them to make those decisions. 


Now, unfortunately, right now, the majority of teacher training that does happen doesn't do that. It tends to be purely tech-focused where it's basically teaching teachers, “This is how you use ChatGPT. This is how you use a prompt. This is how you can use this system.” And doesn't teach teachers, say, “Well, this is how you use an LLM for optimal teaching outcomes. This is how you use an LLM to design a high quality lesson.” It's just how to use it. 


And a lot of the professional development that's happening right now in the United States, for example, is actually coming from tech companies. Say, for example, OpenAI and Microsoft just gave a million dollars towards teacher training. Now, they say that they're going to have teachers deliver this training, but it's money coming from the companies themselves, who are then delivering training to teachers about how to use AI in the classroom. And that's probably not the best source of critical training on the use of AI systems. 


So what we need going forward is teachers need to be taught about these systems, but they need to be taught so that they can be a critical judge of when it's good to use these things and not good to use these things, as opposed to just being taught how to use them, which is how most technology training has been done in the past. And it hasn't produced good outcomes.


[Kris Perry]: It's building on some history around EdTech in the classroom and research showing that the preponderance of it doesn't have that much pedagogical value. And here we are now on the cusp of AI being fully deployed into classrooms. So I really appreciate that you called out some of those issues in the system right now, because it is true that we haven't seen, you might say, school-driven quality assurance and policy that ensures that the child is getting a high quality experience with any technology in the classroom. 


What do you think we need in the future from AI to support a positive and thoughtful integration in classrooms, and what barriers are you seeing to achieving that?


[Dr. Adam Dubé]: What we need is efficacy testing. What we need is actual evaluations of these systems. And we need transparency from the companies to say, “Okay, what is the AI actually doing in these systems?” So we need to evaluate if it's working and we need to know what it's actually doing. And those are things that seem very obvious, but they're actually things that don't happen. 


And so a technology company will come in, and they'll pitch a product to a school, and they'll say, “Okay, it's powered by AI.” Well, what do you mean by that? How is the AI working? What exactly is it doing? Be transparent with this information. And then asking the company, “Okay, you say that it's going to be effective in this way. How come? What evidence do you have? Have you done studies before? Who did those studies? Were they independent researchers, or was it your internal testing?” These are questions that we have to ask of EdTech companies trying to deploy products into public schools with public funds. We should be asking them these types of questions. And the same thing is true for AI educational technologies. 


And unfortunately, the biggest barrier to that happening right now is the amount of economic force driving these technologies into society as a whole. There have been billions, if not somehow trillions, of dollars invested into the AI market space. Companies need to realize profits from this investment. One of the places they're turning to is education. And they're saying, “Okay, let's go and have our systems deployed to millions of students across the country. Let's have large purchases from school boards and states. And we're doing this because we invested all of this money into this system and into this hardware, and we need to start making a profit.” But what they haven't done is the efficacy testing before they actually deployed the product. 


And so, unfortunately, what we're actually getting right now with AI is a “deploy first, test later” mindset. Instead, what we need is the mindset that we have to have educators involved in the development of these systems. We have to have testing happening while they're being developed. And then we have to have testing happening while they're deployed. And so when a school brings them in, the school shouldn't just have the system running without asking itself, “Is it working? What evidence do we have that it actually is doing what we are paying for?” Schools need to be testing these things as they use them to see whether or not they're getting their money's worth.


[Kris Perry]: Absolutely. I can – just thinking about hardware alone, right? These devices they’re often giving kids have a camera, a microphone. They are a portal into data collection systems that may or may not be secure. They can have geolocation built in. There are so many problematic features that come with just the hardware itself that schools are handing out, but then you add some of the technology we've just talked about, and you enhance the ability for the device to collect information about the child and use it in ways that may not be something the parent wants, or even the school, but they didn't know to ask some of these hard questions. And you mentioned that AI in classrooms is really part of an overall infiltration of technology in schools, often referred to as “EdTech.” And over the past 20 years, we've seen that this has continued, but it doesn't seem to be working. 


So what is going right with tech in classrooms and what is going wrong? I'm thinking about gamification and what a huge trend that was a few years ago. Is that still a trend? Does gamification really help kids learn? You know, talk a little bit about that and other examples of EdTech in classrooms.


[Dr. Adam Dubé]: Yeah, I just want to quickly jump on one point that you made about the amount of information that's being collected. That's actually a real concern when it comes to AI systems. Not just because of the device having a camera and everything else, but the people that design AI systems, they say, well, “You know, the AI could personalize the lesson for your student, or your class, or your school. It can teach them more specifically about how they exactly need to learn. But we can only do that if we have information, if we have data.” So it becomes an argument where they start saying, “Well, let's collect as much information as possible.” 


And these devices, as you correctly point out, collect a lot of information. They collect not just what the student does on the device, but where the student is, and what time, what other devices are near it. There's so much information that could be collected. And then they make the argument that, “If we just had more information, then we can make a more personalized learning experience.” And that same promise has been made in the past and it hasn't worked out. So the concern right now with AI and personalization and data collection is that we're creating a vector for companies to collect a lot of information from our kids and from our schools, but we're not likely to get the return on it. So that's something that we should be really concerned about with this. 


Now, for your second question, about what's worked and what hasn't worked with EdTech in the past: the biggest thing that hasn't worked is that we haven't been evaluating whether or not technologies are effective once we purchase them and deploy them in schools. We're really poor at doing this overall. And that's because it's really hard for a school to create a program evaluation system, to systematically evaluate what's working and what's not working. Schools are complex, schools are messy. And so it's difficult to do this work, and schools haven't been empowered or given the resources to actually get it done. So that's one of the problems. We purchase things, we hope they solve the problem, but we don't actually test if they do. And so that's been a central issue. 


Now, when it comes to things like game-based learning: gamification was a massive trend in the 2010s. It's something that I study; I develop game-based learning systems. And what works with game-based learning is when someone takes the core idea that the student is supposed to understand, something like mathematics. I study math; that's my original area, studying how children learn and reason about addition and subtraction, and simple things like fractions. Take something simple like, “Does a child understand what a fraction is?” Let's take that core thinking, and let's turn it into an activity that they can practice again and again and again. But a way of practicing it that is enjoyable, where there are strategies, where there's the potential for failure, but you can overcome it if you just think a little bit differently, a little more flexibly. And if you get that thinking and you make them practice it just right, it turns into a game. You actually make the act of thinking about math or thinking about fractions a game. And that's what game-based learning is. It's taking what you want the person to learn, and making them engage with it repeatedly, so that they really use that knowledge. That's when game-based learning works. And there are great examples of that out there. I don't mention specific companies or anything like that because I'm not in the business of marketing, but there are good ones that do this. 


But what's happened most of the time is that what people had was gamification. What they did is they took a regular lesson from a classroom and then stuck stuff on it that's found in video games: “Okay, here are stars. If you get stuff right, you get four out of five stars and you collect stars. And if you get enough stars you get a badge, and you get an avatar that you dress up.” So it's all of these things that games have that people think make games fun. But that's not what makes games fun. What makes games fun is actually thinking, and strategizing, and problem solving in unique ways – cleverly getting the problem done – and the reinforcement and sense of accomplishment that you get from successfully completing a game. That's what makes a game fun. And so gamification hasn't really worked all that well, but most educational games tend to use gamification. 


And then game-based learning – where you make the learning a game – that's what's worked. And what that requires is having people that understand what makes a good game working with people that understand how learning actually happens. So typically, when someone makes a good educational technology, they've got someone who's a really good designer and someone who's a really good educator. 


And so when we talk about when educational technology works, it's when you've got companies that employ and work with teachers and with researchers. So when I tell parents what they should look for in good educational software: that company should be partnered with an educator. They should have educators on staff. Those people should be involved from the beginning. Or they should be working with researchers who help them develop and design their products. But the majority of EdTech actually doesn't do that, right? They design it themselves, and then later on they go out and test it and get some feedback from teachers and students. 


No. From the ground up, work with the educators, work with the learning designers.


[Kris Perry]: You didn't mention dopamine as one of the child's responses – whether from a gamified tool or from old-school learning in the classroom with the teacher – the reward system that's triggered in the brain that allows the child to continue to feel motivated or excited by learning. And I think one of the worries about gamification was this sort of flow state where the child is experiencing dopamine more easily, and maybe more often, than they would in a more traditional classroom setting where that's maybe going to happen less often. And I worry about how gamification is short-cutting that very complicated learning process that you just described, which is ultimately how this child's ability to learn more and more complex content is scaffolded. 


One of the first technologies that elementary school children encounter in school, right, are tablets or tablet-style devices. You have an older study that I think is really interesting about the use of tablets in classrooms and how children with different executive function skills are able to process information from simple versus complex apps. Can you tell us a little bit more about that study?


[Adam Dubé]: Yeah, so that was done just a little bit after the iPad had launched. It started being purchased by schools and deployed across the United States and Canada, where I'm from. So you were seeing them in classrooms, and they were being sold as the future of, say, math education and early literacy education. It's like, well okay, you're just going to put these apps in kids' hands and they're going to be engaged and they're going to want to learn math and learn reading. 


And so what we did in that study is we looked at very popular commercial apps, and we categorized them in terms of whether or not the app was very complex – had a lot of bells and whistles attached to it. So there'd be animations, there'd be sounds, there'd be a lot of extra things that would capture the attention of young learners. And then there were other apps that were more focused. They had learning activities, but they didn't have extra animations. They didn't have extra art in the background. It was more centered on the actual learning content. Then we looked at different groups of kids, grouped by their ability to control their attention – your attention being your ability to focus and stay on task. We grouped the kids into ones that were better able to stay focused and ones who were less able to stay focused. And we looked at, okay, how do kids with better and lesser ability to pay attention actually pay attention when you've got apps that are more distracting and less distracting? 


And unfortunately, what we found was that the kids who have the least ability to pay attention had the biggest problems when using these educational apps that had distracting content. And so when they were using these apps, they were looking at everything on the app. Their attention was being taken away by the animations. It was being taken away by the art in the background that had nothing to do with learning. In fact, they were actually spending more time engaging with all these distracting spots in the application than they were engaging with the focused learning content. Whereas when a child had a good ability to control attention, they were much more focused. Whether it was a highly distracting app or not a distracting app, they could focus on the content they were supposed to be learning. 


And the real message of this though is that this is a massive problem because, in schools, who are teachers most likely to hand an iPad to? Well, it's probably that kid in class who's disengaged, who's having difficulty paying attention, who feels overwhelmed by the classroom. And you hand them an iPad or some tablet and then you just give them some commercial app and unfortunately, their inability to direct their attention in the classroom is also happening in the iPad app itself. 


And so that's where – what we recommended is that we need to make sure that the design of these apps actually has the abilities of the child in mind. And that's where you need to pay attention to, say, for example, principles of multimedia learning theory, of cognition, asking, “Okay, what makes a good app?” You want to make sure that it's built for the cognitive system of a young child – and the ones that need it the most, not just the ones that are already doing well.


[Kris Perry]: You know, we're talking a lot about teachers in classrooms, but many parents today are utilizing learning and educational apps for their children's use at home. How should parents choose a quality learning app and how are they actually even doing that? Where do you go to find high quality or educational apps for your kids?


[Adam Dubé]: It's very difficult to have a place to go. There used to be some websites that evaluated educational apps. We've actually done reviews of websites that review educational apps, and we found that they have no consistent criteria. And so it's really difficult. So I would say that, for the most part, parents are going to app stores themselves or they're hearing recommendations from fellow parents. They're going to the app store and doing a search, or their kid is bringing an app to them saying, “I want to download this.” That's what people are actually doing. But then, okay, when you search the app store and have one recommended to you, how do you say, “Yeah, my kid should use this educational application?” 


And this is very common. In our research in Canada a little while ago, we found that over half of families with kids from kindergarten to grade four report that their kids use math apps every single week. It's a very common practice, right? So a lot of people are doing this. 


Now, how do you actually choose that app? Well, I say that you can look for four things in the app. It's that you wanna say, “Okay, what is the app actually teaching? What's its learning content? And is that content actually what my child wants to learn? I want them to practice addition, subtraction, I want them to practice reading. Is this app actually teaching that?” You'd be surprised how many apps don't actually tell you what they teach. So it's like, if it doesn't say that, just don't go there whatsoever. So that's the first thing. 


The second thing is you can say, “Okay, how does this app help my child while they're learning? Does it scaffold their learning?” That means does it provide supports? If the child gets stuck, does it give a hint? Because a lot of problems occur when a child is using an application – they do something that they're not supposed to do or they're trying a task and they're failing at it. It's like, does the app prompt? Does it give some sort of support? And a parent can do this by playing any app for just five minutes. Make mistakes, see what happens. Does it give a helpful hint? 


And then the third thing you can look for is feedback. It's like, okay, does the app tell your child when they're getting it right or wrong? Because it's really difficult to learn if you don't know what you're doing right or what you're doing wrong. So it has to do these things. It has to be teaching them the stuff you want them to learn. It has to be providing hints, and it has to be providing feedback. These are just three entry-level benchmarks – if an app has these things, it's more likely to be effective. 


The last one I say is that if you want to go deeper, go to the app's website and see: did they have an educator? Did they have a researcher involved? If those people developed it or tested it, it'd be more likely to be effective, right? That's if you want to do that extra bit of going to their website. But those are things you can look for in an app. 


Now, if your child brings you an app and it doesn't do these things, does that mean that they shouldn't use it? Well, not necessarily. It's like, maybe they could just use it for fun, but don't think of it as an educational experience. Just think of it as a time waster that's maybe better than some other entertainment game out there that you'd rather have them not play. But don't think of it as something that's potentially benefiting their learning – just as a waste of time, which we all do, even as adults, right? So that's the way that I would think about apps that don't have some of those features.


[Kris Perry]: Well, you and your colleagues recently looked at the top apps in the App Store and how they fared on benchmarks of quality, which you sort of just laid out. What did you find?


[Adam Dubé]: So we had done two studies. One of them asked: do they advertise these benchmarks transparently and correctly in the App Store? And we found that, for the most part, the companies actually don't talk about these benchmarks in the App Store. They talk about a bunch of other things. They don't talk about what their curriculum is. They don't talk about whether they give feedback. They don't talk about whether they give supports. They tend to talk about other aspects. They say it personalizes. They say that they're hands-on. But they don't talk about these things. And so they're not giving parents the information they need in the App Store itself. 


And we've been making the argument that companies like Apple actually need to change their app stores so it's easier for parents to find the apps that they want. The App Store itself should have a checklist just like it does for privacy – “Does this app share data?” is a checklist in the App Store. It should also have, “What is the curriculum of this app? Does it provide feedback?” That should just be there. The developers should have to transparently report on these things, but there are no requirements. They can write whatever they want. 


We found that the majority of apps only talked about one of the benchmarks in their descriptions. So the descriptions are pretty useless for trying to figure out which apps are good or bad just from the App Store itself. 


Now, when you actually downloaded those top apps, they actually did have scaffolding. They provided feedback. They were based on curriculum content. So it would actually be better if the companies were just transparent, so teachers and parents could see what's in these apps. And that was for the top apps. So those were some of the take-home findings. 


The other big finding is that we've researched, in the past, what drives educators’ and parents’ decisions about which apps they choose for their kids at home or in their classrooms. And the biggest driver, unfortunately, is just the user ratings. If the other users give a high star rating – say four and a half stars out of five – that overwhelms everything else. If the app is well described and says it has all the features, it doesn't matter. If it's got a high user rating, that's what people navigate to. And that's something people shouldn't do, because we've done research on the top apps in the App Store and there's no relationship between the user reviews and the quality of the educational app. And that's because when people are reviewing them, they're reviewing a bunch of different things. They're reviewing whether their child had a positive experience. They're reviewing whether it looks interesting, whether the topic is in line with what their child likes to learn, for example – but they're not necessarily evaluating the educational quality, right? So unfortunately, user reviews drive app selection, and they have no relationship with educational app quality. We've been doing a series of studies on this, and unfortunately the most robust finding is that user reviews overpower everything else.


[Kris Perry]: Well, that's really disappointing because as somebody that's running a research institute right now that relies on evidence and studies over time and sample sizes and really validated questions that can be extrapolated to the rest of the population, it's really disappointing to hear that we're using this consumer model, this popularity model, for something so important, right? And I wonder, as we reach the end of our conversation, if you've ever had an “aha moment,” where, in your career, you thought about all of these factors – technology and education – and how they shaped what you wanted to investigate.


[Adam Dubé]: The “aha moment” for me, when I started studying technology, came when I was studying children's math problem solving and I was looking at what makes a child more flexible in math and less flexible in math. And while I was doing that, Steve Jobs actually launched the iPad and pitched it, during the initial keynote, as the future of education. And in that presentation, he was like, “This is what the future of textbooks look like. They're more interactive. They're more flashy. There's like – just imagine every person having this more personalized device.” I was like, “Okay, this is a very strong pitch for these technologies. I think this is going to convince a lot of people.” And so that was like the first thing of saying, “I need to be studying how these technologies are being deployed in classrooms, because I think they're going to start taking over.” 


And then the second part, was when I did that initial research with how children actually use technologies. So in that tablet study we talked about earlier, not only did we find that a lot of the commercial apps were engaging kids with the wrong stuff, which means they were poorly designed, but we also found that when kids were using the apps, the majority of the time they were using them, they were making tons of mistakes. They were just tapping everywhere, all over the screen, which means that the kids actually didn't know how to use the technology. 


Now, this was the second “aha moment” because it was like, “No, no. Kids don't know how to learn with technology. We're just giving technology to kids and we're just assuming that it works.” And so there's this problem where we've got companies that are very good at marketing an idea to us. It gets deployed. But then when we actually see how it's working in classrooms, it's coming up against how students actually learn and how students actually know how to use technologies for learning. These technologies aren't designed for how they learn, and kids need to be taught how to use technologies effectively for learning – and we don't tend to do that. We tend to assume that they know how to do it. And that's been a driving force for me going forward. We have to design the technology for how students actually learn, and we need to teach students how to use technologies effectively for learning, because we can't assume they know.


[Kris Perry]: You've been so active in studying how tech is being used in classrooms, and this field just seems to be evolving at the speed of light. And I wonder, what's next? What do you feel we need to know about children's learning and tech?


[Adam Dubé]: I think what we have to have is a principle. And that principle is that we know what good learning looks like, we know what good teaching looks like, and we have to ask how technology is reflecting that back at us. We don't say, “How is technology transforming and changing teaching and learning?” Because that's not what it's doing. 


And don't listen to that sales pitch. It's not going to redefine the classroom. That's not the goal. The goal is to know what good teaching and learning is, and then to use technologies that enable us to do that. When we do that, we're going to choose better technologies, we're going to avoid wasting a bunch of money, and we're going to put at the center of this whole question the students and the teachers that actually do teaching and learning.


[Kris Perry]: Many thanks to Dr. Adam Dubé for a thoughtful, research-driven conversation about technology, learning, and the realities of AI in the classroom. Today, we learned why EdTech trends so often follow the marketplace rather than pedagogy, how cognitive offloading with AI may reshape the way students learn, and what teachers and schools actually need to integrate AI effectively. Adam's work reminds us that technology in schools is neither inherently good nor bad. Its impact depends on how it's designed, deployed, and understood by the humans who use it. 


For links to Dr. Dubé's studies and additional resources on AI and learning, visit childrenandscreens.org and check the show notes. If you found today's episode helpful, please follow, rate, and share Screen Deep so more families and educators can access evidence-based guidance. Until next time, I'm Kris Perry. Thanks for listening.