Raising Kids in the Age of AI

Being citizens in an AI-powered world

aiEDU: The AI Education Project Season 1 Episode 8

AI can sound human, but it isn’t — and that difference changes how we teach, parent, and prepare kids for a future shaped by AI. 

On this episode, we dive into AI readiness: the blend of skills, ethics, and technical insight that young people need to question, adapt, and lead in an AI-powered world.

We sit down with Philip Colligan of the Raspberry Pi Foundation to unpack layered AI literacy, including what students should know about data, large language models, bias, and the social impact of automation. He shares how Experience AI (co-created with Google DeepMind) equips teachers with free classroom resources so every student can get hands-on practice with training AI models, diagnosing bias, and interpreting results. From “tomato vs. apple” misclassification to image-generation blind spots, Phil shows how simple activities can spark important conversations about fairness, accuracy, and accountability. 

We also hear from Kenyan teacher Mr. Monyancha Isena, whose students crowd around a limited number of computers yet light up as they test AI models and ask why accuracy never hits 100%. Their curiosity illustrates a bigger point: access and equity determine who benefits from AI.

If you’re a parent, teacher, or curious listener, you’ll leave with concrete ideas on how to build AI-ready habits: teach students how AI systems learn, demonstrate model bias through classroom activities, keep privacy guardrails in place, and emphasize student agency in using AI technology. 



SPEAKER_00:

The AI industry is working incredibly hard to make these systems as human-like as possible. And that can beguile any of us, young person or adult, into treating them in a way they don't really warrant.

SPEAKER_02:

What does it mean to be ready to be a citizen in an AI world?

SPEAKER_05:

At a high level, the definition is: what is the collection of skills and knowledge that you need to thrive in a world where AI is everywhere? It's not just knowing about AI. It's not just using AI. I think what's powerful about AI readiness is it creates space for everybody, no matter where they sit in the conversation, whether they're in the STEM or computer science community or focused on civics. You could be the biggest skeptic of AI in the world or its biggest proponent. The one thing that you'll agree on is that we need kids to be ready for the future.

SPEAKER_02:

Yes. And we're going to find out why it matters and how you can ready your kids for AI citizenship too, on this episode of Raising Kids in the Age of AI, a podcast from aiEDU Studios created in collaboration with Google.

SPEAKER_05:

I'm Alex Kotran, founder and CEO of aiEDU, a nonprofit helping students thrive in a world where AI is everywhere.

SPEAKER_02:

And I'm Dr. Aliza Pressman, developmental psychologist and host of the podcast Raising Good Humans. On this episode, we're learning about what it takes to be an AI-ready citizen. What does it look like to be an empowered, active member of an AI-supported world? We're going to hear from Philip Colligan, chief executive of the Raspberry Pi Foundation, a nonprofit that empowers young people around the world to learn about computers, machine learning, and AI.

SPEAKER_05:

We'll also hear from Mr. Monyancha Isena, a teacher in Kenya participating in the Raspberry Pi Foundation's Experience AI program.

SPEAKER_02:

But first, Philip Colligan talks about the many layers of AI readiness and why hands-on learning is crucial to truly understanding AI.

SPEAKER_00:

I'm Philip Colligan. We are a nonprofit, and we're really focused on democratizing access to computing education. And we think about that very broadly, from computer science through to AI literacy.

SPEAKER_02:

Before his role as CEO of the Raspberry Pi Foundation, Philip worked as a director for the parent company. And before that, he served as the advisor on social innovation in the Cabinet Office to the Prime Minister of the UK, having held a number of other important roles in government.

SPEAKER_05:

The nerds who are listening will know the name Raspberry Pi as a revolutionary bit of technology. It was inexpensive but remarkably capable, and it's been used to build all kinds of things that need computers. It was a huge moment for accessibility in computing and gaming. Behind the company is a foundation with a goal of empowering young people not just to use, but to truly understand, digital technologies like AI, democratizing AI literacy for the next generation.

SPEAKER_00:

Lots of us are trying to figure out what AI literacy is. At its simplest, it's helping young people understand what AI technologies are and the role they play in the world, their opportunities and limitations. But if you deconstruct it a little bit, there are different layers to AI literacy. One of the most important layers, and the one that often gets the focus, is the social and ethical implications of AI systems. It's really important that that's a key part of AI literacy. But we also think that literacy involves not just understanding the applications and how you interact with them, but what's under the hood. What is it that's making this system I'm interacting with work?

In order for young people to have agency and power, they need some foundational understanding of how these systems work, because at its core, that empowers you to challenge automated decisions, to understand them, to ask: can these be reviewed by a human? It's not just going to be recommendations on which movies to watch, which we know is AI-powered at the moment, and we can all get our heads around that. In the future, we're looking at finance decisions, healthcare decisions, law and order, criminal justice decisions being automated more and more. And so it's a fundamental issue of rights that all young people have the literacy they need to be able to interrogate those systems.

SPEAKER_05:

One way the Raspberry Pi Foundation helps students understand not just how to use AI, but also how it works, is through their Experience AI program, which they co-created with Google DeepMind. It's a free curriculum for students ages 11 to 14, and it's designed to help teachers in any subject area lead hands-on lessons in AI and machine learning.

SPEAKER_00:

So Experience AI is a curriculum to help young people learn about artificial intelligence: how the systems are built, the role that artificial intelligence systems play in the world now, and what they might do in the future. It's a set of classroom resources which include everything a teacher needs: the lesson plans, the handouts, the slides, the videos that they use to bring AI to life. But it also comes with teacher professional development. So we run programs in partnership with organizations all over the world to help teachers build their understanding of and confidence with AI as well.

We designed Experience AI to be global from the outset. The ethical case we make for that is that technological advancement and innovation should be a great engine for social mobility and equality. Too often it becomes a driver of a gap between those who have access to opportunities and those who don't, and we see that time and time again, within countries and between countries. So it was hugely important to us from the outset that we thought about Experience AI as an AI literacy program that would help young people, wherever they are in the world, develop the knowledge, skills, and understanding that would help them take advantage of and be part of the AI revolution, not be lagging behind and missing those opportunities. And what we found is the demand around the world is absolutely incredible. The thirst for knowledge from young people, parents, and teachers is absolutely huge.

One of the ways that we're trying to help young people develop critical thinking and media literacy is giving them hands-on experiences with artificial intelligence technologies. So a good example of an activity that we do is we get young people to train a classification model.

SPEAKER_05:

So, you might be wondering, what's a classification model? This is when AI uses data to identify or classify images or content that you share with it. In order for the machine to be able to label the content correctly, it needs to be trained and given data that will help it determine what's what. This is also where the diversity of data that large language models are learning from becomes so important.

SPEAKER_00:

One of the favorites I've seen is wearing a hat or not wearing a hat. So you take lots of images, some of them wearing a hat, some of them not. But what you come to understand is that the model is only as good as the data you put in. And so that helps them reflect on and understand questions around the limitations of the training set we might use. And we know, don't we, that with things like facial recognition technologies, there have been well-documented problems with those being trained on very limited sets of data and therefore not being able to deal with the full diversity of the population that we are. So again, it's about breaking down that wider question of the data used to train the model by giving young people a really simple hands-on experience where they're using data to train a model.

And there are lots of different versions of that. One of the ways, for example, is talking about the training data for large language models. If you go to an LLM and ask it, can you give me a picture of somebody writing with their left hand, please? It will say absolutely, and it gives you a picture of somebody writing with their right hand. Then you say, okay, can you show me a picture of somebody writing with both hands? And it can do that. But then you say, can you remove the pen from the right hand? It says absolutely, and it removes it from the wrong hand. This is because these models are trained on the images on the internet, and the overwhelming majority of those images are, of course, of people writing with their right hands. It's a trivial example, but it's a good way of exposing some of the limitations. And the models are developing all the time, so you have to be careful when you use examples of their limitations, because often you'll use an example and then, lo and behold, they've fixed it. The point that we're trying to help young people understand is that however good these models get, they are only as good as the data they're trained on.
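To make the activity concrete, here is what training that kind of classifier might look like in code. This is a minimal sketch, assuming Python with scikit-learn; the numeric features are invented stand-ins for real image pixels, and this is not the actual tooling Experience AI uses.

```python
# A minimal sketch of the "hat / no hat" training activity, assuming
# scikit-learn is installed (pip install scikit-learn). Real classifiers
# learn from image pixels; the two made-up numbers per example here just
# keep the sketch self-contained.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is one labeled example.
features = [
    [0.9, 0.1],  # image with a strong hat-like signal
    [0.8, 0.2],  # another hat example
    [0.1, 0.9],  # no hat
    [0.2, 0.8],  # no hat
]
labels = ["hat", "hat", "no hat", "no hat"]

model = LogisticRegression()
model.fit(features, labels)  # training: the model finds patterns in the data

new_image = [[0.85, 0.15]]             # an unseen example
print(model.predict(new_image))        # the model's best-guess label
print(model.predict_proba(new_image))  # its confidence, which is never certainty
```

Note that `predict_proba` returns a probability, not a guarantee, which is exactly the "why isn't it 100%?" question that comes up later in this episode.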

SPEAKER_05:

This is where AI literacy shifts into readiness. Once kids understand how these tools and systems operate and how they make decisions, they can accurately and confidently critique the outputs. And that's an increasingly vital skill as AI spreads across pretty much every corner of the economy. It's all about putting ideas into action. Even in an age of vibe coding, the Raspberry Pi Foundation thinks it's not a bad idea to have some foundational insight into computing.

SPEAKER_00:

So one of the things we're seeing at the moment is a bit of a public discourse around whether computer science and coding are still relevant for young people to learn in an age of AI tools that can allegedly do the coding for us. And look, the simple answer is we think yes. We think computer science and coding are actually more important, not less important, in the age of AI. And the reason for that is pretty simple: there will be more economic opportunities available to young people who've developed computational literacy. Maybe we'll have fewer roles that are just focused on programming, but we think that understanding of how technology works, and the ability to integrate it into your profession, will become increasingly important across a wide range of professions.

SPEAKER_02:

I'm so interested in this question of how much people need to understand the inner workings of AI. It rings true that to really lead something and not have it overwhelm you, you need to understand how it works. And to that point, I was really heartened to hear Philip emphasize that the better younger generations understand how AI really works, what it is and what it isn't, the more they can stay safe and healthy while using it.

SPEAKER_00:

You need to remember that these are systems, and you're giving them data, and that data is going to a server somewhere; you don't necessarily know what's happening to that data when it's shared. So there's an element of personal information and security. We hear a lot about young people using LLMs as a personal coach, or even a therapist, or for some sort of psychological support. And we think it's really important, again, that young people understand that it's not a human with a sense of ethics that they're interacting with. It doesn't care for you in the way a professional does; professionals have obligations, regulations, and various protections that make sure they really do have young people's best interests at heart. It can't have that. It's a probabilistic system which is generating text based on its training data.

And again, that's not to undermine its usefulness, but helping young people understand how these systems are built will help them be critical interactors with those technologies. We think that's an important part of safety, which isn't really getting the attention it needs. These technologies are phenomenal, and there is every possibility that they will transform all aspects of our lives for the better, but only if we really are intelligent users of them. And I think that's the core of what we're trying to do with Experience AI and AI literacy more broadly.
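To ground the phrase "a probabilistic system which is generating text based on its training data," here is a deliberately tiny sketch of the idea in Python: a bigram model that picks each next word from frequencies observed in its training text. Real LLMs are vastly more sophisticated, but the underlying point is the same: the output reflects the training data, not understanding or care.

```python
# A toy "language model": predict the next word purely from how often words
# followed each other in the training text. Nothing like an LLM's scale,
# but it shows what probabilistic text generation means.
import random
from collections import defaultdict

training_text = "the cat sat on the mat and the cat ate the fish".split()

# Count which words follow which in the training data.
next_words = defaultdict(list)
for current, following in zip(training_text, training_text[1:]):
    next_words[current].append(following)

# Generate text by repeatedly sampling a plausible next word.
word = "the"
output = [word]
for _ in range(6):
    word = random.choice(next_words.get(word, ["the"]))  # fall back if word was never seen mid-text
    output.append(word)

print(" ".join(output))  # fluent-sounding output with no understanding behind it
```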

SPEAKER_05:

And just to get your reaction: when you start to think about the anthropomorphization of AI, you start to emerge out of the realm of technology and into the realm of human psychology. I'm just curious for your reaction to some of the risks Philip is naming, and what your take is.

SPEAKER_02:

Yeah, that is way more terrifying to me than cheating. I feel like we know how to handle some of the challenges that AI is going to present and does present. But this one we have no precedent for, especially with developing brains, though even with adult brains. And that issue is so big, we aren't going to solve it in this series. But let's start a little smaller. I'm curious to hear from an Experience AI teacher.

SPEAKER_03:

My name is Mr. Monyancha Isena. I'm a teacher at Star of the Sea in Mombasa County, Kenya. I was one of the computer-literate teachers, and I started asking, how can we help these girls? My school is a girls' school. How can we help these girls to know how to interact with their computers?

SPEAKER_02:

Mr. Monyancha Isena has been teaching the Experience AI curriculum to get his students excited about understanding every facet of artificial intelligence.

SPEAKER_03:

When I first introduced the Experience AI curriculum to the learners, it was exciting to them. They were asking, what is this that we hear about? We said that it's linked to human intelligence. And then we started with what AI is itself. They needed to know the background of AI and how it was developed. So we were integrating the Kenyan curriculum with the Experience AI lessons, like what AI is and how computers learn from data, and many more lessons.

SPEAKER_05:

The girls of the school dove headfirst into learning how to train these models and understanding how the bias that goes into making a model will impact what comes out of it as well. One of the most challenging lessons was training an AI to tell the difference between a green apple and a red tomato, and then asking it to identify a red apple. Basically, if you think about how you'd train an AI model to identify an apple or a tomato, you'd show it a bunch of images of green apples or red tomatoes, and it learns from that data set. But the data set is biased toward color. And because of that bias, if you show it a red apple, it's going to identify it incorrectly as a tomato. So if you want it to be able to identify red apples and green tomatoes, you need a much bigger and more diverse data set, because you have to get into subtleties of texture, the shape of the indent at the top. I think this is a really clever way to illustrate bias and also demonstrate how you go about solving it.
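Here is a hedged sketch of how that lesson might look in code, again assuming Python with scikit-learn; reducing each fruit to two color values (redness, greenness) is our illustrative simplification, not the actual classroom activity.

```python
# "Bias in, bias out": a classifier trained only on green apples and red
# tomatoes learns that color alone separates the classes.
from sklearn.neighbors import KNeighborsClassifier

# Biased training set: every apple is green, every tomato is red.
# Each example is [redness, greenness].
training_features = [
    [0.1, 0.9],  # green apple
    [0.2, 0.8],  # green apple
    [0.9, 0.1],  # red tomato
    [0.8, 0.2],  # red tomato
]
training_labels = ["apple", "apple", "tomato", "tomato"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(training_features, training_labels)

# A red apple sits near the tomatoes in color space, so the biased model
# confidently misclassifies it.
red_apple = [[0.85, 0.15]]
print(model.predict(red_apple))  # -> ['tomato'], even though it's an apple
```

The fix is exactly what the episode describes: a bigger, more diverse training set with features beyond color, such as texture and the shape of the indent at the top.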

SPEAKER_03:

We also did a project about "bias in, bias out." They created a model, they trained it, and then they tested it. We were testing, say, tomatoes and apples, and they found some results were not meeting their expectations: this one has a certain percentage of confidence, and this one has a certain percentage of confidence. They kept asking why. Why is this one not 100% when we have already trained and tested this data? Why are we not getting it 100% accurate? So there were a lot of reactions from the learners themselves, and they were asking how we can make sure there is 100% accuracy in whatever we train and test.

Look at the kind of students or learners we have in those schools. In Kenya, the socioeconomic background is very poor. We have very minimal resources, that's the truth, and the very minimal resources we use, they have to share. At one point the girls were fighting over a computer; all of them were crowding there, they wanted to see how this AI is working. So you can see that because of the resources, the infrastructure, we are behind. We don't have the infrastructure we could use in teaching, or in integrating Experience AI the way it is supposed to be done. But we are trying; it's not like we are asleep. We are doing what is supposed to be done.

SPEAKER_05:

I mean, first of all, it's so cool that as you're hearing Mr. Isena's interview, you're hearing the kids in the background. It brings home how real this is; this is actually happening in a school. And I have such deep respect for the Raspberry Pi Foundation, because aiEDU is really focused on the US, but this is truly a global challenge. There's an interesting tension here, because there's a respect both for the need to understand the technology and to not give short shrift to the risks and dangers it poses, but also to always pair that with learning and agency, so students get into the mindset of "what role can I play in addressing that?" as opposed to just being afraid.

SPEAKER_02:

You know, generally speaking, literacy and education lead to better outcomes for everyone. And so if our future includes needing to be AI-ready, which it clearly does, then incorporating that into programs that support underserved communities seems incredibly important.

SPEAKER_05:

I think too often, historically, we've gotten technology into the places where it's easy. And then once we figure that out, we turn to the kids who are already left behind and who now feel like they're falling further and further behind. I'm obviously biased, speaking of bias, as a nonprofit leader, but I just think that we can prioritize reach and access, as opposed to just scale for the sake of scale.

SPEAKER_02:

Join us again next week as we hear best practices for using AI tools in good health and as safely as possible.

SPEAKER_04:

You can have an open and honest conversation about how you keep them safe, right? So prompt generation is probably the best place to start. Like, what do you ask your chatbot? Is your chatbot supportive? Is it validating? Is it nice to talk to?

SPEAKER_05:

Find out where AI will take us and future generations, next on Raising Kids in the Age of AI. Until then, don't forget to follow the podcast on Spotify, Apple Podcasts, YouTube, or wherever you listen, so you don't miss an episode.

SPEAKER_02:

And we want to hear from you. Take a minute to leave us a rating and review on your podcast player of choice. Your feedback is important to us. Raising Kids in the Age of AI is a podcast from aiEDU Studios in collaboration with Google. It's produced by Kaleidoscope. For Kaleidoscope, the executive producers are Kate Osborne and Lizzie Jacobs. Our lead producer is Molly Sosha, with production assistance from Irene Bantiguay and additional production from Louisa Tucker. Our video editor is Ilya Magazanen, and our theme song and music were composed by Kyle Murdoch, who also mixed the episode for us. See you next time.