aiEDU Studios

AI 101: Everything parents need to know

aiEDU: The AI Education Project Season 1 Episode 20

Let’s strip away the hype and make AI understandable, useful, and human.

Google Research VP Maya Kulycky explains why the human brain remains unmatched (and why that's good news!) and offers practical guidance for using AI as a collaborator, not a crutch. Google DeepMind COO Lila Ibrahim takes us inside projects that expand what's possible with AI in ancient history (Project Aeneas) and molecular biology (AlphaFold).

Responsibility runs through every story here as both Maya and Lila emphasize safety reviews, partnerships with domain experts, and community voices shaping how tools land in classrooms, labs, and homes. We also talk about supporting different learners, like how AI can patiently explore rabbit holes for one student and help another organize ideas and communicate with confidence. 

By understanding what's behind the AI curtain (statistics, not magic), we learn how to set smart guardrails, design better prompts, and turn AI's fluency into real learning and better decisions.

If you’re ready to replace AI mystery with mastery, press play on this episode! 



aiEDU: The AI Education Project

SPEAKER_01:

We've always been on a quest for knowledge as people. And we've always wanted to understand ourselves better. How we think, how we make decisions, how we improve our world. And I think the origins of AI are part of that quest.

SPEAKER_04:

Today we're going back to the beginning to try to understand just what AI is and how it works. It's AI 101. Welcome back to a new episode of Raising Kids in the Age of AI, the podcast from aiEDU Studios in collaboration with Google. I'm Dr. Aliza Pressman, developmental psychologist and host of the podcast Raising Good Humans.

SPEAKER_02:

I'm Alex Kotran, founder and CEO of aiEDU, a nonprofit that helps students get ready to live, work, and thrive in a world where AI is everywhere. Today on the pod, we have two AI experts and parents with us to help: Maya Kulycky, VP of Strategy and Operations for Google Research, and Lila Ibrahim, Chief Operating Officer of Google DeepMind. They're going to help us see the shape of AI, from its origins to its future and best practices, and they'll go over some of the different types of tools that are available.

SPEAKER_04:

This is just what I need, honestly. But before we hear from Maya and Lila, I have a few questions that I think are so basic, but we never really define them. And I like to have things defined. So I'm going to start with just what actually do we mean when we say AI?

SPEAKER_02:

So AI can mean a lot of different things. It's a relatively broad term. In its simplest form, it's technology that allows machines and computer systems to perform tasks that typically require human intelligence, things like learning, problem solving, decision making, perception, and understanding language.

SPEAKER_04:

Okay. And these are terms I hear, but I have no idea what they are. LLM.

SPEAKER_02:

So an LLM is a large language model. Basically, it's a computer program. It's trained on massive amounts of data, and we're talking billions and billions of pages from books, articles, websites, forums. You take that data and you process it with literally the most powerful supercomputers in the world, you add a little bit of refinement from human feedback, and you get this incredible tool. We call it a model. And it can generate all kinds of content and engage with you in natural, conversational English.
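For the technically curious: here's a minimal sketch of what "engaging with a model" looks like in code, assuming Google's google-generativeai Python package and an API key from Google AI Studio. The model name and key placeholder are illustrative; check the current Gemini docs.

```python
# A minimal sketch of asking a large language model a question from
# Python, assuming the google-generativeai package
# (pip install google-generativeai) and a Google AI Studio API key.
# The model name is illustrative; check the current docs.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; keep real keys secret

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    "Explain what a large language model is to a parent with no tech background."
)
print(response.text)  # the model's conversational answer
```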

SPEAKER_04:

Got it. That makes sense. But then what's generative AI?

SPEAKER_02:

Generative AI is a little bit broader. It includes large language models, but it also includes tools that can create art, images, and video from text. And you can kind of use the terms interchangeably: if someone's talking about generative AI or large language models, more or less they're talking about the same broad set of capabilities.

SPEAKER_04:

Okay. So now I feel ready to dive in. Who are our AI 101 instructors?

SPEAKER_02:

So lucky for us, we have two folks who sit close enough to the hard science to see everything up close, but they're also coming at this from the side of the humanities and can help translate some of this complexity for non-experts who are trying to peel back the curtain. We're going to hear from DeepMind's Lila Ibrahim later on, but first we're going to hear from Maya Kulycky.

SPEAKER_01:

My name is Maya Kulycky. I lead strategy, operations, and outreach for Google Research.

SPEAKER_02:

Part of Maya's role is working with research labs to help shape the next iteration of technological breakthroughs that we hope are going to change the world. Maya also had a hand recently in introducing Google AI Essentials, a course which was designed to help answer some of the most basic questions that people have about AI.

SPEAKER_01:

When we think about AI and its origins, it's really tied to the human brain and our ability to try to understand how we think. How does the human brain work? And to replicate some of the activity that we do as people in order to assist us. Thinking, understanding, learning. These are different cognitive tasks that are part of AI. What AI can do is remarkable. But it can't do what we do as people, and it is not as efficient as the human brain. The human brain is a remarkable, beautiful organ that runs off very little energy. That's why people have wanted to find something similar to it since the dawn of time. It is exceptional and unique. But we've created some tools that share some aspects with the human brain, and we are very, very grateful for them. They allow us to share information and to understand things, using our brains, much better than we did in the past. And to get to mutual understandings more quickly, too. I always encourage people to go and experiment with large language models and see how they can be used in their lives to make their lives easier. It certainly makes my life easier. I'm a parent and I have two lovely kids, ages 13 and 15, who would be cringing right now if they knew that I was mentioning them. I'll tell you some of the personal miseries that AI has eliminated, from a very, very basic point of view. If my kids, for instance, would ask me for a jacket: I want a jacket, I want a red jacket, okay? I'm gonna search for a red jacket, and I have to type in, I'm looking for a red jacket. And then the conversation goes like, well, what kind of jacket? Oh, it's a puffer jacket, okay. And that's not that color red, it's a different color red. Okay. I mean, we're 15 minutes in and we're still looking for this jacket. AI has allowed us to do things like Circle to Search, where I can give it a picture of someone wearing the jacket that my kids are asking for, and then a few seconds later know exactly what that jacket is, where I can buy it, how much it costs, is it on sale, things like that. So I'm thankful for all that time not wasted. At the same time, please, please do not take the output of a large language model and think you're just gonna take that and roll. No, it's meant to be a collaborative tool. It's meant to be something that helps you get to a certain point. You look at it and you say, oh, okay, this is a great place to start, but I'm gonna improve it. Or, oh, this piece is not what I meant. I need to change this, I need to change that. It's something that you need to check the work of. It's not something that is meant to be a freestanding tool without us as people.

SPEAKER_04:

It's not just Maya here saying this, but in the previous episode, we heard similar ideas. You do have to check your work.

SPEAKER_02:

Yeah, this is what we mean when we talk about the human in the loop. It's this idea that if we really want AI to be an extension of us and not a replacement, we have to understand our role in that equation. And that brings us back to why it's so important to understand AI. Because if we understand it, then we have the building blocks to actually begin to harness it and use it for our own purposes and desires, as opposed to what the AI thinks that we want.

SPEAKER_01:

I talk to my kids about AI, about what it is and what it isn't, how to use it and how not to, and the guardrails that I want them to have around the technology. I think the most important thing for kids to have is their humanity. Hold on to the exceptionalism and the beauty of what we are as people and what we can create, and use these tools to do it. Be thoughtful about it. Pursue your dreams. But they're your dreams, right? The things that you want to accomplish in your lives as kids, the things as parents that you're envisioning for your kids to be able to do. You just have another tool in your toolbox to do that. Use it with caution. Make sure kids are educated about what AI is, make sure that they understand the limitations of AI, and then let them walk into their own exceptionalism as people.

SPEAKER_04:

I love what Maya said about the human brain. It's so cool to hear someone in technology acknowledge that this is irreplicable. And so I think we need to keep on having these conversations and modeling how beautiful this human brain is and hold on to our humanity and emphasize that with our kids. That this is not a freestanding tool.

SPEAKER_02:

Yeah, it's not intuitive. It's simultaneously one of the coolest aspects of language models, the fact that you get this very human-like experience. It feels like you're having a conversation. And it's actually hard to get past that into what's really an abstraction, almost like a compression algorithm for the entire internet. To put it more basically, the AI is guessing how a human might answer your question, or how an expert might answer your question. But it's actually a lot of statistics that's happening in the background, statistics that we don't even fully understand. It's beguiling, even to the experts. And so one of the biggest challenges that we have as educators and as parents is: how do we pierce through some of that? If you're feeling confused, you're in good company. There was a study a couple of years ago that listed a bunch of everyday instances where we know AI is used: things like wearable fitness trackers, chatbots, product recommendations, spam identification, music recommendations. And only a third of respondents were able to identify that AI is used in all of those examples. So this is a relatively widespread problem. People are kind of flying blind, as it were.
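To make "statistics in the background" concrete, here's a toy sketch of the core idea: count which word tends to follow which in some training text, then "generate" by repeatedly picking the most likely next word. Real models work from billions of pages with far richer math, but the flavor is the same.

```python
# Toy illustration of "statistics, not magic": a bigram model that
# predicts the next word purely from counts in its training text.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)
words = training_text.split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    next_word_counts[current][nxt] += 1

def generate(start, length=8):
    """Greedily extend `start` by picking the most common next word."""
    out = [start]
    for _ in range(length):
        counts = next_word_counts[out[-1]]
        if not counts:
            break
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the cat sat on the"
```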

SPEAKER_04:

Yeah.

SPEAKER_02:

And what it means is that there are still so many of us who are using AI regularly and don't even realize that it's AI. So if there's just one thing you can leave this podcast with, it's this curiosity: when you're using technology and it feels like magic, really ask yourself what might actually be happening behind the scenes. And once you understand that this isn't magic, but actually just some fancy statistics happening in the background, it leads you to the next question, which is: what is my role in using this technology, so that I'm not purely beholden to what the statistics think I should be doing, or what the answer should be? Does that make sense?

SPEAKER_04:

Yeah. I actually, dare I say, I feel like I have my bearings and I want to go a little deeper and understand the bigger potential of AI, but I also want to understand a little bit more about what kind of work is going into building these models safely and responsibly. I think that's kind of on a lot of our minds. So we're really lucky to hear from our next guest.

SPEAKER_05:

I'm Lila Ibrahim. I'm chief operating officer of Google DeepMind. We do a lot of work around the research of AI and bringing that into the world. And part of that will be through large language models, through products like Gemini, but we also use AI to apply to some of the world's most challenging scientific problems.

SPEAKER_02:

Lila was brought on as DeepMind's first chief operating officer in 2018. And she has spent more than three decades in the tech industry. Today, in addition to overseeing their day-to-day operations, Lila's work focuses on impact and responsible innovation. And one of the things we asked her about is how Google DeepMind's advanced AI is making exciting new discoveries possible, including changing the way we understand the past.

SPEAKER_05:

There's a really cool project we've done called Project Aeneas. It takes ancient texts and helps us fill in the gaps. Imagine a stone is broken and we only have part of the text there. Where did it come from? What context might there be? Help us translate it. So imagine now historians being able to use artificial intelligence to help us unlock understanding of the past, something that we thought might not have been possible. Another area where we've applied AI is something called protein folding. Our advanced AI system called AlphaFold was a breakthrough discovery. Actually, it got the Nobel Prize last year, the first time ever in history that the Nobel Prize has been awarded for the application of AI. It helps you predict the 3D structure of a protein and its interactions with small molecules. But why is that important? Imagine being able to understand diseases better and then come up with better therapeutics, or break down industrial waste, or figure out how to grow crops that are more resistant to disease. This is all possible now with the help of AlphaFold. What's really been transformational is that we have over 2.5 million researchers in 190 countries using this technology, as simple as a Google Maps search, using this database for free to advance scientific discovery and create a better future for us all. When Google DeepMind was founded, at the center of everything was this belief that we had to do this responsibly, because we thought this could be such transformational technology. And transformational technology requires exceptional care. That means everything from our early stages of research to how we do the development, what our governance models are internally, and even how we deploy it. So just to give some specifics: before we released AlphaFold, we worked with dozens of researchers to ask, is this safe to release? How do we need to think about it? And it actually led to a partnership with the European Bioinformatics Institute, which had this network of researchers and could help us birth this technology into the world in a responsible way. And that's a general philosophy that we have: how do we bring those voices and those communities into the process as we're developing? Whether they're music artists or teachers or experts in learning science, all of that gets put into how we develop it. We like to think of it as: AI shouldn't happen to us. It should happen with us.
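As a taste of how open that database is, here's a small sketch that fetches AlphaFold's predicted structure for one human protein from the public AlphaFold Protein Structure Database. The endpoint and JSON field names are our best-guess reading of the public API, so treat them as assumptions and check the site's documentation.

```python
# Sketch: fetching an AlphaFold structure prediction from the public
# AlphaFold Protein Structure Database (alphafold.ebi.ac.uk), run by
# Google DeepMind and EMBL-EBI. The endpoint and JSON field names are
# assumptions about the public API; verify against its docs.
import requests

uniprot_id = "P69905"  # human hemoglobin subunit alpha
url = f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_id}"

entry = requests.get(url, timeout=30).json()[0]  # API returns a list
print(entry["uniprotDescription"])  # protein name
print(entry["pdbUrl"])              # URL of the predicted 3D structure file
```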

SPEAKER_02:

We can't say this enough. We can't just be bystanders in the AI story. By understanding the technology, we can and should be an essential part of that story.

SPEAKER_04:

In her personal life, Lila's found AI to be an incredibly useful tool for distilling essential information.

SPEAKER_05:

I've actually taken NotebookLM and uploaded household manuals, like how to use my dishwasher, my washing machine, the coffee machine. And so now whenever I have a problem, I just go and say, okay, using all of this information, tell me why this light has gone off and what I need to do about it. You can imagine we have a lot of conversations in my household about AI, and every family is different. But what I've personally found is that being very open and having conversations around the technology as it evolves has been really important. I have twins, and they both use the technology very differently. One is a traditional learner, and what she's found is that AI doesn't judge her questions and will willingly go down rabbit holes of learning, rather than giving just a single answer and a textbook's here's-what-we-need-to-learn. My other daughter is dyslexic, and what she's found is that she has all these ideas that she sometimes has a hard time expressing. What she's able to do with AI is organize her thoughts and communicate them in a way that other people will have an easier time understanding. And so it's completely unlocked her potential in a way that has also given her a big boost of confidence where she may have struggled in the past.
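NotebookLM itself needs no code, but the pattern behind Lila's manual trick, answering questions grounded in your own documents, is easy to sketch. Here's a hypothetical version using the same Gemini Python package as earlier; the file names are made up for illustration.

```python
# Hypothetical sketch of "ask questions about your own manuals":
# concatenate the documents and hand them to a model as context.
# File names are invented; the google-generativeai package and an
# API key from Google AI Studio are assumed.
from pathlib import Path
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")

# Gather the manuals the model should answer from.
manuals = "\n\n".join(
    Path(name).read_text()
    for name in ["dishwasher_manual.txt", "coffee_machine_manual.txt"]
)

question = "The rinse-aid light is on. What should I do?"
response = model.generate_content(
    f"Using only these manuals:\n\n{manuals}\n\nQuestion: {question}"
)
print(response.text)  # an answer grounded in the uploaded manuals
```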

SPEAKER_02:

All right, Aliza, we've talked to two very impressive members of the Google team who are working on this. And one thing that stands out to me is that they're also learning as they go. They're not presenting as if they have all the answers, and a lot of the descriptions and examples they've given were really examples of where they're experimenting. But it all really comes down to, again, this idea that AI happens with humans, not to humans. Collaboration, and really harnessing it, is at the center of it.

SPEAKER_04:

Yeah, I mean, it's definitely more translucent. It's not yet fully transparent for me. But one of the things I'm also wondering is, when I think about research, we have so many questions all the time. Can I input data into AI and tell it what kind of analysis to do? These kinds of things excite me, because that saves so much time. So I'm definitely curious about that. I've certainly never used AI in this way, but I'm kind of growing alongside this podcast, because I'm learning about how I might use it in the future.

SPEAKER_02:

So if you still have questions, or you want to learn more, you can check out some of our resources at aiedu.org. You can also just spin up Gemini and start asking some questions yourself, including some of the ones that were asked here. My guess is that Gemini is going to have some really compelling and solid answers.

SPEAKER_04:

Thank you so much for listening. Join us again next week when we take a look at an AI-enhanced classroom from a teacher's perspective and what it means for your kids' education. We'll hear from New York City public school teacher Shira Mauskowitz.

SPEAKER_03:

This teacher came to me with something that was not a technical issue per se. And I presented him with a technology solution, but it actually addressed his challenge and more. The behavior was better, the engagement was better, his scores were actually higher.

SPEAKER_02:

You'll also hear from Google's Jennie Magiera. Jennie's the global head of education impact at Google and a former classroom teacher herself.

SPEAKER_00:

It's giving me that feedback. It's almost like I'm the head coach of a Big Ten football team, and I've got all the other assistant coaches in my ear telling me: hey, you need to go over here. Hey, let's pause this lesson and regroup, because they're not getting it. We need a timeout. So that's really magical.

SPEAKER_04:

Together, they'll tell us what the classroom looks like when it's supported with AI tools and the ways it can help support different learners.

SPEAKER_02:

Find out where AI will take us and future generations next on Raising Kids in the Age of AI. Until then, don't forget to follow the podcast on Spotify, Apple Podcasts, YouTube, or wherever you listen so you don't miss an episode.

SPEAKER_04:

And we want to hear from you. Take a minute to leave us a rating and review on your podcast player of choice. Your feedback is important to us. Raising Kids in the Age of AI is a podcast by aiEDU in collaboration with Google. It's produced by Kaleidoscope. For Kaleidoscope, the executive producers are Kate Osborne and Lizzie Jacobs. Our lead producer is Molly Sosha, with production assistance from Irene Bantiguay, with additional production from Louisa Tucker. Our video editor is Ilya Magazanen, and our theme song and music were composed by Kyle Murdoch, who also mixed the episode for us. See you next time.