Raising Kids in the Age of AI

This professor makes using AI an expectation

aiEDU: The AI Education Project Season 1 Episode 5

Let's be honest: Trying to make assignments “AI-proof” is like trying to write a “calculator-proof” math problem. 

With that in mind, we explore how to design AI-ready assessments that reward genuine understanding and insight when answers are cheap and instant.

Alex and Dr. Aliza unpack a college course that embraces AI rather than hides from it. Tulane University associate professor Nick Mattei walks us through a term project where his students prompt multiple AI models, compare outputs, and critique errors before writing drafts and transforming their essays into another medium. The plot twist: Nick's assignment requires students to defend their choices in person! That one change reframes the assignment so students don't try to conceal AI use, and instead spend more time learning the material well enough to explain it to others.

From there, we sit down with Shantanu Sinha, founding president of Khan Academy and now VP and GM of Google for Education. Shantanu argues that AI shifts the spotlight from product to process. For instance, he draws a sharp line for high-stakes essays: AI can suggest structure, but only the student can supply authentic voice. The goal isn’t to ban technology; it’s to cultivate critical thinking, problem-solving, creativity, communication, and teamwork — skills that will outlast any AI or edtech tool.

If you’re a teacher, parent, or curious learner, this episode will leave you with concrete strategies to rethink homework, rewire assessments, and turn AI from a crutch into a scaffold.




Dr. Aliza Pressman

SPEAKER_03:

There's a lot of conversation around, you know, how do you make an LLM-proof assessment? That's like trying to say you're gonna make a calculator-proof math problem. Um, well, if it's got numbers in it, you can probably use a calculator for it.

SPEAKER_00:

When you were in school, what hacks or tools did you have to make your work easier and faster? Because I feel like I know you did.

SPEAKER_02:

Well, I mean, we just had the internet. So, I mean, this was like the dawn of Wikipedia. And then all the other obvious stuff, calculators. You know, I remember seeing spellcheck for the first time. Um, SparkNotes was pretty big. My teachers hated it; in retrospect, I understand why. And maybe the biggest one was, you know, my parents, especially my mom, just like helping with my homework. Um, I talk a lot about my mom GPT. Like, that was definitely a huge, a huge hack um when I was in school.

SPEAKER_00:

And now today I hear about students using AI. And I guess I wonder if it's accurate to equate AI to how we used to have hacks, or if we have to reimagine in-school work versus homework.

SPEAKER_02:

Yeah, I mean, this is a conversation we have a lot with teachers in our work at aiEDU. And we actually literally have these cheat-a-thons. And so it's literally like we give them an assignment and we say, okay, we're gonna actually help you use AI to cheat on this assignment so that you can understand the capabilities that students have in their hands. And from that, you can then start to build around it. So, like, you build your assessments based on your knowledge of what the technology is capable of.

SPEAKER_00:

Wow. That's exactly what I want to know. And today we're going to hear how one college professor is making AI an expectation in his assignments to make his course maybe not hack-proof, but at least AI ready. And then we're going to talk about what kinds of knowledge kids actually need to acquire in an age where information and even output is at the tips of our fingers. We're going to get into all of that on this episode of Raising Kids in the Age of AI, a podcast from AIEDU Studios created in collaboration with Google.

SPEAKER_02:

I'm Alex Kotran, founder and CEO of aiEDU, a nonprofit helping students thrive in a world where AI is everywhere.

SPEAKER_00:

And I'm Dr. Aliza Pressman, developmental psychologist and host of the podcast Raising Good Humans. On this episode of the podcast, we're discussing how schools at every age level are reckoning with generative AI and the strategies evolving to reshape assessment and evaluate learning. Plus, Alex sits down with Shantanu Sinha, the founding president of Khan Academy and the current VP and GM of Google for Education. He'll share his approach for balancing AI use with more traditional methods in a classroom setting.

SPEAKER_02:

But first, we'll hear from Nick Mattei.

SPEAKER_03:

I'm uh Nick Mattei, and I'm an associate professor of uh computer science at Tulane University in New Orleans. I teach classes mostly related to artificial intelligence, uh data science, machine learning.

SPEAKER_02:

Nick co-teaches a course that's cross-listed with history. It's called The Digital Revolution from Ada to AI. I think this is fascinating, the idea of bringing students across the humanities and social sciences into computer science. But that actually wasn't the main reason we wanted to talk to Nick. We'd heard that he and his co-teacher had brought one more thing to their class.

SPEAKER_03:

Last year, I actually got to teach a class with uh Walter Isaacson, who's written a lot of biographies of Steve Jobs, Elon Musk, Ben Franklin. And what we kind of ended up doing was we combined elements of uh a number of classes that we teach in computer science, like a little bit of AI, right? Like a little bit of programming with more of a broad historical survey course that Walter was already teaching that focused on, you know, the invention of the semiconductor, the invention of the transistor, and you know, sort of the history of the technology itself, and then sort of the impacts uh of that technology and how it sort of shaped society.

SPEAKER_02:

So the curriculum was set, but what about the assignments and the assessments? How could Nick tell the students had actually grasped the concepts when so many take-home projects nowadays seem suspect?

SPEAKER_03:

Now, with the prevalence of LLMs broadly and students sort of using them all the time, uh I was sort of under the assumption that anything I asked them to write outside of class is going to uh involve a lot of LLM work. And so it's like me handing them something and saying, here, go give this to the LLM. Um, which I know is cynical. And so we try to do a lot more in class and do a lot more of that assessment in class. We still wanted to have um, you know, a more traditional essay, their sort of big term paper, history paper thing that we all remember from our history classes. Uh, but they had to write it with the LLM. We gave them all sort of the same starting prompt, um, that it was like, write a history of the digital revolution starting with Ada Lovelace and going to AI, or something like that, which is the title of the course. And then sort of we had them do like a first round where they gave it to more than one LLM and then brought the results back. And then we talked about those, and about, like, you know, what Grok said versus what Gemini said, or what Perplexity said versus what somebody else said. And then we made them write something that analyzed, you know, what one said versus the other, and then, you know, sort of evolve the prompt um and their writing as it went. And so parts of the assessment were like, hey, take the, you know, the actual result from, let's say, ChatGPT, and given what we've talked about in class, given what's in the book, like, tell me where it's wrong, because it's always wrong somewhere. You know, when it was clear that they were being graded on how well they critiqued the LLM, they got a lot more excited about it, instead of trying to hide the fact that they used the LLM from us to generate the text. So it's kind of a little flip there. That was the first little bit of the assignment.
And then the second little bit of the assignment was, you know, take that, add some more color to it, like try to get the LLM to go in a direction that you want, or add some of your own writing um to kind of augment it, you know, and kind of almost do writing with the LLM as a process, with your end product being the whole essay. Or the last bit of the big assignment was: now use an LLM, take your essay, and make something else. Make a podcast, make a play, make a song, make a video, make a comic strip. You know, the traditional essay is the backbone for then some other creative product. And I think we tried in the class, you know, to make sure that the assignment did model, you know, using AI responsibly. And that goes back to, you know, asking them to give it to multiple AIs and then, like, critically evaluate the output that they got back from that, and trying to get them to focus not on the product, right? Which is like, here is an essay, but on the, like, accuracy of that essay, on the veracity. Does it communicate the things that you want it to communicate?

SPEAKER_02:

There's a bigger question that AI forces us to address. When all said and done, what exactly are we trying to teach? This isn't just about changing the way we assess learning, it's also about answering the question of what do we want students to get from their education? What are we trying to assess?

SPEAKER_03:

So much of what we want education to be is relational. You know, it's like, can you explain it to someone else? Do you understand it? Can you identify the gaps in it? And I think a lot of times when you ask a question about, is this a hack-proof assignment, you're considering uh education as transactional, right? It's, can you demonstrate this finite skill? And I got news for you. You know what computers are really good at? Demonstrating very finite skills. Can you add two numbers together? And computers are better than us at that, and they are going to remain better than us at that for a very long time. You know, you should probably know how to add two numbers together in your head mechanically. But in reality, when's the last time you added two complicated numbers together without a calculator? It never happens, right? Even in my technical classes, um, when I would give them programming assignments, you know, there were a lot of times, even, you know, five years ago, the code would be good, but it was like clearly copy and pasted from GitHub, from Stack Exchange, from, you know, something like that. And like it kind of worked, but when you asked the students, like, hey, explain this to me, like, they really couldn't. Um, so I started doing what I had to do in industry, which was I would pull their code up when they turned it in and I'd be like, oh, explain this to me. Like, get up here and explain what's going on to me. In industry, this is called a code review, right? You know, just the process of making them explain it, even if it kind of came from an LLM, they would get worried enough about having to explain it to their professor that they would make sure they really understood it. Um, which is my goal as an instructor at the end of the day, right?
What I want to assess is, like, if I put you in a room with someone else who doesn't know what this is, can you communicate it? Are you conversant in sort of the details of it? Um, and if you get there by working with an LLM, like, that's fine. It's probably what you're gonna do in your real job anyway. I'm still gonna try to train my students to, you know, be able to, you know, defend their ideas to other people who are gonna ask them hard questions.

SPEAKER_00:

Honestly, we would all be so lucky to have Nick and Walter Isaacson as professors. But also, Nick's approach reminds me of something I read about recently in the Ethicist newsletter from the New York Times. It was about the results of a nonprofit organization's annual scholarship essay contest. This year, the judges' panel felt certain that several applicants, including the winner, had been using AI to write their essays. And they wondered if they should confront the winner about her use of AI. I thought this was really interesting. And the ethicist felt that unless the rules explicitly prohibit the use of AI, then there's a good chance the winner didn't think anything of using AI to write the essay. It didn't seem wrong to her. Maybe it even seemed like a good idea. And so, in the response to the reader's question, here's what the ethicist writes: "What happened this year should be taken as a wake-up call rather than a crime scene. I suspect that it's time to rethink that format." I think that was so accurate. And this is what Nick and Walter have done. They know students are going to use AI. So they're reimagining what coursework looks like with the added element of AI.

SPEAKER_02:

Yeah, we give this advice to teachers all the time: you have to be a hundred percent certain, and even if you're like 85% sure, um, it's just not gonna be enough. There's basically no way for a teacher to prove with absolute certainty that AI wrote it. Um, and so that kind of illustrates the challenge in sort of like the design of assessments. You know, uh enforcement isn't gonna be enough. You actually have to really rethink design. And I think that's why it's so important for us to like dig into what it looks like when folks like Nick and Walter are rethinking, you know, the design of a class and sort of the assessments that go into it.

SPEAKER_00:

Inside the mind of a lot of teenagers is the belief that they will potentially get caught. Like that scare tactic is so real. So I think it's really interesting that that's just kind of uh folklore, it sounds like.

SPEAKER_02:

You know, there are detectors out there. Um, they're not, you know, 100% accurate. And I wouldn't necessarily share that. You know, if I was a teacher, I would be telling my kids, I'm using all of the latest and greatest tools to like catch you. I'm using tools that you don't even know about. So just be careful. Um, but if I was a teacher, I also would be thinking about, you know, what are some other creative ways that I could approach the design of my class? One thing I want to say about this is, at first, when I was hearing some of their solutions, my initial reaction was like, oh, I don't know if that's actually like hack-proof, because you can still use AI to critique itself. And you can also use AI to write a reaction to the critique of itself. But then they added something very important, right? Which is, we made it very clear to the students that they're gonna have to come into the class and be able to answer questions and sort of be put on the spot. And what's beautiful about that is it's actually a relatively small modifier, but it totally reshapes, I think, a student's approach to the project, because everything they're doing now is with this idea in mind that like, I don't want to be put on the spot in class and be found out, you know, for um not having actually done the work.

SPEAKER_00:

Yeah.

SPEAKER_02:

Literally any teacher can incorporate some kind of a mechanism like that. And so I just love how applicable that example is.

SPEAKER_00:

And to the point about, like, what do we really want kids to learn? You know, when you can teach something, it means you really get it. And that's kind of where it seems like they're coming from. Up next, Alex, I'm looking forward to hearing your conversation with Shantanu Sinha.

SPEAKER_02:

Thanks, Aliza. I was really excited to get the chance to meet Shantanu. Uh, he's a bit of a legend in the education nonprofit space because back in 2010, he became the founding president of Khan Academy, a nonprofit organization dedicated to providing quality education to anyone anywhere. You've probably heard of them. They have a massive online platform. Um, today he is the VP and general manager of Google for Education. So here's my conversation with Shantanu. Is what students need to learn in school different today than it was, you know, let's say, four or five years ago? Not that long ago, but before this sort of age of AI that we find ourselves in.

SPEAKER_05:

What AI is putting more pressure on is to think less about the end products and more about the process. Because if you really think about what education is about, when I think about my own kids, when I think about uh even myself as a student, there's the content that we learn, right? I may have learned how to uh do an integral, I may have learned um all of these concepts, but then there's the skills that we got along the way: the critical thinking, the problem solving, the creativity, the communication. And in life, those are the skills that you use every day. With AI, it's putting more pressure to focus less on the output, because anyone can put the right answer into the box, and much more on the process. How did you get there? How did you think? How did you uh really communicate with others? What was your teamwork? What was the process of the learning experience?

SPEAKER_02:

Yeah, and you're getting at, I think, something that's, you know, maybe concerning to parents and to teachers: they see the power of these technologies, and there's this concern about it becoming a crutch.

SPEAKER_05:

You know, when you give students technology, they can start to use it as a crutch in the wrong ways: I just came back from school, I had basketball practice, I want to, you know, talk to my friends, and I've got an essay due in two hours. How do I get that done as quickly as possible? But then there's the longer-term thing, which is ultimately you're gonna have to show this on an exam and on the final. Ultimately, you're gonna have to have these checkpoints where you're demonstrating that you really did learn this material. And students need to ensure that they're not shortcutting their way through that. It actually reminds me a little bit of when I was uh teaching my oldest son how to ride a bicycle. And I put training wheels on his bicycle. And very quickly, you know, he's pedaling and he's getting from point A to point B and he's loving it. Right. As soon as I took the training wheels off, he would fall, he lost his confidence. It was a really difficult experience. And then a friend of mine was like, well, your problem is the training wheels, because the hard part of riding a bicycle is learning how to balance. And the training wheels are actually making it harder for your son to learn how to balance. And I think that analogy in many ways applies to how people are using AI today, which is if you focus the user on just that endpoint of putting an answer into the box, AI is gonna help you quite a bit on being able to do that. But the question is, are you actually getting the right skills along the way? And are you actually developing those critical thinking, those problem-solving skills? So for an educator, I think the way to think about it is put much less focus on just seeing an end product and much more on really being able to have artifacts along the way. So one of the things that I've seen in my son's school, what the teacher does is, on the first day, they say, on paper and pen, write an outline and brainstorm ideas and turn that in.
And then the next day it's like, well, go collect your sources and share that. And the next day it's, you know, have a first draft. And actually, it's okay to use AI and get feedback from AI on how you can make your essay better. And it's really this judging the different steps along the way. This is why it's so important for parents and educators to play a role here, because students on their own will have a really hard time making that choice. There's not a lot of times that you look at something and say, let me take the hard way to do that. Uh well, there's an easier way right here. But it really takes somebody with broader judgment to say, well, what's the point here? That I think educators can really support students to really make those appropriate choices.

SPEAKER_02:

Some students are going to be applying to college or to scholarships, and there is sort of this dreaded application essay. Can you like walk me through, practically, how should or shouldn't you be using AI as you're going about writing the essay? Like, could you just talk through what that process would look like in sort of like the ideal scenario in your mind?

SPEAKER_05:

What people are really looking for is to understand the student deeply, right? To understand, like, what part of the personality can come out through this vehicle. And I think it's important to remember that, because AI is not going to really be able to do that for you. Ultimately, even when I think about the essay, it isn't about the most beautifully written work. It is about who you are as a person, right? And that's what that essay is fundamentally about. And I think, uh, like I said, I don't think AI can capture that. That has to come from you.

SPEAKER_02:

So, you know, I've talked to a lot of teachers over the last couple of years, and a lot of them are feeling overwhelmed by the technology. And one of the things that I tell them is, you know, you're in good company. I'm feeling overwhelmed, and business leaders are feeling overwhelmed. What would you say to teachers who are trying to like sort of make sense of uh all of this new stuff that feels like it's happening to them?

SPEAKER_05:

Yeah, it's something we think about quite a bit, which is how do we support educators through this change? How do we make sure that AI is being integrated and incorporated responsibly in education systems? So we've been investing quite a bit in building tools to really bring teachers into the loop in that process. And also for students, right? They don't know what's acceptable and what's not, right? Like mom GPT, where I got help from uh my mother on this problem, that used to be acceptable. So if I do the same thing with Gemini, is that okay? Like, where's the line on this stuff? So it's actually very valuable for us to give teachers tools so they can say, this is the approved, appropriate way that I want you to be using AI. So we've done things like, on Chromebooks, creating class tools where teachers have much more control over the devices in a classroom. They have the choice in the classroom to say, for this assignment, I want you using just this window to do this assignment. Please don't um use AI to help. Or they can uh kind of give a different assignment and say, well, here's the custom Gemini gem that I want you using for this assignment, but you're only allowed to use that. So really the work here is for us to uh make sure we're providing the right tools to minimize the downsides of how people are using this technology and maximize the upsides of it, so they can really create those richer experiences.

SPEAKER_02:

And I think the call out to teachers is, you know, practice what you preach in a way. We talk about lifelong learning and curiosity, and that's important for students. And it's especially now also really important for teachers to just approach this with that curiosity, you know, how might they, you know, like leverage their expertise to level up their classrooms for this new world that we find ourselves in. Shantanu, this was awesome. Thank you so much. All right, thank you. Really enjoyed it, appreciate it.

SPEAKER_00:

I think it's so fascinating what Shantanu said about AI putting more pressure on the process rather than the end products when it comes to learning and education, because, as we've talked about, process is so hugely important in learning, whether you're talking about AI or not. I think we often miss the point or the purpose of learning at times. So to think of AI as a way to refocus that, especially when achievement pressures are at an all-time high, means we can refocus everybody on what actually matters: the process. The outcome of learning is actually that you went through the process. Learning how to learn, learning how to think, that is the huge task. Thank you so much for listening. Join us again next week as we discuss how to prepare kids for the future of work with AI. We'll hear from scientist Ashley Kaiser.

SPEAKER_01:

I don't think that we will ever run out of problems to solve or things to optimize. Um, scientists, engineers, pretty much everybody in STEM, I think we would all agree that even when you wrap up part of a project, you're not really done.

SPEAKER_02:

And we'll hear from Google's chief technologist for learning and sustainability, Ben Gomes.

SPEAKER_04:

Very often I talk to people about what they've chosen to do or what they want to do. And very often they want to do that specific thing because they know one person who has done that specific thing and succeeded. But the world is a lot broader than that.

SPEAKER_00:

They'll explain how AI is already impacting the workplace and how it will continue to evolve. Plus, learn what skills they think your children will still need in the future to be successful. Find out where AI will take us and future generations. Next, on Raising Kids in the Age of AI, a podcast from aiEDU Studios created in collaboration with Google.

SPEAKER_02:

Until then, don't forget to follow the podcast on Spotify, Apple Podcasts, YouTube, or wherever you listen so you don't miss an episode.

SPEAKER_00:

And we want to hear from you. Take a minute to leave us a rating and review on your podcast player of choice. Your feedback is important to us. Raising Kids in the Age of AI is a podcast from AIEDU Studios in collaboration with Google. It's produced by Kaleidoscope. For Kaleidoscope, the executive producers are Kate Osborne and Lizzie Jacobs. Our lead producer is Molly Sosha with production assistance from Irene Bantiguay, with additional production from Louisa Tucker. Our video editor is Ilya Magazanen, and our theme song and music were composed by Kyle Murdoch, who also mixed the episode for us. See you next time.