What Teachers Have to Say

Scaffolds Were Always Meant to Come Down

Jacob Carr and Nathan Collins




Jake and Nathan just got back from their third Stanford AI + Education Summit — The AI Inflection Point: What, How, and Why We Learn — and a week later, they still can't stop talking about it. In this episode they dig into the tension at the heart of AI in schools right now: how do you protect the human skill development that education exists to build, while letting AI do the things it's actually good at? They get into the AI Assessment Scale, why cheating is the wrong frame, what it means when kids turn to AI for emotional connection, and whether the "perfect tutor" is the answer anyone thinks it is. Honest, critical, and grounded in classroom reality.

Referenced in this episode

Stanford AI + Education Summit 2026 The fourth annual summit, held February 11, 2026. Full conference on the Stanford HAI YouTube channel.

AI Assessment Scale (AIAS) Developed by Mike Perkins, Leon Furze, Jasper Roe, and Jason MacVaugh. Five levels of acceptable AI use — from no AI to full AI with the student as director and evaluator. First published 2023, updated Version 2 in 2024. Adopted by hundreds of institutions worldwide, translated into 30+ languages.

Matt Miller — AI for Educators Source of the 12 cheating scenarios Jake has been using to poll educators across the country. Miller also runs Ditch That Textbook.

Google AI Quests Free, code-free, game-based AI literacy tool for students ages 11–14. Students step into the role of Google researchers solving real-world problems in climate, health, and science. Co-developed by Google Research and the Stanford Accelerator for Learning. Complete lesson plans and teacher guides included.

Ethan Mollick — Co-Intelligence: Living and Working with AI (Penguin, 2024) Source of the centaur/cyborg framing. The centaur divides labor strategically between human and AI; the cyborg integrates the two fluidly within the same task. Mollick's Substack One Useful Thing is one of the more practically useful ongoing resources for educators thinking about AI.

Cheating research Jake references "Cheating in the Age of Generative AI: A High School Survey Study of Cheating Behaviors Before and After the Release of ChatGPT," Computers and Education: Artificial Intelligence (2024). Note: Jake mis-attributes this to Stanford; the study appeared in the journal cited here. Key findings: overall cheating volume stayed stable after ChatGPT launched; students who self-reported higher AI competence cheated less; and clear boundaries and consequences remained the strongest deterrent.

A note on homo technologicus In the episode the term is attributed to Yuval Noah Harari. It circulates in academic commentary on Harari's work but doesn't appear to be a direct Harari coinage. The concept maps to themes in Homo Deus, but we can't confirm the specific term originated there. We're leaving it as spoken and flagging it here.

Got a question? We'd love to answer it! Leave us a voicemail on SpeakPipe: https://www.speakpipe.com/whatteachershavetosay

Want more EduProtocols from Jake? Check out his book at Amazon, Barnes and Noble, and more.

1 (00:00)
I think just zooming out, this might be the most powerful technology that humanity has ever created. And so we should at least have some assumption and curiosity that that would have a big impact on education, both on the opportunities and the risks. I think that's just kind of my starting point. This is a very big deal for humanity and it'll probably be a big deal for education. Not a bad way to start an episode, I think, right? So just that. That was one of my new favorite human beings, Neerav Kingsland.

from Anthropic. And that was a clip from the AI and Education Summit 2026 at Stanford University called From Possibility to Progress. And that dude dropped truth bombs left and right.

2 (00:40)
Dude, Neerav was my favorite. Talking to him afterwards was like, he said to me directly, we just need to become better people. Like, just that. Just that also. Wow. Okay. Hi. Well, if you've forgotten who we are, I'm Nathan.

1 (00:48)
Yep, we need to become better people.

Jake.

Welcome back to another episode of What Teachers Have to Say on Hiatus.

2 (01:02)
Woohoo! Yeah, we're both deep in our careers. I am now becoming an elder mentor to other teachers and all these other things and you know, and you're over at the district office. Yep.

1 (01:17)
breathing AI all day every day.

2 (01:19)
All

day every day for the entire North States and the state of California by extension.

1 (01:25)
Yeah, really exciting things happen, but this episode is fun. So basically we were able, for the third year in a row, to go to the Stanford AI and Education Summit. It happens in February, and it's arguably my very favorite thing to do all year. My very favorite moment in education, because it's industry leaders, it's thought leaders, it's researchers, it's students.

It's people like us, teachers, and more teachers. Even high school students are showing up there.

in one room all day with panel discussions. And this final, we're going to be talking more and more about it, but this final panel discussion: Neerav Kingsland from Anthropic, Susanna Loeb, who's a researcher at Stanford, Shantanu Sinha from Google for Education, and Rebecca Winthrop from the Brookings Institution. And it was moderated by John Hennessy, the Shriram Family Director of Knight-Hennessy Scholars. But really,

Neerav and Rebecca, they got into some conversations that we've been thinking about. And so we get into, I think, the overarching question for us, which is: how do we protect the human skill development, the human-ness of education, while allowing AI to scaffold the way there? I mean, we go down some rabbit holes.


2 (02:45)
So I was in a panel with the CEO of GoodNotes, Jason from Samin's company, I can't remember the name. It's Samin's company, well, not his company, but he works there now. Love you, Samin. And then two representatives from College Board and one from CTS, I believe. And so we're having this conversation about

1 (02:52)
Just refer to it as Samin's company.

2 (03:05)
the product itself, the AI product in the classroom, and they were very concerned with it being good. So good that students will engage with it, to mitigate some of the engagement problem that we're seeing. But that brought up that relationality piece, which is why it's so dangerous. That's the danger zone. We do not want children having relationships with artificial intelligence

agents. We do not want that to happen. It is happening, and that's the scary stuff.

1 (03:36)
You're like, not yet. Like, maybe 100 years down the road, maybe after some cultural shift. But we are not there now.

2 (03:44)
Right. And then, you know, to make the product better, it's not about making the product better for the kid because the buy-in for the kid is the relationality of the teacher. And so they're trying to make the artificial intelligence agent more relational to get engagement. But the teacher is the relational piece that needs to be there. And so like in the earlier panel with Amanda Bickerstaff, she was bringing up some data.

showing the like the relationality of AI and then how more dysregulated kids that have more challenges in their lives are looking for relationality. And the kids that are more, you know, secure in their lives and have more secure home life are not looking for that. They're looking more for it to behave as a machine. They do not need the relationality.

But her takeaway from that was that she was basically saying like artificial intelligence right now is revealing what is missing from life, like from the lives of students. Like you need a 1 a.m. thought partner, right? But then that 1 a.m. thought partner becomes a partner.

1 (04:52)
Right, and then you became friends.

2 (04:54)
Yeah. And I think that is, you know, obviously net bad. Even though there's data to say that it's a net good, that it can reduce suicidal ideation, I don't think that's the way. This is not the way.

1 (05:01)
Absolutely.

This is not the way. I mean, yeah, is it a scaffold? It's scaffolding what's missing in that person's life. What they're seeking is emotional connection, healthy mentorship, healthy parentage, healthy whatever, right? And so it's not that. But at the summit, I loved, I don't know where it was in the eight hours we sat there, but somebody was like, well, don't forget, scaffolds were always intended to be removed. Yeah.

2 (05:11)
Yeah, maybe it's a sky.

I've been saying that to everyone that will listen. Like it's legitimately like...

1 (05:38)
And yet also, why do we forget that in the classroom? Scaffolds were always intended to be removed, because we built capacity. They're the training wheels, not the actual vehicle.

2 (05:51)
Yes. Scaffolding is just the frame to support the work to create the thing. Yeah. Not.

1 (05:58)
to get them up to that, to fill the gap.

2 (06:00)
Yeah, it just seems like we've taken scaffold and over time, like the meaning has become crutch, basically, or to put it more kindly, modification. You're bound in it. It's a modification that you just have, which you may or may not need. But I think most of the time that's doing a disservice.

1 (06:21)
Like, I get two or three emails every week from some AI startup company asking me to jump on a Zoom and share my expertise. And we used to call that colonialism, just extracting resources. It's extracting resources. That's all.

2 (06:36)
Dude, you went there.

It so is. But that is, you know, to say it, the capitalist mindset of getting something for nothing. Our friend Sam, the most amazing

1 (06:46)
That's like when we had our friend Sam, the most amazing capital

handler on the planet. He was the perfect disconnect. That man was intelligent, emotionally and cognitively super smart. Oh man, he was a whiff of

2 (06:57)
He was.

Yale. A Yale man.

He was a Whiffenpoof.

1 (07:08)
that was the best thing ever. But also working in venture capitalism in EdTech, and a 10-minute conversation had him in the fetal position in a corner, quivering, because he didn't understand how the school system actually was running.

2 (07:11)
venture capital.

I remember you were like, okay, so LA Unified is this. And they're going to take three years. Like, literally, the look on his face was so incredible.

1 (07:27)
Three years. To adopt a math curriculum.

2 (07:36)
The praxis of the classroom, the reality of the classroom, is something that they're very divorced from. And definitely, you know, Sam acknowledged that too. We just had to push him a little bit. He's like, yeah, I'm very privileged in a very privileged area. And then I was like, so tell me, why does venture capital ruin everything? He's like, there are perverse incentives.

which is the most beautiful way to put it. And I so appreciate him for saying that. I've been telling everybody that will listen: there are perverse incentives. But that also gives me some pause, like, what are the perverse incentives that we're not seeing right now?

1 (08:18)
Well, and I don't remember who brought it up, you or me, the syllogism or whatever, where we're like, the purpose of education is to build better humans. It's to impart cultural knowledge so that the next generation has the platform to build the next version on, right? And the purpose of capitalism is to amass wealth. And so what happens when you have two diametrically opposed

purposes working in the same thing? And that's when I was like, how do we stop this from becoming the phonics versus whole language problem, right? When we understood that the whole language thing was wrong, and the publishing companies wouldn't move away from it because of loss, you know, sunk costs, all these things, the money. And now we have a generation that's having a rougher time reading than they should because

2 (09:05)
because of the money.

1 (09:11)
teachers, well-meaning teachers, were taught how to do it, but the publishers were laughing all the way to the bank the whole time. Perverse incentives. So one of us brought that up with Sam, being like, how can I ensure...? Right? Because so much of the summit was like, what does education need from industry? And it was like, well, when you realize it's going sideways, but you're making a lot of money, how will you divest yourself from it?

when the whole purpose is to extract resource and wealth to amass resource and wealth, which is absolutely the opposite of education.

2 (09:44)
Right.

And then I, I remembered his response was like, okay, but what about profit sharing?

1 (09:53)
What? So now you want the board to have... you're gonna line their pockets if they keep using your product? That's a good idea. Oh, that's when I brought up the Coca-Cola and sodas. Yeah. And I'm like, they were all over campuses because they were profit sharing, and that wasn't good for kids, but they kept doing it until laws kicked them out.

2 (10:12)
I think that's why Neerav's point hits so hard, where he's just like, yeah, but all we need is the perfect tutor. Cause I think he's coming at it from: a perfect tutor will democratize learning, period. Like, if it's accessible in the same kind of way as the MOOC thing that they're talking about, where it's like, it revolutionized

the lives of youth, young people, students in developing countries. I think Neerav is on that.

1 (10:44)
the giant free open source classes.

2 (10:47)
Right, exactly. Like code in place.

1 (10:49)
just get access to it, they have the possibility of learning.

2 (10:54)
Exactly. And so I think that's where Neerav is coming from, where he's just like, once we have a perfect tutor, nothing else will matter. I loved when he dropped that. He was just like, yeah. And me and Samin went up to talk to him afterwards. And, you know, Samin was kind of pushing him, like, you said you were an English major. How's it...?

1 (11:02)
and former English teacher.

or something, at Anthropic.

2 (11:20)
Coding? He's just like, I don't code at all. Like, I don't have to. I'm gathering information, I'm training, I'm reflecting. You know, what he's bringing to the table is a different thing. All of those skills that teachers discount for themselves. Like, he was an English major and he worked in schools for like 10, 15 years after Hurricane Katrina, rebuilding their educational system.

Like it's very interesting to me that Anthropic has a guy like that on deck and sends him to represent their company at Stanford.

1 (11:55)
It

also makes sense that it's Anthropic though, and not that they don't exist other places, but like, Anthropic is so transparent about the weird stuff they do.

2 (12:03)
They are. I love that they post all their system prompts and everything. They're just totally clear.

1 (12:09)
Hey, this was really scary for our researchers. Read it.

2 (12:12)
Yeah, yeah, I love that. Yeah, transparency is going to be really, really important. Yeah, as we move forward.

1 (12:18)
Well,

and also, like, speaking of him personally, right, he mentioned he has children that go to public school in Oakland. You know, I'm sure that they don't live by Jack London Square. Their neighbor might be Mike from Green Day. Yeah.

But I think that's why he's so fascinating, because we're like, you cut your teeth in education, you've been in the classroom long enough, high school English teacher, and you've seen poverty, and you're raising children. They sounded like they were pretty young. But there is still this weird disconnect where he's like, well, if we just have a really good tutor, an AI chatbot.

2 (12:53)
But I think that's what it comes down to where it's like, it's the democratization. It's like, if we have this tool and this tool, everyone can access this tool. Yes.

1 (13:01)
Yeah, it has to be possible.

Then, like, what? Then we work on getting kids to it. So it's like, you can lead a horse to water, but you can't make it drink. We make good water.

2 (13:07)
Drink.

I mean, I almost, the

way I would relate to it from my millennial experience was getting a computer and access to the internet in the seventh grade. Like there was absolutely no oversight for me whatsoever with that technology, but it worked. And the internet, these were the early days of the internet. It still had innocence and freedom and freeform. Like it was very open source at that point. HTML was, you know,

barely 2.0. So it was like, yeah.

1 (13:39)
But no, Napster, that's for real.

2 (13:42)
The

peer-to-peer sharing. The internet was a very community-based thing where people would share information and resources with each other readily and openly and freely. And it hadn't been co-opted by advertising yet. Like, you know, Google was not the first search engine I ever used. And I remember even, like,

1 (14:00)
Why does venture capitalism ruin it?

2 (14:06)
applying for a Gmail account. Like it was those early days. So for me, that technology just existing in my life was pivotal. Like suddenly this very geographically isolated kid growing up in a very impoverished community, generational poverty, now has access to

all of this information in these various communities of practice that were existing on the internet at that time. And it totally changed my life. Like there's no way that I would be who I am now if I didn't have that machine. So I think for me, that's why that example hits so hard.

1 (14:44)
What you haven't mentioned is Linda Draper.

2 (14:46)
Okay. Yeah. Linda Draper was high school though. So it took me a few years to even like, be like, no, I should, I should learn. And there's a whole world out there that I don't understand that I need to understand. then, and then finding good mentors. Like I knew I needed the mentor. So I was looking for the mentor, which maybe is how this will play out where it's like, we have the tool, people access the tool.

1 (14:50)
But right, like.

2 (15:13)
then they understand they need a mentor.

Even though that's very uncomfortable to say as an educator.

1 (15:19)
Can we apply the hero's journey?

2 (15:21)
Sure, why not?

1 (15:22)
In

the beginning, ordinary world, then seeks mentor, crosses threshold into AI-ness.

you know, bringing back the elixir,

2 (15:32)
I feel like it almost is going to fall into that pattern. I mean, you know, hero's journey.

1 (15:37)
Because

we can't discount, I also don't want to be the mentor that dies because they always die. But like, RIP.

2 (15:44)
other.

1 (15:45)
And so at first I was really put off by Neerav's, like, if we just... If we just build the best tutor that does the thing the way that it should.

2 (15:55)
real information, perfect information.

1 (15:58)
I'm like, bro, I

don't need a better MP3 player. I need human connection. I need: how do you make meaning? But maybe that becomes

2 (16:06)
Right.

Yeah, I think that's the next.

1 (16:12)
to the mentor. Because the mentor is what makes everything work and flow and feel valid, the wisdom. And then the mentor goes away and you're left with that gained knowledge to return home and pass it on.

2 (16:25)
Dude.

1 (16:26)
Campbell.

2 (16:27)
Joseph. Joey Cam.

1 (16:30)
Hmm, I'm gonna have to think about that for like another year.

2 (16:33)
Yeah, right. I think that's just why it hits me that way, because I am a millennial. Like, I'm in that specific generational timeline where I had a youth with no technology. I mean, especially in my community. Yeah, we were hitting each other with sticks and rocks, climbing trees, going fishing. Like, we were not, you know... I had a vocabulary of 100 words.

We got. Yeah, exactly. But it's like, you know, I remember I really wanted a computer. I went to a school where people from the higher socioeconomic classes of my community went and like they had computers and I was like, that is amazing. I really want to have access to that technology that I'd like because the curiosity was turning on, you know.

1 (17:02)
Seventeen of which were excellent

2 (17:27)
And so I think that's why I see it that way. Like I lived in a very specific time and technology hit me at just the right time in the right way.

1 (17:35)
Generations would be an interesting way to look at this, because, like, I remember when our school library got a computer. That was like fifth or sixth grade, I think. It was an Apple IIe with the green screen.

2 (17:49)
Oregon Trail.

1 (17:50)
Oregon Trail. No, we didn't even get to play Oregon Trail in school. I was old when that happened. Spoken like somebody from the late 1900s. But to me, computers didn't really matter until I had moved home from Europe at 20, 21.

2 (18:07)
Right.

1 (18:10)
and started going to college and people were like, you don't have an email yet? Yeah. But I do remember in high school, my teacher went to some... my Mike Jenga honors English teacher in like 11th or 12th grade, he went to some kind of national conference.

2 (18:14)
You know? Yep.

1 (18:28)
And he came back and he was trying to explain to us. He was like, it was the weirdest thing. I typed a letter and then a couple of days later I came back to the booth and they had a reply from my brother and his brother was like a professor at some university in Brazil. And so that's how they were able to do email. And he was like, I don't understand it, but there was a letter from my brother, you know, it's like, that's my beginning of.

2 (18:51)
Yeah. Yeah,

it's like the telegraph version of email.

1 (18:56)
and then moved away, you know, and really dropped out of kind of the world for a while and then came back and... But I had like amassed all these foundational skills through blood, sweat, grit and tears.

2 (19:10)
you next.

1 (19:10)
Gen

X, way before. Like, I remember my parents did buy me an original Apple Performa 450 in like 1994, my graduation present, but it was really just a word processor.

2 (19:28)
Right. Yeah, it's just a fancy typewriter. That's what a lot of early.

1 (19:31)
computers were. And I could play Alley Cat on it. Yeah. And then I remember, just a couple of years later, it would have been 97, 98 when I started community college, waking up to it and being like, oh, this is... But I guess back to my point, and I'm meandering a lot: I feel like yours hit in a formative way. Mine was in an augmentative way.

2 (19:57)
Yes.

Right, so later.

1 (20:02)
It

was later, I was an adult and I was married before I really had a computer that I was working with.

2 (20:08)
This makes a lot of sense. Right? Yeah.

1 (20:10)
Where

you were forming around it, I was going from cassette to CD. You were exploring what music was.

2 (20:16)
Thanks

Right, yes. That's a great way to put that, 100%.

1 (20:22)
But now we have kids that are, I mean, if you think like the kids that were born, you know, the kids that are born in the past couple of years will never experience a world without artificial intelligence. And that is bizarre.

2 (20:32)
I know.

Well, I mean, all our high school students, you know, they were born in like 2007, 2008. So they don't know a world without smartphones. And so when I tell them, I'm like, this device completely shifted my adult life. So I think for me, I'm closer to the social media phone thing, where the computer was to you. It was just sort of like, all right, this is

1 (20:58)
I

got one because Julie was pregnant and we were going to have a kid and at some point I needed that instant phone call to go to delivery. That's why I got a cell phone.

2 (21:04)
Yeah.

And mine

was, I need to calendar and schedule and email and do the things I would do on a computer, on the go. And then you look at Gen Z, and their relationship to their devices is very different. It is not a transfer of skill. It is a whole new...

1 (21:29)
It's like students that you talk to and they're like

2 (21:31)
Yeah, like a lot of times students will try to work on their phones or will in my class if their computer's dead because they never charge their computer. Why don't you guys charge your computer? I have five extension cords in my class and multiple splitters just to keep you online.

1 (21:46)
Success and equity.

2 (21:48)
Right? But, but yeah, I think it's a different relationship to technology depending on

where you are developmentally.

And I don't think that's something that anyone is talking about.

1 (21:59)
So I go back to your previous point though, about kids who are dysregulated. Like, we live in what used to be the county with the highest ACEs scores in the nation. Yeah, now it's someone else. Someone in Utah. I mean, in Oregon, the last time I looked. Not a trophy you want, you know. These kids, we have pockets in our county that are so deep,

2 (22:05)
Yeah.

Is it not? Who took our crown?

1 (22:25)
you know, generational poverty and trauma, that they know no other way. So I think back to what you said about you finding the computer, me finding the computer, and then now this kid finding an emotional support person in AI. Right? And so, like, what does it even mean if a seven, eight, nine, ten-year-old,

before their brain is even really working in abstract thought, it's all concrete, has found this thing that makes them feel good? It talks to them when they need it to, they can tell it their secrets, it helps them. It's like the mentor that we all sought through life. I think about the mentors that I found. But that mentor is not human.

2 (23:10)
Right.

1 (23:12)
And

like, I don't wanna ever hobble the future, but like right now, like I don't love the idea of somebody's primary mentor not being of the same species.

2 (23:24)
Right. Yeah. I think that it should be human.

1 (23:29)
always

reserve the maybe, in the future, I don't know. Yeah. But right now, like, what does that mean, what you find? And right, especially the bots right now, like the sycophancy and

2 (23:40)
the relationality is way too high. It's way too high. And then there are, you know, relational specific agents too, you know.

1 (23:43)
way too

that are tuned for that, and I think that there is a lot of good that can come out of it, but it's kind of narrow.

2 (23:57)
Very narrow. With a lot of care. Yeah, and also a lot of care and a lot of... So, you know, open honesty on this podcast always. Like, I have definitely used artificial intelligence for mental health. Oh, absolutely. 100%. Yeah. Because it's tracking my thoughts, kind of. Kind of. It's that I'm putting data into it

that is my train of thought, that is parts of my life. You know, I've been working with AI agents since the beginning. So my ChatGPT, I was in the first 1% of users. You are as well. So our memory is old. It's seen me go through career changes, relationship changes, real life things. And I can go to it and be like, why am I so mad about this thing that happened at work?

Like I don't understand what is triggering me about this. Can you help me figure that out? And that is awesome. That's amazing. It's so good as just an extra layer of emotional self-regulation. I'm just like, all right, I can get some immediate perspective. Would that replace a therapist? Absolutely not. But for someone with a well-established protocol for mental health like I do from going to therapy and learning about all these things.

1 (25:17)
adult brain.

2 (25:18)
In an adult

brain it's very useful, but to a kid?

I don't like that. AI would have to be so, so perfect. And even if we had the perfect tutor, I don't think that that's...

1 (25:28)
I think that being perfect is actually a problem. Right? Like, mentors can be messy.

2 (25:33)
Yeah, they usually are. They usually are.

1 (25:36)
Cause there's the navigation. And then I'm also wondering, like, heaven help us, you know, let's get a psychiatrist in here, or somebody who works with the brain, to say: me as an adult, forming whatever relationship I have with my LLM, when I ask it for parenting advice and mental health and things like that, I then review the output with my adult brain, and I sort it and take what's interesting and what's not.

How is that different from our, you know, six year olds who are forming their ego at the same time that they're using relational time with an AI? I don't even know.

2 (26:14)
Right.

Me neither. And so that's where I'm very much pushing back on the "we need to make the tool better," because it seems to be, from tech's side, that relationality from the tool is what is going to help it access children, basically.

1 (26:32)
that's the problem, but we also don't want bad tools. So you're like, how do we make a better tool that doesn't become cocaine?

2 (26:39)
Right.

1 (26:39)
Woof.

2 (26:40)
Yeah. So the mentor piece, yeah, it has to be.

1 (26:44)
You

know, it's that human in the loop. That term is so important that I think we just use it as jargon now. Maintaining humanality, maintaining, you know, the interaction of my personhood with its flaws with your personhood with your flaws. That's what I worry about with the perfect AI:

not having flaws, or its flaws are so different that I don't recognize them. You know, it's like our models are getting so good that the level of expertise needed to recognize the hallucination is outpacing the user. They can no longer recognize the hallucination because the expertise required is so high. And real expertise only forms from the messy middle.

2 (27:10)
Right.

1 (27:30)
That's how you... you know, I think that's how I feel. I'm not an expert in this, but I think anthropologically, you have to navigate broken systems in order to understand that the system is broken, you know. And like, messy moments and getting through them, that formative friction. And if all you're getting is "you're so good and here's why," even if they tone down the sycophantic nature of these LLMs,

I still think having too perfect of a model is gonna be an issue.

2 (27:59)
Well, I think this is a really good point to transition to the other thing that we wanted to talk about, which was AI and assessment, because that came up through this and continues to come up. Some anecdotes from my class currently, as in a couple of weeks ago, right? I'm noticing this pattern. High school students.

1 (28:16)
Yeah.

context, high school students enrolled

in college English.

2 (28:24)
And enrolled in college English. So I'm a college professor teaching high school students, in a rural setting. I have to, you know, obviously teach in a developmentally appropriate way. There are definitely going to be emotions and behaviors that I would not see in a college class. Sure. But it's a really, really interesting place to be. And it's a really interesting setting. And I get so much great data from it.

1 (28:30)
in

2 (28:51)
And so in this piece, students are imagining that what I want from them is perfect writing. And so they don't quite understand that what I'm assessing is actually how they're formulating their thoughts, how they're structuring and ordering their thoughts, and how they are citing the sources that we're reading. All right, those are the things that I'm

assessing: how they're integrating sources into their writing. And so what I'm seeing from students is they will draft a response, and I'm like, all right, walk me through your workflow. Like, I'm not calling you out, but this is definitely AI writing. So walk me through your workflow. And a lot of it is, I see it immediately. Yeah, I recognize it. I can have these conversations. And so

1 (29:33)
recognize it.

2 (29:39)
It's like, all right, walk me through your workflow. And a lot of it is very messy but very interesting writing that's then processed through an AI agent. And it becomes perfectly sterilized and perfectly ordered. And I'm like, this isn't what I want. This isn't what I'm assessing. This is not what I want to see from you. I don't care about this at all. I don't care about the perfection of the writing. And they're thinking, college-level class, perfection of the writing.

1 (30:04)
Everything

has to be turned in.

2 (30:05)
Everything

has to be perfect. And, like, you are actually shortchanging the both of us. One, I can't see what you're thinking, right, actually, in real terms, because to me, the writing is a reflection of your thinking. And so I can't help you with that. I can't help you structure your thoughts, because I don't even know how they're structured. Also, there's no voice. There's no, there's nothing about this

2 (30:29)
that is particularly interesting. It shoots for the middle, right? So then they're thinking I'm assessing something that I am not. And honestly, that is something that I need to be more clear about, and I do need to really, really probably dedicate a whole unit to the AI usage scale that you've brought to my attention.

1 (30:54)
assessment scale.

2 (30:55)
Yeah, assessment scale. Yeah. Because in that, like, I'm also trying to assess how they're using AI at the same time, you know, and guide them in that way. But yeah, if you're just processing, processing everything. And the hardest part, too, is it's often students that have IEPs, where they think that I want the writing to be perfect, and the writing is so messy that they don't think I'm going to be able to

parse what they're saying, or they're embarrassed by it, because our writing is usually social and shared with other students, so they want it to look better than it does. The embarrassment or fear. And so they think the product is the thing, but it's the process that is the thing. And so rethinking assessment for process, I mean, that's something we've been talking about on this podcast for years, but

1 (31:33)
12 years.

It's becoming so imperative. Yeah. And messy.

2 (31:48)
Very messy. Take me through the...

1 (31:50)
Yeah, the AI assessment scale, it's got five levels. My pushback to begin with: I don't think that it is the final tool we'll have, but I think that it's a really good place to start conversations. It was researched by a number of researchers whose names are on the website and not in my head.

There's five levels, and so level one being no artificial intelligence usage, right? And so I jokingly say that it's like the kid with a slate board lying in the hayloft. You know, and then...

You know, level two is basically in my brain, it's UDL. And so that's like, can you use AI to gain greater access to the knowledge? So it's like, if you gave it a writing sample, it's gonna levelize it, it's gonna pull out vocabulary. It's the scaffolding. How do I get to the information? And then the student does the rest of it. And then three is collaboration. And that's now like,

2 (32:33)
and ultimately score.

1 (32:43)
I'm working back and forth a little bit. It's not really finishing the draft, but kind of neither am I anymore, right? So it's taking the output. I do think that level two is the sweet spot.

for most places in education, and three seems to be the adult way of working with LLMs. And then you've got four, which now, like, the AI is doing all of it and you've become an assessor, essentially. Like, you're making certain that it's right, you're doing QC, quality control, and steering it. And then level five is basically, like, I don't

2 (33:00)
Right.

1 (33:16)
care anymore, use AI every way possible, but take us somewhere we've never been before. Right? That's kind of the, all the rails are off. It's a great scale. We've talked about it on the pod before, but I'll bring it up again. Matt Miller, in his book AI for Educators, gives these 12 scenarios of student cheating, and they're all through, like, a language arts lens. And so I threw that into a Google Form a couple of years ago and I've been

polling educators around the country. And I should say it's not just- Yeah, I have. I'm really sad. I had a data set of about 1200 respondents and I lost it when I changed schools. So I'm back to about 400 and gaining. Basically it takes this scenario of, like, you know, a student with a slate board lying in a hayloft.

2 (33:43)
doing that for you.

1 (34:01)
no AI was used, and then teachers, or the educators, and to clarify, it's, like, adults in education. I've had everybody from janitorial, secretarial, bus drivers to superintendents, classroom teachers, kind of any adult working in the space. They have to rate on a four-point Likert scale from not cheating to absolutely cheating, right, and then somewhat is in the middle.

And so that goes through all these scenarios, these 12 different scenarios of ways that kids use AI to, quote, cheat or not, all the way down to, like, the student copied and pasted the essay prompt into the LLM, it generated the essay, and then they pasted it back and turned it in. And so what I love is I have people take this. I was just in Alaska at ASTE.

I had a whole room of educators take this, and then I have a discussion, like, what did you see? What do we think was absolutely cheating? And the room generally is like, yeah, that one, totally, a hundred percent cheating. I'm like, what was one that was totally not? And they're like, yeah, kid in a hayloft, a hundred percent. But then I'm like, what about the middle?

2 (35:04)
Right.

1 (35:04)
And like

nobody wants to raise their hand. They feel it, but they don't know. And what I love about this data set that I've been collecting is it's not really helpful other than I scroll through the pie graphs and I say, just look at it visually. Don't worry about what it is. Just visually look at the dispersion of the pie graphs and all that messy middle is like.

25 percent, 25 percent, 25 percent, 25 percent. And so I use it just to show that, like, if we are the bastions of knowledge, we the educators making the decision on what is cheating and what is not, we have zero consensus on what cheating is at these levels. And they're pretty good scenarios. They cover a good gamut of usage, even down to, like, kid in a hayloft. There's

not a, you know, there's not a small number that have said like they might be cheating. Might be cheating, you know, with AI somehow.

2 (36:00)
They might. They might.

Maybe their older brother's up there and took a class last year.

1 (36:07)
Yeah, and so, and then even down to, like, they pasted it and, you know, did no work other than copying and pasting, and there are people that are like, that's not cheating. They just have this, like, super futuristic point of view of, like, that's not cheating, it's obsolete, the assessment just needs to be bucked, is kind of what the people I've talked to about it say, right? But what I look back to is the AI breaking

2 (36:30)
assessment.

1 (36:30)
Yeah, they're breaking the assessment on purpose, because it's a BS assessment at that point. Which goes back to good old McKinsey and Company's deltas of breaking orthodoxies. It's a skill that our students have to have moving forward. Anyways, I digress. The AI assessment scale, what I love about it is, we don't have a cultural framework for students having this technology, right?

We don't have it. This is my new fun thing that I'm playing with: the cultural development of mores. You know, going from folkways through mores to taboo, because cheating is taboo. Cheating is where you have crossed the line into what is unacceptable as understood by the culture where you are. Right. But we're pretending that we have a taboo when we don't. We just have these really ensconced emotional opinions.

2 (37:01)
Yeah.

Right.

1 (37:23)
And so the AI assessment scale, what I love about it is it has some flaws in it, but it gives us a good beginning framework, a good place to begin saying, I, as the educator in the room, know that you need to develop this certain skill. And if you use AI in a way that circumvents that knowledge, you haven't done the thing I need you to do. Right. And I think that's the heart of the

cheating and the block-and-ban, because, like, well, if we just block it all, then we're protecting it. We know that that doesn't. Yeah, it just doesn't work. And then I always bring up my rebuttal of, what about the amazing things that AI can do with students with disabilities? Are you going to prevent them from using it? You know, right. It's an instant, like, shot across the bow, like

2 (37:55)
rights. But the national access.

Yeah.

1 (38:14)
Wait a minute, but what about this very real problem? So the AI assessment scale, though, to get back to it, the idea is that you spend time, and to be clear, we're not talking elementary, we're not even talking middle school, this is a high school and college thing, when they're using large language models, not just AI in the classroom. There's so many good AI uses in the classroom that have nothing to do with chatbots.

2 (38:36)
Right.

And increasingly products that are tailored just to that, like that AI Quests thing that we saw from Google. AI literacy. AI Quests is rad.

1 (38:43)
from Google.

with feedback, Curipod, with immediate generation of lessons for students and engagement. There's so many great ways to use it.

2 (38:55)
We should probably do another tool guide. Yeah.

1 (38:59)
But

yeah, great ways to use AI that have nothing to do with students in a chatbot scenario. But if you were to spend time going through individual workflows and saying, like, OK, this is the AI Assessment Scale, so AIAS level two. I want to show you. Let's take this writing sample, levelize it, make it simpler, give me an outline of it,

pull out the academic vocabulary, translate it into whatever language you need. Give me a reading guide, create a list of prompts that I can answer along the way to better learn it, right? All of those things, it's all UDL. And then you close the laptop and you get the paper out and you read the thing, now that you have guidance. And scaffold together.

2 (39:35)
Yep.

Get rid of the scaffold eventually,

1 (39:49)
And

I always jokingly say like, people call this cheating. I always learned it as pre-reading. Right? Jinx. Like I learned that it was pre-reading. You're calling it cheating now. Okay, we'll have that conversation again. But you know, so like to spend time to teach them, to skill them up on how to use an LLM specifically to do those things. And then with level three, the same thing, you go through the skills of co-authoring.

collaborating, judging, editing, is it correct, verification, all those things. And then level four and level five. And the idea is that when you have purposefully, almost like boot camp, purposefully trained students in the skills to use generative AI, a large language model, you know, specifically, usually Gemini, because Google is in schools and they have guardrails up for student usage. Then,

1 (40:38)
you would tag your assignments like, okay guys, this is the assignment and it has an acceptable AIAS of two.

2 (40:46)
Ooh, I love that.

1 (40:48)
So now it doesn't matter what I think is cheating or what the student thinks is cheating. What matters is we have a shared definition of acceptable usage. And then also, to get into kind of the dark side of the AIAS. So I've been starting to talk about this. I've held a couple of sessions around the county and online, and then now in Anchorage, Alaska, at this conference we were just at: the dark side of the AIAS

2 (40:59)
Yes.

1 (41:14)
is you have to be able to pinpoint the actual skills that you need assessed. And that is, it shouldn't be, but that is really problematic. I jokingly say, like from my presentation that I've been doing, I say, like, imagine that you're teaching the Industrial Revolution and you want to do an oral assessment where the student has to reply with like two,

2 (41:21)
Yes.

1 (41:38)
Two things that happened in the industrial revolution and the shift that that made in the culture, right? Standard boilerplate high school question for industrialization. But then I say like, I love tricking audiences. I'm like, do we think this is viable? They're like, yeah, it's a good question. I'm like, okay, but you can only do the oral assessment in French.

2 (41:49)
Easy peasy.

you

1 (42:00)
And like, I have a little working knowledge of French. I could even say Dutch. Like, I used to be a very fluent Dutch speaker. I'm pretty fluent now. But the reason I say that is we're not assessing my historical content knowledge anymore. We're assessing my ability to convey that in a foreign language. oui. C'est chouette. Ha ha ha. Well, then we...

Then I take it a little further and I'm like, okay, but that's a farce. Okay, cute. You get it. That's silly. But if I said the same essay, the same prompt was now a written essay, what are you assessing? And most people go to, oh yeah, they would have to assess their historical content knowledge. Do they understand implications of the industrial revolution? And then I clap back with, aren't you also assessing writing process, syntax, grammar, language, construction?

executive functioning, you know, all of those other things too.

2 (42:52)
Compound

1 (42:53)
all of it. It's a compound lift. Right? It's a compound lift.

2 (42:57)
Just to go back to powerlifting.

1 (42:59)
to former discussions. And so that's the problem: all too often educators, especially newer educators, unfortunately, like, they're not being trained in pedagogical strategies. They're not being trained as artisans, but they're being expected to be artisans and craft curriculum without the tools. And so, you know, or they're being given a canned curriculum and being asked to teach it with validity.

And you're like, okay, so here's this activity that's outlined in unit three, chapter four, and you have to do it because that's what your district says. What is it trying to assess? And they're like, well, it's vocabulary, or it's this. And it forces you to start seeing the secondary and tertiary things that are being assessed there. And so the AI assessment scale says: what has to be assessed, how do you protect that so that you can

2 (43:30)
Yeah.

1 (43:51)
actually assess it and can AI scaffold and lift them to there because we recognize that the other things, while important, are not important in this assessment. And so if we go back to the essay, right, if you're at like a level three collaboration and co-authoring, so now like the student is using AI to access the reading, maybe it's like literacy analysis, like critical analysis and literacy.

I mean literature. Take Wuthering Heights. Why not? There's a movie out now. So, like, Heathcliff, get off the dang moors. Right? Enough.

2 (44:21)
yeah. Crazy.

Those heights they be weathering.

1 (44:28)
But it used to be like, you know, discuss the theme of, you know, unrequited love in Wuthering Heights. And then the kid has to read the book and do it. But now through AI Assessment Scale 3, it's...

We're letting them co-draft. So the student's reading on their own, but also asking questions with this AI chatbot tutor to understand the knowledge more deeply. Like your kids do this. Back and forth and back and forth, having conversation with humans, having conversation with AIs, seeking clarity and annotation, really doing this kind of cyborg nature, right? Ethan Mollick talks about the centaur. We are both

2 (44:52)
a lot in

1 (45:07)
man and horse, right? We're man and AI, back and forth. And yeah, homo technologicus, from Yuval Noah Harari. That's the author of Sapiens. But then the end product is like, I've written an outline with AI, and then

2 (45:13)
Was it homo technologic?

from them.

1 (45:30)
AI drafts the first version of the essay, but then I take a pencil out and I'm editing it and I give my edits to the AI and it gives me pushback and then I ask it if I'm, you know, hitting cognitive bias and things and it exposes those and then I work with it back and forth and back and forth to craft this final product.

2 (45:48)
And Claude is very much moving in this direction. Like I love how much Opus... Opus 4.6, even the one below that, Sonnet, it will push back on you like a lot.

1 (45:54)
a lot more than I do.

2 (46:04)
Like, there were things that I'd assumed, that I'd gotten wrong, and it's like, actually, I'm gonna push back on you. You don't understand this exactly how the idea is meant to be expressed. I was like, wow. Like, it's gotten to that point where it's just like, no, this is the real thing. You have an understanding of it, but it's not quite accurate. Like, that's where we're moving toward the perfect tutor thing.

1 (46:31)
Because the sycophancy, if that's the word for it, it's that nature

2 (46:34)
It's got

to get away from that. It's non-existent in the higher levels of Claude. It's really more of a thought partner.

1 (46:43)
was the thing when they, like, changed the temperature overnight, and then people were like, I think that adding cotton candy to my steak would be a good idea, and it was like, absolutely, you know. And now it's like, mmm, you could do that, yeah, maybe, you know. So, but yeah, wrapping around the AI assessment scale, that's my hope for it. The unwritten problem is,

teachers have to be much more critical about their process and what they're assessing at a given time. And it's no longer valid to say, like, I'm assessing all of it. You're like, well, then they're just gonna use AI for all of it and fool you. And so I love that, even though there are issues. I don't like that it's a numbered and ranked scale, because

2 (47:11)
and what you were assessing.

Right.

1 (47:32)
it feels like we progress from one to two to three to four to five and

2 (47:36)
It's definitely

not a progression, it's just different styles.

1 (47:40)
I wouldn't have gone

with that, when it's like each is its own independent way of behaving and they're all of equal value, right, based on what you want. But my experience, anecdotal as it is: often teachers cannot quickly... When you show them an assignment, like I did, this is what I do: they pull up something that they've recently assigned, and I say, what skill are you assessing there, right? And they say, well, it's this.

And then I always push back, and like, what else? What else? Even down to your knowledge of the English language. You know? Yes. Not a problem; for a non-native speaker it could be a real problem. And so that's why I think that the AI assessment scale is a really important tool right now, because while we don't agree, we have to recognize, culturally, we

2 (48:15)
That level of specificity.

Yep, you do.

1 (48:32)
do not have a shared understanding of what it means to cheat with AI, but we do have an understanding of the importance of developing skill. Students understand that intrinsically. They'll push against it, but they do. They understand that we don't want their time wasted. And so now, if I tag an assignment as AI assessment level two or one or whatever, and they clearly use it

more than that, it's no longer my version of cheating versus your version of cheating. It's that we've established a community baseline. Yes. And you've tread over that. You've overstepped that.

2 (49:08)
I love that because every conversation I have had with students that are really overusing AI.

They'll fight you. They will fight you so hard, and they'll double down on, you know, but you said, and you do do that. And it's just like, no, dude, like, this wrote your paper for you. Like, things are still bolded and bullet pointed. Like, just straight up, like, rookie mistakes.

1 (49:28)
Yeah.


2 (49:43)
copying and pasting from AI into, like, final essays and stuff. It doesn't happen very often. I've had it happen a couple times. The little gray background, the different fonts, the bolded headers. It's like, bro, it is so brutal. It's so brutal.

1 (49:48)
Different highlighting.

Formatting. Just

copy, Control-Shift-V, paste without formatting.

2 (50:06)
Yeah, but they will fight you. So they're like, oh no, it's not that much. I'm like, bro, you do not know the word epistemology. Like, I love that. Oh my God. Your one student, the one you cited early on: I was a lover of books.

1 (50:16)
as a

But it comes down to the extracurricular problems. My pushback for Nirav and the things at Stanford is: you can't code the actuality of a classroom. You cannot code the reality of a student. And so with that student, right, as a lover of books, I do love that kid. Because seeking through curiosity and generosity was...

2 (50:40)
Thanks

Yeah.

1 (50:53)
you're not burned because you used AI, but: how did you use it and why? And you didn't learn the thing that I needed you to practice, so you do have to do it again. Yes. You know? But that, if you remember, that brought out, I don't remember exactly what it was, but there were, like, actual problems at home. Like, we've been traveling and I keep getting dragged around the state because of my siblings', like, you know, time constraints. And it was

2 (51:15)
Yeah.

1 (51:17)
I get it. it was at the same time that like a big chemistry thing was due.

2 (51:20)
Yep. So

yeah, they do not have the executive function to.

1 (51:25)
But also

like the motivational triad. Human biology says we will make choices that expend the least amount of energy for the highest amount of pleasure and the least amount of pain. Yeah. And so we are fighting biology when we ask students to not push the easy button.

2 (51:44)
Yes, you really, really are.

1 (51:47)
And like, this isn't being disrespectful. This isn't cheating. This isn't lazy. This is literally, they have a tool at their fingertips, maybe multiple on their phone, that will get the job done. There's a high likelihood that they'll get away with it, at least often enough to gamble.

2 (52:05)
Right. Especially with a disengaged teacher.

1 (52:08)
And then we haven't even said the darkness out loud, which is the assignment might just have been total bull crap to begin with.

2 (52:14)
I mean, there are so many assessments out there that I see that I'm just like, I would have done this with AI. Like, this is DOK1, dude.

1 (52:24)
If

somebody puts, like, a 12 fill-in-the-blank vocabulary sheet in front of me, I'm going to just take a picture and have AI finish it. It's not worth the payout. I'm not gonna learn anything from it. And yes, I am saying that out loud: the 12 fill-in-the-blank worksheet for vocabulary is garbage.

2 (52:30)
Yeah, no way, no.

You gotta do it different, folks. Gotta do it different. Gimkit. And Curipod.

1 (52:45)
There are literally a

hundred different ways that are more beneficial, where the student will actually learn. And you know what? You're gonna give them 15 minutes to do 12 fill-in-the-blanks? In three minutes I can have a class of 30 answer, like, 900 questions.

That's a really uncomfortable conversation in education right now. It's like, you're worried about cheating; I'm actually worried about skill development. Yep. And I'm willing to protect that ferociously.

2 (53:09)
this.

over everything.

1 (53:18)
But that means also allowing AI to scaffold the other things that I'm not protecting at that point.

2 (53:25)
Yes.

1 (53:26)
the

training wheels, the scaffold that eventually can go away, because also co-authoring, collaboration with AI, is the adult way of working right now. It will actually become the baseline way that we're working, very soon.

2 (53:35)
totally.

It will become standard work. Period. Like if you work in an office, that is your future.

1 (53:46)
Like, I don't want to misquote myself, so here we go: Salesforce announced that, like, the majority of their jobs is now slash-management, and they mean specifically managing a team of AIs.

Yeah, like, so, you know, here's your crappy fill-in-the-blank worksheet for a high school student, when, you know, six years from now they're expected to understand how to manage a chain of AI agents along with the content of their job.

2 (54:03)
Yeah.

Yep. And the content bank of their job, their institutional knowledge will be an AI that you are leveraging to do customer service, to do data management, to do, you know.

1 (54:28)
But then to push back:

the human side is still even more important. It will always remain of higher importance. So I heard just a terrifying tale. A superintendent in a quite AI-forward district. Something happened where there was a death involved, a death of a student in a car accident, something tragic. Something tragic happened. And so, as always, superintendents have to put out a

response.

2 (54:53)
Right. Message to the community.

1 (54:54)
to the

community, and this superintendent crafted what I understand to be a beautiful message to the community, and at the bottom was the tag, like, would you like me to turn this into... you know, something from AI. Yeah. And so the superintendent had worked through AI to create this and had copied it over, and all power to them. I think that's fine. I know, I know people are judging me on it.

But if you've lived it, it depends on, like, the trauma of that event. If you are expected to create a beautiful press release and you are living in trauma...

2 (55:30)
grab Claude. Yeah, you're not going to say the right thing. No, you're not going to say the right thing.

1 (55:36)
I'm gonna take it wrong,

but this person pasted the bottom of that output on there. So people reported, I guess, that it was such a beautiful message, but then they got to the bottom and they just...

2 (55:47)
Yeah, just a gut punch.

1 (55:49)
because

they had expected that it was a human-based thing and then they realized it was not. I also think that that line of thinking is gonna go away.

2 (55:59)
Yes, so.

1 (56:01)
Judging, was this AI written or not is already becoming less of a question.

2 (56:06)
It is. So the skill becomes even more important to emphasize because if you're co-authoring with AI, you have to have the skill. Otherwise.

1 (56:17)
You have to know what makes it good, a good humanity. You have to know that it's too long, or that it's not long enough, or that this one point that it glossed over didn't make any sense.

2 (56:19)
brief.

So, relevant example from my class. I have been doing this for years now. I mean, this is the fourth year of it, and multiple iterations because it's a semester class. So this is eight different iterations of discussing AI art. It's fantastic. I recommend anybody, anyone, get kids analyzing AI art.

It's so fascinating to see what their responses are. And so I've just been gathering data on this over time. Students used to hate it. Like, I think she was in the first-year semester iteration. So, the second year I was doing it, they hated AI art so much, that whole class did, and consistently for a few iterations there, the pushback

1 (57:00)
I think my daughter was in your first year.

All right, or was it the second? She... She...

2 (57:20)
against AI art was so intense, very much like, I hate this. It's soulless. And we would always discuss, like, why? Why is it? What is art, first of all? Like, we have to define what art is. And the definition always comes down to: human intention is what is important. Even to the more recent students, which are much, much more comfortable with AI art and just do see it as art.

1 (57:26)
Yes.

2 (57:46)
I'd be like, yeah, that's art. Like, it's really about... Yeah, exactly. So it's like 50-50 now, and it was like 90-10. Like, there'd be like one or two devil's advocate kids that were like, I think it's art, because AI is a tool. And then other kids would be like, shut up. You're a tool. It was, you know, and I always loved that kid for being like, yeah, no, it's a tool though. And I would always...

1 (57:48)
a year and a half later.

2 (58:09)
be more usually in that corner just to make everybody especially mad. Cause you know, like.

1 (58:14)
Until somebody shows you a

crafted vanilla version of metal music created by AI.

2 (58:19)
I know

that I'd be rude. I hate it so much. Oh my God. But I hate it. OK, so as part of this whole process, I pull up Jason Allen, who created Théâtre D'opéra Spatial. So that's, like, the space opera theater piece. You can look it up. It was one of the earliest cases of applying AI art

1 (58:25)
Quick rip.

2 (58:43)
for copyright. And the Copyright Office responded, by the way: he did not get copyright. It cannot be copyrighted, by the company that creates the AI or by the individual, which is an interesting way to frame that. But anyway, I show that one. That's the main piece that I get them going on.

We're in an interesting time, is what I'm trying to say, where, like, AI art is becoming seen more as art. And then having a really interesting conversation about, like, what is important? Like, what is important there? Is it the skill that's important? Is it the intentionality that is important?

And I think it always comes down to the human intention. That's the thing that everyone consistently, no matter what side of this coin they fall on, they're consistently wanting to see human effort, human intention, human intentionality.

1 (59:35)
I think that that is it. Back to the hero's journey model, crossing the threshold: does it require, to go into the special world, the magical world of learning with AI, co-learning, is it maintaining intentionality?

2 (59:53)
Yup. Is it or is it not?

1 (59:54)
And maybe the rub is, education, I keep saying, like, it's broken, but it's still doing exactly what it was designed to do. It's just that we don't like that output anymore. Retool the factory. Yeah, retool the factory. But, like, such a problem

2 (1:00:05)
Right. Right. The factory model needs to be retooled.

1 (1:00:12)
is fear, right? We have these veteran, good teachers. I'm not talking crap teachers; like, good teachers, well-intended, skilled in their craft. And they see the Titanic. They see the iceberg of AI coming, like, these kids don't know how to, you know... And so if we're not developing skills through intention and protecting that time... Like, do you ever think what an amazing gift public education

actually is to society? This beautiful time where, like, you don't have to go work the cotton gin. All you have to do is learn. All you have to do right now. I joke with my kids all the time when they ask, what can I do? Like, you have a job. You actually do have a job. You're being paid to learn and do that right now. And, like, you need to do your job. And so, you know, when I think of these veteran, beautiful, well-intended teachers, is it that they

fear that these students will skip over the hard parts? I think it's a real, valid fear.

2 (1:01:09)
Yeah.

Totally, because of the easy button problem.

1 (1:01:14)
because

of the easy button. But the general lack of knowledge around artificial intelligence, generative AI, and these things, in those same adults, says: I'm afraid of it, so I'm not going to allow any of it.

2 (1:01:27)
Yeah, and then it's access and equity. We're right back.

1 (1:01:29)
if, you know, Nirav says, if we just build the best tutor, does that mean that the students who have the ability to access that... and I don't mean, like, because of money and wealth and stuff. Let's say it's available at every single school, this theoretical philosopher king, right, this

2 (1:01:44)
Yeah. Perfect. It's available

to anyone with an internet connection.

1 (1:01:48)
anyone with an internet connection. Okay, but we still have such trauma, such

lack of access, based on cultural, human problems. Think about the reality: schools that have chronic absenteeism, students that are missing 40% of a year. Right? You know, like, I know a principal who just sat down with a family and was like, your fifth grader reads at a first-grade level. Yeah.

For the past three years, they have missed 40% of school. Could you imagine every third year just taking the year off, and then wondering why you're behind? Yeah. You know? And so I think that it's ignorance on the part of the educator, not understanding how these things work and how to protect them, because none of us really do.

2 (1:02:36)
Yeah, I don't fault

anybody. I'm still trying and messing this up.

1 (1:02:41)
I love the way that with our students, we were like, we are no experts when it comes to AI in education right now. And then one of them was like, well, then are you guys vanguards?

We just asked questions before other people did, that's all we are. I don't know who that kid is.

2 (1:02:54)
Yeah, actually

Actually, my...

I do. And it reminded me that I now, in my college-level English class, have the same kids that were in your English 10 class two years ago, that wrote all the AI policy and did that whole project. They're a cool bunch, dude. They have some skills. They have some skills.

1 (1:03:10)
Two years ago.

They are. So

I purposefully taught them how to cheat with AI for two weeks.

2 (1:03:24)
Yeah, they're crushing it. Like, there's parts of this like, you know what you did? You vertically aligned to me without meaning to.

1 (1:03:32)
I've aligned to you over years. Well, but you know, I think I would love to go back and chat with them and see. Because when we left that project, the lasting thing was, you know, they were saying, "write my essay" is the most boring thing I can do with AI. You know, and that's where I wanted to get them. Tooling them up, giving them the tool. So there's some interesting research. I'll screw this up, but

maybe I'll put it in the show notes. I have the research. There's a couple of key things when it comes to cheating. First off, Stanford has a white paper that came out pre-COVID. And then when generative AI popped, they reran it, specifically tracking generative AI. And they found a couple of things. First off, the volume of cheating did not change.

2 (1:04:18)
Yes.

1 (1:04:19)
Cheaters

gonna cheat. That's an important thing. And these were high school students; Stanford ran it with high school students. I don't know where the students were, but that's an important thing. The volume of cheating did not change. Also, students who self-reported... I think this was Stanford's paper, I could be wrong, but students who self-reported feeling like they were good with AI self-selected to cheat less. And this is like

my English kids that I taught how to cheat. It's because when they're tooled up, when they understand how to build the scaffold themselves to get to the knowledge, they don't want to waste their time. If they understand why they're doing it and they feel like they can access it, they cheated less. And then the third one that I think is interesting, along with the AI Assessment Scale... this, I think, did come from the Stanford

2 (1:04:46)
I've seen this.

Yes.

1 (1:05:09)
white paper, was that the greatest deterrent to cheating was systems with very clear boundaries and consequences. And so it is on the punitive side, but like we talk about with the AI Assessment Scale: students that understand how to scaffold their own knowledge, by our anecdotal evidence and theirs, will be less likely to cheat.

And then in an environment where we have very clearly delineated how you can and how you cannot use it, and not just left it up to "don't cheat," quote unquote... we don't have the framework. But if we're saying, this is an AI assessment at scale level two, and you overstep that line that we understood as a community, now the teeth are academic dishonesty, whatever your school's policies are. And so I think those are some really important

2 (1:05:39)
because we don't have the

1 (1:05:58)
pieces navigating the uncertainty that we're in right now. And calling it out and being like, we don't know, so we're going to lean on this.

2 (1:06:06)
And I also think, like, making any assignment social, so that there's a social responsibility to do your part. And homework? It's just, yeah. Don't do that. Just social things. Because it will become self-regulating. If you build the culture, and you build the culture correctly, then, you know, somebody who phones it in, obviously, the other kids are bummed. They're just like, why did I...

1 (1:06:14)
Homework died a long time ago.

You're making my life hard.

2 (1:06:32)
Yeah, like exactly. So I think that's a great strategy. Let's try to implement it now. Make something social. Yeah. And I'm not talking group work. I'm just talking about having kids do their thing and then share it with other kids. A turn and talk, man. That's all you need. Yep.

1 (1:06:49)
Each other.

I mean, like, you know, the EduProtocols... almost every single one of them has some kind of presentation aspect. It's collaborative, they're seeing each other's work, they're learning from each other. I don't know that we can ever bulletproof something from cheating. I know we can't, right?

But back to intentionality, I think it has to be shared intentionality. Like, how often do we black-box education for kids? Instead... what I mean by that is showing what's behind the curtain. Like, hey y'all, this is what we're doing, and here's why I need you to do it. This is the thing I need you to practice right now. I think that that, you know,

2 (1:07:16)
Right, what's behind the curtain?

The science of teaching.

Yes.

1 (1:07:30)
It's like the year that I started teaching Bloom's taxonomy and the depth of knowledge scale and stuff. I love doing that, you know, pulling back the wizard's curtain, which I'm so sad kids don't get anymore. Maybe after Wicked, I don't know. But shared intentionality, of like, I really need you to be able to...

2 (1:07:35)
I love

I know.

1 (1:07:48)
take disparate pieces of knowledge found in these resources and find a logical tie. That's the thing I need you to do right now. When you have that figured out, I don't care if AI drafts the paragraph, but it needs to be your thinking that it drafted.

2 (1:07:59)
Mm-hmm.

Right. I don't want the perfect, sterilized writing. I want to see your thoughts. I want to see your process. Give it to me.

1 (1:08:17)
The scribbled-out napkin. But

yeah, I also don't think there are good frameworks. I don't know. I'm sure there are. On assessing process.

2 (1:08:28)
Nope, we didn't have to.

1 (1:08:30)
Yeah, like we didn't have to. It used to just be that the product was understood as the end of a process, right? The process had to have been enacted to get there. And it's not anymore. But I don't know what the answer is on how to assess process. And I think it's going to be really content-specific. Like, math makes sense: show your work, right? You know?

2 (1:08:42)
It really isn't.

1 (1:08:55)
And then there's Photomath, where you're like, I'm just copying down what I saw in the picture. So we jumped that shark. But yeah, conversation... it's back to the human conversation, relationship, discussion. Can you tell me about it? When I ask you questions, do you fold real fast because you don't know? You know?

2 (1:09:13)
I mean, I would say that to students and to teachers. Like you were talking about earlier: if you cannot explain to a student why you're doing the thing that you're doing, if you do not know what you are assessing, you do not have the clout to do it. Like, you don't. So the only engagement you're going to get is the type A,

super-good-at-school kid. And that's not praxis. That's not the reality of what we're working with, people.

1 (1:09:43)
So literally just drop your lesson plan into ChatGPT and ask it. It'll tell you what you're assessing.

2 (1:09:49)
Yeah. So, man, we've been all over the place in this conversation. I don't even know how long this conversation is right now, but we've been just going straight at it, because this is all me and Jake have been thinking about for weeks now. And you know what? It's been a long time since we checked in with you guys. This is really kind of bringing it all together, all the things that we've been working on the past, you know, year.

And I kind of wanted to wrap this up with a quote from Bethany Maples from the Stanford conference, from her presentation, which was really, really interesting. You were in a different session. Yeah, she is a very interesting researcher to follow at Stanford. She did the study on Replika and decreasing suicidal ideation. Very interesting work that she's doing.

1 (1:10:28)
I'm sad I missed it.

2 (1:10:41)
And she's identifying AI as a potential leap of human intelligence. And this is what she says. The generation learning to think alongside AI will either become the most cognitively sophisticated humans in history or the first to mistake information access

for understanding entirely. And I was like, mic drop. That's Bethany; lay it on me. Like, that's where we're at. And that's why it was the inflection point. And that's why we're doubling down on this. You've got to get clarity about assessment. You have to rethink assessment. You have to rethink skills.

Get that, you know... not bulletproof, because we can't get there, but clarity.

1 (1:11:27)
And I go back to my worry. Our listeners are already well along that path. They're self-selecting. Things like a...

2 (1:11:38)
Yeah, we definitely

have a very niche audience.

1 (1:11:40)
Yeah, and yet when you get out there, our colleagues are learning how to play checkers when 3D chess is what we should be doing.

2 (1:11:48)
It's... yeah. I know. Well, and you could even feel that at the conference. You could feel it where, you know, these are highfalutin, PhD-holding, extremely privileged, extremely wealthy people. Like, this is the heart of Silicon Valley. You know, we snuck in the back door.

1 (1:11:58)
right?

Literally, snuck in.

2 (1:12:12)
Like

this is our third year going and they know us now but like... They were like, why are you here? Who are you?

1 (1:12:15)
Weaseled

our way in, that's the only word for it. But now I feel part of the community.

2 (1:12:24)
Yeah, I love going. It's my favorite learning thing to be a part of in my whole year. But you can sense

that among the AI experts, among the people researching AI applications in education, they know. They know that the labor force is woefully, woefully untrained. Like, you can feel it when they're talking about things. Like even... I believe it was Nirav in the last session, which we've been referencing a lot. Yeah, definitely. We'll put a link to it in the show notes. It'll

blow your mind. It's a really, really powerful session. He brings up: I wish there was a way to start new schools in an easier way. New schools. Because it's almost like we're not going to...

1 (1:13:03)
next time.

back

and watch it, but didn't he say, like, the only way is gonna basically be to start new schools?

2 (1:13:14)
Yeah, and I think Amanda Bickerstaff is kind of in there too. She has an interesting framework.

1 (1:13:19)
When you think about how slow the ship of education is to turn... Somebody pushed back on him, and he goes, I mean public schools. I mean normal, traditional public schools. But you can't shift a paradigm fast enough in an existing educational setting. And so he was saying we might need to just build brand-new schools.

2 (1:13:41)
Yeah, just from the ground up, different mission, different mission.

1 (1:13:45)
I'm

like, wow, you think charters are bad? I hate that.

2 (1:13:50)
Yeah, I know. But, you know, there was a representative from Gavin Newsom's

1 (1:13:57)
So many more things to come out. And my prediction for next year... I was thinking about this. I think so. The very first year was like a kid in a candy shop. We found people to talk to. The second year was like, this is interesting, right? Like, okay. This year was like, okay, we're getting some work done.

2 (1:14:05)
Definitely.

Finally.

Yeah.

1 (1:14:19)
I wonder if next year is going to be sadness.

2 (1:14:23)
Dang. Why?

1 (1:14:25)
because four years later we still haven't been able to make an impact.

2 (1:14:30)
You think so?

1 (1:14:32)
You know, like, the understanding of the possibility, and that people aren't doing it. I think so many times about... I think it was the Loma Prieta earthquake in San Francisco. What was that, like '88, '89, something like that? There was an image that has just haunted my whole life, and it was the Bay Bridge. So for people that aren't familiar with the San Francisco Bay Area:

San Francisco's on a peninsula, and it's got all these large, large freeway bridges going into it, right? And so the Bay Bridge connects... it might be the longest one. The Golden Gate Bridge is there; think about the Golden Gate. Well, the Bay Bridge was two levels, so traffic in one direction is on the top deck and the other direction is on the bottom. And when the earthquake hit,

a section of the Bay Bridge collapsed. And people in the North State remember this. Well, there's an image out there of a man standing in the middle of the freeway on the Bay Bridge, right before the collapsed section, but because of the arc of the bridge,

people couldn't see that it had collapsed, and he was just flagging them down, waving his arms. I remember seeing this on the news, and people just driving around him and careening off into the chasm. And I wonder... I started feeling it a little bit this year... I wonder if next year the summit is gonna have a little bit of an undertone of that:

we've been waving our arms on this bridge and people keep careening off. For example, part of a lot of my work at the county office is helping school districts onboard policy and things like that for generative AI. We do a whole assessment: their risk and exposure, operations, security, all this stuff. And...

2 (1:15:56)
Yeah.

1 (1:16:11)
It's been like a year-long project getting training. You know, I flew to Texas. I've been reading, just really chewing on it, and working with a couple of districts. Well, we got grant funding. I don't think we've even talked about this. So in a couple of weeks, we have our first two-day workshop, and we got it grant-funded, so it's free. So anyone from the North State, from California really, can bring a team and we will do a full assessment.

They will walk away having built two roadmaps based on what they need: policy integration and training, you know, professional development, organizational management, legal risk and exposure, all these things. It's really comprehensive. And I can't get people to sign up. Yeah. I currently have seven people from about four districts.

2 (1:16:44)
For AI policies.

No way. Are you serious?

Whoa.

Whoa. That's surprising to me. It is. Because BCOE kicks out the jams. We do. They reach out, they...

1 (1:17:06)
And

It's going to be an incredible day... two days of learning and workshopping. I hope that more people come. By the time this comes out, it'll probably be over, or just coming up. But I just wonder: when are we going to hit the point where the honeymoon is over and the head-in-the-sand approach is still prevailing?

2 (1:17:33)
Well, I can't remember who it was, but in one of the panels, they were just like, AI implementation as we're doing it right now will be bad. It will. How this is going is bad. In no uncertain terms. And whenever you see somebody on a panel at a university be like, no...

please... that's the man on the bridge waving his arms at the careening cars, to use your metaphor. When I see people doing that, experts doing that, it does give me the existential feels, where I'm just, okay, this could be... I mean, we all know it could be bad. Most

people who are not AI experts think it's going to be bad. But when you have AI experts really being like, if we keep on this trajectory, we're not going to see the outcomes that we want... And I think that Bethany nails it right there in that quote. They're either going to be the most cognitively sophisticated human beings in the history of the human race, or they're not even going to understand

2 (1:18:44)
information.

1 (1:18:44)
Yeah. A society of Dunning-Kruger.

Dunning-Krugerites. Well, and dare I say, I think it's going to be both. Talk about the gap, AI widening the gap, and how do we prevent that... I don't... gosh, I wish we could. I don't think we ever can. Humans are just too different and complex from each other sometimes.

2 (1:19:06)
And

we have perverse incentives.

1 (1:19:08)
perverse

incentives. And so what if we built the perfect AI tutor? Like, you know, Nirav says, I still don't have kids coming to school to use it.

2 (1:19:19)
Yeah, I know, but...

1 (1:19:21)
And so then do we have haves and have-nots, even though we're... you know... yeah, that's what I think about. We have kids that will learn beautifully and exponentially in a human-rich environment and all of these things. And then we have kids that choose, you know, dare I say, a lame homeschool model with a workbook, and that's what they do. And so then what? You know, then I started thinking about the movie Gattaca.

Right? In the movie, people are self-selecting out of the genome project, but then they're selected out of society and advancement. When it becomes the norm, when it becomes the modus operandi... essentially, people who aren't engaging... and this is all speculative, like, you know, just imagine... dang, dude, this will go so dark. So imagine that ten years from now,

2 (1:19:46)
out.

in the dark.

1 (1:20:11)
We've figured out at least a good way, not the end all be all way, to be utilizing appropriate forms of artificial intelligence to scaffold education. Students are better engaged, they're learning, right? We're not stripping humanity. This is Nirvana, I get it.

2 (1:20:27)
Great.

1 (1:20:27)
but the families that self-select out of that.

because personal choice is important.

Have we self-selected them out of the new Hermitage?

2 (1:20:36)
Well, I almost feel like, and maybe this is kind of an extreme, you know.

1 (1:20:41)
Pariah. Pariah-ness.

2 (1:20:43)
Right. But I'm almost like, okay, that will probably be the equivalent of being Amish at that point. Like, it is a technology that will overtake all aspects of society, and there will be niche communities that do not participate. And they live beautiful lives. Yeah. And that's fine. But you know, sometimes you feel bad for

1 (1:20:50)
Luddites.

That's community.

2 (1:21:09)
Yeah, that is community.

1 (1:21:11)
Will these enclaves?

2 (1:21:13)
I know man. I'm more hopeful on it though.

1 (1:21:16)
I know. To me, the cultural

2 (1:21:18)
I think it is because I am the millennial.

1 (1:21:21)
It's always people. Some

people will progress and some people will dig their heels in based on traumas that I can't assuage as an educator.

2 (1:21:30)
Yeah, this is true. But the technology is hitting at the right time for a generation, if they're able... Because think of it this way: the millennial experience with the early Internet was very interesting and unique. And you are correct in that. Yeah, like, you were already a grown-ass man. You know, I was not. Yeah. So, like,

1 (1:21:47)
And it's over it.

2 (1:21:55)
the millennial experience of the early internet is very unique in that there was almost no oversight whatsoever, like, at all. And there was almost a sense of camaraderie in that, where it's like we were exploring this technology, and an access to information, that human beings hadn't seen in history up to that point.

And I knew that because I also had a set of encyclopedias in my house that my grandfather ordered in the sixties. So it's like, I had seen what the old technology was, I had the new technology in front of me, and I was like, this is amazing. This changes my life. This changes my life trajectory. This helps me reimagine what is possible for me, this kid growing up in the middle of nowhere with no resources. Like,

it changed my life in a very positive way. So for me, I'm always probably going to be more on that positive side of the coin, because I'm like, dude, for the underdog kid nobody cares about anyway, who's really neglected, they have the ability. So that's why I'm like, a perfect AI tutor? That will change things. That will be a groundbreaking moment in the history of human learning. That's what's exciting to me.

And I think that's why I'm not as afraid of it. What I am afraid of, and what I have been bringing up, is the relationality piece. Because when I was seeking relationality in the early internet, I was finding other young millennials out there that liked the same bands that I liked, that shared MP3s over the P2P services, and like...

1 (1:23:20)
So.

2 (1:23:35)
Like, it was a community of other young people. It was not an AI agent that was giving me relationality. So for me, that's the thing that I see as being the dangerous path.

1 (1:23:44)
Also that.

So this makes me go back to the closing session, right? From possibility to progress. Rebecca Winthrop from the Brookings Institution said something about how

we're seeing the fraying of the age of achievement. And then that moved her into: are we moving into the age of agency? And so what I like there is that idea of, previously, if I can amass enough information and then monetize that knowledge, I can make money.

2 (1:24:21)
make products.

1 (1:24:22)
Products

and I can sell my time, essentially, from having amassed that knowledge... knowledge work as a product. Yes. And Nirav talks about it too: people are gonna have to get better at not working.

2 (1:24:37)
Yeah, I love that you said that. You know, I told him afterwards... yeah, we had that conversation.

1 (1:24:42)
I love that

you told him that.

2 (1:24:47)
When you have this level

of efficiency... that's another thing. People are worried about jobs. Yeah, it's real. There will be jobs that will no longer exist.

1 (1:24:55)
Yeah, but her bringing up like, if we're moving away from an age of achievement into an age of agency.

Agency also denotes that people can choose not to. Yeah. And so that's where, like... fear is not a good word for it. I'm so hopeful when it comes to artificial intelligence, but I'm also recognizing that there will always be a contingent that turns away from it.

And not that everybody needs AI in every way of their life at all, like the AI overlords... but what will it mean that they don't know? And I think, in the realm of education, what will it mean for a parent? I mean, let's be honest: I live, eat, and breathe artificial intelligence. I am so lucky. My job is just to research and understand and translate it.

2 (1:25:48)
Yep. At a high level.

1 (1:25:48)
Right? And I get

behind. It's so fast. So now take, you know, a working mom and dad barely making ends meet because our economy is absolute garbage. Mm-hmm. Trying to get their kids to school, trying to do right by them. They don't have time to figure this out.

2 (1:26:09)
No.

1 (1:26:09)
And so now they don't get to go to Stanford and listen to the people. They don't get to do that. Yeah. Right. And so, like, I am super privileged. My privilege shines bright in this ability. But then I recognize: if all they get is the nightly news, and it is so skewed and dark and fear-mongering, and they select out of it for their children...

How will they make up that gap?

2 (1:26:34)
Well, here's maybe a little bright counterpoint to push back on you. Kids are smarter than their parents. Kids are awesome. And they're going to find the way around it. Their friends are going to... There's always... Look at the phone. Look at the phone. I mean, this technology is extremely powerful, and it's a double-edged sword here. But, like, the pull... the pull forward...

1 (1:26:38)
That went dark.

are pushing back on.

2 (1:27:01)
for kids to have a device has been huge because of the social impetus behind it. And so I feel like it maybe won't hit quite as hard, except for that kid that really does want to opt out. You know what? And that's okay. Kids are gonna kid. So, like, there's gonna be rebelliousness. Navigation will happen. Because I think that

1 (1:27:17)
Yeah, that's totally okay.

Kids are gonna kid.

2 (1:27:28)
increasingly, as technology advances more and more rapidly, you're having these kind of generational fractures, where each generation is growing up in an entirely new technological world, and that is felt and understood. And so I don't worry quite as much about that, because I really do think that, like,

1 (1:27:46)
It is.

2 (1:27:54)
young people are always going to be like, you don't know how it is. Which is true. Like, I'm always checking in with my students, like, what is it like to be a teenager now? You know? I don't know.

1 (1:28:05)
And maybe it's this transitionary period that we're in right now, where I look at students that are just finishing college. And there's no...

2 (1:28:13)
Man, I

would hate to be in that position right now.

1 (1:28:17)
The degrees that they were sold... you know, a line that there would be ample jobs, and now those jobs don't exist.

2 (1:28:26)
Yeah, especially coding. Entry-level coding positions are disappearing...

1 (1:28:29)
Not just

coding; we're getting into editorial, too. There's a lot of job markets that are already being taxed. And so what does it mean to graduate with a master's degree in something, with $100,000 of debt and no viable work product?

And you leaned into getting the degree so hard that you skipped the foundational skills that might have translated to other fields. It's a real question. And so, like, this...

2 (1:29:02)
Man,

you are on the darkness today.

1 (1:29:06)
This moment right now... I tell my own kids, I've got three kids, I'm like, whatever you're doing is also "slash AI." You know, my daughter in college is studying ecology and water reclamation and these things, and I'm like, and also slash AI. Period. For education, I get to say, yeah, and slash AI.

2 (1:29:22)
Well, yeah.

AI Quests from Google. That I love. I love that. If you're struggling to understand what AI systems can do... because they're not just LLMs. AI systems can learn weather patterns, can learn watersheds, can predict flooding. Like, they're very, very complex probability machines predicting the next most probable outcome. So, like,

there's huge applications for AI in ecology. They use it for sure. Like, any program to crunch statistics... that's been a big deal for a long time in different scientific fields.

1 (1:30:06)
I'm

glad that she's going to California Polytechnic.

2 (1:30:11)
Yeah, yeah,

1 (1:30:12)
I think the future is bright.

Because also humanity will always win out.

2 (1:30:16)
Yeah, will. Relationality between human to human is always what's going to be important.

1 (1:30:21)
But to go back to that whole thing of, AI is just another calculator, this is just a better search engine... those bother me. Really bother me. They're way too dismissive. Like, even when the internet came out... the internet, wicked powerful as it is, does not have the capability to topple governments and change geopolitical lines the way artificial intelligence does. It just doesn't. The internet was a huge sweeping change, but it will not affect the fabric of

2 (1:30:28)
Yeah, that's not it at all.

1 (1:30:49)
society the way that AI can. You know, to me those are not parallel.

2 (1:30:51)
Yeah. The

internet is the data. The internet was the data-gathering phase. Artificial intelligence is condensing all that data into something actionable. Doing something with it.

1 (1:30:58)
Yeah.

What can you do?

It's action. And so I think that's where I'm like, let's stop with the oversimplified metaphors that really aren't doing anyone any good.

Before we turned on the recording, I said, we keep saying, how do we not repeat the failed social media experiment, right? And I've been thinking lately: are we actually hobbling ourselves by trying to relate AI to an existing problem that we've faced? And I think we are. I think it's actually a bad thought experiment for us to try to couch it in something else, because it limits

what is almost unlimited.

2 (1:31:41)
Yeah,

it's like nothing else.


Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.