What Teachers Have to Say
What Teachers Have to Say is a podcast about teaching, AI in education, instructional practice, and teacher identity. Hosted by Jacob Carr and Nathan Collins, it centers real classroom experience, system pressures, and how AI is reshaping learning.
No performative edu‑influencer culture. No toxic positivity. Just honest conversations about what’s actually happening in schools.
What This Podcast Covers
- AI in education and classroom use
- Teaching strategies and instructional design (EduProtocols)
- Teacher burnout and system design
- Student skill development and transfer
- EdTech tools and practical workflows
Who This Podcast Is For
- K–12 teachers
- Instructional coaches and leaders
- Pre‑service teachers
- Educators exploring AI and EdTech
- Anyone tired of surface‑level PD
Who We Are
Jacob (Jake) Carr
EdTech Coach for a County Office of Education, author, and speaker on AI in education. 15+ years across K–12 (grades 1–12) in diverse settings. Brings a philosophical lens, connects classroom practice to systems, and pushes conversations deeper before landing on something usable.
Nathan Collins
High school English teacher, dual‑enrollment instructor, and Personalized Learning Teacher in a rural hybrid model. Grounds the show in current classroom reality, student data, and practical constraints. A measured counterbalance to big ideas.
What We Explore
AI in Education — A structural shift, not a novelty. Learning, assessment, and independence in an AI‑rich world.
Burnout as a System Problem — Not a personal failure. We name the incentives that reward unsustainable work.
Instructional Routines That Work — Repeatable structures that lower planning load and raise thinking, repetition, and collaboration.
Skills That Transfer — Thinking, communication, adaptability. Not just content.
The Format
Long‑Form — Monthly flagship episodes with deep dives, interviews, and debates.
Short‑Form — Field notes, solo reflections, headlines, and listener voicemails between major episodes.
Your Voice Matters
Leave a SpeakPipe voicemail with a question, win, or rant. We feature listener voices in episodes.
Beyond the Podcast
The companion newsletter goes deeper: AI in education, teaching strategies, and teacher identity. Free, weekly, and practical.
FAQ
What is it about? Teaching, AI in education, and real classroom conditions.
Who hosts it? Jacob Carr and Nathan Collins.
Is it AI‑focused? Yes, always tied to real practice.
How often? Monthly flagship + shorter episodes between.
Where to listen? Apple Podcasts, Spotify, and all major platforms.
Subscribe and Follow
- Apple Podcasts
- Spotify
- Newsletter
Stay curious. Keep thinking. Keep showing up.
Make America AI Ready: The Stove Isn't Going to Blow Up
The federal government recently launched an AI literacy program delivered entirely by text message. Jake has completed the first five days, and also polled a network of teachers, HR professionals, and writers to ask what they thought. The results were predictable in one direction and surprising in another. This episode is less about the program and more about what it exposes: who gets to define AI literacy, what it's for, and what the cost of doing nothing actually looks like.
What You'll Hear
- Why Jake's reaction to a federally branded program shifted once he actually went through it — and what changed his mind
- The DOL's AI Literacy Framework broken down: five foundations, seven principles, and why the pedagogical thinking behind it is more serious than the branding suggests
- Nathan's argument that 28% of students being able to describe how an LLM works is a real problem — and why understanding the engine matters even if you never plan to drive
- The "professional chef critiquing a how-to-boil-an-egg pamphlet" problem, and who the pamphlet is actually for
- Jake's prediction that the 2026-27 school year is when schools start approaching AI literacy systemically — and what that should and shouldn't mean
- Why excluding AI from your classroom is becoming harder to defend as a pedagogical choice rather than a protective one
- The adult literacy statistic that reframes what's actually at stake when we talk about the AI access gap
Resources Mentioned
- Make America AI Ready — Federal SMS-based AI literacy program from the U.S. Department of Labor. Text READY to 20202 to enroll. [beta.dol.gov/ai-ready]
- U.S. Department of Labor AI Literacy Framework — Five foundational content areas and seven implementation principles for workforce AI readiness. [https://www.dol.gov/agencies/eta/advisories/ten-07-25]
- Slow AI (Sam Ellingsworth) — Substack publication examining AI adoption at a more measured pace. Recommended reading for the "email problem" analogy. [https://theslowai.substack.com/?utm_campaign=profile_chips]
- Quick, Draw! — Google experiment using a neural network to guess your drawings. Referenced as a Day 1 challenge in the Make America AI Ready program. [quickdraw.withgoogle.com/]
- What Uses More? — Tool for comparing energy and carbon footprint of AI tasks vs. everyday activities. [what-uses-more.com]
- Stanford HAI — Stanford's Human-Centered AI institute. Referenced for statistics on AI usage by age and the research on AI in classroom settings. [hai.stanford.edu]
- NCES Adult Literacy Data — National Center for Education Statistics. Nathan cites current figures: 28% low literacy, 29% basic proficiency, 43% proficient — among adults ages 16-65. [nces.ed.gov]
Connect & Continue
Jake writes about AI in education weekly on Substack. Subscribe at whatteachershavetosay.substack.com
Stay curious, stay hopeful, keep learning.
Got a question? We'd love to answer it! Leave us a voicemail on SpeakPipe: https://www.speakpipe.com/whatteachershavetosay
Want more EduProtocols from Jake? Check out his book at Amazon, Barnes and Noble, and more.
Sound Clip (00:04)
It's designed to be accessible, quick and simple, which is another way of saying your AI education now competes with spam texts and verification codes. The future of work, apparently, fits in your notifications.
Jake (00:15)
That's a quote from Lance Hahn, from his Substack called Beacon Turn, which is an HR publication on Substack. So that's the topic of today, actually. The federal government is officially sliding into the DMs of every worker in America, every adult. They've released a new AI literacy texting platform called Make America AI Ready, and we're going to deconstruct it.
Nathan (00:41)
Yeah, I had a lot of complex feelings about this one. I really... yeah, but you know what? It does the thing. It's a useful tool. I can say that.
Jake (00:52)
Yeah, so if you're here to learn about AI literacy, you're in the right place. We have a pretty fun conversation about this new platform from the US Department of Labor, and we weigh in on whether we think it's good or not and how it's usable. And it takes us into a pretty great conversation about why AI literacy is important, even for people who are against the use of artificial intelligence.
Nathan (01:15)
Yes, if you hate it, if you don't want it in your life ever, you still need to understand it in order to make an informed decision. Like, you have to get out of ignorance, and that's where we need to move. Yeah, I wasn't sure about this, but I think it's a really useful tool.
Jake (01:32)
stick with us and come out of ignorance. I'm Jake.
Nathan (01:35)
Yeah, I should probably not have said it that way. I don't mean ignorance in a bad way, just that you don't know, okay? Okay, okay. I'm Nathan, and welcome to another episode of What Teachers Have to Say. This is a good conversation.
Jake (01:56)
Okay, so I would wager to say that most of our listeners have not heard that the US Department of Labor has released something for AI literacy.
Nathan (02:06)
I'll support you on that, because I have not heard of it. Yeah. You, like, just shared this with me.
Jake (02:10)
Brand new. It's called Make America AI Ready, which... I don't love that name.
Nathan (02:15)
No.
I want to get as far away from it as possible.
Jake (02:18)
I know, and I had to do some soul-searching, because anything coming out of any federal level right now, I'm super skeptical of. Like, Melania just marched out her... that is not the vibe. I don't want to go down that rabbit hole, but no, absolutely not. But the Department of Labor... okay, two key things came out recently. Yeah, the US Department of Labor is where Donald Trump and the executive branch have kind of
Nathan (02:29)
Dude, I saw that.
Jake (02:45)
landed AI. It's part of their top workforce initiatives, which I think is a great place for it.
Nathan (02:51)
This makes sense. That is good... I mean, yes. That's what I keep telling my students: this is the way you will be working.
Jake (02:58)
And if anybody has money, the workforce initiatives have money to push these things out. Education doesn't have money. So, okay, they put that out, and then the White House also put out their AI framework as a directive for Congress. And the two documents align really, really well.
Nathan (03:03)
Resources we don't have.
Jake (03:17)
The US Department of Labor's AI literacy framework, I like it. I like it. I think that they're doing a good job. They came out with some guidance a while back, and it was very clearly written and thought out. And so this is like the next iteration of it. But what I love is
there's these five foundations, the foundational content areas for AI literacy. One is understanding AI principles, two is exploring uses, three is how to direct AI effectively,
then the fourth pillar is evaluating, like, evaluating the output, and five is responsible use. Okay. So those are the five big pillars of it. But then here's what I also like: they delineated seven principles that have to be in place for an effective rollout.
Nathan (03:58)
How about reading the outputs?
Jake (04:18)
So they're not just creating policy guidance, they're actually creating the pathway for that.
Nathan (04:24)
I think the thing that I need to remember, because I'm in real time having the reaction that you probably had when you first saw this, is that I do not trust this, I don't like it, I don't want anything to do with it. But at the same time, I think we have to remember that there are always good people doing the work in any organization. And I'm just gonna remind myself of that.
Jake (04:47)
I had to check myself, because I wanted to throw out the baby with the bathwater. I had to check myself. So here's the rollout. I mean, this is like their pedagogical approach, if you will, a little bit. So one, it has to be experiential. Two, it has to be contextualized. So they're like, you have to integrate the AI learning into the context of people's industry so that it makes sense to them.
Nathan (04:53)
So take us through this. So.
Jake (05:12)
So it's usable, it's practice-based. Three is using AI to augment human skills: complementary, not taking over, not replacing, but complementary to the human skill.
Nathan (05:23)
Yeah,
that is really interesting to me because everything that I have heard is replacement.
Jake (05:32)
And like, that's in there, right? There are things that are a hundred percent replaceable. But when I read the briefing documents, it's more complementary. Like, okay, AI is gonna do that task now, and so the complement is that your way of directing it has to humanize it. Right. It's like when we talked about Salesforce. So many of those roles were managerial, managing teams and swarms of AI agents.
But yeah, so three, the complementary skills. Four, addressing prerequisites to AI literacy. I thought this was important, right? And it says, here's exactly what it says, addressing barriers to participation and success with AI literacy, including digital literacy and broadband access.
Nathan (06:15)
What? Are you kidding
me? The federal government is finally going to prioritize.
Jake (06:19)
They're going to prioritize things like that. Then five, creating pathways for continued learning. So once you've started, how do you keep going? Six, preparing enabling roles. This is interesting. It says: equipping managers, counselors, and others who play a supportive role to the participants' AI learning. Yeah, that's...
Jake (06:40)
It's not just like, here's a curriculum, go with God. Now it's like, okay, we recognize we need actual people who are experts in translating this to the human experience so that they can help people through that pathway. And then the last one is number seven: design for agility, to ensure that there are proactive, built-in mechanisms to rapidly update content and delivery as AI capabilities evolve. So now we have recursion.
So they very clearly pulled in people who know what they're talking about in education.
Nathan (07:11)
Interesting stuff.
How did you get clued into this? Because I have not heard this at all.
Jake (07:20)
I'm glad you asked. So, without going down the rabbit hole of Claude... okay, so Claude is Anthropic's flagship LLM. Have they pushed so many products since January, or what?
Nathan (07:25)
Okay. I love Claude. It is insane. If you have not been checking out what Claude is doing, Claude is absolutely crushing it.
Jake (07:40)
They're crushing it. So Claude Cowork: basically, they came out with Claude Code, which opens a terminal, and it's coding language. And you know, like, I don't know Python, and it's really intimidating, even calling it code. And I know that I don't need to code in it, but it's intimidating. So then, as like a side project, this dude at Anthropic was like, well, what if I coded a
Jake (08:06)
different user interface shell around Claude Code? And that's, as I understand it, how Claude Cowork was created. And he's now said it's 100% AI-generated code, too, the code that makes Cowork. Which is crazy. But so, back to how I found this. So there's this little snippet called a skill that you can preload, these things that are repeatable. It's like a little agent
Jake (08:30)
of sorts. And I have one called the morning briefing, and so every morning at 8am... I made none of it, I copied and pasted it from somewhere. What? I downloaded a whole set of skills, because there was a YouTube creator using it as a lead magnet. Okay, well. I gave him my email and I got a bunch of resources. I should figure out who that is.
Nathan (08:34)
You created a little...
All right.
Jake (08:50)
So imagine this is just living in my Claude account. And so every morning, it automatically at 8 AM, I've connected it to my calendar and my email. And it goes through all of my email, and it prioritizes, and it tells me the things that I need to respond to. It generates possible replies for me. There's ways that it'll actually put it into a drafts folder in Gmail for me. I haven't done that. Yeah. OK.
Nathan (09:14)
Yeah, you've gone way
Jake (09:15)
I've gone hard.
I'm really excited about this because I hate email. And like, if I had a Slack channel, it'll do the same thing. It'll do it all to Slack for you. But we use WhatsApp, and Facebook doesn't like that. So yeah, but anyways, how did I find this? Another thing that it does is I have it crawl everything within the last week in AI news that might pertain to me.
And then it kind of gives me a rundown of what's not to be missed. And so the day that this launched, my morning briefing told me that Make America AI Ready had launched, and it was like hours after it launched. So I immediately had that knee-jerk reaction of, ew, you know, I don't want to give them my phone number. And then, oh, actually, they already have it. And so...
Nathan (09:56)
Yep.
Yeah, in every database. If
you have an ID, you're fine.
Jake (10:06)
Yeah, and so I immediately went to their website and I read up on it, and it was interesting. And then I still was like... but then I went to LinkedIn, and the first creator that came up when I searched for it was Pat Yongpradit. Yeah, he's been on the podcast. Yeah, we really like Pat. He was at Code.org for a long, long time, and he's now moved to Microsoft.
Nathan (10:21)
Aw, Pat.
catch.
Jake (10:32)
I think he's over, like, global education or something. He's a bigger wig than he used to be. Nicest guy ever. Former teacher. Incredible human being. But so, when Pat was like, actually, this is cool, I'm like, okay, I am now influenced. I will do it.
Nathan (10:35)
He's a big deal. Right. So you signed up for it. That was the influencer.
Jake (10:52)
That was the influencer I needed at the time to do it. And I feel like I'm spiraling through this.
Nathan (10:55)
Yeah
This is really good. But it is; here's the workflow. This is how we find information like this.
Jake (11:04)
And, not to go down another caveat, like...
I get paid to keep up on this, and I still get behind and miss really big things. There's no way a classroom teacher has the space to stay on top of cutting-edge information.
Nathan (11:12)
I would have imagined that.
Absolutely
not. What's surprising to me about this is I have not heard of it at all. So this is like a federally supported program. You would think there would be a, I don't know, a press conference. It would be in the news somewhere. Like this is just straight up, I have seen nothing of that.
Jake (11:40)
It's brand new. Like, today is the 28th of March, and I think it came out on like the 24th. It's super, super brand new. Okay. So what is it? Okay, so it's called Make America AI Ready. And I know... I'm like, the acronym, MAAIR, is like "mayor," right? Okay.
Nathan (11:45)
Okay.
Thursday.
Jake (12:02)
I'm glad it wasn't like, make America literate again. Like, things are not going there.
the Department of Labor. They basically built a text-based, like on your phone, an SMS texting-based course on the basic fundamentals of AI literacy. It's like 10 minutes a day for seven days.
There are seven main topics that it goes through. It's not exactly aligned with those seven principles we talked about, but it is totally aligned with the Department of Labor's AI literacy framework. It's all there. And it's cool. I've done five of the days; that's how long it's been out.
Nathan (12:38)
Have you done the whole thing?
Actually, yeah, fair enough. Fair enough, Jake. I didn't think that through. Like, yeah, it's a seven-day course and it hasn't even been out for seven days. I tried to go ahead. Really? And it wouldn't...
Jake (12:55)
It was like, you finished, like, you finished the day. I'm like, great, let's do the next day. And it was like, no. I'm like, I would like to be able to. But yeah, it's 10 minutes a day. It sends you a text. Basically, for people, I encourage you to do this: there's a number, it's 20202, and then you text the word READY. That's all you do, and it signs you up.
Nathan (13:00)
You can't do that yet.
That would be a nice one.
Yeah, I just did it. It's pretty quick. You know, I know we are used to government systems just not working at all; that's just how our government is now and has been for a while. Like, yeah, the DMV: oh, I need to reset my password, I'll get that reset email in two hours. This was very quick. I was surprised.
Jake (13:38)
Really quick. And so whatever time of day you text it, it sets that as the time. Every day it texts you the next lesson, so you read it, and then it has a little multiple-choice question at the end, and you text back like A, B, C, or D as your answer. And then it sends you the next text, and there's like three, maybe four
of those cycles per day. It takes about 10 minutes to complete the whole thing. It's really short.
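The lesson cycle Jake describes (a short lesson text, a multiple-choice question, an A/B/C/D reply that gets graded before the next text arrives) can be sketched roughly like this. The lesson content, messages, and function names below are invented for illustration; they are not taken from the actual program.

```python
# Rough sketch of the SMS lesson cycle described above: one short lesson,
# a multiple-choice question, and a texted-back A/B/C/D answer that gets
# graded. The lesson content here is hypothetical, not the real curriculum.

LESSONS = [
    {
        "text": "AI is already working for you: maps, recommendations, autocomplete.",
        "question": "Which of these uses AI? (a) Google Maps (b) Netflix (c) Both",
        "answer": "c",
    },
]

def grade_reply(lesson: dict, reply: str) -> str:
    """Grade a texted-back answer and return the follow-up message."""
    if reply.strip().lower() == lesson["answer"]:
        return "Correct! The next text is on its way."
    return "Not quite. The answer was '%s'. The next text is on its way." % lesson["answer"]

print(grade_reply(LESSONS[0], "C"))  # Correct! The next text is on its way.
```

Normalizing the reply (strip whitespace, lowercase) mirrors how forgiving an SMS course has to be about "C" vs "c".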
Nathan (14:05)
It's so short.
Mini lesson. You know, as EduProtocolers, we know you can get a lot done in 10 minutes.
Jake (14:12)
Yes, and so here's what I like about it. It is the bottom floor. It is absolute low barrier to entry.
Nathan (14:24)
In terms of, like, information and how-to... So is it trying to train you how to use it, or just explain to you what it is?
Jake (14:32)
It's really teaching you the basic fundamentals of AI literacy. So, without going into all the parts and pieces of it, right, it's like: what is an LLM? What is a prompt? What goes into a prompt? Those things that...
Nathan (14:47)
right.
Jake (14:49)
are just the very, very beginning, the ground floor of those things. And what's interesting, though, is when I look on LinkedIn... like, this morning I was reading LinkedIn and Substack, those are kind of my two main outlets, and a lot of people are trash-talking it.
Nathan (15:04)
Well, I
was about to. I was like two seconds away from doing that just now.
Jake (15:11)
Yeah, they're trash talking it.
But I see it a little bit differently. Nobody's heard of this. So the only nerds that are doing it are the people who are watching every single day. And so I saw this little quote. I love it. It said: we're seeing a professional chef critique a how-to-boil-an-egg pamphlet, but the pamphlet isn't for the chef. It's for the person currently afraid to turn on the stove.
Nathan (15:35)
Okay, yeah, I follow you.
Jake (15:37)
So to me, that's where I'm at right now. Like, you know, they're not publishing numbers on usage and demographics and things like that, but I can only imagine the only people who know about this are the ones on the cutting edge of understanding, and it is really, really foundational, entry level. And I love that it goes back to those principles: it gives really concrete, contextualized examples where it's like,
here's how, like, have you tried building a recipe in AI? Uh-huh. It's not like, let's talk about agentic coding.
Nathan (16:05)
interesting.
yeah. Okay, so let's get some practical, like actually everyday uses.
Jake (16:14)
Yeah, so watch, I'll grab my phone and scroll back, and we'll get some exact things. My pro tip: when it comes in, make a contact and label it Make America AI Ready, because then when it comes in, I know what it is.
Nathan (16:28)
Yeah, not like it's some spam
Jake (16:30)
Yeah, like, oh look, somebody trying to give me another loan.
Nathan (16:33)
Yeah, getting a lot of that. Gosh, the AI phone calls. I just block every number; it helps if people just screen them.
Jake (16:43)
Okay, here is lesson one: What is AI? Okay. And it's like, AI is already working for you. In fact, you probably used AI before you finished your morning coffee and didn't know it. And then it's like Google Maps, Netflix, your phone suggesting the rest of a text, right? So, like, really
Nathan (17:01)
All those little things that people are seeing. I do this a lot with my students, where I'm like, look, we've had artificial intelligence since like 2000. You just didn't know it. It's been using you more.
Jake (17:09)
like usable.
1953 is when they coined the term artificial intelligence. Really? And then 1970, '73, '74, something like that, is really when it started. And then we start thinking about how early spell check was actually AI.
Nathan (17:18)
It's for that.
Really?
Jake (17:33)
in the 80s they were using AI to do stock market predictions. AI has been embedded in our workflow for actually a very long time, but it was just under the hood. The difference is 2022: ChatGPT 3.5 was good generative AI. That was the difference. It was accessible. But yeah, that's lesson one. And it's like, so how does AI work?
It's a system that looks at massive amounts of data, finds patterns, and makes predictions. That's it. Patterns in, patterns out. But it doesn't stop there, right? And so it just keeps going. It's less sci-fi than robots. But I like it. Yeah, and then there's reflection. So it's like, reply back: how confident do you feel about understanding AI and using it in your daily life? Zero, I have no idea; five, I'm starting to get it; ten, I could teach it.
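That "patterns in, patterns out" idea can be made concrete with a toy next-word predictor: count which word tends to follow which in some training text, then predict the most common follower. Real LLMs are vastly more sophisticated, and this sketch is purely illustrative (the functions and training text are invented, not from the program), but the basic move is the same pattern-based prediction.

```python
from collections import Counter, defaultdict

# Toy "patterns in, patterns out" predictor: learn which word most often
# follows each word in the training text, then predict that follower.
def train(text: str) -> dict:
    follows = defaultdict(Counter)
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict(follows: dict, word: str) -> str:
    """Most frequent word seen after `word`, or a placeholder if unseen."""
    if word not in follows:
        return "<unknown>"
    return follows[word].most_common(1)[0][0]

model = train("the stove is hot the stove is safe the stove works")
print(predict(model, "stove"))  # is
```

Feed it different text and it predicts different words; that sensitivity to training data, scaled up enormously, is also where hallucinations come from: the model outputs the likeliest pattern, not a verified fact.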
Nathan (18:06)
I love it.
Jake (18:20)
And so I was like, 10. And it's like, thanks for sharing. And then: in this course, here's what you're going to get. You're going to get hands-on practice, you're going to get all these things. And so I just like that it's really, really contextualized. It's really usable. It talks about the prediction model and how hallucinations happen.
Nathan (18:22)
I self-leveled a little bit.
Jake (18:40)
and then it ends every day with a challenge, and so this one was like, check out the Quick, Draw! game at Google. I don't know if you've seen it. Uh-huh. Cool. And so I love it. And then they also have all of these GIFs. GIFs come in. Now, like, those of us that are in the know, I read this, like, this is a hundred percent also, like...
Nathan (18:46)
Oh yeah, yeah, no, I've played Quick, Draw!
Jake (18:59)
AI. This is ChatGPT; I know the flavor.
Nathan (19:00)
This is AI. This is, well, this is... Yeah, me too. As soon as you started reading it, I was just like, it's ChatGPT.
Jake (19:08)
This is ChatGPT, like the same emojis that it chooses and all this stuff. But it goes into, like, the machine learning process, you know. It's just super usable. I actually really want to get my parents to go through it. Yeah, they're in their 80s. Yeah. My mom is computer literate, not modern computer literate, and has no framework for how artificial intelligence works. Like, to her, it's magic.
Nathan (19:35)
My mom is pretty much the same. She's actually really tech savvy.
Jake (19:38)
I want my dad to do it because he's not tech savvy at all. At all. But yeah, so that's the Make America AI Ready program, which I think is super cool. It is the beginning. It's the beginning.
Nathan (19:42)
Yeah.
Right, the first step. Step one.
Jake (19:55)
Ten minutes a day, totally approachable; it gives usable challenges, little snippets. So I've asked a bunch of people to do it.
And then I just reached out to a couple of friends, and I wanted to see their take on it. Okay, and so a couple of things came back, right? Now, most of these are pretty heavy users. These are our people, and so I'm getting the, like, "it's pretty simple."
Nathan (20:20)
Right, so this is the throwing-the-shade part that you're talking about. Like, we have experts interacting with a tool that's for total novices.
Jake (20:28)
This is the very, very beginning. But if I go back to their framework, part of it is they acknowledge further growth. So I wonder what's going to come out next. Is there going to be a part two, part three? Because they actually call it...
Nathan (20:43)
It's like, this is an iteration of it.
Jake (20:45)
If you go into their platform, it's called AI 101.
Nathan (20:49)
So they're framing it like there'll be a 102?
Jake (20:53)
I don't know. Yeah, that's what I'm curious about. So let's see.
Nathan (20:57)
But let's see what our homies say.
Jake (21:01)
One: Benjamin Bowles. Okay, so Benjamin Bowles I came across on LinkedIn. Okay, and I like what he says: meeting people where they already are, instead of expecting them to come to the content. Yeah.
Nathan (21:16)
That is very different. That's a very different approach. Usually we just expect people to figure it out, find the resources, the time, and then if you don't, you face the consequences. There's no scaffolding for you.
Jake (21:26)
Yeah, like if you don't have the baseline, then that's on you.
And then Pat, Pat Yongpradit. Yeah, good old Pat. He said: really nice to see a government release a framework and follow it up with learning resources.
Nathan (21:37)
All right, he's...
Right, so you get the framework which is like, all right, if you don't figure this out, welcome to PUNK.
Jake (21:52)
Here's
the framework, put it in your Google Drive and let it live there.
Nathan (21:56)
but the learning resource part, okay.
Jake (21:58)
Yeah, so this is the first thing. And then there's also the follow-up: at the end of every day, there's a link where you can go to their course website, and they have more. There's a video. It doesn't have to be just the texts. Yeah, you can dive deeper.
Nathan (22:09)
So it's not just a 10 minute.
So there's more to it.
Jake (22:17)
So the pushback, like we've said, is that it's really foundational. But I like that. And this wasn't made for students; this was made for the workforce in general. But based on the first five days that I've done, I would have no qualms giving this to a student. Middle school and up; high school, definitely high school. I think it's a great one.
Nathan (22:35)
Yeah, totally.
As soon as you can handle having a phone, you can probably handle this. Yeah.
Jake (22:44)
Okay. Exactly.
And I was curious about your take, being in a high school. I hear this often, that students need AI literacy, and this is really what I want to talk about today. Okay. Okay. Like, AI literacy: my prediction. I'm starting to get more calls at the county office from schools wanting resources on AI literacy. Like, I wasn't getting that at all.
And it's finally filtering its way in, and so I have a feeling that right now, spring of 2026, we're really starting to see administrative teams thinking about next year. So my kind of prediction is, like, the 2026-2027...
Nathan (23:26)
Okay.
Yeah, you've got to say the full thing: school year. It's going to be a tough year, guys.
Jake (23:38)
Okay, okay.
A couple of weeks ago, like a month ago, I was in North Dakota, wonderful middle school, and I was curious, do you know how long it took before the 6-7 reference happened? Like from the time I entered the door, eight minutes.
Jake (23:54)
It took eight minutes to be confronted with 6-7 in the middle school in North Dakota. But I really think that next year, we're going to see schools really beginning to approach AI literacy systemically, rolling it out. I'm not ready to give my recommendations on the literacy products and frameworks that are out there. That's part of the work that I'm doing right now is figuring out
Nathan (23:55)
So, to-
Obviously top down.
Jake (24:20)
what I like and what I don't like. But I think that that's next year. And a lot of it is people pushing back that the frameworks are too simplistic, that there's more work to be done than those are doing. And I disagree with them. So, my question: I love that the pamphlet isn't for the chef, but for the person currently afraid to turn on the stove.
Nathan (24:42)
Are you afraid of turning on the stove?
Jake (24:44)
Right,
and so you're in a high school. It's two years since I've been out of the classroom. I'm curious what you're seeing out there. Do you think students need an AI master class, or do they just need to know that the stove isn't going to blow up on them?
Nathan (24:59)
Okay, okay, okay. Well, okay, as a high school teacher, really...
I'm not sure about that, because... they are playing with it. 40% of AI users are under the age of 25; we already know that from Stanford, so go listen to the last episode. But only 28% of students can describe LLM mechanics, and so to me, that's actually a really important piece.
Jake (25:12)
Sure, we know they are.
Nathan (25:36)
It's sort of like...
You drive a car, right?
Jake (25:38)
I drive multiple cars.
Nathan (25:40)
But you need to... You can drive more... No, what I mean is that you need to understand something about how cars work. Like, just in our society, it's very helpful to you, and important in my mind, to understand at least the basics of what a car is doing. Like, there's the engine, okay? There's the transmission. The transmission takes the power from the engine and puts it to the wheels.
Jake (25:42)
and a stick shift.
Nathan (26:06)
The brakes do this like
Jake (26:07)
When you open the hood, it isn't like there's a little dinosaur in there pedaling.
Nathan (26:11)
Exactly.
So yeah, like, LLM mechanics: definitely, 28% is not a great figure. And that is what I've seen out there, too. Like, kids just do not know exactly what's going on under the hood. And to me, that is an important learning piece. They need to know what's actually happening.
They need to know how it's collecting and using information. They need to know how it's collecting and using their data too, especially when kids are interacting with this stuff. Because you can set up the AI in a way. It's trying to learn about you at all times. So you have to come to these things with intentionality. So I would love to see.
something like that. Just let's all get on the same page. Let's get the same baseline. And that would really help me to do higher order thinking. Like to me, this is DOK 1. Absolutely. What this course is, right? This is DOK 1. This is here is what it is. Like this is the definition of things. Yeah. And this is the fundamentals.
Jake (27:14)
Like the fact that it's the first day or the second day, very early, where they delineate something that we feel pretty passionate about at our county office, and you do too, which is delineating the vocabulary: AI does not just mean generative AI. Almost every training that we do for AI, we start with: what
AI tools are you using in your personal life? We ask, and almost every single answer, no matter where we are in the country, it's all ChatGPT, Gemini, Alexa.
Those kinds of things. And those are all, well, less so Alexa, but those are all generative artificial intelligence, right? And so I love that in this AI course, in the first couple of movements, they delineate the difference between that and the under-the-hood AI that you don't interact with: autofocus and lane assist and all these things happening, the algorithm, the suggestion models with Netflix and all that stuff, and
Nathan (28:09)
technology you know
Jake (28:14)
The reason we're all talking about AI now, which is that in 2022, ChatGPT 3.5 was good generative AI. There was generative AI before then, but it wasn't good. So I love that idea. Yeah, Clippy. Rest in power.
Nathan (28:32)
Rest in power, bro.
Jake (28:33)
But I love that they're teasing that out, and for educators, I think that is one of the most critical shifts right now in their literacy and understanding, because the parent pushback on technology in a classroom right now is really powerful and, dare I say, really misguided.
Nathan (28:53)
I agree.
Jake (28:54)
Right,
so I think it was last week I was looking, and there are like 16 states with legislation to remove devices from classrooms, or severely limit them. And that's not just AI, that's devices, like screen time. And a lot of this is coming out of The Anxious Generation book, which has some really good points to it, but I think it has the same problem that most of the reports I read have.
They aren't discussing the usage of the technology, only that the technology was used. Okay, so let's take this: Stanford just put out this big paper again. I love Stanford HAI, there are so many good resources there. So basically they're showing, and this was specific to AI tools in a classroom,
Nathan (29:33)
Right.
Jake (29:43)
they're having a really hard time justifying whether or not these tools are helpful in a classroom. Because they're showing there is causality that the students perform better, but then when you take that tool away, they don't. And so they're missing that. And I don't want to go deep into that rabbit hole, but.
Nathan (29:56)
The dependency upon this, and like what we talked about last episode, yeah.
Jake (30:04)
they're starting to really understand it's the pedagogical choices around the tool, that's what's important. And so these state legislatures are trying to block, ban, limit ed tech in a classroom. They're lumping it all together. And to me, it's like, well, yeah, I just wrote about this on Substack. When I talk about the Fast and Curious EduProtocol, I like Gimkit.
Nathan (30:27)
Yeah, you rock Fast and Curious.
Jake (30:30)
When
I do that, I often get pushback. They're like, yeah, I've used Blooket in the classroom, but the kids just get bored with it. I'm like, well, okay, how are you using it? And you find out it's basically babysitting. Well, yeah, I'd get bored too if you put me in front of this thing for 20 minutes with no interaction. And so these state legislatures are showing actual research that using ed tech platforms in a classroom
Nathan (30:45)
Yeah.
Jake (30:54)
has not led to any gains. In fact, it has probably been negative.
Nathan (31:01)
Yeah, I mean, that was...
Jake (31:02)
They never
discuss the pedagogical choices behind them. And you and I were like, absolutely not.
Nathan (31:09)
If you pulled devices out of my classroom, it would no longer function.
Jake (31:13)
Yeah, and not because it's a crutch, but because you are leveraging it to aid in their learning. It's very different. And so to get back to this thing...
Nathan (31:25)
What it would be like is going to shop class and having no wrenches. Yes. That's what it would be. Like, okay, we're here to do this work, but now we've taken away all the tools that are purpose-built for that work. So good luck, you've got to figure it out.
Jake (31:40)
Now imagine you have a shop class, but the kids come in and it's just free play, and then you say shop classes are not effective, they did not learn the craft. Well, yeah. To me, that's literally parallel to the discussion in ed tech.
Nathan (31:47)
That would be awesome.
Yes, I think that's a really good example. Me too.
Jake (32:01)
It drives me insane.
Here's a quote. So Akeesha Horton, I believe she was on Substack, and she's talking about how a critical flaw in AI literacy is that it's all about how to use a model or how it works, but not the pushback. And Akeesha said, teaching prompting without evaluation
is like teaching someone to cook without asking them to taste the food. It's so true.
Nathan (32:28)
As we all know if you've been listening to this podcast, I'm also a musician in my other life. It's 50% education, 50% music, that's pretty much how I roll. So we've got an upcoming tour, we're going through the gear, we're going to rebuild some things, and we're going to rewire some of our speaker cabinets for the bass guitar.
And I know a lot about the electrical engineering of that, but I needed to see the options, what's out there. So I'm working with Claude to do that.
Jake (32:58)
to
like help you what?
Nathan (33:00)
To help me pick out speakers, I don't know what's on the market right now.
Jake (33:06)
You're like,
I know how things should go together, but help me find out the pieces.
Nathan (33:10)
Yeah, I need some guidance on what would work best for my application, right? That's what I'm looking for. And then also, let's brainstorm how to do some wiring, because some speakers only come in certain configurations, and that affects the wiring and everything. So, without getting too technical, I have expert knowledge in this area, Claude also has expert knowledge in this area, and we're working together. And then
we get to this point where there has been a fundamental misunderstanding between me and the agent. I am thinking about how to balance this electrical problem in terms of sound. And it is trying to balance the electrical problem in terms of functionality, in terms of the electrical part of it working.
Jake (34:00)
Okay, so you're asking something and it's doing a different parallel.
Nathan (34:05)
Slightly,
right. Like, you'd have to have really intense expert knowledge, where it's like, if you do it that way, it'll work, but it won't sound good. And so I just said, hey, you're forgetting that this is for a bass guitar. You have to balance this differently, the wiring has to be different. And it was just like,
Jake (34:08)
I wouldn't know the difference.
Nathan (34:28)
you're right. And then it shifted and corrected and finished the project with me. So while that's a very, very niche, very intense example, I'm sharing it here to say: even if you are at that expert level, you always need to have more expert knowledge than the AI. The AI is there to assist you. You have to own the knowledge.
Otherwise, if I had wired everything up just how it told me, it would have probably blown speakers and it would have sounded like absolute garbage. See? So it's like, I'm solving a problem, but you need to have the knowledge. Don't let the AI overreach for you. It's just a little bit of expert guidance here, because...
Jake (35:02)
Right.
Nathan (35:13)
that's where we need to get. Everyone needs that understanding, the understanding of: I can't let it do it for me. So if we can get that baseline across the board with a national educational initiative, I'm on board.
Jake (35:28)
Yeah, they're like, no, the stove isn't gonna blow up.
Nathan (35:32)
The
stove ain't gonna blow up, but you gotta know when it might be dangerous.
Jake (35:37)
it might be dangerous. Back to that, you know, so many people kind of couch artificial intelligence in this almost magical realm. I mean, how would we not? Think about our ancestors and the way they processed the unknown. It often was just magic. We don't know what it is, so it must be magic.
Nathan (35:55)
Sufficiently advanced technology is seen as magic. It's a quote from somebody, and I don't remember who.
Jake (36:01)
Cited here.
But yeah, one of the things that we do at the county office, you know, we do all these trainings, and we are very upfront. So when somebody comes and says, we want you to train our teachers on MagicSchool, we're like, great, we're happy to do that. But one of the agreements that we have as a department is that we always begin with literacy, regardless. Okay, and so
Nathan (36:24)
So they're stealing your homework here.
Jake (36:27)
Dude, I have to say there was a cathartic moment going through these, where I'm like, dang, did they take our slide decks? Because I'm like, wow, this made me feel good. I'm really in alignment with what's being pushed out at some of the highest levels.
Nathan (36:40)
Also, that
makes me feel better about this thing as well. That's an endorsement, good. But if it's also aligning with the training that you're doing, okay.
Jake (36:45)
Yeah.
Totally happy with it. And so, okay, last week we were in a local community and we trained their entire staff. There were 180 people in the room, like 30 more than they had told us. But one of the things that we do is that foundational literacy, and we always get eye rolls.
Nathan (37:08)
Right, this is a big...
Jake (37:16)
You know, people are like, I don't want this, this is boring, it's too technical. But we always do DOK. We do DOK 1. So for example, we take them through Legos. We play with Legos. We have this series of activities to understand supervised and unsupervised learning in LLM generation, token weights, predictability, then bias and
Jake (37:42)
bias and ethics and the security of data sets in the training corpus. We do it all with Legos. At first, when we tell them we're going to take you through what is happening under the hood, we get these eye rolls of, I'm not a computer programmer, I won't understand what you're talking about. But when we're done, we usually, you know, I can't say always, hear back: oh, that was actually really understandable.
Right, so there's nothing magic about AI. It's just math under the hood, right? It's a function machine. And the reason we do it, the reason I'm so deep into AI literacy, is that you can't make informed decisions about things you don't understand, even at a very basic, fundamental level. And teachers, administrators, you know, adults in education are being asked to make
really important decisions on technologies that they do not at all understand. And so we're seeing them not make the decision: well, we'll just block and ban. We'll just ban it all. No, no, no, no. And so that's why, as a department, we're really heavy into this, which is why we've been pushing it. It's been a push model,
sometimes met with resistance, like, no, we don't really want that, we just want to know how to use the tool. And we're like, well, this is something we do as a department if you want us to train. But now I'm starting to see the beginning of a pull model. I'm starting to see people saying, could you come in and help my department understand how AI works so that we can start making decisions? I'm like, my gosh, thank you. We're growing. So that's my idea.
Like I said, I think that next school year is really going to be focused on literacy. And I think it's really important, because there's a lot of anxiety, founded and unfounded, around it. Yeah, they don't want to turn the stove on. A lot of anxiety around it. And so the more we can demystify how this works, get away from media hype cycles...
Nathan (39:37)
Yeah, that has been so detrimental.
Jake (39:40)
detrimental.
Like, I jokingly say, dude, write my essay is the most boring thing you can possibly do with AI, and that's what people are worried about. Come on. Do we want to talk about
Nathan (39:50)
So dumb.
Jake (39:56)
facial recognition with C4 attached to drones, where I can have a thousand drones on my phone and wage war on a city? There are real things to be afraid of, right? My essay isn't one of them.
Nathan (40:08)
Look at Ukraine. But just to go back to that question, just to pull back: in my class, which everyone knows is AI focused, we use AI to, you know,
Jake (40:11)
Yeah, yeah. Yeah.
Nathan (40:25)
produce generative writing, we use AI to edit, we use AI as a thought partner. I'm trying to tool them up in all of these things that I see as absolutely crucial for their futures. Even in that class, literally two days ago, and this is a little sneak peek, because this is gonna be the next episode.
I'm doing this assignment. I wanted my students to experience Stanford with me, right? So I built all these resources. It's super cool. They were so into it. But even on that assignment, okay, I did this on purpose: I procrastinated the assignment literally as long as I could. So just to clue you in, I also often do my own assignments. If you haven't done that as a teacher in a while, you should,
Jake (40:54)
I'm totally into that with
Nathan (41:11)
because you'll see it from the student perspective. And you'll find things that aren't right and that aren't doing what you want them to do. But that's an aside. I was doing my own assignment and I purposefully procrastinated it as long as possible. I was trying to teach my students about active procrastination, actually, because I assigned it before spring break.
Go scroll down a little bit in the feed and listen to the procrastination episode. It's good stuff. Anyway, I was trying to teach them how to do that process, and we'll go more into it in the next episode, but in that class, in this thing, we're doing all this AI-focused stuff, and then
we get to the debrief, right? So I walk them through my workflow, and it was a quality product with very little time devoted to creating the product, but a lot of time spent beforehand, pre-reading, before I went in to create the product, right? Yes, in this context, I have...
Jake (42:08)
An ounce of prevention.
Nathan (42:13)
a student who looks up at me wide-eyed and says, you're really gonna show us how to cheat on our papers? And I said, yes. Yeah, exactly. That was the whole thing, where it's just like, this is not cheating, my friend. And he's like, we can do that? I'm like,
Jake (42:23)
if you think it's cheating, we'll talk about that.
Nathan (42:40)
you haven't been doing that. It was just kind of like, oh man, lock in, dude. You're reacting from a place of fear. You are. And then other kids in that class have already built fully functioning NotebookLMs with additional resources based on the one that I gave them, and are exploring this issue at a high, high DOK
level, like a societal issue, trying to plan future policy. So that's the difference that I have in my class based on different access to tools. So people, this is a really important thing.
Jake (43:17)
I mean, we literally have teachers saying, don't use AI in the classroom to do my fill-in-the-blank worksheet, and then high school students that are building apps, vibe coding them with AI, and selling them on the Apple App Store and making money. And they're the ones that aren't allowed to use AI to fill in a worksheet. We're in a really weird place.
Nathan (43:42)
Insane
disconnect right there. Wild, wild stuff. So when I see things like that, that's what brings it back for me: we absolutely need to be doing this learning. We need to be doing this training. Otherwise the access and equity, the inequality that we're gonna see in the next couple of years in terms of learning, it's...
dangerous. It could be so bad that the kid that's behind will be 20,000 years behind. Because it's a technological shift that's historical. This would be like, no, I'm never going to learn how to drive a car, I'm gonna ride my horse forever. Or, you know, I'm never going to learn how to do email
Jake (44:11)
Like I...
It's so fast.
Nathan (44:31)
or text, I'm just going to use the telegraph. That's where we're at. That's the position. Make America fax again. Seriously. Okay, so another aside: actually, if you're working on SSI benefits or anything, sometimes they want you to fax forms to the government. Like, not anymore! What? What?
Jake (44:40)
Make America fax again.
I mean, okay, people that are in disbelief, go back and watch Ted Lasso, they make a joke about it.
It's still there, right? But I think that that's why AI literacy is really important. We know that things take three to five years to roll out in education. And so if we want kids to get to where they're able to eat this amazing Thanksgiving meal of artificial intelligence, and I don't mean skipping skill development, I mean what it's capable of, what is out there. But there's a lot of fear
if you don't know about this. There's a new project I'm starting at the county with a colleague named Philip James. Amazing dude. He's on the social emotional learning side of things. So he was doing some empathy interviews at some schools. And one of the questions was just like, what would we need to do, or not do, to make a beautiful future?
So it had nothing to do with technology at all. I know, right? No discussion of technology. This was just a group setting, and Philip was telling me that an overwhelming number of the students all took it to AI and said, if we want a good future, we have to get rid of artificial intelligence.
Nathan (45:53)
Is that right?
Wow. Yeah.
Jake (46:14)
Really interesting. And so this led to, we're at the very beginning of a project of conducting empathy interviews, a formal, scaffolded framework for research, in middle and high schools to better understand the student voice around artificial intelligence right now. Yeah, because my immediate question is,
Nathan (46:32)
That's so important,
Jake (46:38)
you know, I've taught second grade, and they're little parrots. You hear exactly what their parents are talking about. And so when you get kids that don't have any understanding of how the stove works, right, any understanding of what AI actually is, how it functions, all these things, but they have opinions on them, they're reflecting the adults around them. That's the way it goes. And so I always question, where did they get that from?
Are the adults in their world AI positive? Are they AI negative? Have they experienced anything? So we're going to be doing these interviews to try to tease out how our youth feel about AI. Because all the adults in the room are saying AI literacy, AI usage, these things, and we're hearing a decent contingent of students that are against it.
Is this one of those things where things are being said in rooms about AI because they think that's what should be said? And then, I don't want to negate kids that are knowledgeable and against it. There's totally that. Yeah, they know what it is. So no, I'm excited. We're going to start doing these empathy interviews. We're hoping to
Nathan (47:34)
Right.
Know what it is.
Jake (47:46)
conduct about 50 interviews for the first round before the summer, just to get an understanding of the landscape. But to me, that's all about how...
So we have this matrix that we use at our trainings sometimes. We put out a physical board with a matrix. On one axis is your level of comfort with artificial intelligence, and on the other axis is how much you use artificial intelligence. So we have these four quadrants, with cautious adopters, pragmatic explorers, people like that. And then we just ask them to put a sticker on this thing:
where do you fall on the chart? Do you have really low usage and really low comfort? That makes sense. But to AI literacy, everybody deserves to be comfortable. So I will never be like, you need to use AI more.
Nathan (48:43)
Right, like the pushiness.
Jake (48:44)
I'm not doing that. I want you to know what's available. I want you to feel.
Nathan (48:49)
Yeah, I want you to know that there are choices. There's a choice here, but you need to know the affordances of the choice, what you're giving up if you take it away.
To me,
literacy, knowledge about a concept, it's all about safety. It's all about psychological safety. It's all about empowering somebody to make decisions and choices for themselves. I'm a heavy user of artificial intelligence. I find great value in it, and great problems. I find all of it. And we all know people
Nathan (49:21)
Any tool. Both sides.
Jake (49:25)
that have a really solid understanding of artificial intelligence and don't want to use it. That's cool. That's totally fine. Right? Because to me,
AI literacy is just about increasing someone's comfort around it. And we all deserve to be comfortable. I say that in trainings all the time: your pushback is more than welcome here. So many times it's met with negative resistance, because it's fear and it's protective. And I'm like, you shouldn't have to live that way.
Nathan (50:00)
Yeah.
Jake (50:02)
You deserve to be comfortable.
And that has nothing to do with how much you use AI. But that's all about exposure, right? You become exposed to something, and there are these frameworks to do that. But I do love, there's a guy named Douglas Goslin, I think that's his name, on LinkedIn, and he said, you have to remember that exposure is not readiness.
Douglas is actually a systems analyst. I read his stuff on LinkedIn sometimes, and he said this: exposure is not readiness. A workforce is not stronger because more people touched the tool. That's a good framing to me, right? So that's why AI literacy is important to me, or part of it: I need to expose people
to this so that they can choose to use it or not. I'm clear-cut on whether it's useful or not.
Nathan (50:57)
Me too. But I cannot prescribe for another person. I don't know your life. But I know it could help you with something. I know that it could. It's just... you must have the choice. You must have the choice.
Jake (51:12)
And we've mentioned it before: sometimes I have somebody that's really resistant to AI in education. There was this wonderful teacher, a very veteran teacher, very close to retirement, who was really lamenting the fact that she had to sit through a whole day of AI training, on a few levels. One,
Jake (51:35)
it was a long day. I actually don't like the whole-day model at all. You're asking people to confront things that are uncomfortable and alien to them for a long time. It's a lot of heavy loading. But she was also really, really resistant to AI, and most of her reasoning wasn't actually sound. It was kind of fear-based, hype-cycle, media-based stuff.
You know, she was saying it's unethical as a teacher to use artificial intelligence, because the district hired her, not an AI, to do that work.
Well, I pushed back with: isn't it unethical for the district to ask of teachers what they do ask?
Nathan (52:15)
You have to have the skills and knowledge, and I mean deep institutional knowledge and practical knowledge, of like three to four different disciplines, usually, in most teacher roles. You have to be highly organized. You have to be all of these different things. We know this. You can't just be professional in one area. You have to be professional
Jake (52:34)
In so many areas, right? And so, to me, a teacher like that, I have so much empathy for her. She's had this beautiful career, and you know these teachers: when you look in their eyes you're like, my gosh, you have hundreds of students that remember you with great honor.
Nathan (52:41)
I have a lot of empathy.
yeah, thousands. Yes. Yeah,
my third grade teacher was like that.
Jake (52:58)
Yeah, like you could just tell this was her.
But she didn't feel the agency to be able to use AI. And just talking with her, she was like, there's nothing wrong with what I'm doing. And so sometimes I tell people like that, I say, you know, it's true. We all used to wash dishes by hand.
Nathan (53:09)
Yeah.
Jake (53:16)
There's nothing wrong with washing dishes by hand. I still wash some dishes by hand. But we also have the ability to use a dishwasher. It's an option. It's out there. And I think that increasing literacy, to assuage that fear,
Jake (53:32)
lets you make your own informed choice on your usage. That's why literacy is so important to me. I have this personal belief that as knowledge increases, you will remove some of those barriers and people will start to see healthy, productive ways to use AI. For sure. I do see that. There was a quote by Matt Meador on LinkedIn. I love it. He said, the real barrier isn't awareness.
Nathan (53:51)
Yeah.
Jake (53:59)
It's applied literacy, the difference between knowing and using. To me, that's an important distinction, right? So moving forward, my hope for this next year, I do believe we will be pushing out a lot of AI literacy, tangible, usable tools. So listeners, stay tuned. We will have an episode where I finally give my recommendations. I'm not ready for that yet. There are too many
Nathan (54:13)
This is what people need.
Jake (54:25)
things out there that I'm not familiar with yet. Yeah, you know, there are some great resources, like, my gosh, AI for Education, amazing resources, policy, TeachAI, all of these things. But to me, that's the thing: I want to increase access. I want to increase choice. I want to increase
Nathan (54:29)
Me too.
Jake (54:49)
someone's human capability to use or not use based on actual reasoning, not fear, not discomfort, and not just a lack of curiosity. Because we know, in the trenches, back to good old soldier-based war metaphors, teachers don't have the time or the emotional...
you know, what am I gonna say, like the emotional bank account, to be diving into how they should or should not be feeling about AI right now.
So, huge amounts of empathy when it comes to that. And then also, it's no secret, I'm a parent. I've got three children between 15 and 23, which is hard to believe now. And I'm turning 50 this week, right? So I've done some living. There are things that I know that my children don't.
Nathan (55:36)
Zany is 15.
Yeah.
Jake (55:48)
There are things that my kids have made decisions on that are wrong, because they're misinformed. And so to me, back to that idea, I wish we had a better word for ignorance.
Nathan (55:55)
Yes.
it's so connotatively insulting. But really, the denotative definition of ignorance is just lack. Okay.
Jake (56:03)
If you call someone ignorant, it's negative.
A lack of knowledge. And so
I feel a pressure to combat ignorance, both the denotative and connotative versions, of like, you deserve to have the information to make an informed decision. And sometimes that requires me, as someone who's a couple of steps ahead, to push back.
Nathan (56:32)
From a place of gentle, from a place of care, from a place of kindness.
Jake (56:37)
There are too many things out there, too much understanding of where the economy is going. So I've always had these truths that I talk to students about. I'm like, it doesn't matter if we agree with them or not, they're true.
Like, one of them is: no matter what, if you write something with poor handwriting and bad grammar, you're going to be read as less intelligent. It's not fair. No. But it's true out there, right? It's true.
And so if you don't incorporate artificial intelligence in your classroom, you are harming children.
Nathan (57:10)
Yeah, I'm there too.
Jake (57:12)
And that's
a stance that people are gonna be shocked at. If you are the person that says never in my classroom, there is no place for it, and I push back on this to people's faces, well, then the way I see this going is you will burn down your career and retire feeling angry, disrespected, right?
Nathan (57:36)
now.
Jake (57:37)
and resentful, these things, because students will be using AI every day. And so I'm adding that to my litany of truths: regardless of whether we agree or disagree on AI, I don't believe we have the power to stop it any longer.
Nathan (57:41)
It'll be destroying you every day.
Jake (57:57)
Like, we cannot combat that unless we go full Fight Club and take down the whole system. You know, that's the level that we...
Nathan (58:06)
If we enter into World War III and everything's torched, then yeah, I guess clean slate for all y'all.
Jake (58:11)
They got it right in The Stand, I don't know. But to me, there's a social contract agreement there: regardless of whether I agree with the use of AI or not, if I'm in education, I have an obligation to teach students how to use it in a healthy, reasonable, respectful manner, because they will 100% be required to understand it in the job market.
Nathan (58:37)
Yes, that's exactly how I frame it in my mind too. I'm doing such an incredible disservice to my students if I am not incorporating artificial intelligence into whatever subject material I'm working with. Like, I just...
Jake (58:53)
Yeah, so, man, I've brought it up before, and I'd love to talk to somebody who has deeper knowledge on it, but there's the statistic about children who are read to by their parents entering kindergarten and the size of their vocabulary: the difference in vocabulary between somebody who's read to as a baby and a child and somebody who's not is never made up.
Never? I mean, I'm not an expert in this, but that's what that study is claiming. And I really feel like, between students who have access and quality guidance with artificial intelligence versus those who exit schooling without it, I don't know how that gap will be made up. Now, of course, there's the indomitable human spirit, right?
Jake (59:41)
Kids
will learn, they will have to pick it up. But imagine, let's just say, whatever case scenario, two people enter the same job. One has deep experience using AI productively, healthily, strategically, all of those things, versus the kid who, I don't know, has to sign up for the Make America AI Ready course to get the very basic foundation. I don't see them being able to compete in the same job market.
Nathan (1:00:04)
Right.
Jake (1:00:09)
And so the haves and have nots just skyrocket.
Nathan (1:00:13)
The disparity there, which grows exponentially.
Jake (1:00:16)
And
I look at just you and I, right? So like November 2022, I remember that night. It was early, like two o'clock in the morning.
Nathan (1:00:24)
I
remember so clearly when you came into my room too.
Jake (1:00:28)
Yeah, it was the next morning, and I signed up for a Pro account on ChatGPT moments after seeing it on TikTok. I just knew. And so we've been using AI in education with students in some format or another since; like, we used to just drive it in front of them. The difference in usability between you and I and somebody who just today is saying, maybe I'll
Nathan (1:00:45)
Right.
Jake (1:00:55)
make an account with ChatGPT and see what this is about. The chasm between those is huge.
Nathan (1:01:04)
Well, and actually I don't think that that's recoverable because...
Jake (1:01:10)
hard, yeah.
Nathan (1:01:12)
I'm gonna say this: me and Jake have been evolving with this technology as it has been developing. And it's going to keep going. There's things that I haven't even seen. Like, Jake is talking about Cowork, right? And I'm not really into Cowork yet. I just haven't had the time to see how it could fit into my workflow.
Jake (1:01:27)
Dispatch Cowork.
Nathan (1:01:35)
Where you're, like, a master. I'm going to, but yeah, it's not something I need right now, so I don't need it in my workflow. It's just out on my periphery. But that's happening every week. If you don't jump in, you will never swim the channel.
Jake (1:01:53)
You've built a couple of custom GPTs, yes? Right? So custom GPTs are like the gateway to agents. Yeah. Exactly. So now imagine: every single training that I do, I come across people that have not put their hands in an LLM. Not on Snapchat, not ChatGPT, not anything.
Nathan (1:02:04)
Thanks.
Jake (1:02:19)
You know, they've never heard of Claude, and that's fine. Whoa. No, no, no. Still. Absolutely. Really? Yeah, absolutely. So that person, for the very first time, is doing what we did in 2022, which is like, right? "Write me a limerick."
Nathan (1:02:34)
Right, yeah, And then, you know, pushing it too. I was just like, what's the content knowledge? How can it help?
Jake (1:02:40)
Exactly, here we go, exact experience. This was last week, on Wednesday. We did this training for Paradise Unified School District, the whole district, about 180 people there, TK-12. So we had everyone from the superintendent, Betsy, who is incredible and so willing to dive in and explore this.
I don't want to say all the way down to, but hierarchically, everybody: paraprofessionals, people from transportation, bus drivers. Everybody came. They told us about 150 and then 180 came, and the reason for that discrepancy is that all the optional people, the "we'd love for you to come, but it's not required by your contract" folks, came. Partly, I don't want to say entirely because of us, but partly,
because our team, Jana and I, had done two sessions with their admin team before this. And this is what they told us: they said, hey everybody, we have our whole staff training, and we've been working with this team, they're awesome. And so they came. So now imagine, I'm literally sitting with people, showing them the equivalent of how to turn their computer on. Like, here is Gemini, that's what we were using.
Here's Gemini, this is where you can type a question, now just go play. And for these people, it's that jaw-drop moment.
Nathan (1:04:00)
Yeah. Okay. I remember. Okay. I remember it well.
Jake (1:04:03)
But meanwhile, I have Claude Cowork running on a desktop. And when I'm at the park eating lunch with my colleagues afterwards, I essentially text Claude as if he's my intern, he's my assistant, to run a research report on some certain things. And when I came back to the office, it was done. And so like, you know, like,
The chasm of usage here, I'm going to continue following that frontier, that jagged frontier, the bleeding edge of AI. How will the person that just found out you could write an email get there? Not that everybody has, but I mean, the capabilities are a...
Nathan (1:04:41)
No.
The disparity is wild. And I mean, really, you can even quantify it in terms of workload. Super simple, right? You have the capacity to do the work of five to ten people in less time. Someone who does not have your skill set has the capacity to do the work of one person.
Jake (1:05:10)
So, Sam Ellingsworth is a writer on Substack. Wonderful thing. I think his publication is called Slow AI. Highly recommend it. Sam Ellingsworth is definitely his name, with an I. So he just posted something about what we've talked about, like
Nathan (1:05:24)
I don't know this cat.
Jake (1:05:30)
So if you look at my workflow, you just said I have the ability to do the work of five people in less than one person's time. And he was writing about this. But here's the problem:
we've talked about email. When email came out, there were all these ads like, it'll be amazing, it'll save time, you can go home and have dinner with your family now. And we know that didn't work. I mean, the email landslide buried that.
Nathan (1:05:56)
You can go home
and write emails next to your family that's having dinner without you.
Jake (1:06:00)
To me,
another aspect of AI is: we know it's out there. We know it's going to happen. We know that it is only increasing in prowess, in usability, and in expectation of usability. So now, if the modus operandi is a heavy user who can control an army of five agents, ten agents doing the work for them, and you roll up not being able to do that,
You have to replicate the work of those five agents to keep your job.
Nathan (1:06:32)
Right? In terms of performance.
Jake (1:06:33)
And so I'm like, okay. So what's expected of an employee? I don't want to repeat the email thing; I'm like, no, no, no, we have to get back to black first. We've been in the red. But the average work product of an employee is going to change, is going to increase.
Nathan (1:06:39)
And like, we've talked about this for years. Yeah, more productivity will become the expectation. It will. With any shift in technology that pushed efficiency forward, more productivity was expected of the worker.
Jake (1:07:00)
Yeah.
So now imagine you have a student in, let's say, fifth grade. That's kind of the beginning of really awakening into using AI.
In middle school, they're starting to learn about maybe using embedded chatbots in SchoolAI and MagicSchool; like, I'm not giving a kid ChatGPT, you know. High school, to me, is where that lives, and then really grades 11 and 12. I think in nine and ten it's learning to use them
Nathan (1:07:39)
Even though there's a lot of literacy there.
Jake (1:07:45)
And California came out with their guidance, which is really good on age banding. But they're talking about 11th and 12th graders exponentially driving AI to reach things that we couldn't otherwise. And so now imagine you have a kid who goes through a program where they're empowered that way,
and then goes into, whatever, you know, let's say it's not a college-bound kid, it's a workforce kid, maybe not even a skilled-workforce kid.
Nathan (1:08:14)
Like the kid who's gonna... "I need to get a job. I need to be on my own." Okay, that guy.
Jake (1:08:16)
"I need a job, get some money."
Okay, that guy. And then compare that with the same kind of student who didn't get AI. That's gonna be detrimental.
And so, to me, maybe at the bleeding-heart side of the AI debate in education, when we have these teachers who are like, "Absolutely not, nowhere in my classroom," it is just becoming more and more clear:
back to Morrison's framework of technology adoption, they're protecting their station of power. They don't know it, and of course it's human to do that, but it's at the expense of a kid's own comfort level once they reach the workforce. So imagine that teacher got what they wanted and never had to use AI in their classroom and...
Nathan (1:08:49)
Oh yeah.
Yep.
Jake (1:09:11)
that's probably a school culture that doesn't tolerate it, and multiple teachers are gonna be under that. And so now you have that kid go through that program completely disabled to enter the job market.
Nathan (1:09:23)
Disabled, I mean. I don't know. Yeah, that is the word.
Jake (1:09:25)
What else would you call it? So to me, protecting your station of power is becoming more and more nefarious. It's dangerous.
Nathan (1:09:35)
Actually, yeah. Negative. Mm-hmm. A real net-bad impact.
Jake (1:09:44)
Yeah,
I had a really interesting interaction at a high school. We were doing a whole teacher training for all the teachers. And this one teacher was really hemming and hawing around it, but she finally posed the question, right? And she couched it in like a hypothetical, but we knew it was her. And I talked to her afterwards about it. And she was like, well, what do you do if we're expected to teach AI literacy and usage in a classroom?
But a student is diametrically opposed on moral and ethical grounds to the use of AI? And people often bring up the environment. There's a lot of misinformation around it, but there are real problems with the environmental impact of AI. And it's really misunderstood: it's about training models, not using models.
Nathan (1:10:36)
Wildly misinterpreted.
Jake (1:10:38)
By the way, there's a really cool website called Which Uses More?, with hyphens between the words: which-uses-more.com. It's cool because you can choose an AI usage, like writing a paragraph, making a song, and so on, and it compares the energy usage and the carbon footprint of those things. It's really good. So I had this teacher push back, like, what do you do when they are morally and ethically opposed to it?
Nathan (1:10:42)
Alright.
Jake (1:11:05)
And I loved it. My colleagues looked at me, because it was kind of the first time we had been asked that, so point blank. And here's what I told her, and I stand by it. I said, you know, we're actually asked to do that all the time; that's not new to us. Think about our families who have religious reasons why they don't interact with things. Right? We've accommodated for that all the time. It doesn't excuse them from understanding the knowledge behind what we're doing, just from the practice of it. And so to me, I'm like, wait a minute, that's when I started talking about evangelizing. We're not here to evangelize artificial intelligence. I'm here to teach you what it is so that you can make your choice informed.
Nathan (1:11:48)
I'm gonna raise you one even darker, because while you were... I love that. It's going real dark today, because I really care about this issue a lot. I really care about people learning how to use AI. It will change your life, and it can change the life of your students. The lowest-income, most underdog kid can perform at the highest level.
Jake (1:11:50)
Okay.
You're the dark one today.
With
like a nominal fee a month, they can be vibe coding and selling apps.
Nathan (1:12:18)
bucks a month and a phone. I'm not even exaggerating, not at all. This is the world we're entering. But in the United States, now, this is literacy, period. This is literacy. We mentioned this earlier, and I wanted you to grab the numbers and bring them up, because it came up at Stanford, because they're trying to use artificial intelligence to create
basically adult education.
Jake (1:12:47)
Basic literacy, the ability to read, or AI literacy? Yeah, we know that those numbers are bad.
Nathan (1:12:50)
The ability to read. Okay,
so the ability to read. These are the current numbers from NCES: U.S. adult literacy proficiency, ages 16 to 65. 28% low literacy, that's 59 million people. 29% basic proficiency, that's 61.5 million people.
That's more than half of that population defined as lacking literacy for the modern workplace. Over half of Americans. Only 43% of adults, actually not even adults, 16 to 65 is the figure, only 43% are literate, can read.
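As a quick aside, the figures quoted here hang together arithmetically. This is just a sanity check on the numbers as cited in the episode, not a fresh pull from NCES; the implied population base is derived, not quoted:

```python
# Sanity check of the adult-literacy figures quoted above (values as cited in the episode).
low_pct, low_count = 0.28, 59_000_000        # "28% low literacy, that's 59 million people"
basic_pct, basic_count = 0.29, 61_500_000    # "29% basic proficiency, that's 61.5 million"

# Each percentage/count pair implies roughly the same 16-65 population base (~211M),
# so the two figures are internally consistent.
base_low = low_count / low_pct
base_basic = basic_count / basic_pct
print(round(base_low / 1e6), round(base_basic / 1e6))  # 211 212

# Low + basic together: 57% "lacking literacy for the modern workplace",
# which leaves exactly the 43% proficient share quoted a moment later.
combined = round(low_pct + basic_pct, 2)
print(combined, round(1 - combined, 2))                # 0.57 0.43
```

So "more than half" and "only 43%" are two views of the same split.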
Jake (1:13:43)
Proficiently literate, to...
Nathan (1:13:45)
can read and understand information and actually function in the modern workplace: 43%. So even just starting there, with basic knowing how to read, we're already failing at that. And so then,
you introduce AI literacy on top of that already-existing disparity. Things can get so dark so fast here. That's why I really care about this, and that's why I'm going dark on purpose.
Jake (1:14:16)
Well,
and how about back to ignorance? I've had people kind of bring up that it's inaccessible to people with literacy problems.
Nathan (1:14:24)
It is inaccessible. It's not. It's not? No. What?
Jake (1:14:28)
Back to how it changes and evolves so quickly. Have you not used Advanced Voice with ChatGPT?
Nathan (1:14:35)
right, of course I have. Dang, you're right.
Jake (1:14:38)
It's like, "it's all in English and it's all written." No, it's not. You can do, you know, dare I say, the majority of what you can do with AI, at least a huge lion's share of it, just by voice. And that was early on. I used to go for walks in the morning.
Nathan (1:14:42)
It's not.
You're so right. That was- in morning.
I was doing my Japanese with Chad for a while.
Jake (1:15:02)
Like
If you haven't tried it, it's usually the app-based version. Put your headphones on, go for a walk, and learn something with AI. I'll be like, hey, I heard this thing, let's talk about it. Okay. Yeah. And so, you know, I see this literacy statistic come out and it's God-awful. It's awful. In fact, John,
Nathan (1:15:13)
So real.
Jake (1:15:22)
John just shared an article with me about No Child Left Behind.
Nathan (1:15:25)
Yeah, every child left behind. I haven't read it yet, but...
Jake (1:15:27)
You know, the numbers are coming to light: for 20 years we've been losing literacy, and a lot of it is based on the structures of education that are there. And then there's the science-of-reading problem, whole language versus phonics instruction, there's so much of it. Perverse incentives. But...
I've seen people try to use the lack of literacy, the lack of reading literacy, as a reason why we shouldn't use AI in the classroom, because it creates haves and have-nots. I say no; we just haven't made the connection to audio-based artificial intelligence.
Nathan (1:15:58)
doesn't. You're right.
can I?
Jake (1:16:04)
Can I interface with Claude Cowork and have a swarm of agents working for me? No, I can't do that without writing. But in terms of percentage of usability? Audio. You know, dare I say the Pareto principle, the 80-20 rule: 80% of the lift you can get just by audio interface, talking to your LLM.
Nathan (1:16:11)
That's kind where my mind was going.
Yep. The last 20 % is where you come in and tinker.
Jake (1:16:29)
And so maybe they're not going to be able to text their dispatch, text their thing.
Nathan (1:16:35)
to work at that level. Yeah, but they can still work at a level.
Jake (1:16:37)
And how about they can have it teach them how to read.
Nathan (1:16:41)
Yeah. Yup. The old... That is meant to come down.
Jake (1:16:42)
You know, the ultimate scaffold: it could
help bridge that gap by orally teaching somebody in the comfort of their home, not under the judgy eyes of people. I'm not saying that's the best way; I think working with a human is better for that, but they could.
Nathan (1:16:52)
No No pain.
Yeah, easily. Easily.
Jake (1:17:03)
And it's going to create writing samples on the screen and read them to them, and then they read it back, and it's going to understand and correct them according to their personal learning. It's huge. And to me, that's why that Douglas Gosselin quote is so important: exposure is not readiness. So to go back to the very beginning, the Make America AI Ready platform, it's text-based.
Nathan (1:17:26)
It is text-based. But...
Jake (1:17:30)
If
you are savvy on your phone, it can read it to you.
Nathan (1:17:33)
Yeah, yeah, okay, you can do dictation.
Jake (1:17:35)
You can do dictation
and things like that. But my idea is like, OK, so let's say we start rolling out some foundational AI readiness next year to increase people's access to the choice of whether or not to use AI. What happens when that foundation is done?
Nathan (1:17:53)
That scaffold has been scaffolded.
Jake (1:17:57)
Let's just say that
a large proportion of the American workforce has gone through Make America AI Ready or something similar, a very quick pamphlet version of AI literacy. Then my question is, what's next? And I love that the Department of Labor has it built into their framework:
creating pathways for continued learning, providing structured routes to progress to more advanced, specialized AI skills and AI-related occupations. And no, you don't have to eat the whole elephant. Just take a bite. Understand what the taste of an elephant is first. To me, that's next year.
Nathan (1:18:28)
AI-related occupations.
I agree. Baseline. We need a baseline.
Jake (1:18:40)
You know?
What is AI? It's not magic, it's math. It's not trying to fool you; it's making predictions based on mimicking the human experience. It can be wrong, it is often wrong. And the level of expertise needed to spot when it's wrong is getting really high.
Nathan (1:18:52)
And it can be wrong.
It is getting really high. Nobody would know unless they had my specific knowledge.
Jake (1:19:03)
I would not know the difference in that.
But if somebody has the opportunity to become AI literate, they don't need that level of expertise. It's not about getting the right output then; it's that they're literate enough to say, "This might be wrong. I don't know how and I don't know why, but I do know it might not be correct." And a lot of people are using AI for medical stuff, right? I've used it.
Nathan (1:19:23)
Yep.
It's helpful. It's extremely helpful.
Jake (1:19:34)
But now imagine somebody who doesn't understand the predictive nature of the generative pre-trained transformer, this LLM structure, and it gives them an answer. Maybe it's the opposite of what their doctor's telling them, where you and I might be like, hmm, I'm AI literate: either the doctor is actually wrong, or
the AI is wrong, or there's a nuance in there that it's not understanding.
Nathan (1:20:03)
Right, that it's missing. But if I don't have... You know that it was audio. It gave me an extremely expert-level answer, with options, with different things that I could do. But it was...
Jake (1:20:07)
It gave you an expert-level answer, a correct answer,
for a different question.
Like we've said before, I love telling people that generative AI, artificial intelligence, is the greatest golden retriever you'll ever own. It is very well trained, and it does exactly what you ask it to do. But sometimes you asked it something that it didn't understand the same way that you did.
And so in this one here, you asked it to do a thing and it was like, yeah, I know exactly how to do that; I'm going to put on the hat of an electrician or an electrical engineer and give you an expert-level answer couched in that lens. But you were like, nope, I need you to behave like a musician. Oh, sorry, I got the wrong expert. And so to me, that's why literacy is so important, because we will never match...
Nathan (1:20:44)
⁓ electrical engineer.
Yeah, this is audio.
Jake (1:21:02)
Moving forward, we'll never match the generalized expertise of AI. Someone was just on the Lex Fridman podcast saying, like, I think we actually hit AGI. And, like, they've said that before, but...
Nathan (1:21:12)
Yep.
Yeah, it's so close. Like, like.
Jake (1:21:17)
we're so close, even if it's ten years away.
Nathan (1:21:20)
Yeah,
another interesting aside: I was at Home Depot the other day, as I do, as a DIY enthusiast of all things. And in the parking lot, I happened to park next to a family. They had the hood cracked on their car, and they were all standing there, seeming really in horror, staring into it. They were having a hard time.
Jake (1:21:40)
into the abyss.
Nathan (1:21:43)
I get out of my car, walk up next to them, and go, all right, what are we looking at? And, you know, I don't know this vehicle at all. It was an engine I've never seen before, a wiring harness I've never seen before. I had no knowledge of this car. But...
I have Chad in my pocket, and me and another dude rolled up and we both just tag-teamed this car problem. There you go. I just took a picture, asked it, all right, where is this? Where does this wiring go? And, you know, went through my own checklist of what I needed to check on the vehicle to see if it was working or not, because it wouldn't start. And
Jake (1:21:59)
Nathan (1:22:21)
that expert knowledge paired with the artificial intelligence in real time, finding a solution to help this person, good samaritan them on their way.
Jake (1:22:29)
It was a success.
Nathan (1:22:31)
Because it was a
Jeep Grand Cherokee. Don't buy one. But what it gave me, though... I did have other things to do, but I was just in the zone, right? I wanted to help these people. So we spent probably an hour just trying to diagnose the problem.
But I know that we went through every possible thing, and that it was actually out of my depth.
Jake (1:22:57)
Yeah,
I mean there was a huge likelihood you could have solved that problem.
Nathan (1:23:01)
Very likely, if I had a little more time and a few more tools in the car. Yeah, it was a really weird wiring problem, an electrical problem. But yeah, you know, it's so useful, and just having that literacy helped me with a real-life practical problem that the average American deals with.
Jake (1:23:06)
maybe a less severe problem with the car.
Nathan (1:23:27)
You gotta keep your car running and you're keeping an old car going. That's how we live our lives. So it's honestly the most incredible thought partner for working through mechanical problems that I've ever discovered. Because the old way of working is I have to research online, I have to go to the forums, I have to
Jake (1:23:47)
going to like that baby blue manual
Nathan (1:23:49)
Or, okay, there's the Haynes manual. Yeah, you need the Haynes manual. And even better is to get the repair manual that the dealer has, the actual official one. It's all in the training data. And...
Jake (1:23:54)
We're gonna rebuild a Bug in 50 days.
Nathan (1:24:08)
It linked to ManualsLib and just grabbed the repair manual online for me and brought it to me to look at. "Start on page 13." It was that specific. And that's a real problem, a real application.
Jake (1:24:24)
And good old Ethan Mollick: today's AI is the worst AI you will ever use. I'm like, dang. You know?
Nathan (1:24:30)
Yep.
It
has gotten so good in two years.
Jake (1:24:39)
I feel like, honestly, since September, we've hit this other level. Anthropic has one of their current research models they might have to abandon because it's too powerful. It's literally hitting their protocols of, like, nope.
Nathan (1:24:43)
It is otherworldly.
It's kind of dangerous. Yeah, I believe it. I love Anthropic.
Jake (1:24:58)
I
love that. Dario Amodei and Daniela, they broke off because they wanted a transparent, safe...
Nathan (1:25:06)
Yeah.
Yeah. Yeah. Yeah. Yeah.
Jake (1:25:10)
It's a little slower on the back end sometimes, but it's always been... I just wish there was, like, an education walled garden. Like, I can never recommend it to people for professional use.
Nathan (1:25:18)
Yeah, if they... I know.
I know, that's the one thing I get stuck on too. It just absolutely beats Chad for those kinds of things. Chad is really funny and creative and weird and silly, and I definitely still want Chad in my life, but Claude is the one I go to for any work.
Jake (1:25:44)
And then, yeah, I don't even want to start the Anthropic versus the Pentagon discussion yet.
Nathan (1:25:48)
Yeah, we're not there yet.
Jake (1:25:51)
AI literacy, that's my prediction. Right now, schools are beginning to grapple with it. There's still a lot of head-in-the-sand happening, but we're starting to see initiatives of schools rolling out, or planning to roll out, AI literacy. TK-12, right? Yep.
All the way. California has really cool guidance, and I've looked at some of the other states too. They're good. I love that, yeah, TK-2 is totally screen-free. Right? There are amazing ways to do it; we did it with Legos with all of our adults. Yeah. And then just building in complexity to increase capacity, not increase usage. I think usage happens because people start to see a value in it,
Nathan (1:26:16)
Yep.
Jake (1:26:32)
but it's really about increasing comfort and understanding so that people can make informed choices. And that's why, to me, it's so important. It's the same thing on the AI-illiterate side of the argument: a parent who's living under one paradigm
hears that they're using AI in the classroom, and they storm the gates, angry, because they've heard about chatbots and suicide and fear-mongering and relationship-building and things like that. And then they get there and you're like, oh, absolutely, that's not what we're doing. We're using Snorkl, which is not generative AI, but it is using AI. Just like my phone.
People are like, nothing good will ever come out of AI. I'm like, really? Because I really like the autofocus feature on my phone, on my camera. I really like that the other day, when I stepped out of my car in front of Day Camp Coffee and didn't see a car coming, my car beeped, beep beep, to let me know somebody was coming really close. That's AI. It is artificial intelligence doing that.
Nathan (1:27:19)
Yeah, I really like it.
Yep. It is. Yeah.
If you have any modern vehicle with any kind of lane assist, that is artificial intelligence on board in your vehicle with you at all times. Yeah. Very helpful.
Jake (1:27:45)
And so I
think there are very few people that look at those kinds, or like Netflix, suggesting shows that are...
Nathan (1:27:52)
I actually just found a new show that,
like, blew my mind that way. It was like the algorithm clocked me. I love this.
Jake (1:27:58)
Yeah, and so I hope
that increasing literacy and awareness around artificial intelligence shows people that AI does not equal AI. We don't have to be afraid of all of it, while also understanding that there are legitimate problems,
Nathan (1:28:06)
Yeah.
Yes, there are legitimate concerns.
Jake (1:28:16)
things to be afraid of out there when it comes to AI. Right? But literacy is how we approach that with power.
Nathan (1:28:24)
Which is power.
Jake (1:28:24)
My GI Joe heart just went off. Because it's half the battle. All right. Well, I would say to our listeners, just to wrap this all up, you have my recommendation as of right now. I think that the US Department of Labor launching the Make America AI Ready program is a good one.
Nathan (1:28:43)
That's what I thought.
Jake (1:28:43)
I would love
people to let us know how it goes. It's super easy.
Nathan (1:28:49)
two zero two zero two. Ready.
Jake (1:28:52)
Yeah, it's very easy. It's not the end-all be-all, but it's forming a shared vocabulary, a foundation of understanding to assuage fear. There's a lot of fear out there, and misunderstanding.
Nathan (1:29:06)
Fear is the mind-killer.
Jake (1:29:11)
See, I'm not nerdy enough. I haven't read Dune.
and to stay tuned because we're going to be coming up with some pretty cool workflows.
Nathan (1:29:18)
Lots of praxis. That's all we do. And, you know, again, caveat: I know we have a pretty niche teacher audience. If you're listening to this, you're probably in the game. Who's in Iowa? Shout out to Iowa. A bunch of small towns, like the whole town's listening to us over there.
Jake (1:29:37)
Sorry, if you are from Iowa and you're listening, please reach out, because 12% of our downloads are coming from three small towns in Iowa. And we're not only getting 100 downloads; we're getting really good downloads. So somebody out there is sharing.
Nathan (1:29:45)
Yeah, it's awesome. What educator am I?
Yeah, like a lot.
It's really cool. But yeah, so I get that like this might not apply to you. But at least look at it and see if it could apply to somebody in your life. You know, like I'm definitely sharing this with my mom. Definitely gonna do it with like, you know, just yeah, those people on the fringe that just haven't even touched it and are starting to say a lot of things that are very fear based, which is mostly what I'm hearing from everyone in my life that isn't.
Jake (1:30:05)
kids.
Nathan (1:30:18)
in this conversation with me, you know?
Jake (1:30:21)
Cool.
Stay curious.
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
My EdTech Life
Rebel Teacher Alliance
Deep Questions with Cal Newport