
aiEDU Studios
aiEDU Studios is a podcast from the team at The AI Education Project.
Each week, a new guest joins us for a deep-dive discussion about the ever-changing world of AI, technology, K-12 education, and other topics that will impact the next generation of the American workforce and social fabric.
Learn more about aiEDU at https://www.aiEDU.org
Peter Gault: Writing as a superpower
"When you can write out your own ideas, explain your thinking, build an argument and use evidence to support it – that's an incredibly valuable skill for kids."
In this week's episode, we spoke with Peter Gault of Quill.org about the evolving relationship between AI and education. Peter shares how Quill transitioned from basic natural language processing (NLP) to sophisticated language models, making a strategic decision to rebuild their platform on generative AI despite having invested years developing their NLP models. The result? A more powerful tool that helps students develop critical writing and thinking skills when they need them most.
At the heart of our conversation, Peter offered a crucial insight – as AI becomes increasingly capable of generating content, the ability to think critically, evaluate information, and form independent opinions becomes even more valuable. While some worry about teaching students to use AI tools, Peter thinks focusing on foundational skills creates more resilient learners who can effectively collaborate with technology rather than be replaced by it.
Perhaps most compelling is Quill's approach to AI literacy, which integrates discussions about algorithmic bias, ethics, and the future of work directly into writing activities. By giving students agency to understand AI as a malleable tool rather than a mysterious black box, Quill and educators can prepare the next generation to shape technology's development rather than simply consume it.
What skills will remain essential in an increasingly AI-powered world? How can education evolve to prepare students for this AI-driven future? Listen to the episode, and let us know if we missed anything!
Learn more about Peter Gault and Quill:
Alex Kotran (aiEDU):Peter Gault, nice to see you.
Peter Gault:Great to see you as well.
Alex Kotran (aiEDU):When did we first meet? You were a Fast Forward alumnus, right?
Peter Gault:Yep, I was in the 2015 cohort, and were you 2018, 2017? 2020? 2020, oh wow, it all blends together. But yeah, I remember seeing your pitch when you were first getting started, and it's amazing to see the progress you've made over five years, what high quality and success look like.
Alex Kotran (aiEDU):And it's funny, because I don't know if you feel this way, but if I look back on where we were in 2020 and where I am today, I would have been enthralled to know that we were able to grow to the extent that we did. And yet now I find myself quite anxious about continuing to grow, and I'm sure you're in sort of the same boat, where you never really get to rest on your laurels. There's always more to do: more work to do, more funds to be raised, teams to be hired. If anything...
Peter Gault:It just gets more complex. It feels like a board game you're playing, but every round there are more possible moves, more options, more things to consider, and it just keeps getting more and more complex. It's exciting, though, that we both started with nothing and have built these organizations that are now having a national impact and reaching thousands and tens of thousands and hundreds of thousands of kids. It's really incredible.
Alex Kotran (aiEDU):Yeah, well, millions in your case. Peter, why don't you tell our audience a little bit about Quill.org and what you do?
Peter Gault:For sure.
Peter Gault:We are a nonprofit that helps students improve their writing skills.
Peter Gault:We really see writing as a superpower.
Peter Gault:That, as a young person, when you can write out your own ideas, when you can explain your thinking, build an argument, use evidence to support it, that's an incredibly valuable and important skill for kids. But it's so hard to become a good writer. You need so much practice and feedback, and with AI there's this incredible opportunity to give kids immediate feedback and coaching on their writing and give them those practice activities that help them build those skills. For us, what's really critical is doing that in the context of the really important courses that kids are engaging with today, so things like what's happening in English classrooms and what's happening in history classrooms, but also things like AI and how we can teach kids AI through writing. That's just this incredibly important topic that is new and that everyone's grappling with and trying to figure out. What do we all need to know 10 years from now? It's a really difficult question, but one where it's absolutely critical that we get this right for young people, or at least open the door to what's happening in the world.
Alex Kotran (aiEDU):And we can pull up maybe some screenshots or some visuals in the video, but what you built is this student-facing product. Can you just describe what happens when somebody logs into Quill? What does the product look like?
Peter Gault:Yep. So when you go to Quill, first of all, everything is free for all students and teachers; that's absolutely critical to our mission. You go to the Quill homepage, there's a quick sign-in button, and teachers quickly import their students. They'll use a service like Google or Clever to import their students, and then they'll have access to more than a thousand activities that they can assign.
Peter Gault:And what we see is that we have this range of activities supporting students in grades 4 through 12, primarily in the context of ELA, but we are now creating new offerings for social studies and STEM classes. These activities are about 10 to 15 minutes, where kids are writing on different prompts and then receiving feedback as they're writing. And these are all very quick prompts. They're one-sentence prompts where the kids are able to quickly build an idea and then get feedback to revise and strengthen it. The core of Quill is in that feedback loop: students will write, they'll be struggling with a concept, and they'll continually and patiently be given feedback, and over, like, five rounds of feedback the students are able to build these really critical skills.
Alex Kotran (aiEDU):So Quill, it's like bite-size. You're not going in and replacing an English curriculum. You're basically providing teachers with these opportunities to integrate technology in a way that gets students to, in real time, think critically, write and respond to different prompts, build critical thinking skills, and also just build their writing skills and grammar. And they get this real-time coaching and feedback. And this was even before generative AI; you already had a system for feedback. Have you been using AI since? I mean, language models seem, you know, pretty well suited to a product like Quill.
Peter Gault:We've been building our own AI since 2018. Back when it was NLP and you would build these models, it was very basic. Back then, you could do things like grammar analysis. People think about Grammarly, for example, which has been around for many years, and they've had their own AI. We had a similar process where we would use AI to analyze the student's writing, looking for certain patterns in the syntax and then using that to trigger feedback and coaching. With Quill, we never fix the writing for you. We always help you to build the skills so that you can learn it yourself. So, under the hood, our AI looked similar to Grammarly and some of those other platforms, but the student experience was the exact opposite, where, again, Grammarly would just fix your writing for you.
Peter Gault:We wanted to help you build these really critical skills, and so that was some of our initial AI work. We built our own algorithms, we trained this meta model that had tens of thousands of different training responses in it, and we built these datasets by hand. That was really time-intensive work, and what's really critical is getting these AI models to be highly accurate. So, around 2022, we had completed all of this work. We had these amazing models we'd spent six years building, and they were working really well. And then the bombshell of large language models came, and now we're in this whole new era. LLMs are amazing in many ways, but it's a completely different technology from the approach of building your own fine-tuned model, and so over the last year that's meant, for us, essentially scrapping everything that we'd built over the last six years and rebuilding natively on generative AI, which allows us to do a lot of sophisticated things that we couldn't do in the past. But it comes with its own challenges and risks as well.
Alex Kotran (aiEDU):Yeah, I mean, what are some of those challenges?
Peter Gault:Yeah, so one of the big problems that all of the players who are using generative AI to help students are grappling with now is that these underlying models are trained to be very helpful, and so what that often looks like is that when a student is, say, struggling to produce the answer, the LLM says, oh, I see that you're stuck, have you thought about blank? And then it will tell the student what to say, because the LLM knows the answer and it wants to help the student, and telling the student the answer is the fastest way to help them. So what we have to do on our end is build guardrails so that the LLM isn't giving away the answer and the thinking stays with the student, and for us, what that looks like is building a multi-step prompt.
Peter Gault:You have a sort of chain-of-thought process where it's first producing an answer and then checking: is this the right feedback for students? Is the student doing the thinking, or is this an example of feedback that is simply revealing the answer to the student? We didn't have this problem with our old models. With our old models, we controlled the entire model ourselves, we controlled the output, and so we didn't face that risk. But with the LLM it's unpredictable. You can build guardrails around it, but if you don't have those guardrails in place, it can sometimes go off script.
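For readers curious what that kind of guardrail can look like, here is a minimal sketch of the two-step pipeline Peter describes: one model call drafts the feedback, a second call checks whether it gives the answer away, and the code retries or falls back if it does. This is not Quill's actual implementation; the call_llm stub, the prompt templates, and the retry and fallback logic are all illustrative assumptions.

```python
# Minimal sketch of a "don't reveal the answer" guardrail around LLM feedback.
# `call_llm` is a placeholder for whatever chat-completion client you use.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to a language model and return its reply."""
    raise NotImplementedError("wire this up to your LLM provider of choice")

DRAFT_PROMPT = """You are a writing coach. A student was asked:
"{question}"
The student wrote:
"{response}"
Give one short piece of feedback that helps the student revise.
Ask a guiding question. Do NOT state the answer or rewrite the sentence."""

CHECK_PROMPT = """Here is feedback intended for a student:
"{feedback}"
The target answer to the exercise is:
"{answer}"
Does the feedback reveal the answer or do the thinking for the student?
Reply with exactly one word: REVEALS or COACHES."""

def guarded_feedback(question: str, response: str, answer: str,
                     max_attempts: int = 3) -> str:
    """Draft feedback, then redraft until the checker says it only coaches."""
    for _ in range(max_attempts):
        # Step 1: draft candidate feedback for the student's writing.
        feedback = call_llm(DRAFT_PROMPT.format(question=question,
                                                response=response))
        # Step 2: a second pass judges whether the draft leaks the answer.
        verdict = call_llm(CHECK_PROMPT.format(feedback=feedback,
                                               answer=answer))
        if verdict.strip().upper().startswith("COACHES"):
            return feedback
    # Fall back to a generic nudge rather than risk leaking the answer.
    return "Take another look at the prompt: what evidence could support your claim?"
```

The design choice here is the one Peter names: because a single helpful-by-default model call is unpredictable, the verification step runs as a separate prompt, so a leaky draft can be rejected before a student ever sees it.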
Alex Kotran (aiEDU):And yet you switched wholesale to language models, and that's because you saw some serious potential. What prompted that pivot, given that you had something that was already working quite well and being used by millions of students? These new models have, you know, obviously lots of limitations, and they hallucinate. Why did you make the decision to go all in?
Peter Gault:There were a couple of really big things for us that drove this decision. The first is that these models are a lot smarter, and so, while they can be unpredictable, you get the benefit of a much more fine-grained analysis of the writing. When we created our own models, it was sort of like putting the writing into categories, whereas the LLM can understand things in a very nuanced way, depending on what model you're using and how smart it is. And that unlocks for us the ability to go deeper.
Peter Gault:A lot of our work is focused on writing, but it's in service of critical thinking: students are building an argument and using evidence to support it, and in that process we want to be able to go deeper, thinking about things like: what are other questions the student could be asking? How could they use more evidence to support this claim? What's a really good thesis statement that summarizes this idea? These are all pedagogically research-validated strategies that we couldn't previously build into Quill, because they were too complicated and the AI wasn't sophisticated enough for them. What's so exciting about generative AI is that some of these really advanced strategies, things that researchers have known for 50 years work really well, we can now build in a way that we couldn't two years ago. For us, that's really why we made the switch: to unlock these deeper forms of learning that we previously couldn't address.
Alex Kotran (aiEDU):Yeah, I keep coming back to Quill.org as an exemplar for how AI can actually help, because there's a lot of hyperbole around, like, AI is going to transform education. And my advice to folks is that AI is a really powerful tool, but it's not the solution; it's something that we can use on that journey. I don't think of Quill as an AI tool. I think of Quill as a reading and writing tool.
Alex Kotran (aiEDU):Now you've integrated AI in a way that really enhances the sophistication and capabilities, but what you didn't do is scrap Quill.org and start a whole new company. A lot of the stuff in the field right now is these wrappers, and some people will say things like, well, teachers will be able to just build these things for themselves. Reflecting on the amount of work that went into fine-tuning, and even the prompt work (I don't know if I'd call it prompt engineering, but you basically think about prompt design), is this something that any teacher can just do for themselves, or do you feel like there is actually a bit of a barrier to entry to really effectively using LLMs?
Peter Gault:It really depends on the task. I think there are certain things that a teacher can do themselves that can be really effective, things like: draft me emails to parents based on my students' progress. Those emails could have taken a couple hours to write, and that could now be a 10-minute process, so there are some examples of ways in which AI can help speed up communication workflows; there are a whole bunch of things it can do well. We're seeing some areas, though, where you're stepping backwards a bit when it comes to education. In the world of education, the headline is high-quality instructional materials, and there's been this huge effort over the last 10 years to go from the not-so-great programs that were being provided by some of the big publishers to much better, smarter, more sophisticated curricula that could be used across an entire school, from grades K through 12. EdReports led the charge here with how it rated curriculum programs, and the goal was to go from incoherence, which is how education often felt, to a coherent program. What was happening in parallel were some tech plays that didn't pan out. You had sites like BetterLesson that tried to aggregate teacher lessons and would host 100,000 different lesson plans, and for a teacher, trying to create coherence out of 100,000 different lesson plans is impossible.
Peter Gault:We then saw Teachers Pay Teachers, which took that idea and added a payment mechanism on top of that, but still didn't have that coherence of K through 12 and a program that worked for the entire school.
Peter Gault:You still had individual teachers making individual decisions, and while sometimes that can work well, oftentimes it led to students not having a consistent learning experience. As we think about AI today, the next version of Teachers Pay Teachers is MagicSchool, where you have teachers generating lesson plans and, rather than a coherent learning experience, sort of recreating some of that incoherence. So I think that's the big question: how can AI fit into this bigger picture of coherent learning? It happens by starting with the curriculum. It starts with what the teacher is doing in the classroom, and then how you can layer the AI in, in a very surgical and mindful way, so that it's supporting the classroom instruction, as opposed to moving away from that altogether and proposing something that lives completely outside of these existing programs. I think that's really where the smart players are today: they're trying to work within the constraints of what really good programs look like, and Quill has certainly been successful by taking that approach.
Alex Kotran (aiEDU):Yeah, I sent you something over text. It was a study that some of my team shared, and this is from Microsoft Research. Let me pull it up just so I can get the date right. This was published, oh, this year, so very recent; I think it was literally a few weeks ago. The headline is: as they looked at the use of generative AI in knowledge work, they found this correlation where knowledge workers who are increasingly reliant on generative AI actually have a deterioration of their critical thinking skills.
Alex Kotran (aiEDU):And this is not necessarily terribly surprising. This is something that people were, I think, postulating for some time, that this would become a crutch. It's interesting that it's starting to play out and actually be validated by research.
Alex Kotran (aiEDU):It sort of brings me to this question of, like you know, there are some folks who are they see what ChatGBT and other language models are capable of, and their reaction is well, we just need to teach kids how to use these tools, because these tools of the future, like we can't you know, we can't put Pandora back in the box, and so I'm curious what your response to that is. You know, is it the case that we just need to sort of pivot and adjust, educate the goal of education, from teaching students how to write critically to teaching them how to, let's say, become really effective prompt engineers? As someone who is employed, you know prompt. You know I don't know if you called the folks on your team prompt engineers, but they've been doing quite a lot of prompt engineering.
Peter Gault:Absolutely. We've created thousands of test prompts, and that's very much what they do, so I guess they are. To me, there's a lot of overlap here: prompt engineering is writing. You are writing prompts, and I do think the ability to write well enables you to do things like query an LLM and specify what you're looking for and why you need that information. In so much of my own experience using LLMs, it's not the first query that gets you the answer; it's the process of digging in and interrogating the answers. And that's very similar to the process that English teachers are trying to create for their students: they're reading a novel, trying to unpack it, trying to interrogate it. That critical thinking skill of unpacking information remains critical for students, and writing is a way to build that ability to interrogate information through building your own ideas.
Peter Gault:There is certainly a counterargument that as these models get smarter and smarter, there will be less need to even prompt engineer, and you've made this argument to me a few times. I actually am starting to come around to it a little bit more than I was initially. I've been using the deep research function that's in ChatGPT and a few of these other tools now... and our ability to navigate this world and all of its complexity: if you don't have these skills, I feel like you're going to be left behind. That being said, in terms of AI as a skill, and what that skill is, I think it's very up in the air what AI will look like a few years from now. It's really hard to predict.
Peter Gault:The one thing I do know, though, and I do think this is a useful comparison, is from the era in which we all knew how to read maps. You look at folks like taxi drivers, people who had to drive for a living, and they would build this knowledge of the world through driving, whereas for a lot of us, you would get that by reading maps and trying to have that spatial recognition. Today it's a little bit of a lost art. I love maps, so I try to force myself to sometimes not use Google Maps, just as a fun exercise in thinking. But if you're not forced to figure that out day by day, it does change how you think, and so it's certainly a big ethical question: what does thinking look like in a world where AI can itself think in a really meaningful and deep way?
Alex Kotran (aiEDU):Yeah, the maps example is fascinating, because I'm someone who really struggles with geospatial reasoning, and I find myself, even with Google Maps, sometimes getting lost, including in New York City, which is hilarious because New York City is one of the easiest cities to navigate. I've actually become much worse and increasingly dependent on Google Maps. New York is a great example: you shouldn't need Google Maps. You should be able to just look at the grid and know, roughly, okay, I'm on 41st Street; I shouldn't have to open Google Maps and do the thing where you turn around trying to get a sense of which direction you're pointing. And the deterioration of my ability to navigate the world, it's definitely bad, but in the hierarchy of skills that I've maybe lost, it doesn't necessarily impact me day to day, and generally I have access to my phone in many cases. Critical thinking, though: if we abstract prompt engineering, what you've described is really the process of writing, which is an underlying skill, one of the necessary, if not sufficient, conditions for critical thinking. It feels much more serious if we're in a world where there's this reliance on AI to supplement critical thinking, especially given what you described. Even with deep research, and I actually have a deep research query running right now on that study.
Alex Kotran (aiEDU):I asked, oh, can you find some more studies that maybe corroborate this or not? To use it effectively, I have to actually critically evaluate the output; in many cases it's made stuff up or it's not quite right. So that's not really a question; it's more of an open concern that I think underpins the big question about AI and education: how do we balance the utility of these tools, and the fact that if you don't know how to use them you're going to be left behind, which is probably true, against the fact that if you spend too much time using the tools, you're actually less effective as a complement to them?
Peter Gault:Yeah, I'm not quite sure how it will play out. I like to think of it like an orchestra: as the conductor, you've got a symphony of different queries doing different requests, and the human being is driving that. I think the orchestra conductor remains, and that skill of how you manipulate LLMs and find information will remain a critical skill for sure. But there'll be a difficult question of how we build those skills, and of what writing even looks like when writing is a co-creation process. Those are big questions. Today, when I write, I'm building a thesis, I'm taking a point of view on the world, and there's a world where you could ask the LLM, what should I think about this? You could imagine the LLM giving you your opinions on things, as opposed to you having your own point of view.
Peter Gault:I'm somebody who has a lot of opinions. I loved doing debate throughout high school and college; I found it to be the experience that built my own critical thinking skills most effectively. So I do think that for young people, it's really important that they have their own opinions and this ability to think critically. But to do that, they need to spend a lot of time at it, and they need to be given those opportunities. If they don't, there's certainly a world you can imagine where that gets outsourced to the LLMs in lieu of their own ideas.
Alex Kotran (aiEDU):I think, for me, the reason I feel so confident in our advice to educators, that you shouldn't be focusing on teaching students to use AI tools, you should be focusing on critical thinking skills, comes from hiring. We've actually struggled to hire people who are super users of AI; I just think our generation still has some anxiety about, like, is it even fair? Is it cheating to use AI to write something? So there's some hesitation. But if I think about, you know, we're hiring someone to help, as we discussed, with fundraising, and one of the things you really pushed on is that having exceptional writing skills is critical to that. In my hierarchy of skills that we'd be looking for for that role, I would way rather have someone that's a really good writer. I feel like I could teach somebody to use a language model if they have the writing skills.
Alex Kotran (aiEDU):I don't know if the reverse is true. If someone's really good at churning out a prompt, I feel like the minute I need something that goes beyond what the LLM is capable of, they're going to hit that roadblock, and that feels much harder to teach. I don't know that you could teach someone to be a good writer on the job. It's something that you basically have either developed during your educational experience or not, and some people maybe have an aptitude for it and some people don't. How has your organization thought about talent, given that you obviously have a lot of people using AI almost on a daily basis? Do you have a sense of what skills really set someone up to be an effective prompt engineer or user of AI tools?
Peter Gault:So I think, both with prompt engineering and writing, we have a pretty specific definition, or at least I do, which is that writing is knowledge.
Peter Gault:And I say that to say, and this is definitely a hot take in the world of education, there's been a big push called the Knowledge Matters campaign, which is the idea that your ability to write is predicated on your knowledge of the world. So when you ask kids, for example, to write about baseball, and they're big baseball superfans, you'll get these long essays of baseball strategy and hits and great teams, and their writing will look really great, because it's a topic that they have knowledge about. But then you ask them to write about a book that they're not interested in, and the writing looks quite different. In this world, writing isn't just the skill of constructing a sentence. Writing is your knowledge of the world. And I say that to say that when we think about, for example, a fundraising writer role, we find that the world of philanthropy is very fractured. There are many different causes and issues and strategies and theses of how the world can be improved, and as we're working with different partners, our work needs to align to their work. That really requires knowledge: how do our partners think? What do they think the future will look like? How does Quill align to that vision? As we're writing, it's not just an artfully constructed sentence; it's about how our mission aligns to their mission.
Peter Gault:And so, as we think about AI and as we think about these critical thinking skills, students need to know a lot of stuff about the world.
Peter Gault:They need to know how it works, they need to know what these different ideas are to be able to be a member of those conversations and to be able to contribute their own ideas.
Peter Gault:And so I do think, as we think about how AI will develop, there's sort of this underlying question where the more that you know right, the more you can contribute, and for us that's really critical, and that when you have that knowledge, the writing will flow from there. And I say that all to say that those skills look quite different, that the world of knowledge is often thought of something that, like a history or social studies teacher does, but it doesn't exist across all classes, and that's also a shift that's starting to happen right now. There's a big movement to move towards knowledge as the main way in which students are assessed as opposed to things like reading comprehension as a skill, where being able to read any article on any topic is less valuable than knowing about particular topics, and that particular knowledge is more valuable than that general ability to read. And there are lots of proponents for both sides of this, uh, but I think we're starting to see how knowledge itself is durable in a world where some of these skills become less important.
Alex Kotran (aiEDU):Yeah. I don't know if you've been following all the noise being made about vibe coding and the Y Combinator survey that came out. It's wild. For our listeners and viewers, this is a survey that Y Combinator did with their latest cohort, and these are some of the most technical founders in the world; these folks know how to code. A quarter of the cohort reported that 95% of their code is written by AI. So right out of the gate, the headline was: vibe coding is here, and if you're not using AI tools, you're at risk of being left behind. But if you listen to the full podcast and hear the nuance, they also say that vibe coding is great for getting you to an MVP; it's a great tool. What they've also seen, though, is that the most effective founders and companies had folks with, I guess they're calling it classical coding, which I find funny, classical coding skills, to be able to do things like debug, which the AI isn't very good at, at least right now. And you actually have engineers on your team. Are they vibe coding? Have you had conversations internally about this?
Peter Gault:I was just talking to our CTO, Akhil, about this, and he was sharing that at Quill, it's around 10 to 20 hours of extra productivity per week. It's not that we're doing 95% of our coding through AI, but it's certainly helping us with particular problems. It's great at writing SQL queries; it's really great for certain projects. That number will just keep increasing, though, I think, as we build our own knowledge of how to use AI and as these tools get more reliable. We've got six engineers at Quill, which is a small team relative to some of the big ed tech players; it's not one engineer, like some small nonprofits where you only have one person. But what it means is, if they can have the output of, say, 12 or 18 engineers, that's a huge win for us, and so I do expect that number to increase, especially as these tools get better at debugging. That still is a big obstacle right now: they write code, and sometimes it's good code, sometimes it's not so great code. The ability for the AI to refactor itself, though, for it to improve its own code, that will certainly happen. It's already happening, and it will just keep getting better and better.
Peter Gault:And so I think there is a big question where, one, there's a lot of advice about what jobs people should be pursuing now.
And there's this huge question mark around whether software engineering is the path to economic prosperity. I don't know the answer to that question, but it used to seem like, hey, if you're trying to find a path towards a comfortable and economically successful life, that was the path. I don't know if that will be the case in a couple of years, and I don't know what happens in that world, but we're certainly starting to see some of that. I'm curious if you are hearing from others how that conversation is shifting now, about things like what CS education looks like. Because, again, the ability to code does allow you to manipulate these systems, and so it becomes incredibly valuable. But if the learning curve of building those skills is too high relative to the ability to just use an LLM, yeah, I don't know what happens.
Alex Kotran (aiEDU):So I'm going to read a quote. This is from Tom Blomfield. He's a partner at YC, he's my husband's former boss, and he's the founder of a company called Monzo, which now has a multibillion-dollar valuation; wildly successful. He's been experimenting with vibe coding, and here's his take. He says: software engineers are like highly paid farmers tending their crops by hand. We just invented the combine harvester. The world is going to have a lot more food and a lot fewer farmers in very short order. So my response to this is, I think we have to take very seriously that this is a pretty consistent take from the folks who are really at the bleeding edge of these technologies, who are really getting hands-on.
I don't know that I've talked to many engineers who are totally complacent and saying that AI is just a fad. I think there were actually folks who thought it was a fad who are coming around, but most of the folks in CS who are actually building are coming to terms with the fact that this is going to make software engineers more efficient. And by extension, if you can get more productivity out of one engineer, you don't necessarily need 20; maybe you only need 18 or 16 or whatever the number is. And I worry, because education has traditionally been very oriented towards these super discrete career pathways.
You know, my parents are immigrants, and for them it was doctor, lawyer, maybe engineer, and that was basically it: that or bust. I went into political science and they were aghast. They were like, well, you know, you can still go to law school. And they actually still ask me, have you thought about going to law school? And yeah, I think you described it really well. The reason that we hyper-focus on those pathways is that, for someone who grew up in a lower-middle-class household, one of the most certain ways to achieve economic mobility was to go into one of these careers where you're essentially guaranteed a six-figure income. So at some point there's probably going to be this shift where, if companies start laying off a certain percentage of their engineers, they're flooding the job market, and then you have all these graduates coming into the job market, and you're in high school trying to decide, okay, do I go into this? And I don't think it's just computer science; computer science is the canary in the coal mine, just because software is a language, and so language models are especially adept at it; there's a lot less friction to applying them to that specific type of knowledge work. But I don't think accountants or financial analysts or lawyers are exempt; I think a lot of knowledge work is really in the crosshairs.
It's just a question of how long it takes for institutions to figure out the implementation and get past the friction. And it brings me to: how are you going to add value? Because being able to write lines of code is probably not sufficient anymore. My sense is, you said you have six engineers; let's imagine you had 100, and you were thinking, okay, I can get rid of, let's say, 10%. The question would then follow, and maybe you can answer this for me: who are the engineers that you keep? Who are the engineers that you get rid of? Do you have a sense of what the complementary skills, beyond just knowing the software language, are going to be important?
Peter Gault:like knowing the software language that you know are going to be important. Yeah Well, my own intuition is that folks who have jobs now I don't know what the big corporations will do, but I think in general those folks will be able to find new jobs. But I do worry a lot about those new graduates. I think when we think about hiring, we've hired folks who are brand new software engineers and we always know that there is a bit of an investment upfront, that sort of coming out of a boot camp you're not quite ready to be a full contributor to an organization, but that over six to 12 months you do become a contributor and that, as LLMs get stronger, that ability, that entry point, I think, is going to be what's most at risk, and so that's certainly something I'm really worried about. It's so funny my parents also pushed me to become a lawyer as well, and that feels the most at risk of any of these professions.
Peter Gault:When you look at the deep research tool, which I love, and I keep running out of my 10 credits per month, so I need to upgrade now to the $200 plan because it's so useful, it's a complete game changer in its ability to take a week's worth of research and do it in 10 to 15 minutes. So we see this future where the LLMs can do these sophisticated research tasks in a way that would just take a lot of time to do ourselves. I've spent so much time plumbing the depths of Google searches on page 50 or whatever to try to find some information, and now that that can be done automatically very quickly, that really changes the needs of the workplace. I don't quite know what the answer is here, but I do think that folks who have jobs will probably be in a better position. For those folks who are trying to enter the workforce, though, there's a really big question mark: what do those entry points look like now?
Alex Kotran (aiEDU):Yeah. And, I don't know if this was an announcement or a leak, but there was the news that OpenAI is going to be offering, like, a $20,000 AI agent. You have to take all of this through a lens, right, which is that these companies are trying to command big valuations. They're beholden to their investors and trying to justify those valuations, because they also need to raise tons of capital to train the next-gen models. So there are folks who are saying we're one to two years away from AI being better than any human engineer. There are also folks, I think Dario Amodei at Anthropic has said this, who say we're maybe one to two years away from artificial general intelligence, whatever that means; that's a whole separate thing, and you could spend 90 minutes just talking about it. But even if we're much more conservative, even if we say, okay, 5x those timelines, it still doesn't put any of this outside the realm of, if you're in middle school right now, we're still talking about your first job out of college, or maybe even while you're in college, us getting to this moment.
Alex Kotran (aiEDU):And just to prove the point: the ChatGPT deep research query just finished its work. As I said, I gave it the initial research PDF from Microsoft Research, and it summarized it. Then, and I didn't even spend that much time prompting, I just said, okay, could you conduct some research to identify other sources that address this question? And, as you said, where ChatGPT before would have just immediately started generating something, in this case it stopped and actually asked which direction I wanted it to go.
Alex Kotran (aiEDU):This is what concerns me: prompt engineering doesn't have to be terribly thoughtful. You can sort of hack your way through it and still stumble into, sometimes, high-quality outputs. I haven't had time to actually read through this, but it pulled some legitimate sources. It found that Microsoft Research report, it pulled something from Springer Open, and it outlined the benefits and opportunities. Then it did this clever thing. It's very in-depth, and it made this incredible table.
Alex Kotran (aiEDU):It's like, okay, here's a table that summarizes the key benefits and risks. I was actually going to ask, oh, this is really dense, can you simplify it? And then I got to that part and saw it had already done that. And I guess it brings us back to: what's the point of education? It still feels like, and I think this is the good news for teachers, and maybe tell me what you think about this, education actually doesn't need to change that much.
Alex Kotran (aiEDU):To your question about computer science: we still need to teach students computer science, because their ability to be really effective vibe coders, if that's what we're going to call it, will actually be predicated on whether they have the ability to evaluate and critically analyze the outputs and debug, plus the computational thinking skills. Those students who have that knowledge are going to be by far the best able to add value alongside AI. And then the other good news is that I don't know that education is really going to have to solve the problem of teaching students to use these AI tools.
Alex Kotran (aiEDU):Education didn't have to solve the problem of teaching students to use their phones, and in large part the same goes for the internet and social media. These are technologies that, kind of by design, become ubiquitous and seamless.
Peter Gault:So I agree with you, but I have a couple of hot takes here, and a few ways in which I think there are some pressing questions. One of those is, when you look at education, I completely agree with you that it remains vital, more vital than ever: if an LLM can produce a 10-page report for you, your ability to read and understand and build your own knowledge from that report is critical. The LLM can't just download the information into your brain; we're still, hopefully, many years away from that scary reality. It's all to say that we'll live in a world of more information, not less. That's certainly something we know is true, that information is not going to become more scarce, and so your ability to parse it will become more critical.
Peter Gault:There are more immediate challenges, though, of what education looks like today. You're talking about middle school students, and it's wild to think about where a 10-year-old will be a decade from now. This question of whether AGI is a year away or two years away doesn't really matter; it's going to come whenever it comes, and there's no one definition of it, there are lots of definitions. But 10 years from now, we know the world is going to look quite different than it does today, just as the world today looks quite different than the world of 10 years ago. We've been worried about driverless cars, for example, which have another huge impact on society, and that's taken a lot longer than people expected to become this ubiquitous thing. But my friend was just in LA in a driverless car, and it was driving him around the city, right? So these things are real now. And so, if we take a 10-year horizon, I do think there are a couple of really critical things that we need to do now.
Peter Gault:One of those is, when you look at ELA instruction, one of the big changes over the last 10 years has been towards less fiction writing and more nonfiction writing. When you looked at English education, one of the big heavyweights in this space was Lucy Calkins, and she ruled the roost when it came to literacy. She really loved fiction writing, getting kids to write stories about their lives, and it was a really fun and engaging experience for kids, but it didn't build critical thinking skills in the way that nonfiction text does, where you have to build an argument and find sources and evidence to support it. When you looked at classroom instruction, about 90% of writing was fiction and about 10% was nonfiction, and there's been a big push to shift so that maybe 70% is nonfiction and 30% is fiction. I love fiction writing; it's not that it should go away, but kids need to be given these opportunities today. So if you really look at what's happening in classrooms, how much nonfiction writing is happening is sort of directly connected to how quickly and effectively we're building critical thinking skills. That's one really critical question.
Peter Gault:A second, though, is: what should computer science education look like? Should we keep teaching JavaScript to students, for example, which has become the mainstay? I know I'm stepping into very hot water here, but I expect that in a couple of years we won't teach JavaScript as the primary language that kids engage with in their CS classroom. I don't think CS classrooms will go away; I think they'll become more vital, but that classroom will look quite different. And I do think having coding skills is important, but if JavaScript isn't a language that we use anymore, because your Figma designs turn into front-end code automatically, what the role of JavaScript is in that world becomes a big question mark. So I do think those are some of the questions that aren't being asked today but will be within the next two years.
Alex Kotran (aiEDU):Yeah, I mean, I think that's the right aperture. It's not, do I think we need to continue teaching computer science? Absolutely yes. But everybody worries about the digital divide between the haves and the have-nots, like, oh, the kids that don't have access to AI are going to be left behind. I actually worry that the digital divide will look more like this: the kids that are over-reliant on AI will be left behind, and the kids that toiled through learning JavaScript, even if they're not using JavaScript specifically, will have gained something.
I mean, I'm not an engineer, but my sense from the engineers I've talked to is that you struggle through learning your first software language, your second software language, and then, at a certain point, you kind of develop the instincts for learning new languages. But you can't skip that process. There is something to that productive struggle; it's like, is it Malcolm Gladwell, the 10,000 hours? You have to put the time in, and I think we need to be really clear that AI not only cannot replace that time, it risks making it much harder to motivate students. I was talking with someone at a design agency about AI art, and he made this point that the incentive structure for learning art usually goes something like: you spend a year drawing and doodling and struggling to draw a human face, and then you eventually get to a point where you can create something really cool that you're proud of, that's unique and your own, and that drives you to learn more techniques and to spend more time.
And if AI takes that away... I've talked to students who are artists, or burgeoning artists, and I asked them for their take on AI art. They're generally not excited about it, because they're like, well, I spent all this time learning how to draw, and now my friends are creating stuff that looks way cooler. Whether or not it's art, I think, is a separate discussion, but it's demotivating. And maybe the same would apply here: why would you spend all that time learning JavaScript if you can get, literally, a working video game with a single prompt? Which I've seen now with Claude Code and with Gemini 2.5 Pro; it's kind of wild, actually, what you can get with a single prompt.
Peter Gault:That it can build an entire application is completely wild, and I think that's what I'm concerned about: the incentive isn't there. I think the farming example is perfect. People still grow their own crops; people have vegetable gardens or artisanal farms, and it's not that farming has completely gone away outside of industrial farming, but it certainly looks quite different from when 90% of society were farmers, right? And that's the question: what happens if that incentive structure isn't there? The productive struggle, I think, is an incredibly valuable learning experience. So I want to be crystal clear here: while I think JavaScript will go away, and that's my own hot take, I'm not saying that it should go away. It's just that if the incentive isn't there, you have to weigh the value of doing this, and the time it takes to get there, against spending that time on something else. Education is all about opportunity cost; you have very limited time in the classroom.
Peter Gault:You've got like 30 weeks per year of instructional time, and that time flies by, so what you spend that time on becomes a really pertinent question. And is spending that time learning... what was it? Hand coding? What was the new term? Vibe...
Alex Kotran (aiEDU):Vibe coding, or classical coding.
Peter Gault:Classical coding. I mean, that's my first time hearing it, but it's already funny that that's now in the rearview mirror. So, all to say, those are things that I think will become questions about two years from now; I don't think they're happening today. The world of education always lags a little bit behind the workforce, and these things sometimes take time, but I think it's valuable to try to get in front of these questions and to think about what the best use of that time is. I don't think anyone has the answer yet, but certainly what vibe coding is, is a huge question to figure out and unpack.
Alex Kotran (aiEDU):Yeah, and I think it's easy with AI to go down the glass-half-empty road, and there's a lot more to talk about. We haven't even gotten to artificial general intelligence, where, what do you even do in a world where nobody has to work? You get to a place where there are maybe very important and interesting philosophical questions, but there are also questions in which I see very little agency for myself and our organization and, frankly, for the education system to fully address. To me, AGI is a question about social safety nets and our ability to figure out fiscal policy such that we have the resources to provide for people. So, anyway, it's almost like a political organizing question. But the glass-half-full version of this: I think one of the big deterrents to students going into computer science pathways is that today, or at least two years ago, it was really hard, and it required a lot of annoying work and effort to get to a place where you could create even a rudimentary, interesting video game. With vibe coding in the hands of the right teacher, you're not necessarily replacing CS class with Vibe Coding 101, but on your first day, not even the first week, your first day in Introduction to Computer Science, you are creating a video game. I didn't even have a computer science class in my high school, and I can tell you I probably wouldn't have taken one. But if on day one I was able to create some sort of, it probably would have been some sort of fantasy, Lord of the Rings type of video game thing, that might have hooked me. That might have actually drawn me in.
Alex Kotran (aiEDU):And so I I'm curious, you know, just to bring things back to Quill, you know, one of the things that that we've been really impressed with your team's, it's been your team's ability to, you know, not just use, you know, your technology platform to really effectively build sort of the critical thinking skills and provide feedback, uh, but also as a way to like really efficiently um provide teachers with like current and just interesting and engaging topics that students are um just respond well to, and your point about baseball was like, I think well taken right is like students are more more likely to lead into the learning experience if it's something that they actually are, you know, interested in or or feel somewhat passionate about. Um, and and to that end, I know that this is something we partnered on right it's like creating some specific activities around artificial intelligence, which is a bit meta right, because we're almost like using ai in the back end to help teach students about ai conceptually.
Alex Kotran (aiEDU):Um, but just to sort of, can you just paint that picture of like what are those activities? Like, how have those been received?
Peter Gault:Yeah, they've been some of our most popular activities. We're seeing that kids are really fascinated about these topics. You know, ai is so interesting in so many different ways, and so we have things like how AI is advancing animal conservation and this is one of those areas where AI is amazing that it is helping to protect endangered species and doing things like being able to use AI to protect elephants or being able to use AI to communicate with whales. These are these ideas that really get kids excited about the future. And as we've been building these new activities, we've been getting some emails from students which almost never happens where the students are sharing their opinions and saying, oh, you covered this, but what about that? Or this feels too optimistic on this particular topic, or what about this other question, and so you're really seeing that the students are talking about these issues, they're unpacking them, they're debating them, and Quill had never really gotten to that level before with kids, and so for us, that's a huge win and 100%.
Peter Gault:And 100%, what we're trying to do in these activities is to help students be curious and excited about the future, and to be able to think critically about it. Focusing on AI knowledge is just an incredibly interesting way of opening that door for kids.
Peter Gault:We're seeing this happen the most in English classrooms. When we build these activities, they can be used in a STEM classroom or a CS classroom, but English teachers are looking for these opportunities to get their kids debating ideas and building their own opinions, and this content has been an incredibly rich opportunity for that. So we're rolling out a whole series of new activities over the course of the next year focused on all of these really fascinating topics: how AI is impacting art and creativity, for example, how it's impacting the future of work, how algorithmic bias arises and how researchers are changing AI to mitigate it. These are all really critical topics that we think kids will be really excited to dive into.
Alex Kotran (aiEDU):Yeah, I love the callout on algorithmic bias and AI ethics, because I'm sometimes frustrated when people describe AI literacy as, well, the key is that students just need to know about algorithmic bias, or they just need to know the risks and benefits of AI. I worry that treating AI literacy as content knowledge doesn't quite get there, because what really matters is not so much the awareness that these issues exist, but giving students the agency to let their knowledge of AI and algorithmic bias inform their perspective on if and how they should be using AI. And I think what's powerful is that we don't have answers to all these questions, and those are maybe the most interesting questions to pose to a student, because they have, frankly, as much entree to the conversation about AI art, to give another example, as anybody else.
Alex Kotran (aiEDU):So what's coming next, two or three years from now? How will Quill be different, or how will it be the same? Is your vision for growth more scale and reach, or do you also have a product vision that's more expansive than what you're currently providing?
Peter Gault:We're really thinking deeply about what those big questions are that kids will be excited about and that teachers will find valuable. One example here is the MIT researcher Joy Buolamwini. She's done a lot of research on things like facial recognition, and how these tools can sometimes lack enough training data to represent different ethnicities and races and misclassify people as a result. Rather than just saying, hey, this is a problem, she was able to build her own data sets and retrain the models so that they performed better. To us, that's a really powerful example of how AI is a malleable technology. As you feed more data into AI, you change its output, and that can really be used for good. It can certainly be used for bad purposes as well, but that ability for students to dictate what AI is and what its output looks like is a really powerful thing. This next generation of students will inherit this technology; they'll control it, and they'll choose how it's used. Understanding that they can change how it works, rather than it being a black box beyond our control, is a really important lesson for us to teach now. These case studies of retraining a system, improving it, making it more effective, are all examples of how AI can change.
Peter Gault:We're big believers in this idea because that's what we do every day at Quill. When we're creating feedback for kids, we're building our own custom data sets. We're not just taking the output of the LLM and serving it to the student. We're building data sets of more than 100 responses to a particular question, for example, where we map out all the different things kids are saying and how a teacher would engage with each student if the teacher were sitting next to the kid, working one-on-one to give feedback. By building those data sets, by showing those exemplars of how students write and how teachers engage, we're able to inject all of that into the LLM, to give it our own opinion of what good learning looks like. We believe that's absolutely critical for good education.
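To make the mechanism Peter describes concrete, here is a minimal sketch of the exemplar-injection pattern: curate real student responses alongside the feedback a teacher would give, then fold those exemplars into the prompt so the model mirrors the teacher's judgment rather than improvising its own. The schema, names, and sample sentences below are illustrative assumptions, not Quill's actual data or code.

```python
from dataclasses import dataclass

@dataclass
class Exemplar:
    student_response: str   # what a real student wrote
    label: str              # e.g. "vague claim", "strong claim"
    teacher_feedback: str   # the coaching a teacher would give in person

# A curated data set; in Peter's telling, Quill builds 100+ of these per question.
EXEMPLARS = [
    Exemplar(
        "Schools should ban phones because they are bad.",
        "vague claim",
        "You state an opinion, but 'bad' is vague. What specific problem do phones cause?",
    ),
    Exemplar(
        "Schools should limit phones because heavy use is linked to lower test scores.",
        "strong claim",
        "Nice work: you made a claim and supported it with evidence.",
    ),
]

def build_prompt(question: str, new_response: str) -> str:
    """Assemble a feedback prompt that embeds the curated exemplars."""
    shots = "\n\n".join(
        f"Student: {ex.student_response}\n"
        f"Assessment: {ex.label}\n"
        f"Feedback: {ex.teacher_feedback}"
        for ex in EXEMPLARS
    )
    return (
        "You are a writing coach. Give one sentence of feedback that nudges the "
        "student to revise, without writing the sentence for them.\n\n"
        f"Question: {question}\n\n{shots}\n\n"
        f"Student: {new_response}\nAssessment:"
    )

print(build_prompt("Should schools limit phone use?", "Phones are distracting sometimes."))
```

The point of the pattern is that the curated exemplars, not the base model, carry the pedagogical opinion about what good feedback looks like.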
Peter Gault:But there's also this meta concept: kids need to know that AI is malleable, that it can be changed, and that changing it can impact their lives in a positive way. Those are some of the big ideas we're excited to tackle over the course of the next year. There will be a very meta level where we cover Quill's own AI and explain it as students are using it, so it's a bit turtles all the way down, but we see it as a really powerful opportunity: we can unpack how we train our AI ourselves and how we try to mitigate bias within Quill, and use that as a learning opportunity for kids. In doing that work, it's really to help them understand that AI is not just a static thing, that they can change it, that they'll own it, and that in owning it they can hopefully steer it in a good direction.
Alex Kotran (aiEDU):Yeah, and I can see why Quill is really well placed to do that, because your approach is all about having students hone their ability to articulate their opinion, their criticism or support for something. The example you gave from our partnership, where students were getting really activated, shows that once you provide some scaffolds, students become quite articulate in developing their opinions about AI. But I don't think that happens by accident. I don't think that just because they're using it and they're digital natives, they necessarily have the tools to become really informed.
Alex Kotran (aiEDU):And I think about the TikTok algorithm, and the fact that we don't need to teach students that algorithms exist. They talk about "the algorithm"; they innately understand there's this thing in the shadows that dictates the content they see. But I don't think kids necessarily realize all the ways their behavior feeds it. We had an activity a while back that challenged students to train their algorithm to feed them a certain type of content. Mine was owls: how can you get as much owl content on TikTok as possible? It's not something we use in the classroom anymore; we've phased it out.
Alex Kotran (aiEDU):It's more of an at-home activity now. But you'd have students saying, oh, I didn't realize there are so many aspects of how I use these apps that generate signals. And I think it makes them much more resilient: when they see something, they understand they have some control, that they're seeing that content because they've been creating the reward mechanism for the tool. It also shifts the narrative, because the AI conversation can get very depressing if you feel like you don't have agency.
Alex Kotran (aiEDU):And I think that's important even with jobs, even with the future of work. It was Daron Acemoglu, I think, who was really pushing this, and David Autor as well.
Alex Kotran (aiEDU):I think it was actually Autor, in the closing remarks of a talk he gave recently, who said: look, AI is not just going to happen to us. When we talk about what the future looks like in terms of jobs, in terms of its impact on society, we are going to make decisions about the degree to which we use it to automate skills, and the degree to which we prioritize building human capacity alongside it. And given that today's kids are really going to be the primary recipients of those decisions, I think it's both powerful and quite necessary for them to be a part of that. But what you're describing is not just giving students a seat at the table. You can't just give them a seat at the table; they have to have the rhetorical tools to participate in the conversation and really contribute.
Peter Gault:Absolutely. Being advocates, I think, is critical, and that example of controlling your algorithm is a fascinating one. What is an ideal algorithm? What content do you want to see? What makes you happy and joyful? Should there be more cute animals in your feed, because that makes you happier? Those are all questions where, ideally, students are the ones driving their own engagement, and they often probably haven't considered that idea. That's where Quill and aiEDU really step in: to give them a chance to reflect on questions they might not have thought about.
Peter Gault:Our programs work really well together because Quill provides an introduction to a topic, an article. We let kids write about it, we get them to build arguments and use evidence, and in doing so we're really opening the door. Then aiEDU, with your lesson plans and your activities, goes toward that building experience: how do you take this, run with it, and build something new? So I do think we're both trying to give kids opportunities to reflect on this thing that impacts their lives every day, with the amount of screen time happening. Kids are spending, what, eight hours a day on their phones, or something like that? God, is that the latest?
Peter Gault:These things impact our lives in such an intense way. We all know this, and we all know it's not the healthiest thing, but giving kids a chance to reflect on that and think about it is, I think, really great work, and there aren't a lot of folks doing it right now. It's really important that, as this technology evolves, kids see it happening, and the work we're doing together is in its early innings. AI is going to be around for the rest of our lives. It's not going away; the snowball is only going to keep growing in mass, and so this work is only going to become more and more vital.
Alex Kotran (aiEDU):I was going to ask what you're obsessed with, but maybe I'll refine that question: compared to where you were last year, and based on what you've seen, how has your thinking about the timeline we're on changed? Do you feel like things are accelerating? Steady state at high velocity? Slowing down, maybe?
Peter Gault:Definitely accelerating, I think. For us, the big thing is the rebuild on generative AI, and it really feels like day one: there are a ton of opportunities for us to go deeper and build in critical thinking. These are things like teaching students how to build a thesis statement, which, for me, was one of the biggest things I remember struggling with as a student. I'd be writing an essay and have to build a thesis, and no one ever taught me what a thesis was; I just didn't have that class, or none of those teachers covered it. I'd be asking: is this a thesis? Is that a thesis? What should I be saying here? Building a thesis that captures your entire point of view is genuinely a hard skill for students, and they need practice with it.
Peter Gault:There's research showing this is incredibly impactful; even building a topic sentence is incredibly impactful. But there's not a lot of explicit instruction that gives kids those opportunities. It's something we've always wanted to do, it's been on our roadmap for 10 years, and the technology was never there to let a student build their own thesis and then have us evaluate it and provide feedback and coaching on it. That all feels very doable today in a way it was not, even in that first iteration of generative AI, where it wasn't quite there. Now you have that really fine-grained analysis, and I think that's awesome.
Alex Kotran (aiEDU):Was that like GPT-4? What was the inflection point in terms of capability?
Peter Gault:In our own journey, there have been, I think, two really big inflection points: the shift from GPT-3.5 to GPT-4, and then, for us, the introduction of Gemini Flash 2.0. We spent a lot of time on GPT-3.5 when it first got released. We were using the technology within a month or two of it becoming available, I think within a week of API access, and it was not reliable at all. It was just so bad: hallucinating, repeating itself, all of these problems. We spent so long trying to make it reliable, and what we should have done, to be honest, is wait. At the time we thought, this is the technology, we've got to make it work, so we spent so long building guardrails. It was a good learning experience, but we ended up throwing out all that work. GPT-4, though, was a big step forward. It gave us much more reliable analysis, with the caveat that it was really slow and really expensive. We're helping millions of kids per year, giving feedback on around 500 million sentences every school year, and at that scale GPT-4 would have cost us something like $6 to $10 million per year to run. That's bigger than our organization's entire budget. So you saw this powerful technology, but it wasn't workable at that scale. It was also too slow: when kids are writing, we need to give them feedback in under a second.
Peter Gault:We can't have that long analysis period. We knew models would get faster and cheaper, and for us, Gemini Flash has been a really fast model that gives really great output, is reliable, and is cost-effective. And with Flash 2.0 there's still room for growth.
Peter Gault:That, versus the really powerful pro models, there is a gap there, and that gap will get closed over time.
Peter Gault:But it certainly is at that point where we feel confident that we can deploy it to production in a way that's real and scalable, and so for us, that's been a really exciting threshold, and that has only been in the last six months that this technology is available.
Peter Gault:I think it came out in August of last year, and so, while there's been a lot of FOMO around generative AI since essentially as soon as it came out, the truth is is that there was a sort of period of getting from this technology exists to it's reliable and it's fast and it's cost effective. And I think we're in that territory today, but we only entered into that territory very recently, and so for us that's critical towards actually being able to use this at scale, versus our own models, where when you build your own model, you have a lot of control over it. You can control the cost, the model design, but that was just a very slow process, so now our iteration loops are a lot faster as well, and something that allows us to do a lot more than we ever could do in the past.
Alex Kotran (aiEDU):I'm curious if you've struggled with this, because I've often had conversations with foundations that are trying to figure out their AI strategy, and there's this sense that what they need to do is invest in "AI nonprofits." I don't know if I'd consider Quill an AI nonprofit. You're using AI, but I think you're really an organization working to use technology to help students read, think critically, and articulate their opinions. But you're nimble and you've been able to deploy AI really effectively, and it's interesting: in the venture space that's actually something a lot of VCs have been talking about. Andrew Ng's AI Fund, for example; their whole thesis is to look for companies that are really well placed to leverage AI, iterate rapidly, and get to an MVP.
Alex Kotran (aiEDU):But I'm curious, from your perspective, how can we do a better job of articulating this? Because I don't think it's just funders; it's school district leaders as well, the buyers, or, since you're a nonprofit, let's say your customers. Have you run up against folks saying, well, I'm really trying to figure out which generative AI tool we should buy, and is Quill really that? Quill isn't competing per se with Gemini; they're very different tools. And at the same time, it feels almost more important for teachers to be using something with the pedagogical structure you've put in place than some multi-purpose tool that doesn't necessarily have the deep thinking behind it.
Peter Gault:Yeah, I think for us, we are very much a literacy nonprofit: our goal is to build strong readers, strong writers, and strong critical thinkers, and that's what we're all about. That's the end game: how do we help kids build these skills? AI is just a tool we use in that process. At the same time, I suppose we are an AI nonprofit in the sense that we have that expertise: a third of our team are software engineers and product managers, and we do all of our own in-house AI development. But it's a little bit like saying we're a nonprofit that uses software.
Peter Gault:Or an internet nonprofit that has an internet nonprofit right or a database nonprofit right, that all software uses databases, and that's just part of it, and so I do think that right now, there is this class of organizations where AI is part of their model of delivery, but in 10 years, every organization will use AI in some capacity, and that distinction of like are you an AI? Every organization will use AI in some capacity, and that distinction of like are you an AI nonprofit or not will go away. There'll be a question of like who's building novel use cases on it, you know, because there are a lot of these thin wrapper tools where they're just taking the output of ChatGPT and trying to sort of wrap some service around it, and sometimes that can be valuable, but sometimes that's not valuable, and so I think that's a very different question of. That will become very easy for anybody to do, and you won't be an AI nonprofit, because, under the hood, a model is helping you in some capacity, and so I do think those are some of those distinctions that apply now but won't in the future. I think the more interesting question, though, is that AI is getting a mixed reception in schools. We are working on our messaging right now, and we don't talk a lot about AI on our website. We have our AI program for kids, but it's not splashed across the homepage and we figured maybe we should do more on this right. The AI is so central to our work. We're developing it, we're using it Like let's make that sort of part of this sort of special sauce of Quill, using it like let's make that sort of part of this sort of special sauce of Quill.
Peter Gault:But teachers reacted pretty negatively to it. When they see the word AI, they worry this is just going to be a cheating tool, or that it's going to replace them.
Peter Gault:In our own surveying of teachers, the sentiment was fairly negative. Being an AI company is not why they love Quill, and calling ourselves an AI company felt like the wrong step forward. I think we all saw the crypto-company era; we don't want crypto-company vibes, right? So that's a little bit of the feedback we're hearing from teachers today. And it's nuanced, because there are some really good folks using AI, and then I see a lot of not-so-great products as well. The space has a mixed reputation right now because you have really ethical AI players, usually doing very narrow use cases in a highly customized way, and then folks who are just promising the world with AI in a way that doesn't actually deliver what students and teachers need, and that comes with a lot of potential problems as well.
Alex Kotran (aiEDU):Yeah, I was actually just talking to one of the biggest school districts in the country. They have someone leading their generative AI strategy, and they just banned, I won't say the name of the company, but you can probably guess, one of those popular for-profit wrapper tools. What they found is that there were some instances of teachers using it really well, but lots of instances where it just wasn't being used effectively. And we actually go to lengths on this. I open almost all my meetings with school districts now by saying: we are not the AI implementation project. In most cases, when schools come to us and ask what AI tool they should be providing to students and teachers, our advice is: none. You should actually pump the brakes. The question is more: how do we provide a sandbox for teachers to actually start experimenting?
Alex Kotran (aiEDU):I think Quill is interesting because, to me, it's actually easier to take to a school district: it's already aligned with priorities they have, and the NAEP scores really underscore this.
Alex Kotran (aiEDU):Like you know, many, if not most, schools have like significant ground to cover in terms of literacy, and so solving for that problem, I think, resonates with like a much broader audience. I am interested, though, in this sort of like meta component that you're talking about, where, like as students are using quill, they're also kind of like learning about how quill works. I'm curious if there's like a teacher facing component to that as well, because I'm I'm fascinated with like how do we build it like? I almost worry more about teachers and students. I think the students are going to figure it out like far more quickly because they're just saturated with it and they're sort of, you know, uh, very tech forward. Um, yeah, I mean, does quill, like does quill have a teacher facing component? I mean, do you see any, any opportunity there to just sort of use your, your platform as a way to help teachers kind of see sort of like what it looks like to implement AI, you know, really effectively on the back end?
Peter Gault:Yeah. Everything I'm talking about now is a project we're working on and hope to ship with aiEDU sometime over the upcoming school year, so a big caveat that this is not yet live. But we do want to build activities specifically around how we build our training data sets and how we use them to evaluate writing. This is a really important topic, because AI evaluation of writing is being used for things like Quill, which is a very low-stakes practice platform where kids practice and receive feedback, but it's also being used for testing purposes. Every year, folks like the College Board hire tens of thousands of educators to grade all the AP exams, for example, and a ton of work goes into those evaluations. As AI takes on these tasks, AI evaluation of writing is going to become part of education, and getting that right, making sure it's reliable and accurate, those are really critical problems, and you solve them, again, through good data. So for us, this is a critical topic to introduce to students, and an opportunity for teachers to learn as well. It's a little bit scary, because we're pulling back the hood on what we do in our own work, and it's not quite available yet, but we see it as a really powerful opportunity over the upcoming year.
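The reliability question Peter raises has a simple operational shape: before trusting an automated grader, compare its labels to human-graded responses and measure agreement. Below is a toy sketch of that check, with a deliberately crude keyword stand-in for the model and entirely hypothetical sentences and labels:

```python
# Toy reliability check: score an automated grader against teacher labels.
# The sentences, labels, and the stand-in grader are all hypothetical.

human_graded = [
    # (student sentence, teacher's label)
    ("Towns should ban plastic bags, because plastic waste harms turtles.", "claim+evidence"),
    ("Plastic is everywhere.", "claim only"),
    ("Ban bags because turtles.", "incomplete"),
]

def model_grade(sentence: str) -> str:
    """Crude keyword stand-in; a real system would call an LLM here."""
    if "because" in sentence and len(sentence.split()) > 6:
        return "claim+evidence"
    if "because" in sentence:
        return "incomplete"
    return "claim only"

matches = sum(model_grade(s) == label for s, label in human_graded)
print(f"Grader agrees with teachers on {matches}/{len(human_graded)} responses")
```

Agreement against held-out human grades is the kind of metric that separates a grader usable at testing stakes from one that isn't.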
Peter Gault:To your broader question about how teachers engage with Quill: we provide them with a platform with access to more than a thousand activities. They can assign activities to students, view their results, and see all the writing kids are doing on the platform. One of our tools, Quill Lessons, is a multiplayer tool where teachers and students learn together. It's one of our most beloved tools: kids share their answers with each other and debate them in real time. So we're trying to think really intentionally about how to create tooling for K-12 and higher ed contexts where teachers are empowered to engage their students and be partners in their learning.
Peter Gault:And there are a couple of big design decisions. These are bite-sized activities layered into the classroom, used a couple of times per week. All of these decisions are critical to Quill being a partner to teachers and helping advance their goals, as opposed to something that lives apart from the classroom and tries to replace a teacher. By making those deliberate decisions, teachers feel really empowered by Quill. We also have great training and webinars and all the things that connect teachers together, but the heart of it is that intentional design: it's designed for teachers, to be a partner to them.
Alex Kotran (aiEDU):Yeah, that's the golden goose, right? Can we use AI and technology to actually enhance collaboration and the human, durable skills students need to build alongside critical thinking and the knowledge base itself? In the right hands, AI can absolutely help, because not every teacher knows how to create a really effective project-based learning activity around any given topic. So it's definitely not as simple as: AI is good, or it's bad, for use in the classroom.
Alex Kotran (aiEDU):I do think teachers are really well placed to look to organizations like yours that have been obsessed with this question for a very long time, because it's not necessarily something that can be turnkey. The good news is there are free products like Quill available, and aiEDU has a very similar approach: modular, bite-sized, not trying to be the curriculum. It's hard to imagine what "the curriculum" for AI readiness would even be. It's actually more about how we get more organizations to adopt the practices you've developed: this very introspective approach to product design, being really intentional about when not to use AI, when to use it, the role of the teacher, et cetera.
Peter Gault:I think the headline, what I hope everybody is doing or trying to do, and I think we're on the precipice of this moment, is to engage in deeper and more active learning. When I look at ed tech, Quill's mission statement from day one has been to disrupt multiple-choice questions. My very first grant application to the Gates Foundation was about how multiple choice isn't the best way of learning. I remember as a student doing so many multiple-choice questions: A, B, C, D, select the right answer, three wrong answers and one correct one, so you could kind of just guess, this is clearly wrong, I'll go with A or B. You're not building an argument, you're not expressing your own idea, you're not making something, and making something is a really powerful way of learning. As we think about AI in the future, using a thin wrapper tool to generate multiple-choice questions for you isn't really advancing learning. You're taking something we've been doing for decades and just making it a little bit faster.
Peter Gault:The more exciting thing is: how can we go from multiple choice to writing, to project-based learning, to collaborative learning, to things that are very hard to do well? There's been a big push for project-based learning for many years, and it's very hard to implement in the classroom; it's hard to get 30 students all working on projects. But you can imagine a number of ways in which AI can serve as a partner to the teacher in ways that previously weren't possible, and I think all those opportunities lead to deeper, richer, more effective learning. So that's the name of the game: how do we reimagine learning, and what opportunities can we now pursue that were just hard to do in the past? That's where we should be applying our effort, as opposed to just using AI to automate what we're already doing, which is not the most effective or deepest form of learning.
Alex Kotran (aiEDU):Yeah, I couldn't think of a better way to close it. The status quo is clearly not working, and if AI just becomes a way of covering resource gaps to maintain the status quo, we'll have failed. But there is this opportunity: AI is almost a Trojan horse for these much older and, frankly, boring conversations that have been going on for decades, right? Like 21st century skills.
Alex Kotran (aiEDU):You know, digital readiness, if you know, project-based learning, critical thinking, like none of this is new and I think there's actually power in that, because there's a lot of disruption happening to schools right now. Um, you know, at the national level, at the state level, uh, it's not necessarily, you know, I don't know that educators respond well to like more disruption and they don't see it necessarily as a positive um, but I think there's there sort of subtle but really intentional ways that the technology can actually just make it easier, um, for teachers to start to implement some of these practices that you know are not necessarily intuitive but but can be turnkey with, with amazing tools like Quill. Um, peter, anything else that we missed, that you want to, that you want to share before I let you go. I know it's relatively late on the East coast. Thanks for making time for me today.
Peter Gault:Yeah, it was a lot of fun.
Peter Gault:We covered a few really big questions, and I'm really excited that this is, again, just the early innings of this world.
Peter Gault:AI is going to be here for the rest of our lives. There are a lot of things to figure out, and I hope we can spend more time getting ahead of some of these questions. It's hard to imagine what the world will look like 10 years from now, but we can certainly see certain trends. We'll have more information than ever; we'll have deep research queries feeding us 100-page documents, and we'll need to be able to think critically to parse them and have our own points of view. Education will become more important than ever. If we do it really well, if we make it active, if we make it joyful, it will be a really amazing opportunity for kids. But if we don't get it quite right, I think we'll end up in a somewhat scary world where we're all a little taken aback, where AI is something that lives beyond us.
Alex Kotran (aiEDU):And happens to us. Well, who better to help us answer these questions than the students themselves, who are going to be both part of that world and building it? Yeah, absolutely. Peter Gault, I'll see you in, I guess, a few days, right? Are you going to be at ACDS?
Peter Gault:to be at ACDS. Yes, I'll see you next week, okay.