Edtech Insiders

Week in EdTech 11/19/25: OpenAI Launches ChatGPT for K–12, Google Deepens AI Push, Edtech Tools Face New Classroom Backlash, and More! Feat. Janos Perczel of Polygence & Dr. Stephen Hodges of Efekta Education!

Alex Sarlin Season 10


Join hosts Alex Sarlin and Ben Kornell as they break down OpenAI’s unexpected launch of ChatGPT for K–12, Google’s accelerating AI momentum, and what these shifts mean for schools, teachers, and the edtech ecosystem.

Episode Highlights:
[00:02:03] OpenAI unveils ChatGPT for K–12 educators—secure, curriculum-aware, and free through 2027
[00:03:02] The emerging AI Classroom Wars between OpenAI and Google across major U.S. districts
 [00:07:36] Google’s big week: DeepMind tutoring gains and Gemini 3’s multimodal upgrades
[00:10:25] How district leaders will navigate growing community divides over AI adoption
[00:14:04] What OpenAI’s move means for MagicSchool, SchoolAI, Brisk, and other edtech players

Plus, special guests:
[00:19:26]
Janos Perczel, CEO of Polygence, on scaling project-based learning with AI and why TeachLM trains models on authentic student–teacher interactions

[00:41:36] Dr. Stephen Hodges, CEO of Efekta Education, on AI-powered language learning for 4M students and early evidence of major test score gains

😎 Stay updated with Edtech Insiders! 

Follow us on our podcast, newsletter & LinkedIn here.

🎉 Presenting Sponsor/s:

Every year, K-12 districts and higher ed institutions spend over half a trillion dollars—but most sales teams miss the signals. Starbridge tracks early signs like board minutes, budget drafts, and strategic plans, then helps you turn them into personalized outreach—fast. Win the deal before it hits the RFP stage. That’s how top edtech teams stay ahead.

This season of Edtech Insiders is brought to you by Cooley LLP. Cooley is the go-to law firm for education and edtech innovators, offering industry-informed counsel across the 'pre-K to gray' spectrum. With a multidisciplinary approach and a powerful edtech ecosystem, Cooley helps shape the future of education.

Innovation in preK to gray learning is powered by exceptional people. For over 15 years, EdTech companies of all sizes and stages have trusted HireEducation to find the talent that drives impact. When specific skills and experiences are mission-critical, HireEducation is a partner that delivers. Offering permanent, fractional, and executive recruitment, HireEducation knows the go-to-market talent you need. Learn more at HireEdu.com.

As a tech-first company, Tuck Advisors has developed a suite of proprietary tools to serve its clients better. Tuck was the first firm in the world to launch a custom GPT around M&A. If you haven’t already, try our proprietary M&A Analyzer, which assesses fit between your company and a specific buyer. To explore this free tool and the rest of our technology, visit tuckadvisors.com.

[00:00:00] Ben Kornell: I am feeling a very strong negative reaction from parents and from teachers about the push of AI into classrooms. We were in an experimental mode where it was like, yeah, try things, learn things. We shouldn't be Luddites and afraid of AI or new technology. But now that we're a couple of years into this, feeling like there still is not clear guidance on what best practices are, when to use it and when not to, I think this will be an alarm flag.

[00:00:32] Alex Sarlin: They're trying to make it work, and you could frame this as corporate double speak, and that they're just trying to sort of fool people into using their tool by talking their language. But that's not how I read it. I read it as OpenAI and certainly Google as well, are really being very thoughtful about how to get these tools to work in an environment where it's not the best idea to use them in an off the shelf commercial capacity.

And the idea that teachers are using ChatGPT or Google Gemini in great numbers outside of their school context, outside of any kind of memory, outside of any kind of structure, is not actually the best way to do it. Using it within that structure might actually help.

Welcome to EdTech Insiders, the top podcast covering the education technology industry, from funding rounds to impact to AI developments across early childhood, K-12, higher ed, and work. You'll find it all here at EdTech Insiders.

[00:01:28] Ben Kornell: Remember to subscribe to the pod, check out our newsletter, and also our event calendar.

And to go deeper, check out EdTech Insiders Plus, where you can get premium content, access to our WhatsApp channel, early access to events, and backchannel insights from Alex and Ben. Hope you enjoy today's pod.

EdTech Insiders listeners, we have a special Week in EdTech with some breaking news around what's going on in AI. Alex, share with us the breaking news.

[00:02:03] Alex Sarlin: So the breaking news is that OpenAI, one of the absolute largest AI providers in the world and one of the fastest-growing tech tools of all time, has just launched a version of ChatGPT built specifically for K-12 educators.

And what they mean by that is it's designed to be enterprise secure. It doesn't train on anything. It's designed with a lot of preset prompts created by educators. It's created to play nicely with other school systems and it's designed literally to try to support teachers as its core mission. It's a version of ChatGPT designed for the school environment.

So in the edtech world, that's obviously a very big deal. We have seen Google doing all sorts of things in K-12 for a year now, but OpenAI has mostly been talking about teacher training. This is a big, big deal move for them to release a version of their core consumer tool designed specifically for K-12 educators.

What do you make of it, Ben? 

[00:03:02] Ben Kornell: I mean, my take is that the AI wars are coming to classrooms near you, like it or not, and I think we've heard from educators a lot of concerns and skepticism at the same time, all the statistics show that there's increasing utilization of AI by teachers and huge utilization by students.

So we are now officially in the AI-in-school era, where the major, major companies are now in with K-12 versions of AI. And I think this really comes down to a battle between OpenAI and Google for the hearts and minds of classroom teachers. The biggest change in this announcement is, one, an acknowledgment that different security parameters are necessary for K-12?

Yes, but also OpenAI, which has heretofore been all consumer-oriented in its motion, has actually signed deals with major, major school districts, over a hundred partners, and these are some of the largest school districts in the country. So the opening shots have been fired. Now it's really up to educators.

What do they adopt? I will say, as a parent, I'm hugely concerned about all of this being rolled out without the proper training or supports. And we're seeing, with all of the studies on screen time and overstimulation, that there are negative effects that could happen with kids being directly exposed.

So this is the time, I think, for educators to make their voice heard. This is the time for parents to make their voices heard. How much AI do you really want in your classroom? Because OpenAI is going after it. Google's going after it. We also have MagicSchool. We have SchoolAI. We have a bunch of other edtech players.

So 2026 is looking like the year of AI in the hands of educators.

[00:04:57] Alex Sarlin: It's a great point, Ben, and I think there's like two concurrent narratives happening here that are in pretty direct tension with each other. One is, as you're saying, there's an increase in concern about AI and concerns about big tech entering school and concerns about screen time in school, which are all intermingling.

There's been a lot of pushback about edtech in general and about big tech. In this case, they often mean companies exactly like OpenAI and Google entering the school space. People just have funny feelings about it. At the same time, the companies that are doing it are being very thoughtful. I mean, some of the features in this rollout are really school specific.

They're trying to do personalized teacher support, and what they mean by that is that teachers can set their grade level, their curriculum, their subject once, and then the tool will always respond in the context of that grade level, that curriculum, and that subject. That seems like a small thing, but it's actually not something offered that frequently by other providers, or certainly not by off-the-shelf commercial LLMs.

The security piece, the SSO, is a big part of it, and they're clearly trying very hard to think about how this would work in a positive way in education. They're working with educators, they're trying to make it work, and you could frame this as corporate doublespeak, and that they're just trying to sort of fool people into using their tool by talking their language.

But that's not how I read it. I read it as OpenAI, and certainly Google as well, really being very thoughtful about how to get these tools to work in an environment where it's not the best idea to use them in an off-the-shelf commercial capacity. And the idea that teachers are using ChatGPT or Google Gemini in great numbers, outside of their school context, outside of any kind of memory, outside of any kind of structure, is not actually the best way to do it.

Using it within that structure might actually help. You mentioned some of those districts they're working with. I mean, we're talking about Houston, we're talking about Dallas, we're talking about Fulton County, Georgia. These are enormous districts with many, many, many students, plus a lot of the KIPP organizations, Idaho, and a bunch of Virginia districts.

This is a very big, as you say, shot across the bow from OpenAI saying, we are not going to cede the K-12 space to Google. And not only that, we're also not going to let the K-12 space use our normal tooling, like the regular old ChatGPT, which is just not tailor-made for education. So they're trying to tailor-make a version.

Oh, and one thing we didn't mention, it's free. I mean, this version of ChatGPT for teachers is being made free for teachers for both this and the next school year. This is a pretty assertive attempt to move into the K 12 space. You can read it cynically, you can read it optimistically, but it's definitely big news.

[00:07:36] Ben Kornell: Yeah. I think this also needs to be put in the context of all of the new releases from Google this past week too, and we covered this on our last Week in EdTech, but breakthrough results from Google DeepMind on a tutoring pilot they did with Edie, which basically showed that tutoring prompts generated by AI and overseen by a human tutor working with 20 kids at the same time were equivalent to, or about 5% better than, what the tutors themselves could generate on their own.

Gemini 3 just launched yesterday to, I would say, overall very rave reviews, and I think the multimodality, or omni-modal characteristics, are just accelerating so fast. So it's quite interesting that OpenAI feels like their best move to counter that PR bump is to focus on these K-12 partnerships, because for a long, long time, for decades, Big Tech has stayed away from the kids' space.

It has stayed away from K-12 due to regulatory concerns, et cetera, et cetera. But six or seven months ago, OpenAI did that partnership with the American Federation of Teachers, which I think really signaled this idea that they were going to throw their lot in with K-12 educators. And that's what we have here.

I am feeling a very strong negative reaction from parents and from teachers about the push of AI into classrooms. We were in an experimental mode where it was like, yeah, try things, learn things. We shouldn't be Luddites and afraid of AI or new technology. But now that we're a couple of years into this, feeling like there still is not clear guidance on what best practices are, when to use it and when not to, I think this will be an alarm flag for many of those worrying about big tech in the classroom, worried about AI in the classroom.

And so I would just encourage people to continue to come back to the use cases that are on EdTechInsiders.ai. It's not about the tech, it's about the teaching and learning. And if we could start with the teaching and learning, what a great world this will be, to have all these free tools that you can use. But if we start with the AI and we put that cart before the horse, I think we're gonna see a bunch of backlash.

And this is where Google has an advantage, in that their AI is integrated into all these normal teacher moves and teacher moments, whereas OpenAI is just dropping it like a live grenade in the classroom. Sorry to use all the war imagery here, but it does feel like a battle among corporate titans is now coming to the K-12 classroom.

[00:10:25] Alex Sarlin: It does, and I think for any individual educator, for any individual school leader, for any district leader, I think they have to look at this moment and say, what do I believe is sort of the end goal and the core mission here? Do I feel like these tech companies are authentically trying to offer their tools to transform and improve education?

Or do I feel like it is a power grab or a land grab or trying to get their tools into the hands of people as young as possible? And realistically, there are elements of both. I mean, there are, but having talked to a lot of the people within these companies, I tend to be a little bit more on the positive side.

I don't think that, you know, when you read some of these sort of alarmist editorials or comments, I spend a lot of time in the comments sections of articles about education technology, and it is just a pile-on, where people are so upset about screen time, they're so upset about big tech.

They're so upset that you have a lot of professors and teachers saying, I'm trying to ban AI. I hate it. It's destroying my classroom. And, you know, you cannot ignore those narratives. That is absolutely real. At the same time, I personally don't think that these companies are out to get anybody or fool anybody or sort of destroy education.

I mean, I know personally many of the people in some of these areas I've worked with, some of them in the past, and I just think the paradigm that people put on this, that they think, oh, you know, big pharma, big oil, and now big education tech. It's like, that just does not resonate with me at all. I encourage people to try to see it both ways, to try to be nuanced in their understanding.

I think anytime you feel like this is all pure or positive, it's just, oh, they're being incredibly nice, they're giving this free to teachers for a couple of years, it's just the most amazing tool, it's just all great, well, you gotta be a little more cynical than that. But anytime you find yourself being purely cynical, saying it's just yet another moment of big tech trying to take over our kids' lives and destroy society and do it all for profit and steal our attention and sell our data, I think that knee-jerk reaction is also too extreme. And in a moment like this, where, as you say, Ben, it's dropping into the K-12 ecosystem especially, you know, we know, and they talk about in the press release around this launch, that three out of five teachers are already using tools like ChatGPT to improve their practice in various ways.

Teaching is one of the most common use cases in their dataset. People are already using it for this, so they're trying to make a version that's actually somewhat optimized for it, and they will continue to optimize it. So I try to make sense of both sides. But it is a really tricky moment, as we've said on the podcast for a couple of years now.

At any moment there is truly the ability to fall prey to the backlash, and to have that backlash go all the way up to policy, with districts and states starting to say, well, maybe Houston is doing this, but we in X, Y, Z district don't believe in this, and we're not gonna allow it, and we're not gonna make it happen. The backlash continues to be real.

At the same time, I think some of these features and some of what they're doing here is being done very thoughtfully. They've worked with educators, they've collected all of these use cases and all of these prompts from educators around the country that are being incorporated into this. And I think there is a well-meaning attempt here, even as OpenAI, at the same time, starts to relax its restrictions for adult users, right?

They're saying, oh, for adult users we should identify them and allow them to do things that are adult only. And for K-12, we need there to be a really solid, serious product designed for the K-12 use case, and we can optimize for that. I think it's a realistic attempt to do that. So if you find yourself being too gullible or too cynical in this moment, it's probably time to sort of adjust.

That's my take. 

[00:14:04] Ben Kornell: Yeah. I also think, what does this mean for the edtech participants, right? So let's talk about our edtech world. This feels very in line with the same kind of distortions that happened back when Google Classroom was first released. The ability to command a price for an LMS over the last decade has been suppressed because there was a free option that had largely feature parity and was like 85% to 90% of the way to a full LMS.

I think this is a really important moment for the MagicSchools, SchoolAIs, and Brisks of the world to demonstrate their unique value proposition. We've had Armand on the podcast from Brisk, and I think he saw this coming and has, from the get-go, been building deeply into the education tooling with the idea that generalized AI is not as valuable as embedded AI

inside of real tools that teachers really need to work with students. And his thought from the origin days was, if you can build all of that in one seamless platform, it allows the benefits to carry across surfaces. I would say the same for MagicSchool and SchoolAI. They've really tried to differentiate around the features that educators, and in some cases students, need most.

But I think it is hard for a school district to justify a budget spend on any of those tools if you've got the Google suite that's already in the classroom and you've got OpenAI, which, we should remind listeners, also has a partnership with Canvas. And the Canvas partnership, you could imagine, is a distribution channel for them.

That's a way for them to embed it, but the price of admission is that they have to be compliant. So this launch of ChatGPT for Teachers also seems to be in line with that. And furthermore, what you mentioned about what they're doing for adult consumers, they're basically going to relax some of the constraints for adult users.

And so they had to have a version that is more constrained. So all of this being said, this is a breaking story. We are excited to like share this news, but it's an evolving one that's been evolving for a couple of years and it will evolve here in a couple months as we see teachers using it and testing it and giving feedback.

But let's remember the hype cycle here. These are short waves of rejection and positive intent. And so I'm gonna be very interested to see how these superintendents navigate the kind of 'yes, we should do this,' 'no, we shouldn't do this' dialogues in their communities.

[00:16:58] Alex Sarlin: Yeah. But if you combine some of these strategies, right?

Coming out with this ChatGPT for Teachers, doing the mission to train 400,000 teachers in AI tools that they announced a number of months ago. At the end of New York EdTech Week, they had Leah Belsky, the head of education at OpenAI, as a keynote speaker, and she sort of said in passing, and we're going to K-12, by the way.

And I remember being like, oh, okay. She didn't describe it, she didn't announce it, but she said, we're heading there. And, you know, even beyond the things we've already seen, the working with educators on prompts, doing the training, and there may be more to come, this feels like the official flag in the ground saying,

OpenAI wants ChatGPT to be a major tool used in classrooms. It's also worth noting this free period. They specifically say the current free period runs through June 2027; after that, we may adjust pricing, but our goal is to keep ChatGPT for Teachers affordable for educators. And again, that cynical note might sound in your head when you hear that: oh, you're trying to do a loss leader, get embedded everywhere.

And then you can set a price. Or you might see it the other way and say, yes, of course they want expansion, but they know the education system is only one of many, many different areas they play in. I mean, OpenAI is selling itself to whole countries at this point, to governments. There's a lot of different areas they can go into.

I don't think their big scheme is to squeeze schools and teachers for all they're worth. But, you know, again, both can be true. It's a really, really interesting moment for them to go deep into this. And to your point about other edtech players, I don't think this means anybody should hang up their hat and say, oh, I can't compete with that, because it is still early days.

And honestly, a lot of the features they're launching here are security features and interoperability and integration features. It's a long way from the suites that we've seen from MagicSchool and SchoolAI and Brisk and others, and Eduaide and Almanac, and a number of other people in this space who have spent a couple of years thinking really deeply about many, many different teaching use cases.

So I think there's still a pretty good head start for edtech, but this is a big deal, and it's yet another 800-pound gorilla in the K-12 edtech world.

[00:19:11] Ben Kornell: Yeah, for sure. Well, as it evolves, you're gonna hear about it here on EdTech Insiders. We have some exciting interviews coming up. Check out our social media posts for the latest, and thank you all for listening to this special episode of Week in EdTech.

[00:19:26] Alex Sarlin: We have a truly fascinating guest this week on our deep dive for a week in EdTech. We're talking to Janos Perczel. He is the co-founder and CEO of Polygence, which is a project-based learning platform that helps students explore their passions under the guidance of expert mentors. So interesting. He holds a PhD in quantum physics from MIT where he won the institute's graduate teaching award, and his current research focuses on post-training foundation models to become better teachers.

Another thing we're gonna get into today, incredibly interesting, Janos Perczel, welcome to EdTech Insiders. 

[00:20:03] Janos Perczel: Thank you, Alex. It's great to be here. I'm a big fan of your work. 

[00:20:07] Alex Sarlin: Oh, that is so nice to hear, especially from somebody who is so deep in the AI learning space as you are. So first off, tell us a little bit about what Polygence is.

It's incredibly interesting model where you're doing passions through mentorship, combining students at different levels to create incredible work. Tell us about what it is and what role you see AI playing in it. 

[00:20:27] Janos Perczel: So Polygence, as you said, is a platform helping students explore their passions under the guidance of expert mentors.

And I co-founded Polygence with my co-founder, Jin Chow, because we have a fundamental belief that one-on-one, personalized, project-based learning is the best kind of learning. And we both benefited tremendously from one-on-one mentorship early on in our careers. And we wanted to enable high schoolers to be able to pick any topic of interest, whether that's detecting cancer using machine learning, or recreating a historically accurate dress from the 18th century, or even shooting and editing a documentary about current events in rural China.

And we wanted to make sure that students can complete such projects at the very high level under the guidance of experts who can make sure that they deliver an awesome project and learn very well along the way. And this, of course, works really well in practice. We have built a platform, we have served over 10,000 students to date, and I would like to say that project-based learning is indeed scalable in many ways.

But the issue is that it's costly. Such an experience costs thousands of dollars because you have to bring in these expert mentors and ask them to dedicate time to the learning of the high schooler. So to answer your second question, this is where AI comes into the picture. We've been studying AI very carefully because we've been very intrigued by the possibility of bringing costs down dramatically, and that's where I see AI hopefully playing a tremendously important role in the future.

[00:22:12] Alex Sarlin: That concept of one-on-one project-based learning with an expert mentor, and in many cases, your expert mentors are university students, either undergrads or post-grads or graduate students in really amazing universities. They're true experts in all of these different fields. That kind of connection just feels like truly the sort of platonic Socratic ideal of what education should look like and the idea that that can be actually made scalable.

You're already making it a lot more scalable than it once was, but for AI to make it even more scalable could truly be transformative for what education looks like. Tell us a little bit: when you introduce this idea to parents or to students themselves, I imagine they have to sort of think about it for a moment and say, wow, this is so different from what education looks like in traditional systems, and they sort of need you to walk them through it.

What is your message for people about how this project-based one-on-one, passion-based education, it seems like such a world away from what many people get in their traditional school systems. How do you sort of explain the discrepancy and help people get their head around this incredibly different and very exciting type of education?

[00:23:19] Janos Perczel: That's a great question, Alex, because we have been socialized in a context where classroom-based education is so prominent that people are even confused by the idea of a student-centered approach. And we see this confusion in actual conversations where students are confused when you ask them, so what is your passion?

What do you wanna do? And they're like, well, what do you mean? Well, and they need time to even process the question because very often this is the very first time that they heard that question. And so this is where we try to go in and explain that. Well, unless you're truly passionate about something, it's very hard to do really well.

But here, what we are trying to do is help students take their interest to the next level and through that journey, become real experts in a field and be able to showcase their passion in a very tangible way. 

[00:24:16] Alex Sarlin: And I imagine on the other side, once people have started to engage in this type of learning, they've gotten to meet their mentors.

They're building a relationship, they're building a project. It's probably hard to go back to traditional education. They probably feel incredibly inspired. And you work with a lot of high school students who are heading towards graduation, making their way into the next phase of education or their careers.

How have you seen the outcomes when people finish these types of projects? You've mentioned a few incredibly inspiring examples, but how do you feel like it changes their perspective of what their future would look like and what the future of their education should look like? 

[00:24:48] Janos Perczel: So that's again, a great question that we have studied in great detail.

We try to be as quantitative as possible. So we do these final surveys at the end of projects, and what we see is that students express increased confidence when it comes to their academic abilities, and sometimes even non-academic abilities, and their outlook on the future becomes more positive. And we've even done randomized controlled trials and published some papers showing these precise effects.

But generally, we see that getting that personalized one-on-one attention in a field that the student cares about can do wonders for students, and not just for the traditional high flyers. In fact, on the contrary, students who may not have excelled in the traditional school context might benefit even more from that kind of personalized education.

[00:25:39] Alex Sarlin: I can imagine. I like that term, high flyers. It feels like we can all identify them, or sort of remember them, or some of us were those people in high school, the real achievers. But I agree, this has been an experience that has been very rare for almost anybody in classroom education.

But the ability to take somebody who has not necessarily excelled or seen very high grades in their traditional system and get them to focus on their passions and work in a one-on-one way, I'm sure, is absolutely transformational. One other project you've been really deeply involved in, and I think it's so core to the field that everybody in the edtech field should know about it, is this TeachLM paper and this idea of how we can train AI models to actually be effective

teachers and support teachers effectively, and go beyond prompting or just simple instructions or system prompts to get an LLM to act like a teacher. You've really moved that field much further forward than others have. Can you tell us about your TeachLM model and what you've done?

[00:26:39] Janos Perczel: Yes.

So at Polygence, actually, initially when we got very excited about AI's potential, we attempted to build products by prompting AI models. And our hope was that by creating a prompted AI product, we would be able to replicate that magical experience that we just discussed and make it much more scalable, much more cost effective.

Because right now, obviously, not all families can afford that one-on-one relationship with a highly qualified tutor. But what we found was a huge gulf between human performance when it comes to tutoring and what AI had to offer. And we had the early realization, over a year ago, that the gulf may just be too wide.

And in that past year, even though there has been a lot of progress with the AI models that we all know, ChatGPT, Gemini, Claude, and so forth, that gap is still there. And so at some point we just realized that prompting will never get us there, and these AI models need to be trained on actual authentic learning data to become better teachers.

And so this TeachLM study is our attempt to show the community that by training these models on actual authentic learning data, which includes data from students who are engaged in the learning process, we can improve these models and make them better for the purposes of teaching.
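To make that idea of training on authentic learning data a bit more concrete for listeners who build these systems, here is a minimal illustrative sketch, not taken from the TeachLM paper, of how raw mentor-student session transcripts might be converted into chat-formatted examples for supervised fine-tuning. The session structure, field names, and output file name are all hypothetical.

```python
# Illustrative sketch only: formatting authentic mentor-student transcripts into
# chat-style fine-tuning examples. The session structure and file name are
# hypothetical, not taken from the TeachLM paper.
import json

# Hypothetical raw sessions: each turn records who spoke and what they said.
sessions = [
    {
        "session_id": "demo-001",
        "turns": [
            {"speaker": "student", "text": "I want to do a coding project."},
            {"speaker": "mentor", "text": "Great! What's your coding background so far?"},
            {"speaker": "student", "text": "I've done a little Python in school."},
            {"speaker": "mentor", "text": "Perfect. What topics are you curious about?"},
        ],
    },
]

def to_chat_example(session):
    """Map mentor turns to the 'assistant' role and student turns to 'user',
    so a causal LM fine-tuned on these examples learns to imitate the mentor."""
    role_map = {"student": "user", "mentor": "assistant"}
    messages = [{"role": role_map[t["speaker"]], "content": t["text"]} for t in session["turns"]]
    return {"messages": messages}

# Write one JSON object per line (JSONL), a common input format for SFT pipelines.
with open("tutoring_sft.jsonl", "w", encoding="utf-8") as f:
    for session in sessions:
        f.write(json.dumps(to_chat_example(session), ensure_ascii=False) + "\n")
```

The key design choice in a setup like this is that the mentor's turns, not the student's, become the training targets, so the model learns the tutor's moves rather than the learner's.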

[00:28:12] Alex Sarlin: Which is an incredibly important task at this particular moment.

And we've seen all of the models you just mentioned, the frontier AI labs release study modes, release versions of their models that are attempting to act like a tutor or teacher. And they're trained on certain kinds of data in a way. But I think they really, in a lot of ways are based on prompts. Even though they're coming from the frontier AI labs, they're not necessarily working with data in the way they could be.

So TeachLM, your model, is trained on absolutely authentic student and teacher data. And when you do that, what comes out in the model? What kinds of teaching interactions, what type of feedback, what type of effective educational styles or moves come out when the model has to sort of learn how to teach effectively?

[00:28:57] Janos Perczel: So what we see when we train these models on our data is that the interactions become much more human-like. You mentioned these education-focused, prompted products that the frontier labs have released, and very often they miss the learning context that is necessary to guide a student. So, for example, if a student wants to do a coding project, quite often they actually fail to ask the question, okay, so what kind of coding background do you have?

Right? Which, it turns out, human tutors ask 100% of the time, because they're keenly aware that it's impossible to do a coding project without understanding the background of the student. So that's one area where we saw that once we train the model on our data, it begins to establish the learning context much better and begins by understanding the learner's background before jumping in.

Another important area is how you ask questions. So these frontier models often ask, say, three questions. It's almost like a multiple-choice question when it is, for example, offering you project ideas. In contrast, humans often ask open-ended questions: so, what are you interested in? And that's a big difference, because by making it a multiple-choice question, you immediately narrow down the space of choices, and that doesn't give enough agency to the learner.

And this is something that our model obviously also picks up on: how humans ask questions. Another important area is just the number of questions you ask. I don't know if you've had this experience, but sometimes these AI models ask three or four questions in a row and expect you to type out responses to all of them.

It turns out humans don't ask questions like that. Most humans ask one or maybe two questions on average and then give space to the other person to respond. I mean, in fact, that's how you've been asking me questions so far. And so that's another thing that we see our model pick up on quite rapidly. And then one other aspect among many others that I did wanna highlight is dealing with confusion.

So very often when you interact with these AI models, if you express confusion, they either give away the answer, which is often not appropriate, or they just simply rephrase their question. And if you again, say you don't understand, they rephrase yet another way or throw yet a different kind of explanation.

Humans actually do things differently. They take a step back and try to understand the confusion first. So, right, what is it that you don't understand? Maybe respond to a question with another question. And so what I'm trying to point out here is that these are not just nuances.

They're very fundamental differences in how AI deals with students today and how humans do. And what we've seen early signals of is that by training on authentic learning data that actually includes the data from students, we get some of that behavior coming through in the trained model.
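As a small illustrative aside, and not the evaluation used in the TeachLM paper itself, a behavior like 'how many questions does the tutor pack into a single turn' can be checked with very simple heuristics over a transcript. The speaker labels, sample dialogue, and threshold below are hypothetical.

```python
# Illustrative heuristic only: count how many questions appear in each tutor turn,
# which, as Janos notes, human tutors tend to keep to one or two.
def questions_per_turn(transcript):
    """transcript: list of (speaker, text) tuples; returns question counts for tutor turns."""
    return [text.count("?") for speaker, text in transcript if speaker == "tutor"]

transcript = [
    ("student", "I'm interested in machine learning."),
    ("tutor", "What draws you to it? What's your math background? Have you coded before?"),
    ("tutor", "What kind of project sounds exciting to you?"),
]

counts = questions_per_turn(transcript)
overloaded_turns = sum(1 for c in counts if c > 2)
print(f"Questions per tutor turn: {counts}")                       # [3, 1]
print(f"Turns with more than two questions: {overloaded_turns}")   # 1
```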

[00:32:04] Alex Sarlin: That is so exciting to hear, and frankly, very validating to hear, because that lack of context, both context about the learner and context about the question, is so real, right?

If a learner comes in and says, I wanna do a coding project, a human tutor says, why? Why do you wanna do a coding project? For fun? Or is it an assignment? But LLMs almost never do that. And then, of course, exactly what you're saying, asking, well, what's your background? Do you know how to code? Do you want to learn how to code?

And then this idea about all the questions, all these insights about how LLMs ask questions. They hammer people with multiple questions and expect these multi-part, numbered answers. It's so strange. That is incredibly exciting and makes so much sense. When I played with Google's guided learning and the study modes from all of the different providers, I noticed as well that it just doesn't ask you enough.

It sort of wants to jump right in and solve things for you and throw things at you, throw facts at you, throw ideas at you, and a human tutor talks almost as little as possible, especially when they're first meeting somebody. You do a lot of questioning. You do a lot of open-ended questions, as you mentioned.

So those are insights that I think could be transformative for the field. And frankly, look, we've talked to all of these Frontier labs. I think they know that they don't have the answers yet. I mean, I don't think any of them say, oh, we cracked the code. This is working perfectly. I think they're putting things out.

They're trying their best, but I think this TeachLM concept could be transformative to them and to the future of what any large-scale LLM system does when it's trying to teach. How do you envision the next generation of AI tutors evolving, whether from these frontier labs or from startups? How do you hope some of the insights from the TeachLM work that you've done will get used in practice?

[00:33:49] Janos Perczel: That's a great question. So our purpose with this paper was not to provide all the answers, but to have people ask more questions and to prompt a discussion. And I think in order to answer your question, it's important to understand the source of the problem. And the source of the problem is that most of these LLMs attain their capabilities by getting trained on internet-scale data.

But actually the internet doesn't have a lot of examples of high quality teaching. In fact, high quality teaching always involves students. There is no way to test how good a teacher someone is without having a student there, right? You don't just like read their resume and decide whether they're good teachers or whether they've written great blogs about how to be a great teacher.

And so the fundamental issue I see, and this is a known issue, is that the pre-training data these AI models ingest just does not have high-quality, authentic learning data, and until that problem is rectified, we'll probably see these models continue to have these shortcomings. So to answer your question about how I envision the next generation of AI tutors evolving, we truly need to make sure that these AI models begin to ingest the authentic learning data that is needed to increase their capabilities.

Because we really wanna nail three things here at the very least. One of them is these AI models need to evolve to be able to gauge the understanding of students and pinpoint their misconceptions in ways that can only be done if you have data about what sort of misconceptions and issues with understanding students have.

Then second, they need to be able to ask questions at exactly the right level and customize answers. And then third, dialing in how they guide the students, because if things are too easy, students get bored, and if things are too difficult, students shut off. And so until AI models get that data and learn how very skilled human teachers do this, I think it'll be difficult to make progress with prompting alone.
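For a rough sense of what that 'not too easy, not too hard' dialing-in can look like, here is a minimal sketch of a classic staircase rule that nudges difficulty up after correct answers and down after misses. It is only an illustration under stated assumptions; the step sizes and bounds are hypothetical, and it is not how Polygence or TeachLM calibrate difficulty.

```python
# Illustrative staircase rule: keep the student challenged but not overwhelmed.
# Step sizes and bounds are hypothetical.
def next_difficulty(current, was_correct, step_up=0.05, step_down=0.10, lo=0.0, hi=1.0):
    """Nudge difficulty up after a correct answer and down (a bit more) after a miss."""
    new = current + step_up if was_correct else current - step_down
    return max(lo, min(hi, new))

difficulty = 0.5
for outcome in [True, True, False, True, False, False]:  # simulated answer history
    difficulty = next_difficulty(difficulty, outcome)
    print(f"correct={outcome} -> next difficulty {difficulty:.2f}")
```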

[00:36:08] Alex Sarlin: Agreed. Hopefully the silver lining, the good news there, is that, for one thing, in the edtech community that we serve with this podcast, many of them are tutoring companies or they have teaching data. People have various levels of proprietary data that they've collected over the years of exactly the type of conversation that would provide the dataset for this type of training.

We've also seen really interesting initiatives, like the Million Tutor Moves project out of Carnegie Mellon and Cornell, where they're working together to aggregate tutoring data from different tutoring companies and use it to start to train and identify the right type of moves for this kind of model.

So there are some lights of hope: even though the original LLMs were trained on internet-scale data that was very static and didn't have a lot of student data, conversational data, or these kinds of tutoring or teaching experiences, which is a really interesting insight and a hundred percent true,

the edtech field is actually right out front on this, and there are ways to pull together mass-scale data of effective tutoring conversations and use them in exactly the way that you've been using them with TeachLM. So I'm thrilled about that. And I imagine you're gonna be a significant part of that evolution to the next level of AI teaching.

So how do these ideas go together? Tell us about how Polygence is using some of the concepts in TeachLM to advance its mission.

[00:37:31] Janos Perczel: So our goal is not to replace humans, but to essentially augment the learning experience that students are having on our platform and thereby bring down the costs. So we are working toward producing production-ready versions of our model and inserting them at strategic points inside the learning journey.

And through that, we are looking to learn even more about how AI can help students, where it falls short, and where humans are absolutely needed, and eventually create hybrid learning experiences where students are guided by AI where AI can be effective, and where we, of course, continue to have those wonderful human interactions that are so essential to learning for every student.

[00:38:18] Alex Sarlin: A hundred percent. That concept of hybrid learning experiences feels very much like the direction that the field needs to go in. I always remember, I spoke a couple of years ago now to the head of a tutoring company, and one idea that he had that always stuck with me is that if a student has a tutor, or in the case of Polygence, an expert mentor who could be a university student at a top university, that person builds a strong relationship.

They learn to know each other and trust each other and be accountable to each other, and all of those wonderful human interaction elements that are so important to learning. But an AI model can learn from those conversations as well and can actually, you know, start to mimic or support some of the things that the human tutor is doing.

Not in terms of trying to fool anybody or build a pseudo-parasocial relationship, but just in the sense that the bot knows exactly what this relationship is like. It knows what the project is, it knows the student's background, and it can provide so much additional support in answering questions and providing practice and providing ideas.

So when that student is up in the middle of the night doing their historical dress and the tutor is not available, they can still get meaningful support. It's such an exciting vision for what education could look like. 

[00:39:30] Janos Perczel: I agree. And Alex, I think you are spot on. That relationship building piece is often underestimated in education.

It's so critical. As you said, we are not trying to create these parasocial connections, but still, relationships will be important, and relationships can take many forms. And just an AI tutor understanding the learning history of a student will be very critical, and reflecting on the past challenges and victories that the student had throughout the journey will be very important.

So actually, in our data we do see that relationship piece being very important. We actually studied this in our TeachLM report as well. And so this is, again, a pointer to the fact that authentic data has these gems and unexpected insights that people might not even think about if they are just working with synthetic data or data created by contractors, who will never pick up on the importance of something like building relationships in education.

[00:40:32] Alex Sarlin: It's an incredibly exciting vision of the future that really creates this sort of triangle of learning, right? Where there's a student, maybe more than one student, a teacher, and an AI, all working together toward a common goal. And when you put it in the kind of context that you're doing with Polygence, it's passion-based, it's project-based.

It's authentic to what the student is trying to accomplish. We talk on this podcast all the time about how AI is really sort of an opportunity to rethink some of the fundamental concepts of how we do education. And what you're doing at Polygence with this mentoring model is so, so exciting. Thank you so much.

I wish we had more time, but you should definitely come on for a longer interview in the future as you continue to evolve this work. Janos Perczel is the co-founder and CEO of Polygence, a project-based learning platform helping students explore their passions under the guidance of expert mentors, and with some AI support.

Thank you so much for being here. This has been an amazing conversation at EdTech Insiders.

[00:41:34] Janos Perczel: Thank you, Alex. It's been a great conversation indeed.

[00:41:36] Alex Sarlin: We have a really exciting guest this week on the Week in EdTech. We are speaking to Dr. Stephen Hodges. He's the CEO of Efekta Education. That's E-F-E-K-T-A. It's a global leader in AI teaching assistant technology, serving 4 million high school students and 25,000 teachers worldwide.

Stephen began his career at McKinsey & Company and spent 16 years as president of Hult International Business School, leading its evolution into a global institution. Stephen Hodges, welcome to EdTech Insiders. Thank you very much for having me. So 4 million high school students. That is a pretty serious number of high school students.

Before we jump into anything else, let's talk about how you evolved Efekta Education, how it sort of branched off from its parent company, and what you're doing to improve AI teaching assistant technology worldwide.

[00:42:27] Stephen Hodges: Sure. So Efekta is a company owned by EF Education First. EF is the world's largest private education company.

One of the businesses it operates is the world's largest online English school, and we've been doing that for 30 years, so we were very early into online education. And throughout that period we've been unusual in developing our own technology in-house. So we own the whole stack: it's our proprietary curriculum, our proprietary content, on our own in-house technology platform, taught by our teachers to students that we've recruited.

And as generative AI emerged, we started, like everybody else, building technology to support our teachers, an AI teaching assistant. We realized that that technology was incredibly powerful and that it was really wasted being locked within just the EF school systems. So we made the decision, just over a year ago, to start supplying the software to any school in the world that needed that technology, including companies that we previously regarded as competitors, right?

We were like, uh, let's supply. It makes no sense for every language school in the world to try and build this. Let's supply the industry. And in that process, we've started supplying public school systems around the world, which is where your 4 million student number comes from. 

[00:44:01] Alex Sarlin: That's really exciting. So for one thing, I always thought EF was English first.

It is Education First. I just learned something very important about one of the biggest education companies in the world, and as you mentioned, the biggest private one. Tell us a little bit about Education First's mission. You mentioned language teaching, which is core to some of what it does. When you were building this technology for the EF environment,

what were some of the things that you built into the teaching assistant that you then found to be really relevant and useful across the board for anybody doing teaching?

[00:44:29] Stephen Hodges: Well, let me go back to what EF is, you know, the origin. So EF started as an immersive language experience, basically taking school kids abroad, to, let's say, Brighton in the UK to learn English or Paris to learn French.

And those are the origins of the company, which started 60 years ago. So EF has been teaching languages, and English in particular, for over 60 years. In the US we're best known as an education travel business, so Go Ahead Tours and such like, bringing American high school students typically to Europe, but now Asia is very popular.

So across that series of businesses, as I said, 30 years ago we moved into online education, realizing that students didn't necessarily need to travel abroad or attend a physical school to learn a language. The technology over those 30 years has evolved a lot. Now, if I take the recent history, about five years ago we started moving beyond Zoom-type environments to create more immersive classrooms where we would drop

students into situations, whether it's checking in at an airport or talking to the passenger next to you on a flight, those kinds of things, to help create as immersive a learning environment as possible. At the same time, when TensorFlow started coming out, we started building ML models, because we own our own technology platform.

We've been recording all the data streams for several years, so as TensorFlow came out, we started playing with what we could do with the data. We built a pronunciation engine so we could really get into helping fix a student's pronunciation. We also built tracking of a student's English proficiency: we look at 150 different variables and can hyper-accurately track what a student's English capability is.

And that also helps us diagnose what a student needs to work on. So that was all designed to feed back to our human teachers: okay, if you're teaching Alex, well, Alex is doing very well at this, but Alex really needs to work on this, so put emphasis on, let's say, his grammar or his fluency. And then, about 18 months to two years ago, when LLMs emerged, that's where we started

building a teaching assistant, which could take over some of the instruction but also automate a lot of the activities that were being done manually in the school. So, let's say, writing corrections, or creating after-class feedback reports, or automatically generating what a teacher should focus on in the next class.

And such like. All of our students wanted more feedback than we could really afford to give them. With AI, we can very affordably create personalized feedback after every lesson. So it was tools like that that we started developing.
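As a rough illustration of that per-lesson feedback step, and not Efekta's actual pipeline, a simple post-lesson routine might aggregate per-skill scores from a lesson and flag the weakest areas for the teacher to focus on in the next class. The skill names and thresholds below are hypothetical.

```python
# Illustrative sketch only: aggregate per-skill lesson scores into a short feedback
# summary. Skill names and thresholds are hypothetical, not Efekta's actual system.
def lesson_feedback(scores, focus_threshold=0.6, max_focus_areas=2):
    """scores: dict of skill -> 0..1 score for one student and one lesson."""
    weakest_first = sorted(scores.items(), key=lambda kv: kv[1])
    focus = [skill for skill, s in weakest_first if s < focus_threshold][:max_focus_areas]
    strengths = [skill for skill, s in scores.items() if s >= 0.8]
    return {"strengths": strengths, "focus_next_class": focus}

student_scores = {"pronunciation": 0.85, "grammar": 0.55, "fluency": 0.62, "vocabulary": 0.9}
print(lesson_feedback(student_scores))
# {'strengths': ['pronunciation', 'vocabulary'], 'focus_next_class': ['grammar']}
```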

[00:47:39] Alex Sarlin: It's fascinating. I mean, you were mentioning three different areas of AI that are all, I think, very fast growth in education.

One is this idea of tool suites for teachers, things that can create efficiencies and productivity gains and support teachers. A second is support for human-to-human interactions online, right? The ability to give, as you mentioned, your human tutors or teachers insights into what people need and how to differentiate instruction.

And the third, which I wanted to ask you about, is language practice. This is a really fast-growing field in AI. We've seen dozens, frankly, of companies, including some very highly ranked apps in the app store, emerge based entirely on the idea that language learning can be done a lot better with AI, in particular because of the feedback, and the idea that being able to hold endless conversations with a virtual being that speaks the language you're trying to learn is very immersive.

You've obviously been involved in all three of those areas and leading them in certain ways throughout. I'd love to hear you talk about especially that last one, this idea of the feedback loop that you're mentioning, the idea of immersively doing a scenario. How is language learning changing from the traditional way of doing worksheets and exercises to something much more immersive and conversational in the world of AI?

[00:48:53] Stephen Hodges: So speaking English, or speaking a language, is a skill, and like any skill, you only learn it by practicing it yourself and getting feedback. You know, you wouldn't learn how to play golf in a classroom, right? You may watch a few videos, but at the end of the day, you need to get out on a course, hit a few balls, and get feedback.

And so language learning, the traditional problem with language learning has been getting students that practice affordably, right? Which is why in EF schools, the majority of the education provided online is one-on-one personal tutoring, which is the maximum. But that of course is pretty expensive. I mean, we would sell that to a student typically at sort of $1,500 per annum, which is way more than a lot of people can afford.

But as you increase the class sizes in language learning, of course the tuition fee drops, but so does your opportunity to practice. I mean, that's why most language schools wouldn't have a class size of more than, let's say, 10 students, because otherwise you as an individual don't get the chance to practice producing language and getting feedback.

And of course, by the time you get to a public school system and you've got 30 or 40 kids in the classroom, well, then you almost give up on the students practicing individually and you go back to comprehension: reading, listening, grammar, vocabulary. But the kids never really get to produce. You know, my children are learning Spanish.

They've done three years of Spanish in a very good school with a very good Spanish teacher, but quite frankly, they don't have the confidence to produce language on the fly, because they've just not had enough practice. And I think what AI does is give every individual that ability to practice and get feedback at an incredibly affordable price, and that will now transform all parts of language learning.

But I think it has the biggest benefit in the large-class-size environments, because no matter how good your teacher is, in a class of 30 or 40 kids it is hard to give them the confidence to speak or write English. It's worth pointing out that a lot of apps focus on speaking, particularly when you target adults, because that is the skill they want.

In a more academic environment, we focus on all four skills, including writing and such. So it's not exclusively speaking; again, producing language, whether it's a memo or an email, and then getting feedback on how to improve your written English, is an important part of our product as well.

[00:51:35] Alex Sarlin: Yeah. Let's drill down on that idea of transferring what EF has been doing in private tutoring, in travel, in all of these different mechanisms for learning, you know, immersion, students actually traveling to different places to learn the language.

You know, you are now taking Efekta, and as you mentioned, you're actually offering it to companies that may have even been considered competitors. You're offering it to schools themselves. You have 4 million high school students, 25,000 teachers worldwide. I'm curious about some of the translation that you've seen, you know.

In moving from an environment where, like you mentioned, adults want to learn maybe for business reasons or to be able to speak and meet people in the language, to a high school or university setting, where it can be more academic: people learn to write, or they're learning to translate, you know, Don Quixote, all of these very different types of mechanisms.

Tell us a little bit about that. You know, how the tools that you've made in a proprietary way for EF are operating in an environment with 30 or 40 students in a class. How are they supporting the evolution of language learning and actually bringing some of that voluntary one-on-one learning that EF is known for into a much more structured or academic school environment?

So I, I think it's a really exciting prospect. 

[00:52:47] Stephen Hodges: So if I take companies or organizations first, what we see is the ability to use AI to provide this individual practice and feedback. As you drop the price point, instead of a large company selecting a few people to receive the education, they're now rolling it out to a much wider set of people.

Right. And particularly where speaking the language is mission critical. So, let's say, aviation is an industry where English is the language for safety; a lot of the operators need to be able to speak it. There you're seeing companies saying, oh, fantastic, now I can actually afford to provide this education to a much wider set of people than before.

It's, it's interesting, we never realized. When we first sold the product into public school systems, we kind of imagined it would be used as homework. So the teacher would do their thing in classroom as they'd already done or always done, and then the students would go home and then use the system, you know, to do homework exercises, including, you know, in the privacy of their own home, practicing conversations or you know, other activities.

But in reality, what we largely see, not exclusively, but largely, is that the teachers have automatically adopted a flipped classroom model. They come in and they're sort of like, okay, everybody sit down, settle down, log into Efekta, right? We're on, whatever, lesson 24, off you go. Then at the start of the class they're checking that the kids don't have technical issues, or aren't, you know, off on Instagram, and that they're actually logged into Efekta on their devices.

And then, through both the support that our system provides and them just patrolling the classroom, they can see the students that need their help most. And so they'd say, right, okay, you know, you guys are clearly stuck on this activity, right?

Like, take a break from Efekta. Let me pull, let's say, six students: let's huddle around the whiteboard and talk through this concept and practice before I let you back onto the Efekta system. So the teacher is choosing where to spend their time. You know, they may be giving the top students a high five and saying, well done, but they're really trying to bring the bottom of the class up.

And it's so interesting that that happened, because we didn't design the product with that as the only use case. I mean, we knew it could be used like that, but we weren't mandating that's how it's used, and certainly the government wasn't mandating that's how it should be used. But in the majority of classrooms I walk into, like I said, this sort of flipped classroom model is what they seem to have naturally evolved toward as the optimal use of the technology.

So yeah, super interesting. It completely changes the teacher's job, because they're no longer providing the sort of basic instruction, that's left to the system; they're really providing the support and motivation and remedial education where needed, but it's totally their choice.

[00:56:03] Alex Sarlin: Yeah. It's also exciting because it's an environment where you have every student practicing continuously, which is something that, as you mentioned, really doesn't often happen in large-scale language classes.

In schools, you have a teacher in front of the room, they're calling on students, maybe they're doing a little bit of group work, but there's not a lot of chance to practice. So the continuous practice, I imagine, is probably having some really excellent effects.

[00:56:26] Stephen Hodges: It's amazing. I mean, even for the Efekta team.

We've been going into these classrooms over the last year or so since we moved in this direction, and typically, let's say, Latin America is where we've had the most traction so far. When we first moved into the classrooms, the students couldn't speak a word of English, and now when we go there, they can actually have a conversation with us.

I'm not pretending that they're anywhere near fluent, but they have the confidence to talk to us, and we can understand them. And they've certainly made progress in the year. On the state tests, the sort of third-party tests that the state runs there, we've seen a significant improvement in scores pre and post the rollout of Efekta.

You know, on the order of a 30% improvement in test results, which, when you're talking about the average student out of millions of students, is incredibly impressive and shows the power of AI. So yeah,

[00:57:29] Alex Sarlin: Agreed. And it shows the power of AI in transforming, as you say, what educational experiences look like, really on a base level.

I mean, what did a language class look like five years ago? What does a language class look like today? AI can truly transform what is happening in that class. I have one more question for you. It's a little bit of a curveball, but I'm sure you'll have great thoughts about it. You know, one thing we've been talking a lot about recently with various entrepreneurs and company leaders is AI in a social context, AI as a facilitator of learning between two people.

You've already mentioned how Efekta can be used in online tutoring to support the tutors in giving differentiated instruction, or to help teachers. But do you already have any features, or do you anticipate any, in which you have more than one student talking to each other and the AI is supporting that kind of conversation?

[00:58:16] Stephen Hodges: It's certainly on our roadmap. I mean, the AI can identify students, say within the class or within the school system, that are reaching a similar level at the same time, and then put them into an exercise together. One of the ideas we had, for instance, was sort of playing Pictionary online, where I would get the picture and then I'd have to use English to describe it to you.

I love that. And you would have to guess what picture I was describing. Or, you know, you could imagine facilitating what we would call in the UK pub quizzes, where teams of players play against each other.

So I think we'll absolutely see peer-to-peer learning where the AI is in the background grouping us based on our abilities, like an online game would do. Yeah. Or providing support if required on the exercise. So we've built out a whole bunch of ideas, which will come out sometime next year.
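
[The ability-based grouping described above is essentially matchmaking. Below is a minimal sketch of how such peer pairing could work, assuming a hypothetical per-student proficiency score; the names, fields, and threshold are illustrative assumptions, not Efekta's actual data model or algorithm.]

# Hypothetical sketch: pair students for a peer exercise (e.g., online Pictionary)
# by sorting on an assumed proficiency score and matching near-neighbors,
# the way an online game's matchmaker would.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Student:
    student_id: str
    proficiency: float  # assumed placement score on a 0-100 scale


def pair_by_proficiency(students: List[Student],
                        max_gap: float = 10.0) -> Tuple[List[Tuple[Student, Student]], List[Student]]:
    """Sort students by proficiency and pair adjacent ones whose scores are
    within max_gap; anyone left over is returned unmatched so a teacher (or
    the system) can handle them separately."""
    ordered = sorted(students, key=lambda s: s.proficiency)
    pairs, unmatched = [], []
    i = 0
    while i < len(ordered):
        if i + 1 < len(ordered) and ordered[i + 1].proficiency - ordered[i].proficiency <= max_gap:
            pairs.append((ordered[i], ordered[i + 1]))
            i += 2
        else:
            unmatched.append(ordered[i])
            i += 1
    return pairs, unmatched


if __name__ == "__main__":
    klass = [Student("ana", 42), Student("bo", 45), Student("chen", 71),
             Student("dee", 68), Student("eli", 20)]
    pairs, leftover = pair_by_proficiency(klass)
    for a, b in pairs:
        print(f"Pictionary pair: {a.student_id} <-> {b.student_id}")
    print("Needs teacher attention:", [s.student_id for s in leftover])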

[00:59:19] Alex Sarlin: That's very exciting. It feels to me like that is yet another evolution of how AI may transform education: already creating immersive practice, creating differentiated instruction, creating, you know, real-time support for teachers in all kinds of ways.

I feel like the social aspect is gonna add yet another really exciting layer. Imagine if that's what language class was, for your children or for you and me. You go in and you play Pictionary and you play games with your friends in the language, and you're getting real-time feedback and the teacher's popping in to give you tips.

I mean, that sounds like a blast. 

[00:59:50] Stephen Hodges: It's so much more fun than traditional language learning. You know, as I said, even in a year of traveling into these public school systems, it is transformative compared to what was really a vocab test or reading from a textbook that hasn't changed much since the 1980s.

And don't forget, in many parts of the world there is no English teacher, right? We sort of take it for granted that our language teacher in the Western world speaks the language and is a good teacher. But in large parts of the world there is no English teacher, or the English teachers themselves don't speak English.

So particularly in those classrooms, you're really solving the qualified-teacher shortage around the world for those kids. So I'm not surprised that we are seeing a dramatic improvement in test scores in environments like that. As I said, going back to the enterprise example, I think the power of AI there is less transformative at an individual level.

What is transformative is the number of employees that can now receive the benefit, because it's just become more affordable. Right? You know, it's a reach transformation rather than a quality transformation in those environments.

[01:01:09] Alex Sarlin: Right, because those are already one-on-one tutoring environments, so you can then scale that.

Exactly. 

[01:01:14] Stephen Hodges: Yeah, yeah. You make it much more scalable. So you can start training tens or hundreds of thousands of employees if you have to, at a fraction of the price.

[01:01:24] Alex Sarlin: It's really incredible. I'm really excited about all of this work. Dr. Stephen Hodges is the CEO of Efekta Education, the global leader in AI teaching assistant technology, serving 4 million high school students and 25,000 teachers worldwide, and it sounds like it's just getting started.

Thank you so much for being here with us on EdTech Insiders.

[01:01:43] Stephen Hodges: You're very welcome. Thank you.

[01:01:45] Alex Sarlin: Thanks for listening to this episode of EdTech Insiders. If you like the podcast, remember to rate it and share it with others in the EdTech community. For those who want even more EdTech Insiders, subscribe to the free EdTech Insiders newsletter on Substack.