Literacy Talks

Rethinking AI Through a Literacy Lens: A Conversation with Viv Ramakrishnan

Reading Horizons Season 8 Episode 15

In this episode of Literacy Talks, we sit down with Viv Ramakrishnan, founder of Project Read AI, to explore the intersection of artificial intelligence, pedagogy, and equity in literacy instruction. Viv shares his journey from teacher to tech innovator, the surprising limits of AI in decoding language, and how real-time, research-based tools are being designed with teachers—and students—at the center.

SHOW NOTES

Resources Mentioned in This Episode

Project Read AI – Viv Ramakrishnan’s platform offering AI-powered tools for literacy instruction. Tools include:

  • Decodable Text Generator – Generates decodable stories aligned to various phonics scope and sequences.
  • Fluency Passage Generator – Creates passages by grade level, aligned to readability research.
  • Comprehension Question Generator – Builds questions aligned to Common Core or Florida B.E.S.T. standards.
  • Decodable Games Generator – Generates literacy-based games like bingo and rolling reads.
  • Assessment Planning Portal – Streamlines data collection and lesson planning based on encoding assessments.
  • Student Tutor Tool – Provides real-time decoding and fluency practice based on individualized needs.

The Reading League Conference: Where Viv presented key insights into AI and literacy.

Spache Readability Study – Used to align fluency passages with grade-level readability.

💬 Want more insights like this?
Subscribe to the Literacy Talks Podcast Digest for episode recaps, resources, and teaching takeaways delivered straight to your inbox!

Do you teach Structured Literacy in a K–3 setting?
Sign up for a free license of Reading Horizons Discovery® LIVE and start teaching right away—no setup, no hassle. Sign-up Now.

Coming Soon: Reading Horizons Ascend™
From Pre-K readiness to advanced fluency, Ascend™ offers a consistent, needs-based reading experience across every grade, tier, and model—so every student can build mastery, one skill at a time. Learn More.

Narrator:

Welcome to Literacy Talks, the podcast for literacy leaders and champions everywhere, brought to you by Reading Horizons. Literacy Talks is the place to discover new ideas, trends, insights and practical strategies for helping all learners reach reading proficiency. Our hosts are Stacy Hurst, a professor at Southern Utah University and Chief Academic Advisor for Reading Horizons. Donell Pons, a recognized expert and advocate in literacy, dyslexia and special education, and Lindsay Kemeny, an elementary classroom teacher, author and speaker. Now let's talk literacy.

Stacy Hurst:

Welcome to this episode of Literacy Talks. I'm Stacy Hurst, and I'm joined by Donell Pons and Lindsay Kemeny, as we are every week, and we have a special guest today, and we are so excited to talk about this topic, which essentially is AI in education. We have Viv Ramakrishnan, did I say that right? I really like to say it, it's just a fun last name, and he is from Project Read AI, and many of you have probably at least heard of things he's created. So Viv, before we dive into questions and discussion, will you tell us a little bit about your background and what brought you to this point where you are in your career today?

Viv Ramakrishnan:

I'm happy to. I think there's a long version of that and there's a short version of that; I'll try to go somewhere in the middle. What I would say is, like many of you listening to this podcast, I'm sure you've arrived at the conclusion that one of the, if not the single, most important things we can do for students is make them into confident, skilled readers. I arrived at that conclusion, though, not because from the jump I knew I wanted to get deep into literacy. I arrived at it going all the way back to high school, when my high school football coach in Madison, Wisconsin, where I'm from, asked me to start a kind of inter-teammate tutoring program for students on our team who were not eligible to play on Friday nights. That got me started setting up this program where, you know, by Wednesday you needed to be passing all your classes and your teachers needed to sign off in order to play on Friday nights. I got deep into tutoring a lot of my friends and teammates at the time, and that kind of lit the spark for me around educational equity. I felt something profoundly unfair about seeing friends and teammates who were equally talented, hard-working, and curious, who just had very different outcomes in school and ultimately in life based on factors that were really outside of their control in many cases. I also played ice hockey, and I saw students from much more affluent backgrounds who, with all the out-of-school tutoring and parental support, really were able to paper over maybe some gaps in work ethic, quite frankly. Just this profound distinction between my groups of friends became very clear to me while I was running this tutoring program on the football team, and that really lit the spark for a passion in education. So from that point onwards, I got involved in advocacy in Madison, Wisconsin, which is my hometown, to the school board on a variety of initiatives. I helped start a preschool that we grew vertically into Madison's first public charter school. I did a lot of education research in college, so I worked with an economics professor whose area of study was all sorts of educational issues, from school funding to the importance of school-based mental health supports. I was a GED tutor in college, and then I ultimately moved to Memphis, where I taught high school, and I very much saw the impact of literacy gaps, even at that point. Then I moved back home to help grow that preschool I'd helped start vertically into a public charter school, and I wore many hats there, including first grade teacher. Once again, I saw the impact that early literacy gaps can have, even more palpably than I had with the 12th graders, and that ultimately led me to grad school at Stanford. I really loved working in a school. I loved building a school, truly, from the ground up. I think I was a bit burnt out, but I also wanted to try to work on and solve some of the problems at scale that I encountered day to day as a teacher. So that's the long story made short: a bunch of different experiences in and around the world of public education that ultimately led me to starting this.

Stacy Hurst:

Well, we love hearing that, and I should have prefaced this by saying that Donell and I attended your session at the Reading League conference, and you did give your background then. And I thought, what a great background to have for what you're doing now and the perspective you're bringing to technology and AI, really. But Donell, do you remember him mentioning the pre-K story?

Donell Pons:

Yeah, yeah, it was part of everything that you talked about. But it is interesting, because you have that unique perspective of building from the ground up. I don't think you can emphasize that enough. You really get to see the nuts and bolts of everything, of how it goes together, and I think that's really important for your vision for scaling, as you say.

Viv Ramakrishnan:

No, I mean, I think that everybody on this podcast has probably forgotten more about literacy than I know. And I say that because I think my unique lens is the ability to connect dots that typically don't talk to each other, right? Yeah, I have been a school chef for three months. I was the custodian from seven to midnight for two and a half months and change, right? I taught first grade; I taught 12th. So I guess, in a fairly short career, I have been lucky enough to wear a lot of different hats, and instead of trying to coordinate different moving pieces, at least I can oftentimes get up and running with a first draft of what we're building, for example, that's rooted in my experience.

Stacy Hurst:

Yeah. Well, no wonder you can relate to us and many of our listeners too. In fact, as you were talking, I was thinking, Lindsay's currently teaching first grade, I'm a former first grade teacher, and Donell taught high school, so lots of relating going on there. One insight from your session that I really took away, and I've been thinking about it since then, is your understanding of how literacy develops, not only with instruction, but what's happening in the learner's brain. It was impressive, but also then comparing it to how something like AI is created, which is what you started the session talking about: LLMs, large language models. In that moment it gave me the insight of, oh yeah, because we start with phonemes and build. So how have you addressed that? I don't know, maybe I just want to say: say more about that.

Viv Ramakrishnan:

Yeah, it's a good question, and it's one where I didn't realize there would be kind of a fundamental incompatibility between how these general AI systems are trained and how we actually think about language development. Think about it this way: when a large language model, whether it's ChatGPT or Gemini from Google or Claude from Anthropic, decides what next word to predict, it's essentially a next-word predictor based on all the context it has and what you've prompted it to do. It essentially has a barcode or a QR code associated with every single word in the English language, and in some cases associated with snippets of words. But those snippets of words don't actually track how we think about breaking down those words in terms of letter-sound relationships. I think the example I gave during the talk was that the word "bench" was one unique token in ChatGPT, and "crisp" was two: it had the initial blend, and then it had "isp." And you were like, wait, what? All of which is to say, these large language models have statistical representations of our language that are essentially like QR codes for every word, or in some cases combinations of words or parts of words, and "bench" might be represented totally differently from a related form like "benches." Those just do not map onto how we think about the relationships, phonetically, between letters and sounds.
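To make Viv's tokenization point concrete, here is a minimal sketch in Python using OpenAI's open-source tiktoken library to inspect how a subword tokenizer splits words. The choice of the cl100k_base encoding and the specific splits are assumptions for illustration; the exact tokenizer behind any given model version may differ, but the mismatch between tokens and grapheme-phoneme units holds either way.

# Minimal sketch: inspect how a subword tokenizer splits words.
# Assumption: the cl100k_base encoding is used here purely for illustration;
# exact token boundaries vary by tokenizer and model version.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for word in ["bench", "crisp", "benches"]:
    token_ids = enc.encode(word)
    pieces = [enc.decode_single_token_bytes(t).decode("utf-8", errors="replace")
              for t in token_ids]
    print(f"{word!r} -> {len(token_ids)} token(s): {pieces}")

# Whatever the exact output, the takeaway matches Viv's point: these statistical
# subword units do not correspond to onsets, rimes, or grapheme-phoneme
# relationships the way a phonics scope and sequence breaks words down.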

Stacy Hurst:

Which is really interesting, because that had never occurred to me before. I know Julie Washington has pointed out some essential issues with large language models; it depends on whose language is being input, right? It may or may not be sensitive to dialect. But then also thinking about it in the way that you just described, I think that's helped me conceive of the differences. I'm curious too, because you have such a breadth of experience in education: was there a moment or a time when you thought, you know, I think technology could help solve this problem, specifically AI? How did you come to that?

Viv Ramakrishnan:

You know, it's interesting, because I don't think during my time in the classroom there was ever really an ed tech product that I had a really deeply positive impression of or experience with. I think some people are like, oh, I love this product or I love that product. There was no one product that I could tell you I would just hang my hat on and be like, yes, without reservation, that's what I want my kids doing every day. So all of which is to say, it might be counterintuitive, but I don't think I came into it with that impression. I think, even just from being in school, the thing I've always cared about is impact, efficacy, and then the intersection with scale. I always thought that I was going to go into public policy, and education policy specifically, at some point down the line, and then, without going into much more detail, I think I really lost faith, both at the state and federal levels, in our politics and in how to effect change with that. There are many people fighting a great fight in that realm; I don't know if I have the patience for it. So I think of technology, perhaps, more as a means to doing something at scale without needing the permission structure that policy requires. I don't think they're substitutes. I just think I'm an impatient person that likes to work on things quickly, and technology was a good fit for that. I also don't think I was a techno-optimist looking for somewhere to apply my solution. It was more of a realization, at the point I was in grad school, that AI and these large language models will change education one way or another, and I think it behooves us to have more people who have been in the classroom, and not just in any classroom but Title I classrooms, for example, and who care about equity, to have a voice and role in shaping what that looks like. And within that, I think the most important problem we can probably solve, amongst many, is early literacy.

Stacy Hurst:

Well, it certainly is a good, powerful angle to take, honestly, because of your mention of scale and efficacy; now we can gather data from how these tools are used and improve on them. Tell us a little bit about Project Read AI, how that came about, and what's available for teachers on your platform.

Viv Ramakrishnan:

Yeah, absolutely. So what we have on Project Read AI's site right now is kind of a suite of tools, most of which are for teachers and one of which is for students. The way this started, actually, was with the question: could we build an AI-powered phonics tutor to give real-time feedback and support to students, with the sort of time that I personally did not have to do that individually for every student, in a way that was helpful for them, and at super low cost, where it didn't require a school to pay thousands of dollars for a virtual tutoring seat, or parents to pay hundreds of dollars a week out of pocket? So that was kind of the vision to start with, and that's what we spent that whole first six months working on. In parallel, I should mention, I was working on whether we could build decodable text for this while in grad school, and I was hitting a wall and getting really frustrated. So I was like, okay, I guess we're gonna have to hard code the text that we're writing into this; we'll have to use AI for, like, speech recognition, but certainly not for feedback. But as we worked on and released that, within like a week I kept getting feedback from teachers, often on Facebook, being like, I can tell you have decodable text under the hood. And by this point the models had gotten better than when we'd started; they were probably in the high 80 percent range on decodability. So still not where I would want, but better than where I was almost a year prior in grad school. They're like, ChatGPT can't do this; can you build us a tool for building decodable texts? And I kept hearing that, so I kind of hit pause on the stuff we were doing on the student feedback tool, put out a survey into the Facebook ether, and within one day there were like 400 responses from teachers. I'm trying to remember the exact stats, but it was something like 55% were paying extra money out of pocket for additional decodable texts, and a third were spending time outside of school hours writing their own decodable texts. Obviously this is a subset of people that are on the Facebook groups and that filled out the survey, so maybe that's not a representative sample, but a ton of people basically said, this is a huge problem for me, and I would love to have more varied decodable texts that are actually decodable for my students. So we kind of hit pause, spun up that decodable generator, and it just spread like wildfire, and we continued to get it better and better over time, given the limitations of large language models. We put a ton of time into that, and there are still probably five things I wish we could do, but, you know, there's a finite amount of resources we have to build everything. So yeah, we just kept improving it. But I guess what I would say is that the long game for me was always, can we build this student-facing tool that would be helpful, that they could use as a vitamin every day, and that would give all this data to teachers who didn't have time to collect it themselves? But we hit pause in response to hearing from teachers that there was a component of what we were building that would be super helpful for them, like, today, and that's what led to that decodable generator that went viral.
I mean, when I taught, I had one text for each lesson in my curriculum, and I think that's the experience most people have, and then they're supplementing in different ways. So building that was something that people really enjoyed.

Donell Pons:

Yeah, Viv, you had something interesting too. When you were at the Reading League conference, you gave us an overview of AI, kind of defining things that we refer to in the field even if we maybe don't have a background, like what a large language model is, for example. Can you give us just a little rundown of that, a base understanding? Because I don't think I'm that unusual as an educator: I've heard a lot of terminology, I think I know what I'm referring to, but maybe I'm not sure, and you made it very clear that understanding some of these nuances could be important.

Viv Ramakrishnan:

Yeah, totally. It's easier to do with a diagram, but I'm going to try to do it in words. So I'm going to peel back the onion layer by layer, and the highest possible level is AI, artificial intelligence. That really is any computer system where you feed it an input and there's some sort of output, and it's meant to mimic certain things that humans might do. AI systems, under this broader, classical definition, have been around for a long time. Think about when you apply for a credit card: you're inputting all of this information, and they're using an AI model, or a machine learning model as they might call it, to try to assess your credit risk and decide whether to offer you a credit card and on what terms. There's not a human at almost any institution doing that anymore. These are algorithmic models, and they've been around for the last 10 to 15 years at least. So artificial intelligence has been around a lot longer than the last couple of years. Now, the subset of artificial intelligence, generative AI, is what's really had its Cambrian explosion in the last couple of years. Generative AI models are models that create, from seemingly scratch, text, visual output like image generators, or audio output. During the presentation I cloned my own voice with one of these things and kept having a fake Viv say stuff. So there are different multimodal versions of these models, but underneath those generative AI models falls what we call a large language model. Large language models are the text generation models that you would see when you interact with ChatGPT. Some of those have since added other modalities, voice mode, images, et cetera, but they started with just text, and that's still kind of their base layer. Those large language models are where you provide an input and the output is text that has been generated from scratch. It's not like a decision tree where, in the past, you input A and then options B, C, and D might come out. No, this is truly a probability distribution over all sorts of things the model might say, and what comes out is not necessarily the same exact thing based on the same input. There's not a one-to-one relationship between the input and the output; it truly is generative. Those large language models, or generative AI, are oftentimes what we think about when we talk about AI nowadays. Now, when people talk about ChatGPT, that is one example of an application. It's like, when we talk about television, Netflix is one application, or YouTube TV is one application. Those are ways for people like us to interact with those models, but they're not the technology themselves. So the technology itself is artificial intelligence, generative AI, large language models, and then applications like ChatGPT, or any of the ones in the education world that we're discussing, including our own, are applications that we build on top of that technology, or in combination with that technology.
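To illustrate the "probability distribution, not decision tree" idea Viv describes, here is a toy sketch in Python. It is not any real model: the prompt and the candidate words and their probabilities are invented for illustration, but it shows why the same input does not have to produce the same output.

# Toy illustration of generative next-word prediction (not a real model):
# the model assigns a probability to each candidate next token and samples
# from that distribution, so the same input can yield different outputs.
import random

# Hypothetical distribution a model might assign after "The students read the ..."
next_token_probs = {
    "book": 0.45,
    "story": 0.30,
    "passage": 0.15,
    "menu": 0.10,
}

def sample_next_token(probs: dict[str, float]) -> str:
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The students read the"
for run in range(3):
    # Each run may print a different continuation, unlike a fixed decision tree.
    print(f"Run {run + 1}: {prompt} {sample_next_token(next_token_probs)}")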

Donell Pons:

Yeah, so, Viv, that helps us understand why an educator might have difficulty thinking, oh, I'll just jump on ChatGPT, for example, and write a decodable text, and then they end up with something they can't use. Yeah, that's interesting.

Viv Ramakrishnan:

Totally. And that was also part of my focus, or my thesis, of the presentation: the lower strands of the rope are far less amenable to out-of-the-box implementations of ChatGPT than the upper strands of the rope. I mean, those are really open-ended. Think about a student where you're teaching them the 100 most common grapheme-phoneme correspondences, not because you want to teach them every single grapheme-phoneme correspondence in the English language, but because you want to give the child enough escape velocity to encounter uncontrolled text out in the wild, right? That's the whole goal. Those models are really good once you're at escape velocity. They can write all sorts of stuff; they have the internet they're trained on; they can get really creative. You can ask one to write limericks, poems, jokes, et cetera, and you're like, oh dang, that's pretty good. But the way they think about language, their building blocks of language, those tokens, are different from how we would think about teaching phonics systematically and explicitly.

Stacy Hurst:

Which is so interesting, because obviously those lower strands are so important to the outcome of reading comprehension. Typically, I think there's been a lot of focus there, but understanding that limitation, I think, is important. And it does seem like teachers, we're used to learning about technology and utilizing it, right? Consuming it. But what I appreciated about what I saw on Project Read AI, on your website, and I did spend some time there, is that you can tell it was created with pedagogy in mind. You can't assume that, right, with something like ChatGPT. As a teacher, I always thought the ideal would be to see how teachers would use it, keeping the teacher the central thing, and then creating something beneficial that matches their pedagogy, and not the other way around, with us just consuming it and trying to make it fit into what we're doing. What can you say about the relationship between that pedagogy and what's available for teachers to utilize and inform?

Viv Ramakrishnan:

Yeah, good questions. I think that we have tried to align as much to the established evidence around how to teach the lower strands of the rope as we can, and have tried to use technology as a means to potentially extend some of those things, but not reinvent the wheel. That's the dance you do when trying to build novel technology that still has a research base, logic, and existing evidence behind what you are building, right? So that's number one. The other thing is, this is where people always talk about putting teachers at the center, building with teachers. That's totally right; I think that's table stakes. But there's an analogy I want to draw. When you go on one of the Facebook groups, like Science of Reading: What I Should Have Learned in College, what's hard for your average teacher to do is disentangle what seemingly equally qualified and pedigreed researchers say when it's in conflict. You're like, I'm not a statistician, I haven't kept up with, oh, you know, Kilpatrick got invalidated. It's hard to keep up with. How do I disentangle what is really high-quality, well-validated research from something where our understanding has evolved over time? Similarly, I think for a lot of technologists that are building with teachers, one of the things I think I have the advantage of is not that I have the expertise that you all necessarily do, but I have enough to know when I'm speaking to a teacher who is using methods that don't actually align to the science, or when I'm speaking with a teacher who is implementing UFLI, who's our partner in a lot of awesome, close ways, but going completely off script relative to how the authors of the program intended for it to be implemented, right? So it's not just a question of putting teachers front and center, but being able to disentangle real signal around what is great practice from where maybe somebody's still on their journey. And I think for somebody who doesn't have an education background, or an elementary education background, that would be really hard to do, the same way that you see teachers struggle to disentangle that at, you know, kind of the research level.

Stacy Hurst:

Yeah, yeah, that's great. So aligning it with pedagogy, evidence-based pedagogy, not necessarily any teacher in the wild. Teacher knowledge matters here. Yeah.

Viv Ramakrishnan:

So much of what's happened in the last several years is people shifting their practice. Not everybody; some people, you know, already understood, or are LETRS trained, or what have you. But I think it's about trying to make sure that not all feedback is automatically incorporated and built around, and that you are, in some cases, telling teachers, hey, I know that's what you are asking for, but actually that's a visual matching activity, that's not actually a decoding activity, so we're not going to do that, right? And explaining that you're making certain choices. Okay?

Lindsay Kemeny:

Teachers, if you're like me, you want things that actually help in the classroom, right? This is one of those things. Reading Horizons Discovery is offering teachers free access for the whole year. It gives you simple, structured literacy tools, saves you time, and you don't even have to set up student accounts. Head over to readinghorizons.com/free and grab your license. You're going to be so glad you did. So I'm curious, Viv. I'm on your website, and I can see that there's the fluency passage generator, which we've talked about a little bit. You can choose, for example, the UFLI scope and sequence and the lesson, and it's going to generate a story. Do you want to share what other tools you have? Because I see you have some other things for teachers on the website. Do you want to just describe those?

Viv Ramakrishnan:

Three buckets, Lindsay, and they're three distinct buckets or products, I would say. One is that decodable stories generator, where you choose your scope and sequence, if we have that alignment. With the paid version you can insert custom words; without it you just have standard stories, but with some weekly limit. That is the tool that kind of went viral amongst teachers, you know, two years ago, give or take, and that we've improved a lot over time; it's now 98% decodable on average, versus where we started in the high 80s two years ago. We have additional ancillary teacher tools as well that are kind of those quick generators. One is a grade-level fluency passage generator, because, you know, you have DIBELS, for example, but you don't have as many passages as you want, and you can really customize those. The text is a bit more uncontrolled, but we do align them to the Spache readability study that the University of Oregon did a couple of years ago. So stuff like that. We have the comprehension question generator, where you can choose the Common Core standards for any text that you input and get resulting comprehension questions, either multiple choice or open-ended, based on informational text or literature, that include the standards you chose. We just recently added the Florida B.E.S.T. comprehension standards, because we had a lot of teachers and school districts in Florida using the platform that wanted those to be B.E.S.T.-aligned. We have a decodable games generator, where if a student needs word-level decoding practice, then you have bingo, you have rolling reads, all those good things that you all know and are familiar with. So that's kind of our category of decodable generator and ancillary teacher tools. Number two, and this is a new product this year, is the UFLI Planning and Assessment Portal, or rather, Assessment Planning Portal; we just call it the UFLI portal. This has been in the works for about a year and a half. It was conceived of, prototyped, and worked on with the UFLI team internally, then piloted with several hundred teachers last year, and this year we've released it and are continuing to iterate on it. I always describe it as a work in progress, but that's where we worked with the team at UFLI so that you turn your weekly encoding progress monitoring data into your small groups and lesson plans, for each small group for each day of the following week, according to the UFLI Tier 1 program. That is really intended to solve two challenges.
I don't know if it's just two, but one being that this has been probably the part of UFLI, which is notoriously a teacher-friendly program, that teachers have struggled most to implement, because it requires them to do this end-of-week progress monitoring assessment, then go down to the grapheme-phoneme correspondences and circle which students got which concepts correct, group them in a way that aligns to the PD that's been provided on the subject, and then match each of those small groups, during the 15 minutes next week, which are actually split into two small groups of seven to eight minutes each based on the concept, with lesson plans that target the additional needs of the students identified in progress monitoring. To do that in earnest every week probably takes 90 minutes, at least 60, and that was just such a pain point for teachers. What it meant is that a lot of teachers weren't, and probably are still not, administering progress monitoring and designing responsive small-group support as they, or the authors, would want that to happen. So Dr. Lane and Dr. Contesse reached out to us, because we'd asked for their blessing on the decodable generator alignment, which I think they appreciated, and that kind of built the relationship. I flew down to Gainesville, and we whiteboarded out the problem and what a dream solution in theory would look like. Then, like two days later, I had a prototype that I built: is this kind of what you have in mind? And they were like, yes. It was still in the world of code, it wasn't a friendly user interface or anything, but we kept iterating on it, and it really brings that process down to about 10 minutes, all in, which is just inputting your students' data, either by typing in only the misspelled words or by scanning it in and then editing what the AI might have mistranscribed. Then it turns it into your groups for each of the five days, across both concepts, along with the exact activities from the manual that the UFLI team has curated for each of those lessons on each day. So that has been awesome. And then the third tool, which again was put on pause for a long time and which we've kept working on as time and resources allow, is the student-facing tool, the tutor as we call it, where students, now aligned to the UFLI scope and sequence, either in word mode or in story mode, can respectively practice decoding and focus on accuracy at the word level, or can read sentence by sentence and work on the early stages of fluency. We don't yet have full paragraphs or stories on screen, but that is where, you know, I've thought, yeah, this could really change and move the needle directly for students, and where we've seen really positive early data. We have our first school-wide pilot from last year that Matt Burns is going to be publishing at some point soon; I don't know the exact details on when. And then we have a lot more research in the hopper on that, and I think there's a ton of possibilities for how these tools can talk to each other in the future that we haven't even yet built.
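As a side note on the "98% decodable on average" figure mentioned above, here is a deliberately simplified sketch of how a decodability percentage might be computed: count the words in a passage that are either fully spellable from the grapheme-phoneme correspondences taught so far or on the list of taught irregular words. The word lists, graphemes, and greedy matching rule are hypothetical and far simpler than what Project Read AI or any real tool would need.

# Simplified, hypothetical sketch of computing a decodability percentage:
# a word counts as decodable if it can be fully segmented into taught
# graphemes, or if it is a taught irregular ("heart") word.
# Real tools need much richer grapheme-phoneme logic than this.
import re

taught_graphemes = {"b", "c", "d", "f", "h", "i", "m", "n", "o", "p", "s", "t", "a", "w", "th", "sh"}
taught_irregular_words = {"the", "a", "is", "of"}

def is_decodable(word: str) -> bool:
    if word in taught_irregular_words:
        return True
    # Greedy match: prefer longer graphemes (digraphs) before single letters.
    graphemes = sorted(taught_graphemes, key=len, reverse=True)
    i = 0
    while i < len(word):
        match = next((g for g in graphemes if word.startswith(g, i)), None)
        if match is None:
            return False
        i += len(match)
    return True

def decodability(passage: str) -> float:
    words = re.findall(r"[a-z]+", passage.lower())
    decodable = sum(is_decodable(w) for w in words)
    return 100 * decodable / len(words) if words else 0.0

print(f"{decodability('Sam sat on the mat with a cat.'):.0f}% decodable")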

Lindsay Kemeny:

So you have the passage generator aligned with UFLI. Have you done other scope and sequences with other programs?

Viv Ramakrishnan:

Absolutely. So we have a couple of others in there. We have CKLA, we have MC Orton-Gillingham, and we have the one that yours truly made before anybody gave me permission to align, which I'm happy to retire as soon as, you know... but it's funny, because we retired it for like a day, and then teachers were like, no, where'd it go? I want that. But before we had permission from anybody, I just built my own. And then we have a couple of others as well. We've also done district-specific ones. Most districts don't have their own in-house teams building curricula from scratch, certainly not foundational skills scope and sequences, but we do have one, a very large district, Orange County, Florida, where they do use the decodable generator and they have a version that is built for their own scope and sequence. Okay?

Lindsay Kemeny:

And a follow-up question, sorry, I'm getting specific. So when you choose, okay, this is the UFLI scope and sequence, do you also include their high-frequency words up to that point as well?

Viv Ramakrishnan:

Yes, yeah, it has awareness of that. And when you choose, like, lesson 84, the long "ai" or "ay" vowel teams, it's going to give you a higher density of those words too, as well as, hopefully, everything up until that point that's either decodable based on the grapheme-phoneme correspondences that have been introduced, or among the irregular, or temporarily irregular, words they've introduced up until that point. Okay?

Donell Pons:

And Viv, how labor-intensive is it to make a bespoke one? Like, say there's a district and they're using this; what does that represent, to be able to provide that?

Viv Ramakrishnan:

It's a good question. To do it well takes a lot of time; to do a first draft, I've gotten pretty good at it. I also will say that I feel like I've become an expert in the nuances of different programs' scope and sequences. So one of the things we consider is whether programs teach blends as units or whether they teach blending as a process; we have allowances for that in how we've built the program. Also, Lindsay just asked some great questions about the scope and sequence and how we introduce things. There are some programs where I feel like their scope and sequences are research-driven and well considered, but they were not designed with how well you could build decodable texts around them in mind, because they probably only have to hand-make one per lesson. And so there are some programs that don't introduce really important glue high-frequency words for a long time, and I'm like, we can't even get half-decent decodables until lesson 15, right? Whereas with other programs, okay, by like lesson seven you can start to generate some short decodables that make sense. So some of it is science, but there's also a lot of QA and testing and iteration that goes into it from our end before we say, hey, that's ready to go.

Stacy Hurst:

Yeah, which makes sense. And I did attend a webinar where you explained the UFLI progress monitoring assessment; it's essentially a feature analysis. I loved that, and you did make it very user-friendly for teachers. They can either just hold up the test or, like you said, type in the words that the students spelled incorrectly, and get lots of really targeted information that I think will move the dial for students, because it is targeted based on what they need. I'm wondering, because I teach pre-service teachers now, so I'm at the university level, and when I saw that, I thought, how cool would it be if it could take it one step further and say, based on your students' spelling over time, this is the phase of Ehri's phases that we think your student might be in, and what to do, generally, to move them forward, maybe a little bit beyond some of those foundational skills. But that was really impressive.

Viv Ramakrishnan:

It took a lot of time, and we still learn more every day. I mean, the infinite universe of misspellings is massive, and there's some level of judgment around, do I give them the point for the new concept, yes or no? And UFLI has never been that explicit about it. Like, yes, assess whether the concept was encoded correctly, but what that means to one teacher might mean something different to another, and how you encode rules to algorithmically score that is not always straightforward. So it has been a journey. It's hard to build, but I think it's been really well received. And one of the things we're actually working on right now, which relates a little bit to what you just mentioned, is cumulative data over time, where you can set the time window, where you can search not just by an individual student but pick out the five students that you work with in your Tier 2 setting, here are the concepts you want to go in on, specifically irregular words, overall word accuracy. We're building that right now, and I think that's going to be another huge step forward. And then my question, and what I think we hope to do ultimately, is make the tools talk to each other. So if you want to generate a decodable and assign it to a student to read with the tutor, that's great. If you want to have the tutor automatically build the next list of words for students to work on based on their most recent encoding data and the trends you've seen over the last couple of weeks, we could do that too, because we have this grapheme-phoneme-aware system, right, by student.
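To give a flavor of the kind of rule one has to encode to score spellings algorithmically, here is a toy sketch. The target words, concept graphemes, and the single substring check are hypothetical illustrations only; as Viv notes, real scoring of misspellings involves far more judgment than this.

# Toy, hypothetical sketch of scoring an encoding (spelling) item:
# did the student's spelling represent the grapheme targeted by the concept,
# even if the rest of the word is misspelled? Real rules need much more nuance.
from dataclasses import dataclass

@dataclass
class Item:
    target_word: str
    concept_grapheme: str  # the grapheme the lesson's concept targets, e.g. "ai"

def score_item(item: Item, student_spelling: str) -> dict:
    spelling = student_spelling.strip().lower()
    return {
        "word": item.target_word,
        "spelled_correctly": spelling == item.target_word,
        # Credit for the concept even when another part of the word is wrong.
        "concept_correct": item.concept_grapheme in spelling,
    }

items = [Item("rain", "ai"), Item("paint", "ai"), Item("tray", "ay")]
student_responses = {"rain": "rane", "paint": "paint", "tray": "trai"}

for item in items:
    print(score_item(item, student_responses[item.target_word]))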

Stacy Hurst:

Yeah, I love that, and I am someone who literally, as a classroom teacher, was writing my own decodable text, and then I did write decodable text also for Reading Horizons, and I know how complex that can be. One of my other questions, and I'm just asking because I don't remember from Project Read: can you adjust the percentage of decodability? Is that an option?

Viv Ramakrishnan:

We don't allow for that. And in fact, some of it's just time constraints to allow for all of that. The other piece is that I actually think we have pretty good benchmarks around how decodable a text should be, and I'd rather just try to hold ourselves to meeting that bar more often than not than allow somebody to say, oh, 80% is acceptable. So that's the approach we've taken, especially as we've gotten better and better over time. But that does mean there are some constraints. For example, in our grade-level fluency passage generator you can type in the plot, almost like a story about Stacy, Donell, Lindsay, and Viv who go on a podcast together, and we can make that a fourth grade text, or whatever. For our decodable text generator, we constrain it a bit more. You can insert custom words, whether that's irregular words, topics, or names, but we don't allow any rich plot possible, in part because we want to preserve that decodability.

Stacy Hurst:

Which makes sense. And then you have a tool for students who may be more ready for less decodable text. Yeah, that's great. That's exciting. So, related, and I'm going to be a little bit selfish right here because of my particular situation: my students are using AI, and at the university level we have had so many conversations about how to inform that and put guardrails around it, because I think we all recognize we could outsource thinking if we're not careful, right? And then they won't develop the mindset they need to be able to apply that pedagogy or even recognize it. I do see this with my students, and maybe others who use AI: they don't know what they don't know, so they'll put in a prompt and then not review the output. AI has been wrong many times, but they don't recognize it, because they are relying on it to do their thinking. But my general question is, what would you want pre-service teachers to know? If you were in front of my students, or any other college student who wants to be a teacher, who wants to be a good teacher of reading, what would you say, generally, about using AI or technology?

Viv Ramakrishnan:

I don't think you can avoid it, and you'll be left behind if you do try to avoid it, in the workforce that's to come. If you're 20 and you're thinking about the next 40 to 50 years of working life and you just want to avoid AI altogether, I don't think that's in your best interests. Now, you can either choose to use it as a crutch for doing all your work, or you can choose to use it as a sparring partner. I have found, and this is not a unique insight, that it's just very obsequious; it'll tell you you're right and you're brilliant no matter what. And then I'll be like, wait, but didn't I ignore XYZ that would completely undermine the thing? And it's like, brilliant, exactly. It's just like somebody that's always your biggest cheerleader but is not actually telling you what you need to hear sometimes, or is incapable of that. So I do think that figuring out ways to use it more as a sparring partner or a coach is really important, because you'll ultimately undermine yourself if you can't distinguish when it's hallucinating from when it's not, or if your own writing skills are underdeveloped because you're outsourcing them to AI. All of which is to say, don't let it let you off the hook, but also don't bury your head in the sand. At a high level, that's what I would say to students. I mean, there's the calculator analogy, or Excel: people were doing math long before either of those things came around, and the people that I think are best at using both of those are still really good at the underlying math or finance, right? So it's one of those things where it is so easy to let it do your work for you and pass your classes, so you can go out and do whatever else you want to do outside of work while you're in college, but that'll really come at your long-term detriment if you don't hold yourself to a high bar for learning the material.

Stacy Hurst:

That is good advice. Thank you. I will be replaying that snippet to my students sometime in the future. We've had, yeah, it's been a journey there. I did attend a research session at SSSR, and they were talking about a study, I think it was in China, where they had created robots for a first grade classroom that were powered by AI, and the intent was for them to be another teacher, or some source of learning in the room. But the students were smart enough to recognize when the AI robot made mistakes, and so they ended up not trusting it as a source for instruction, but more like a classroom buddy. So I think we've got some ways to go.

Viv Ramakrishnan:

I'm kind of thinking in real time here, because this is something that literally last night I started thinking about and journaling about, and I haven't really put it together into cohesive thoughts. I do think there's almost this funny analogy to be drawn with statistical learning, with learning phonics in the English language in particular, where you are learning certain things that are predictable and true in most cases, but where it's really important for you to understand exceptions and apply judgment in cases where those rules don't hold, right? There's almost an analogy between a young child learning the rules-based correspondences between graphemes and phonemes that typically hold, but not always, and applying judgment in a realm of human knowledge, like being a doctor, where you see a certain set of symptoms and the distribution might tell you it's one of these three things, but it could be something that's not in the meat of the distribution. Similarly with learning with AI and large language models: yeah, they might tell you things that are right more often than not, 80% of the time, but 80% is not good enough. You need to have an understanding of the exceptions to the rules to actually learn a given domain. And yeah, I don't know. I think the rules-based nature of the English language, combined with the fact that it is also uncertain, that it is also irregular, that it does have these exceptions, is almost a beautiful analogy for what it means to learn anything right now.

Stacy Hurst:

Yeah, we've been talking around here about having that set for variability, so you can try different things until you get the right concept or word. Donell, you look deep in thought.

Donell Pons:

Yeah, so Viv, you got me thinking, and maybe I'm going to take you back to your days in Wisconsin, because you were working with those older students, and obviously that's who I work with. So selfishly, I'm going to ask a question around my area. The folks I work with are folks who definitely were left behind. The opportunities did not come; they weren't given what they needed, and so they exited the formal schooling system without the reading skills, and the writing skills that obviously follow. And now here they are in adulthood, and by some luck or fortune they find me, and they actually realize they can still learn, because a lot of folks don't even know that much. But it's difficult, because now you have a full-time job, maybe you've got a family, you have so many commitments, and so it isn't the same, where you get to spend so many hours a day doing this thing. So having some means, like a software tool you can interact with, so you can do it maybe when all the kids are in bed and it's nine o'clock at night and I'm going to steal 45 minutes to get in some practice and some instruction. So I'm going back to that and wondering: do you ever toy with that area, thinking about what AI or something like it could help with there? Do you ever think about that still?

Viv Ramakrishnan:

Okay, so to answer that question, I'm going to distinguish between the sort of AI practice for students that we are building in the realm of phonics versus these much more open-ended AI tutors that older students or even adults might interact with. In our space, we don't actually do any AI-generated feedback on the fly. One, because we're dealing with really young kids, and we just want to take an abundance of caution in terms of always having a sense of what they're seeing or the feedback they're getting, both for pedagogical and other reasons. Two, because the surface area of what we're trying to solve for is very constrained: can we get you to the decoding threshold, can we improve your fluency? That's an extremely narrow slice of content, as opposed to some sort of AI tutor platform that's trying to be an adult's way to learn anything. So on one hand, there's a whole range of problems, from cheating to content verification, that I don't have to think about because we've constrained the problem we're working on, and therefore the solution. But zooming out from that to an area where I've not thought as much: I think there's immense potential for people to teach themselves things they would not otherwise have known how to do or had the money to pay for courses on. So when people ask me about the future of teaching and whether it'll change in American schools, for example, I don't know; my gut tells me it's not going to change as much or as fast as many people think it might. But I think the future of learning will really change. I think the future of learning means that once you are able to interact with a chatbot, once you can sufficiently read and write, if not use voice, the world is really your oyster. You know, my mom is trying to teach herself Spanish now. She's an almost 65-year-old Indian woman who doesn't know Spanish, and she's just talking to ChatGPT. She was doing Duolingo for a long time, and it was fun; she had a bunch of streaks, she was in the 99th percentile of her leaderboard, but she couldn't speak Spanish worth a lick. And now she's talking to ChatGPT and realizing her gaps and saying, can you say that again? It's this totally different experience, because it's an on-the-fly, open-ended AI that's able to talk to her about gardening and vegetables and all the things that she likes, in Spanish, right? So I think the open-ended nature is a source of risk and concern and a need for guardrails in formalized educational environments. But for adults, I'm like, this is beautiful. Knowledge is truly at your fingertips, and it's not constrained, and it's cheaper than it's ever been.

Donell Pons:

and the engagement, right?

Viv Ramakrishnan:

And I think the other thing we've seen, actually, with older students with our tutor, is the space to fail, for lack of a better term. We have a lot of students that are willing to read out loud and get that real-time feedback even more willingly than with a trusted adult, because they don't want to let the adult down. Or, if it's an adult they don't trust as much, maybe they don't want to be judged, or if it's classmates, they don't want to be judged. But even for those adults they trust, they don't want to let them down. And with the tutor, it's almost like this veil of, you know, just space to try. And I think for adults as well, if you have gaps in your knowledge, especially foundational knowledge, maybe there's something that's liberating about being able to talk to a computer system.

Stacy Hurst:

And the feedback question is a good one, I think, and I'll be looking forward to seeing how that progresses and improves, generally, just how good the feedback gets. I want to ask a question about equity. How do we keep these resources equitable and accessible to everyone? And I know you mentioned that you may not have the patience for advocacy, which I can relate to, but Donell is a master advocate; she is so good at these kinds of things. So even if it involves some form of legislation or advocacy, how would you, in a perfect world, see access and equity with these tools?

Viv Ramakrishnan:

That's a great question. You know, I think the way we have tried to build Project Read AI is to be as accessible from the bottom up as possible. We've all been in education for a variety of reasons. The way it typically works is you go knock on schools' or districts' doors and try to get them, top down, to do a small pilot, then go larger. To some degree that's needed, but I also know that there are so many teachers who wish they could just access what you built. So we really tried to build something that teachers would share organically, and to keep our head count really low relative to the number of schools and teachers we reach. My theory of change, and I could go on all about this, is that I'm super bought into making sure everything's impactful, equitable, et cetera, in ways that we might all agree upon. But maybe my different point of view on this is that you cannot divorce impact from pricing. You just can't. And I think there are a lot of ed tech tools that are just going to be outside the realm of a teacher's budget, and I get it; sometimes you want a ton of alignment, implementation, and coordination top down, and there are good reasons for that. But there's also something that strikes me as frustrating when there's not a robust free version for a teacher, or where the paid version is outside of their price range. So my theory of change on this, a little bit, has been: how can we embrace AI internally? I'll just tell you, the pricing that we do, how generous our platform is, how many scholarships we do every year, thousands; we couldn't do it unless we fully embraced AI internally in our organization to keep our costs as low as possible. So I would say my view of this is shaped by economic pragmatism as well. I would love for every state legislature to fund the most equitable, impactful tools. Great. I'd love every district to arrive at the most evidence-based, high-ROI purchasing decisions they can. That's not the reality. That's not how I think things will ever fully go, or at least not on my timeline. And so, in part, that's maybe why the UFLI partnership was also really natural: we both thought very similarly about building super organic, bottom-up tools with extremely low friction for teachers who want to use them. We've thought a ton about how we keep the cost of the portal, for example, super low: $3.50 to $4 a teacher, I mean, per student, for the whole year. And how do we allow a teacher, on the individual paid version, to choose how long they want access for: for nine months, you'll pay 75% of the cost, right? So my soapbox on accessibility and affordability is maybe less about things that are outside of my locus of control and more about what is within my locus of control, and that's our unit economics, how we build, how we price. We almost need quasi-viral adoption and sharing among teachers for any of this to work.

Stacy Hurst:

I love that, because oftentimes when we talk about things like equity and accessibility, we're focusing on students, but teachers are often very limited. They're not the ones who sign the purchase orders; districts make those purchases, yeah, and teachers don't have access. In fact, there was a very good tool I became familiar with, which clearly I will not name. I loved it early on, and it was free or very low priced for an individual teacher to purchase. And then they changed their pricing model so only schools and districts can purchase it, and individual teachers no longer have access to it.

Viv Ramakrishnan:

And I'll tell you, we've been on this journey of triangulating the right model. I remember when my original plan was to just have a really generous free teacher version and no paid teacher version, just schools and districts. And then I got angry emails from teachers when I put out a survey on pricing and what was affordable, what seemed reasonable, and they were like, no, give us a paid version. My principal, no matter what, will not buy it. It's not gonna happen. Like, all-caps emails, just screaming. So, okay, cool, paid version. Then we do the paid version, and then we have teachers get one license and share it with like 30 people, right? So there's this constant need to triangulate. I don't think it's straightforward. What I will say, though, is that if you're able to build a flywheel where teachers share things at scale, then you can price much more reasonably. I've seen some awesome tools that are anywhere from $50 a kid to 2,000 bucks a classroom, and I'm just like, they might even have depth, but they will not have scale of impact, and that scale piece is something that's really important to me.

Stacy Hurst:

Yeah, great. This has been a fantastic conversation; it met all my expectations. Thank you. It's really an interesting topic, and I hope we can have you on again when we learn more about this AI landscape that we're in. Congratulations on the success you've had, and we're looking forward to seeing what else you can do to impact the world of education, and literacy specifically.

Viv Ramakrishnan:

Thank you for having me. I appreciate you all listening to my tangential ramblings. These are mostly things that I lie awake thinking about late at night, and then I'm like, I gotta put this as a note in my iPhone, and then I get up and I'm like, no, I just gotta work on it. Thanks for the insightful questions and follow-ups. It's been fun.

Stacy Hurst:

Well, that is the kind of conversation we love on this podcast. So thank you again for joining us, and thanks to our listeners for joining us. We will link all kinds of resources to your website, Viv, so that they can have access and see what you have to offer. So thank you, and thanks to everyone for joining us for this episode of Literacy Talks. We hope you'll join us for the next episode of Literacy Talks.

Narrator:

Thanks for joining us today. Literacy Talks comes to you from Reading Horizons, where literacy momentum begins. Visit readinghorizons.com/literacytalks to access episodes and resources to support your journey in the science of reading.