Melissa & Lori Love Literacy®
Melissa & Lori Love Literacy® is a podcast for teachers. The hosts are your classroom-next-door teacher friends turned podcasters learning with you. Episodes feature top literacy experts and teachers who are putting the science of reading into practice. Melissa & Lori bridge the gap between the latest research and your day-to-day teaching.
Making Sense of Reading Assessments with Michelle Hosp
Episode 234
Michelle Hosp joins us to break down the different types of literacy assessments within an MTSS framework in the most approachable way.
We talk:
- universal screeners
- diagnostics
- progress monitoring
- formative assessments
Most importantly, we talk about when and why to use each one. Michelle helps us shift the question from “Which test should I give?” to “What do I need to know to help my students grow?” We also dig into the power of curriculum-based measures (CBM), what makes assessment data meaningful, and how schools can align their resources to actually make a difference.
If you're feeling overwhelmed by data or unsure how to use it effectively, this episode will help you think more clearly about assessments and walk away empowered to use your data to help all your students become readers.
Resources
- The ABCs of CBM: A Practical Guide to Curriculum-Based Measurement
- National Center on Intensive Intervention
- Data Teaming Tools
We answer your questions about teaching reading in The Literacy 50: A Q&A Handbook for Teachers: Real-World Answers to Questions About Reading That Keep You Up at Night.
Grab free resources and episode alerts! Sign up for our email list at literacypodcast.com.
Join our community on Facebook, and follow us on Instagram, Facebook, & Twitter.
We know assessments can feel overwhelming. There are screeners, diagnostics, progress monitoring. What is the difference and when should we use each one?
Melissa:In this episode, we're joined by Michelle Hosp, author of The ABCs of CBM, who helps us clear up the confusion and make sense of how the right assessments can drive strong literacy instruction.
Lori:Hi teacher friends. I'm Lori and I'm Melissa. We are two educators who want the best for all kids, and we know you do too.
Melissa:We worked together in Baltimore when the district adopted a new literacy curriculum.
Lori:We realized there was so much more to learn about how to teach reading and writing.
Melissa:Lori and I can't wait to keep learning with you today.
Lori:Hi, Michelle, welcome to the podcast. We're so glad you're here.
Michelle Hosp:Oh, it's so great to be here, Lori and Melissa. Thanks for having me.
Melissa:Yeah, and we cannot wait for this episode to honestly help us make sense of the assessments that are out there, because there are just so many different types of assessments, just for literacy. And if you're thinking about, you know, an MTSS framework, or just how to support all your students who are learning to read, assessments are at the core. You have to know what they know, right, to know what to do. So, to sort through all those different types of literacy assessments, we can't wait for you to talk about the differences, the purposes of each, what they should be used for, what they shouldn't be used for. We are so grateful that you're here to talk about all of this today.
Michelle Hosp:Oh, let's do it. I'm excited too. Excellent.
Melissa:But before we dig into all those things I just said, we wanted to zoom out for a second, because when we talked to you on the pre-call, you shared with us that you like to actually start with helpful questions. So I would say, listeners, get your pen and paper out for these questions. Can you tell us what these big questions are? Like, within an MTSS system, what are the big questions that would drive assessments?
Michelle Hosp:Yeah, great question, Melissa. So I do like to pull it out, and I'm going to repeat these questions as we continue to talk, right? So if you miss it the first time, you'll catch it the second time. So when I think of MTSS and I think of the questions, the first thing I go to is, it's all about resources, right? Like, it literally is about what assessments schools have, what curriculum and intervention materials, how they're supporting teacher knowledge, right? All of those things are the resources that they have, and schools should really know what those are. They should be able to list off: what are our assessments, what are we using for core curriculum, what are our interventions, how are we supporting teacher knowledge? Right, and then we can really start thinking about aligning those resources to specific questions.
Michelle Hosp:So the biggest question is the one I'm going to start with, because it takes up the majority of our resources, right, it's like people, things, stuff, and that is: how effective is tier one? It sounds simple, right, but it depends who's asking the question. So I want to pull this apart a little bit, like, administrator hat, right, versus teacher hat? So from an administrator's hat, if I think about how effective is tier one, what I need to know is, I need data about all of my kids at every single grade, across specific times of the year. So to do that, I think I could use a universal screener, right? I could also use an outcome assessment, right? So, like, a statewide assessment that does tell me, like, how effective is tier one. So there is one piece. But then, if you drill down as an administrator, what I hope people are asking is: for those students who actually started out the year on track, how many of those students ended up on track? So here we are at the beginning of the year. Another way to ask how effective is tier one is: did our students grow? Did our students who were okay continue to show growth? Did our students who weren't on track, did they bump up? In order to do that, though, that's different data, because now we need multiple data points across the year. So out goes the statewide assessment, because we're not going to use it for that, but in comes universal screening, right? Because most people give that fall, winter, spring. So I can look at that and say how effective is tier one, really from a district level. And so it's also a proxy, right, of how good is our core, how good is teacher knowledge implementing this? So many things just come from that one question, from an administrator's view, of how effective is tier one. But teachers also want to know that. But it's different.
Michelle Hosp:Because I'm responsible for these kiddos in front of me, I want to know which of my students are on track and which need additional support. I also might want to know which of my students are advanced, right? We often forget about, well, what about those kids that are, like, already blowing us away with their skills? So that's one thing I want to know.
Michelle Hosp:I want to know who's on track, who needs support. I don't like to talk about it as who's at risk, because it sounds so horrible. You're at risk, right? I really want to be at risk. What does that mean? I want to know. I need support to do these things so I can get better, right, like that's such a different view of how we think about our kids. So I want to know who's on track, who needs support, who's killing it, right, who's like really hitting the ball out of the ballpark. And then I want to know for those students who hitting the ball out of the ballpark, and then I want to know for those students who again, similar question, how many for those students who are on track, are they staying on track, right? So I want to look at multiple points. Are they the students who started off strong? Are they continuing to grow and stay strong? Again, for those students who need support, if what I'm doing in tier one is helping, are they growing? So, again, that's, you know, the data I need is a universal screener, because I need multiple data points and I need it to be really robust and reliable and valid and all of those things. I know we're going to dig into that in a little bit.
Michelle Hosp:But the other thing is that that also is a proxy, you know, for my core curriculum, right? So if things are not going well, and if that's also true at the district level, then it's not me, it's not my teaching, you know, it's probably not even my knowledge, it's that we may not have the best materials. So that's a different problem to solve, right? And again, that's all at tier one. From that I drill down into, you know, tier two and tier three, right? So it's like, okay, when you think about resources, the heaviest amount is at tier one, right? So we really got to have good data.
Michelle Hosp:But then, once we get that figured out, I'm looking at tiers two and three, and there are administrator questions, but this is really when these are the kids in my classroom and they need help. So I really want to know, you know, is this student improving? This student that I'm providing more support to, are they improving? Another way to think about that is: is this intervention effective, right? So those are the questions that I think about for teachers at tier two and tier three. Is the student improving? Are they getting better? Is the intervention effective?
Michelle Hosp:So the data that helps me answer that is progress monitoring data, hands down, right. So I should be looking at things like a general outcome measure. What I mean by that is, I'm just going to refer to a curriculum-based measurement, a CBM measure, like passage oral reading, right? That was actually the original CBM measure and the one most widely used. And it's cool because it measures so many skills. That's why it's a general outcome measure.
Michelle Hosp:But I also might want to know, if I'm teaching a very specific skill, like if I'm focusing on letter sounds, right, or phonics skills, then I also want to look at: are they making progress in that specific skill? So to answer that question, is the intervention working? Is it effective? Is the student improving? We probably want multiple pieces of progress monitoring data. One in a general sense: is the student getting better in reading overall, right? That's our general outcome measure. And: is the student improving on the specific skill I'm teaching, whether it's phonics, whether it's letter sounds. So those things. But the thing about that is, people are like, oh well, I can just guess. I just know, right? I'm doing this intervention, I just know that my kids are getting better.
Michelle Hosp:Here's the thing. It's a resource issue, right? These teachers are spending so much time helping these kiddos that need it the most, and what I want to know is is that time well spent? So we really want to make sure that we're collecting that data frequently and that it's really sensitive to growth. So those are the two big pain points that we really want to attach to that data, because if we can't collect it frequently and it's not sensitive to student growth, then we really can't answer those questions.
Michelle Hosp:And those questions are pretty darn important because, again, they go back to the resources. Am I spending my time wisely helping this student? Notice, we're not talking about the student, we're talking about teacher behavior. Right, it's all about what the teacher can do, not what the student can do, because we know we can help students. So it's not "the student can't," it's: what do I need to do? As their instructor, you know, the person who's caring for their learning, what do I need to do to impact them?
Michelle Hosp:On the opposite side of that, the administrator, right, also needs to know about tiers two and three and all of these materials. It's a resource issue. Out of all these materials that we have purchased, that we're using, that our teachers are, you know, masters of, which are leading to the greatest student improvement? That's a really important question, because those materials are expensive and the training is expensive and the professional development's expensive. So the best way for an administrator to answer that question for tier two and tier three, like, which of these interventions should we hold on to and which should we let go of, is to be able to look at all of that progress monitoring data and literally look at the rates of improvement, the ROI, and say: which of these show the greatest rates of improvement?
Michelle Hosp:I actually sat in a meeting once when I was in Hershey, Pennsylvania, and I was blown away at how they did this.
Michelle Hosp:It was at the end of the year and around it sat all the interventionists, the school psychologists, the principal right, all of the leaders and they literally put up all of their progress monitoring data and went through by intervention and went through intervention by intervention and they called it their spaghetti graphs, right, because all the kids were like a line on a graph and they were looking at the rates of improvement. And by the end of that meeting they said these are the interventions that clearly work for our kids, these are the interventions that don't and we're going to get rid of kids, these are the interventions that don't and we're going to get rid of. And so they literally at the end of every year, kind of went through and cleaned out closets and said we really want to double down on this because we have the data that shows that that resource is worth our time and effort. So I thought that was a beautiful example of how to collectively look at that information.
Lori:To support tier two and three All right, michelle, I've been taking notes while you're talking. I'm furiously taking notes and I'm not quite sure that I have it right, so I would love for you to do a quick recap. Can you do like Michael Scott from the Office, like a tell me like I'm five.
Michelle Hosp:Love The Office. Absolutely, Lori. So, because it is complex, right, and we're talking about questions and it's like, wait, I can use this for that, and what? So here's the deal. Tier one: I need a universal screener. I need data that tells me how effective is my core, and that's at the administrator level. The teacher wants to know who's on track and who needs support. That's like boiling it down to its essence. Underneath that, let's say, we're going to come back to the support issue, but let's say we're in tier two and we're giving interventions. What I want to know, the question I'm looking to have answered, is: is it effective? Is the student improving? That's the teacher lens. The administrator lens is: out of all of these assessments, or, sorry, out of all of these interventions, when I look at the progress monitoring data, which ones are showing me greater rates of improvement? So we now have answered the question: how effective is tier one, and who needs support? Right, universal screener. For our progress monitoring for tiers two and three, our administrator lens is: which of these gives us the best rate of improvement? And our teacher is going to be able to say: is this helping the kid? Is the kid getting better? So that's where we are. Oh, but there's more, right?
Michelle Hosp:So, from there, what we haven't touched on is: well, what does the kid need support with? Lots of times we overuse, I would say, universal screeners and progress monitoring tools, because sometimes they're one and the same. We overuse that data to say: I know exactly what the kid needs. Well, maybe, but more likely than not, I like to think about it as: you might be in the right zip code, but you're not yet on the right street, or really at the house that you're trying to get to. And so you could spend your time, you know, driving around the city, and the city's beautiful and that's nice, but you have got to get to the house, because the person in the house needs your help. So you want to find that street and you want to find the house.
Michelle Hosp:So that is where we think about diagnostic information, right? Because the question in that case is what skill does the student need support with? The universal screener has told us who needs support. The progress monitoring is going to tell us are they getting better once we give them the support. But wait a minute, what do we support them with? So we need a diagnostic lens for that and so we can look at multiple pieces of data and we call them diagnostic data.
Michelle Hosp:So it could be something like a diagnostic test, like a phonics assessment. The CORE Phonics Screener or Star Phonics gives you a lot of diagnostic information, right? Those are just two examples; there are others out there. Or you could look at subskill mastery, right.
Michelle Hosp:So if you had a CBM measure that was assessing letter sounds, and most publishers have that, right, whether you're talking about DIBELS or aimsweb or easyCBM or FastBridge, right, I could go on, most of them have letter sounds, I could look at which sounds the kid doesn't know. Like, okay, then I can teach that. So that gives me more diagnostic-like information. One thing I would caution is: are they assessing all of the sounds, right? Because sometimes some of those assessments get cut off because they're fluency-based, right? So they stop after 60 seconds. So maybe the student didn't get to try all the sounds. Some of them have an extension where you can actually give the entire CBM sheet so that you can assess that. So there are ways to collect that information. But those are really important.
Michelle Hosp:Other things to think about with diagnostics is that you want to make sure that the student has multiple opportunities to respond, that it's not based on one attempt. So sometimes we think of those assessments and it's like, oh well, you know, the student actually only saw the letter S once. I really want to test that multiple times. We also want the student to produce. We don't want them to identify right. So that's the difference between a multiple choice type of test. That doesn't make a good diagnostic. We actually want the student to perform, we want to hear them, we want to see the skill in action, because that is going to give us the best indication of whether they have it or they don't have it right.
Michelle Hosp:So do we need to teach it or do we not need to teach it? So that's the diagnostic. That's really important. And sometimes, honestly, based on the screening data and what we know about the kid and other formative assessments that we have, just by working with that kid, we know what intervention they need. So what I would say is: don't over-test kids. Not every kid needs a diagnostic, right? People are like, oh, I'm going to give a diagnostic. It's like, well, what's the question? If the question is, I don't know what to teach, yeah, go right to the diagnostic. But if you kind of have an idea, dig in, start teaching, and use that progress monitoring data to give you an idea whether you are ringing the right bell for that kid.
Lori:I think where it gets tricky in my brain is that I want there to be, like, a clear, you do this first. So obviously you do the universal screener, right? That is the number one. And then it's kind of a back and forth, you know, and maybe getting into the town is enough.
Michelle Hosp:Right, if you start working with a kid and using other information you have, you can, like, maybe get to the street. But diagnostics, those assessments, are really meant for those kids who are not responding, who you are still trying to figure out: oh, what does this kiddo need? Lots of times when we work with our kids, we know. Is it a phonological awareness issue? Is it that they can't blend or segment sounds? Is it a phonics issue? Well, do they know all of their letter sounds? How are they with long vowels? How are they with short vowels? How are they with blends? Right, if you don't know, teach, teach, and then formative assessment, right. I don't think we talk enough about formative assessment, because this is what teachers do all day long. They're working with their kids, they are listening to them, they're hearing them, they are probing them in the most positive way possible. So we want to do that same thing with formative assessment. It could be an informal assessment, it could be an observation, it could be quizzing the kid, but we want to be able to capture the information, because we want to know: how do I change my teaching, right, to support this student? So we want to know: how much support do they need? I'll give you an example of that. If the student is having difficulty with a particular letter, right, so let's say the letter is A. If I ask the student to give me the sound that A makes, I'm hoping I'll hear "ah," right. If I don't, then I'll do a full model, right, and I'll say: that letter A says "ah," say it with me, and see if they can model while I do it. Right, if they can, then I'm going to pull back the support even more. And this is the other thing I would do with the A: I would give them a hand signal, and I do an apple, like I'm holding an apple. I say: A, apple, "ah," right. So I give them a hand signal.
So if they can do that, then when they get to the A, if I look at them and I just give them the hand signal, can they get it from that? Have they learned that signal, right? So I'm not doing the talking, I'm just giving them a clue. And if they can go "ah," cool, then I don't have to do the full model, right? Like, I'm pulling back the support they need. Or the next thing I would do is, when they get to that A, if they said "eh," you know, instead of "ah," if I just tap on it, is that enough to get them to give me the correct response?
Michelle Hosp:So the formative assessment is: how do I need to teach this? How much support does this kiddo need? And that information really helps me a lot. So now we've answered pretty much all of our questions, right? How effective is our core? Who needs support? Which skills to teach, if I need that, right, the diagnostic. The progress monitoring: is the kid getting better? And, you know, if they're not, how do I change my instruction? What support, what do I need to be doing differently in my teaching? That's the formative. That's really digging down and seeing: how does the student respond? What do they need in order to get this skill? So that covers all of those assessments and all of the questions. I'm sure there's more questions.
Lori:No, that was really helpful, really helpful. I think it's just messy, so it's helpful to hear it again.
Michelle Hosp:It's so messy. And here's the thing: thinking is required. So people are like, well, I have to give a universal screener. And then, you know, maybe they have to give, like, a phonics screener too. And then they're like, is that enough? And I'm like, I don't know, what's your question, right? So you have to go back to: what is the question you're trying to answer? And then, this is what I would say: do you already have the data to help you answer that question, or do you need to get additional data?
Melissa:Michelle, I'm wondering if we can dig in a little more to each of the types of assessments. Let's start with the universal screener that you just talked about, because I know a lot of people are talking about those now. Like, states are requiring universal screeners. I will say my experience, you mentioned it, was with iReady. That was our universal screener, and what I think we did wrong, though, was that we tried to do too much with it, right? Like, we were trying to dig down to what the students needed, and it was like, I don't know. I mean, I was a middle school teacher, and they didn't even have a fluency assessment on there, or phonics or phonemic awareness. They didn't have any of that. So I'm like, I don't know that this is really showing me what their true needs are. So can you just talk a little more about the screeners, and what can we expect from them and what should we not be looking to do with them?
Michelle Hosp:Universal screeners, honestly, their main purpose is to tell us how effective is tier one. I mean, honestly, it's, you know, how good is our core instruction? And then, for those kids who are on track, are they staying on track? Right, then we can look at how much our kids are growing. Are the kids who are at risk getting better? Right, that screening data is, like you guys said, a snapshot, a one-point-in-time indication. The reason why it's really important, though, is because of the benchmarks that go with that data.
Michelle Hosp:I think this is where people get confused. They're like well, what does that benchmark mean? So what I'm sharing is that it only tells you whether the kid is on track or off track right. And then, if you put all those kids together, it's like well, how effective is our core? If we have a lot of kids on track, then our core is good. If we have a lot of kids off track, then our core is probably not good.
Michelle Hosp:But that benchmark score is an indication. It is a low threshold, and I think that's something people don't understand. They think that benchmark means, yay, the kid is rocking it, and it's like, no, the kid is, like, squeaking by. So I think we need to look at that benchmark. That benchmark is set at a very low threshold. It doesn't mean that the kid is rocking it; it means the kid is highly likely to show up bare-minimum proficient on some other, larger assessment, right? So we think about it: often those universal screeners predict to, like, a statewide assessment or a large-scale assessment. So what score does my kid need to get on my benchmark to get over the threshold on that large-scale assessment? But the threshold is set low. It's set at, like, the 40th percentile.
Lori:Oh, wow, I didn't know that.
Michelle Hosp:Yeah, so the 40th percentile doesn't buy you much. That's not even college and career ready. So if we really wanted to say we want all of our kids to be college and career ready and successful, then that benchmark would be much higher. So I think we need to keep that in mind. Yes, that benchmark is a good indication of whether that student will be successful on another large-scale assessment at a later point in time, but that success is a low bar. So kids who fall below it are really in trouble, right? They need a lot of support. But even those kids just slightly above it probably need support. Right, but it goes back to the resources. I only have so many resources. So where am I going to put my energy and effort and time? So universal screening is interesting. It's hugely helpful, it tells me lots of things. But I also think we need to understand that it's not a level of great success. Those benchmarks are set at a bare minimum.
Lori:I didn't know that either. That's amazing. And I feel like, Michelle, the thing that you keep talking about is: what does this tell us overall, big picture? And I'm just thinking about some factors. You've mentioned a couple of them, such as the quality of instruction and curriculum materials. We haven't really gotten into teacher knowledge, or just basic support, right? Like, we know that students might need support, people sometimes, you know, call out sick, there's not funding, there's teachers who get sick. I mean, there's a million things that go into this. So when we think about quality of tier one, I just want to make sure that we're saying it's, like, the big picture of tier one, like the every-everything. So is there anything you want to add to that?
Michelle Hosp:It is, and I'm really glad you brought that up, Lori. It is every-everything, right? So it's not just the assessments we're using, it's not just the curriculum we have, but it is the teacher knowledge, right, and it is the support that teachers get. It also comes down to, you know, literally, what are our schedules for instruction? Are they uninterrupted? Do we really maximize every instructional minute that we have for our kids? Is there enough practice built in, right? When you look at a lot of programs and curriculums and interventions, what they often don't have is sufficient practice, and Anita Archer is so beautiful in reminding us that you have to have practice. You can't just learn a skill and plow through the curriculum and not allow that student sufficient, really intentional practice opportunities. So there are lots of things, right? Like, we could have a great curriculum, and maybe the curriculum is actually well built and it has a really great scope and sequence, and the teachers have the knowledge and they're teaching it with fidelity, right, as the curriculum should be taught, but it's still falling flat. So then I would say, okay, then are we giving kids enough opportunities to practice? Are we doing differentiated groups appropriately? Right, differentiation is for tier one, right? Like, differentiation is not just for those kids in tiers two and three. Differentiation is: I'm teaching these core skills; which of these kids in my classroom need that additional support? Right, we talked about that, that's the formative assessment. What else do they need to really grasp this skill, right? What opportunities do I need to provide? And then, what additional practice can I provide for them? So, yeah, it doesn't tell you what the problem is, right? It tells you that there's a problem. And then you have to start thinking: well, what do we think it is?
I mean, maybe one of the things I see here is that, you know, with the whole science of reading, right, which is beautiful.
Michelle Hosp:And then I talked to districts and they're like, well, we're doing the science of reading. I'm like, well, tell me about that. And they'll say, well, we bought this new curriculum and it's, you know, research-based, and blah, blah, blah. And I'm like, well, great, how's it going? And they're like, oh well, yeah, it's great. And I'm like, oh, how do you know?
Michelle Hosp:And then when you walk into classrooms, things are still shrink-wrapped. You know, everything is still in its shrink wrap, right, because that's what they know. And it's like, okay, wait a minute. You can't just buy something and give it to your teachers, who have never before had the opportunity or experience to teach in that way, and expect them to just flip overnight. That's unrealistic, and it's not very respectful to the teacher, right? Because then the teachers are like, well, wait a minute, I've been doing this for 20 years and I thought that this was great, and now you hand me this package and say this is better than anything you've ever done, you need to do this now. So I think that's also part of the problem.
Lori:So much. That's why I wanted to bring it up. I just think there are a lot of things, you know. I think often it's like a blame game, right? Oh well, it's this thing, or it's that thing. I think about, I live outside of Baltimore, and recently the Orioles weren't doing so hot. You know, they were losing pretty badly, and then what did they do? Oh, they brought in a new coach, and oh, this guy's really, really going to get it. The next couple of games also didn't go so well. Big surprise, right? I mean, sports teams do that all the time. I'm like, one person? Typically, one person, and I will say typically, 'cause I know sometimes it is actually true, but typically that one person isn't the problem. There are systems and structures that are built up. There might be things that are invisible, right, like the dynamic of the team. Who knows? So I appreciate you just talking about all of these things. There's just so much.
Michelle Hosp:And I think teachers are getting a bad rap, that they're not doing things right and they haven't been doing things right. That's just a shame game, and that's not going to help anybody.
Lori:It doesn't help anyone.
Michelle Hosp:What I want people to do is really highlight the things teachers traditionally have done that are amazing, like those read alouds. Oh my gosh, don't get rid of your read alouds, please. They give kids access to material they can't yet access. They provide opportunities for vocabulary, for comprehension, for oral language, right? Those things are amazing. And the love of books, come on. The more we show kids we love books and can share that with them, the more they're going to be engaged and motivated to work on the skills. Because we do have to drill down to those skills, right? So we do have to say, yeah, and you know what, someday you're going to be a reader like this, and we're going to work on these skills today because this is going to help you get there. But let's just be honest about what we're doing, and intentional.
Lori:Okay, so you wrote a book to help teachers unpack so many things about curriculum-based measures. I want to get into this book. It's called the CBM, I'm sorry, the ABCs of CBM. There's a lot of letters here, Michelle. The ABCs of CBM.
Michelle Hosp:XYZ.
Lori:So the ABCs of CBM, which are curriculum-based measures. I'd love for you to just tell us a little bit about those and where they fit into these assessments that we've been talking about.
Michelle Hosp:Yeah, so clearly I love curriculum-based measures.
Lori:I thought you were going to say letters, I was like ah, and-.
Michelle Hosp:Um, they fit in because first off, they um, they have over 30 plus years of research behind them. You know Stan Dino and Phyllis Markin and all of his wonderful uh students that were at the University of Minnesota at the time, I mean Doug Fuchs, lynn Fuchs, um, mark Shin, like all of these people have really advanced the field. And here's the cool thing they're quick, they're efficient, they're reliable, they're valid. Most of them are done one-on-one, so it's a the student has to produce, right, so the student has to say, has to read, has to do the thing. And they can be used for multiple purposes. So a lot of CBMs are the same assessments that we talked about for universal screeners are the same assessments that we want to use for progress monitoring. So if I'm doing like letter sounds, right, I want to screen for letter sounds. Letter sounds, right, I want to screen for letter sounds. And then if I'm actually teaching kids letter sounds, then I can actually use those CBMs to actually monitor kids' progress. So they're hugely helpful in those ways. But there are other assessments that do those things. So there are computer adaptive tests we call them cats that do the same thing, right, so, but they don't do it exactly the same. So cats rely most of them, although the technology is getting better, where we're now using voice recognition and things where the kids can produce. But most of the responses are identification, right, so think of multiple choice. So they might be hearing on a headset, click on the letter that says the ah sound, right, so that's identifying the sound for A, versus on a CBM, they would be looking at the letter A and they would have to say ah, right, so those are the differences. They would be looking at the letter A and they would have to say ah, right, so those are the differences.
Michelle Hosp:One of the bonuses for computer adaptive assessments is that you can give them to all the kids at the same time, so they can be really efficient. We're thinking about resources: if I can bring all of my kids down to the computer lab and gather that data in 20 or 30 minutes from beginning to end, that's really helpful. The CBMs are short and efficient, most of them around one minute each, but when you add all of that up, and then you add every single kid, the actual time it takes is more. But sometimes, I would say particularly for our youngest readers and our readers who are showing less robust skills, that is really where a teacher sitting with a kid giving an assessment is really helpful, because you learn a lot just by sitting with a kid. But you know, why can't there be a combination? With a lot of these states and all of the requirements they have, it's like, okay, I have to screen, then I have to give this. Well, if I can universally screen with a computer adaptive test and find out who actually is above that threshold for being on track, then I can take just the kids who are below the threshold and give them additional one-on-one assessments to drill down a little bit more. I think there's lots of ways it can be done. So I'm clearly partial to CBMs.
Michelle Hosp:We do have a new book that's coming out. This is what blows my mind: CBMs have been around for over 30 years, and the last time we did the book, I think, was a 2017 publication, so it's getting quite old. This is the third edition, which is kind of crazy. The original edition was done in the 2000s, and there wasn't much change between that first edition and the second edition. There are huge, huge changes from the second edition to this third edition.
Michelle Hosp:So things like well, first off, we're doing a preschool chapter, which we didn't do before, and these preschool measures have been around for a while. But let's talk about, you know, the reason why we want to screen is we want to intervene. That's, you know, screen to intervene. So the earlier we can find these kids, the better off we are. So there's a whole new chapter on preschool and then even the reading chapters. Oh my gosh. So the early literacy chapter used to have just 11 skills and now has like 14 skills and it's included.
Michelle Hosp:Things like vocabulary, oral language. And it's included. Things like vocabulary, oral language, rapid automatic naming, right Like vocabulary and oral language, are huge and they're now starting to get attention. We should be teaching it more, we should be assessing it better. The same thing is true for the other chapters. We have the chapter that we just called our reading chapter. It used to have two skills oral passage reading and mazes and now it has seven skills, seven. It now has vocabulary, silent reading, comprehension, spelling is different, writing is different. It blows my mind how much the field has advanced and a funny story.
Michelle Hosp:So I was at AIR, the American Institutes for Research, which also houses the National Center on Intensive Intervention. I highly recommend people go there and look at their tool chart.
Michelle Hosp:If you're looking for CBM measures, universal screeners, progress monitors, it's a great resource.
Michelle Hosp:That center has been around forever and it started as the National Center for Progress Monitoring, then it was the National Center on Response to Intervention and now it's the National Center for Intensive Intervention. I have been collaborating and working with that center since the very beginning and I remember sitting there with Lynn and Doug Fuchs as we were training and doing this huge presentation in Washington DC, like 300, 400 educators in a room and it was all on CBM, and I remember looking at her and saying so, do you think this is here to stay? And she's like no, it's a fad. Right, because it's like no, cbf, this is a fad. This is what's you know, because also, you have to think about. You know, reading First was starting to. You know, come into and and the national panel for reading. But it's here to stay because it is amazing and it's efficient and it's quick and it gives us great data and particularly for progress monitoring. Nothing as is as sensitive as using CBM. You can use a cat to progress monitor, but it's not very good.
Melissa:So, michelle, I'm thinking about this, like things change right and I have to bring this up. I know we could talk about this for a whole nother hour, I know. So sorry to bring it up now, but people talk a lot about, like, running records and I know we used to give the QRI and there's the IRI and I know some people are moving away from those and saying we should not be giving those and you know others are still holding on to them. Can you talk a little bit about this, like, should we be moving away from those assessments?
Michelle Hosp:What we want to be asking our teachers, respectfully, is what is the question that they're trying to answer? What are they doing with that data? Because these assessments are hugely time-consuming. They take a lot of teacher time, they don't always have good psychometric properties, and the things that we have been told they're good for don't necessarily pan out in research. A quick example of that is miscues, right? A student is reading a passage and makes an error on a particular word, and we do all these miscues and try to use that formatively: oh, what do I need to teach the kid? What we're not giving enough attention to, though, is the context. If a kid reads a word correctly within a passage, it's not just about their phonics skills. It's about so much more. It includes their background knowledge, their vocabulary, their comprehension, and their motivation. So here's the problem: the error that a kid makes on a word in one passage, they actually might not make on another passage that they're more familiar with, that they have background knowledge on. So now we've said, oh well, the kid makes this error and I'm going to change my instruction, and that actually might not be a good use of my time.
Michelle Hosp:So we're trying to use them in ways that that inform our instruction, and actually that hasn't. That hasn't played out in the research. Here's the thing, though if you're doing it because you think it's important to listen to your kids read, and you then do that right, but don't attach it to a test. Attach it to that opportunity for kids to read, try to shorten it. Try to do it when you're doing conferencing with kids and follow up right. Ask them questions, ask them to predict, ask them to summarize, right, like, ask them real questions about what they're reading, not at the word level of what they're reading, because those assessments are not going to do that. If you want a permanent product of that, though, you could use a spelling test.
Lori:It's like a window right, it is a window right.
Michelle Hosp:So it's like what is this kid thinking about? What that letter, what that sound makes and what letter are they attaching to it? So that is helpful.
Lori:So helpful.
Lori:That was actually the first thing you said, michelle, was that it takes such a long time and that I remember that from when I did these when I taught fifth grade.
Lori:I remember assessing some kids, you know, three to four weeks prior by the time I got done with my whole class, and then that data was already three to four weeks old and okay, well, what do I kind of do right now, cause it's already like a month old. And then inevitably there's like a spring break or a winter break and you know, by the time you actually come back you're like, well, that was six weeks ago, should I do it again? But then you didn't actually really get to do much with it because you were so busy giving the assessment. So I'm so glad to hear like the part that I felt was so valuable from that was I felt like it was such an intimate experience listening to that child read and I gained so much from that and you know, not necessarily the results and I feel like I could get the same results or the same kind of effect with a one minute fluency passage or something of the sort. That was like a little faster every day, right, yes, I love that.
Michelle Hosp:Yeah. So why do you? Why do you think teachers hold onto it?
Lori:Oh my gosh. Lots of reasons. I'm not sure that it's clear about what else to do. I also think resources can be slim, so if that's what you have, then you do it.
Melissa:I said that, like there's still the. It's what I, what I mean, it's what I was taught when I went to grad school, you know. So it's like hard to let go of that when you're like well, I learned that from a great university that I went to, from professors that I trusted. You know, some people still put a lot of trust in what they, what they learned, and fair enough, you know that is fair.
Michelle Hosp:So I do think bringing it back to why, why am I doing this? What information is helping me become a better teacher for all kids, become a better teacher for this student in front of me and, like you guys said, you can get that same information? I do believe having kids read aloud is really important and having that time with kids, but that could look so many different ways. And if I mean I do want to emphasize that if we think gathering that data to look at errors is helping inform our instruction, the research really is not, does not support it because of all the other stuff that we bring when we read a passage, but that's hard to break away from those.
Lori:As we gather all of this data. Michelle, teachers are gathering data every day, right? We're having all these different kinds of inputs. How do we help teachers avoid like data overload and really kind of sift through to get to what matters the most? And also I'll ask, like what do we do with all of this stuff, like how can we use it most efficiently? I'm just going to bring that on at the end here.
Michelle Hosp:Well, so you know, I mean the funny thing is, as a school psychologist you would think I'd be like, well, we need all this data, and what I would say is we actually probably don't even need half of the data we're collecting. So I think really, again, it's about the questions what information, what questions do I have? What information do I need and do I already have it in another place? Do I need to collect it? And if I do need to collect it, what is the most fast, efficient way to do it? And again, it's really like if I give a test, what I say to teachers is that if you don't know why you're giving the test, don't Just teach, because your kids will be better served. Now, that doesn't mean you don't give your statewide right, like there's also like you can't just be like Michelle Haas says I don't have to test my kids Because I don't know why we're giving it, because I don't know why we're giving it. I mean, I also think it's a good, it's a good opportunity to talk to your administrators and your leaders and say what do you do with this data Right? Say what do you do with this data right, so that you can have a little bit more buy-in and understanding and then saying, well, how can I use this data? And if it really is, it's just, you know it's a requirement, it's a reporting requirement. Then at least you know, as a teacher, of how much stake to put in that right, like, okay, I have to give it. Give it, move on and get to the business of teaching. For administrators I would say, seriously, look at all of the data you're collecting and again go back to your questions. The question should drive what data you collect. If you have those questions and you notice that you are collecting multiple pieces of data to answer the same question, then get rid of some of that.
Michelle Hosp:The other thing I was, I would say, is that the data displays right, I see often, as you know, like I'll say to teachers are you, are you doing, you know, universal screen? Yes, okay, great, what do you use? I'll just say I use dibbles, great, can you show me that? And they go to a drawer and they open their drawer and they start pulling out all of this stuff and I'm like, oh my gosh. So again, it's, it's not enough to just give the test and you have to be able to see that data in real time and make sense of it. So the data display and the reporting is really important, regardless of what assessment you're using. So it's important for teachers, because if I have to give a test and I have to put it in a drawer and then someone else has to enter the data, it starts getting stale and you know it just, it gets farther and farther away from me. So things that can be immediately I give the test, I can see the results, are really helpful.
Michelle Hosp:And for the administrator, I want a data dashboard. I want to see all my data at one place and I want to see my attendance data and I want to see my behavior referrals right, I want to see all of this together. So if I'm looking so here's an interesting thought If I'm looking at my interventions right, and we said, well, look at your progress monitoring data and look at your rates of improvement and figure out which ones are better, then if you drill down and you say, well, this intervention is not working, the kids are doing horrible, but you actually include your attendance data in that and guess what, 80% of the kids only got 10% of the intervention. Oh, wait a minute, that intervention I can't throw it out yet because the data I have is not enough to make a decision. So for administrators having a really good data system to pull all of that together to ask those questions is really important.
Melissa:Well, we could probably keep asking you questions all day. Maybe we need a part two about assessment at some point. This was really really helpful, though, and I know it answered a lot of my questions, but it left, like I said, I have more questions for you, so we might need to do a follow up.
Michelle Hosp:It's complicated, you know, and so I appreciate you guys kind of like resetting, like we could probably go back and reset all over again, right. But I would say for teachers don't get discouraged. But I would say for teachers, don't get discouraged, right, like and trust yourself. I think. I think teachers don't feel like they trust themselves anymore because they're getting conflicting information, right Like oh, you're doing this wrong. You need to do this.
Michelle Hosp:So I think, really, going back to well, what is it I need as a teacher? What information do I need to help my kids in my classroom every day? And I would also say to teachers that if you can clearly show to any administrator that this is the information you need and this is how my kids are improving and use data, that is going to be enlightening for the administrator and freeing for you, because it's proof that what you're doing is helping kids right. So, use the data to serve you. Have clear questions. Collect it the most efficient way to really show that your kids are growing.
Michelle Hosp:And people are going to be blown away and say, hey, how do I do that? How did you do that? How come your data looks that way? Those are really cool things and teachers don't get enough love, right? So I also think supporting teachers and giving teachers the opportunity to use their data as a hey, look at what I'm doing, look at my data, look at what I'm doing with my kids here's my evidence, here's my proof, here's where all of my love and work shines and just giving them the space and platform for that, that's what assessment data should be used for. Also, right, the celebrations. I can't thank you guys enough. You guys are amazing and your podcast is amazing and teachers are lucky to have you guys.
Lori:Well, thank you. Well, we're grateful that you came on today so that we know all about assessments now. Obviously, we needed you.
Melissa:We need Michelle every day in our life To stay connected with us. Sign up for our email list at literacypodcastcom, Join our Facebook group and follow us on Instagram and Twitter.
Lori:If this episode resonated with you, take a moment to share with a teacher friend or leave us a five-star rating and review on Apple Podcasts.
Melissa:Just a quick reminder that the views and opinions expressed by the hosts and guests of the Melissa and Lori Love Literacy podcast are not necessarily the opinions of Great Minds PBC or its employees.
Lori:We appreciate you so much and we're so glad you're here to learn with us. Thank you.