Science of Reading: The Podcast

Summer '22 Rewind: Myths and misconceptions about universal screening: Nancy Nelson

August 10, 2022 Amplify Education Season 5 Episode 13

Dr. Nancy Nelson, assistant professor of special education at Boston University, discusses myths and misconceptions around RTI, MTSS, and assessment screening in reading and mathematics instruction. She highlights what tools need to be in place for the RTI system to be implemented well, her work on DIBELS®, and the importance of dyslexia screeners.

Show notes: 

DIBELS® at the University of Oregon

Podcast Survey


Quotes:

“Relying on data allows us to engage in a systematic process to implement systems to meet the needs of all kids.”
           —Dr. Nancy Nelson


Susan Lambert:

This is Susan Lambert, and welcome to Science of Reading: The Podcast. For the third edition of our summer rewind series, we're reaching back to our first season to feature a conversation all about response to intervention: what it is, where it originated, what strong implementation looks like, and much, much more. Our guest is an expert on RTI, Dr. Nancy Nelson, formerly of the University of Oregon and now assistant professor of special education at Boston University. She's also co-lead of the National Center on Improving Literacy, which we discuss in this episode. This is an important episode to revisit for the start of the school year. Nancy and I cover a lot of ground, including the importance of using data to make decisions that support students at all levels. Here's my conversation with Nancy. Nancy, welcome. Thank you so much for joining us on today's episode.

Nancy Nelson:

Thanks for having me. It's great to be here.

Susan Lambert:

Yeah. And you know, we always like to start the podcast off by asking our guests to talk about their journey into early literacy and where you, you know, where you ended up, how you became interested in what you're doing now. Would you like to share a little bit of that with us?

Nancy Nelson:

Sure, yeah. Part of the path is linear, but part of it is not. Out of college I was a special education teacher in the Bay Area. I taught middle school and high school math actually, so not reading related. And I knew that I wanted to get an advanced degree; I love teaching, but it's very, very demanding on a daily basis. So I had become interested in going back to graduate school and considered staying where I was already, which was at San Francisco State, where I got my master's degree in special education. They had a joint Ph.D. program with UC Berkeley that I considered pursuing in special education, but I wanted to pursue a different related field, which was school psychology. And I'm from Oregon originally, so I'm one of the few born and raised in Oregon. <laugh> I applied to the University of Oregon's Ph.D. program in school psychology and came back to Oregon. Coming back to graduate school, I really wanted to look at resiliency in education and in students, thinking about how what they learn about resiliency, and the practices or sort of safeguards that they have built up around them, might act as protective factors as they moved through their own schooling experiences and out into the world. And when I came back and was enrolled at the University of Oregon, I kept finding myself attracted more to the academic intervention and assessment side of things, I think based on my special education background, and the University of Oregon has a very strong legacy in reading, the Science of Reading, and implementation in school systems. So even though my focus or sort of my niche in graduate school was a little bit more math related, I received a very well-rounded education in the Science of Reading and school systems, and school systems have, you know, been a large focus of what I've done in my career to date. So I always laugh anytime anyone asks me what my area of focus is or my area of research, because it sounds very broad, and it's basically, you know, the implementation of response to intervention or multi-tiered systems of support in reading and mathematics instruction and assessment. So I have run the gamut, but I really got into this field with concern for students and the recognition that education is one of the few things that students experience in life that has the ability to change their trajectory. Having access to educational opportunity and being successful in education really opens doors for students and provides opportunity and access to choice. And so I wanted to make sure that my work was aligned with supporting an aim that would allow all students, regardless of their background, to have the same access to those opportunities and choices.

Susan Lambert:

Mm, very interesting. And I'm just wondering, with that earlier interest in the idea of resilience, if you see any of that, you know, sort of on the fringes of what you're doing now.

Nancy Nelson:

Yeah, absolutely. I mean, I see them as being related. Especially when we talk about reading instruction, there are, you know, the reading wars, sort of the history around whole language instruction versus systematic, explicit phonics instruction, which I see as being intimately linked to resiliency, in the way that explicit, systematic phonics instruction provides the code. You know, it unlocks the ability for students to engage in a host of opportunities and in doing so becomes a protective factor for those students. So I feel very strongly that students should get access to instruction that's delivered through the use of evidence-based practices, because that's what we know works.

Susan Lambert:

Yeah. Really important. And we've had a lot of conversations on this podcast about, you know, the topic of the Science of Reading, but similarly, right, starting early and strong with that instruction will help kids. I know you're also involved in the National Center on Improving Literacy. Can you share a little bit about the purpose of that organization, the work, and maybe your role there?

Nancy Nelson:

Sure. Yeah. So I am a co-principal investigator on the National Center on Improving Literacy, which means that I'm part of the leadership team with the other directors of NCIL, as we call it. So that's Hank Fien of the University of Oregon and Sarah Sayko, and we're in our fourth year of implementation right now. Our overarching goal is very broadly focused, but still really targets the needs of students at risk for disabilities in the area of literacy, which includes dyslexia. So we're focused on identifying and supporting the use of evidence-based approaches for screening, identification, instruction, and intervention for students in kindergarten through grade 12, with potentially some pre-K focus and some college focus as well, around teacher training, for example. We're really focused in this space of supporting students when they're at risk for literacy-related disabilities, including dyslexia. And so within the center, in addition to being a co-principal investigator, I lead the professional development and technical assistance strand, which focuses on providing universal, targeted, and intensive technical assistance to a range of educator stakeholders. That includes teachers, but also paraprofessionals, school leaders, coaches, and others that might be engaged in actively providing educational services to our target population, that's students with, or at risk for, literacy-related disabilities. And the technical assistance and professional development priority is one of five priorities that we have for the center. The first two are focused on the content, really. So the first is evidence-based approaches for screening and identification, and the second is evidence-based approaches for instruction and intervention. And then the last three are really focused on dissemination and the ways that we get information out to the range of stakeholders that the national center is intended to reach. So the third strand, which is led by Sarah Sayko, focuses on parents and families. The fourth strand, as I mentioned, is focused more on educators and providing support to them. And then the fifth strand is more of our universal technical assistance strand. That's really our website, social media, and other aspects of information that are intended to provide access to evidence-based tools, primarily.

Susan Lambert:

Great. And we will be sure to list the website. I know there's a wealth of resources and information on that site; it's pretty comprehensive.

Nancy Nelson:

There are, yeah. Our dissemination team, that fifth priority strand that I mentioned, has done a great job collating existing resources that are available. We don't wanna recreate the wheel in what we're doing. So if there are things that follow the Science of Reading or that have been demonstrated to be rooted in strong research evidence, we make those available on our website and connect with those, and then there are some tools and resources that we've been developing ourselves that the dissemination team has been pushing out. So they have done a great job, and I do encourage people to check out that website and our Facebook and Twitter to get access to that information.

Susan Lambert:

Great. Yeah. We'll link everybody to all of those, so they don't have to go out and find them on their own. You talked a little bit about your work as it relates to RTI or MTSS. What I would love to do is for you to just give a little bit of background on RTI. Where did it come from? Why is it important? I think it's one of those topics in education where we assume everybody knows what we're talking about when we say it, and we don't.

Nancy Nelson:

Yeah. Yeah, definitely. So RTI really grew out of the public health model in the late nineties and early two thousands. This idea of providing increased intensity of support based on sort of patient or client need really took root in education in the early two thousands in a couple of different ways. Response to intervention is interesting because it's been implemented on the general education side of education, but also on the special education side, in two pretty important ways that I think are worth distinguishing. So response to intervention is sort of the first terminology on the academic side of multi-tiered systems of support, with really two overarching goals. One of those is this idea, like the public health model, of using increasing tiers of support and assessment to determine the level of support students need within the school system. And that's really a general education function, right? The idea that we set up a system designed to meet the needs of all students within our system is sort of the idea that underpins the general education side of response to intervention. And then in addition to that, or within the system, because we have these increasing tiers of support, we wanna make sure that the system is set up to meet the needs of each and every individual student also. So there are students that are getting these, you know, intensive interventions, potentially in tier three or sort of of that ilk, that are designed to meet individual student needs within the system. And then another overarching goal of RTI, one that distinguishes RTI, I think, from some of the other types of multi-tiered systems of support, is this notion that response to intervention has been invoked in special education law as a mechanism for determining whether or not a student is eligible for special education services under the category of specific learning disability. And so those two pieces of RTI are, I think, really important to consider. We sort of use multi-tiered systems of support as an overall umbrella term for systems like RTI; positive behavioral interventions and supports, which also grew out of the University of Oregon, is another example of a multi-tiered system of support, on the behavior side. And I really appreciate the field's use of MTSS as a way of trying to link some of these systems and system structures that are intended to do the same things, to meet the needs of each student and all students, whether it be for academics or behavior or social-emotional supports, because those systems have historically been siloed, and being able to connect them under one umbrella term, I think, makes some critical points to educators. And I'll talk more about the response to intervention piece, but it sounded like you had a question too, Susan.

Susan Lambert:

Actually, I was gonna make a comment, but now I can't even remember what it was, so that's great. But it's great to distinguish MTSS as sort of the overarching umbrella term. And is that what I'm understanding you to say, that MTSS is kind of the overarching term with other things that fit underneath it?

Nancy Nelson:

Yeah. Yeah. I think a lot of people would say that within MTSS, RTI is the academic arm and PBIS is the behavior arm.

Susan Lambert:

That makes sense.

Nancy Nelson:

Yeah. And I think that it's a very good way of looking at it, sort of from a general perspective. Where RTI is a little bit unique is that there is this structure that's set up for special education eligibility determination decisions through RTI, and there isn't really a parallel that I'm aware of on the behavior side, for PBIS, in that kind of decision-making.

Susan Lambert:

Wow. That's a really helpful sort of visual to separate those two. I often hear RTI and MTSS used interchangeably, like synonyms, so that's a really helpful distinction.

Nancy Nelson:

Yeah, yeah , yeah. That's great. I'm glad.

Susan Lambert:

When we talk about RTI specifically, then, it's super helpful to hear those big ideas, but I'm curious to know about some common misconceptions within RTI.

Nancy Nelson:

Yeah. So there are a few of these that come up a lot, because this is, you know, sort of the area where I work frequently, so these misconceptions are perpetuated and I hear them frequently. One of those is that RTI is an intervention. In educational research, when we're talking about interventions, or even in educational practice, interventions are relatively discrete, right? They've usually been packaged in a particular way. They're intended to be delivered in a very specific way. And thinking about RTI as an intervention instead of an approach, which is really what it is, misses the fact that RTI needs to take into account the local context. There are guiding principles, and there are certainly components of RTI that need to be in place for RTI as an approach to be implemented, or implemented well. But if it's thought about as an intervention, you miss kind of the contextual variables that should shape how some of those aspects or features of RTI are implemented within a setting. So for example, people ask a lot, you know, with RTI we have the triangle, that's the visual image: all students are sort of at the bottom, and this is sort of the green zone, and these are the students that are on track for meeting grade-level goals. Then there are students that receive supplementary support above and beyond that sort of tier one universal support, on the basis of screening data that indicate they need that additional support. And then there are students that either are so far behind their peers, or maybe haven't responded to that supplementary support, that they need something more intensive within that system. And if we don't consider the different ways that students get there, we miss part of the aspect of the system. So people will say, for example, how many students should be in tier one, how many should be in tier two, and how many should be in tier three, and what should we do within each of those tiers? And the answer to those questions is really contextually relevant. There are sort of guiding principles around what we know a healthy system looks like and what a healthy system should do, which is a system where we would say roughly 80% of the students are on track to meet grade-level goals, about 15% of students need supplementary support, and about 5% need intensive support, but it doesn't always look that way. And in fact, it looks that way very rarely, unfortunately, in educational settings. So schools often have to make decisions about how they're going to serve students in tier two and tier three, because often they have, you know, 50% of their students or more that need tier two in that system, and sometimes they don't have the resources to provide that level of support, so they have to change the way that they think about their system slightly. So the contextual relevance of RTI, considering it as an approach and focusing on implementation instead of thinking about it as an intervention, is one major misconception.

Susan Lambert:

Yeah. And I often hear people talk about an inverted triangle, but it's not one way or the other; there's a range of where that triangle lands, depending on your context.

Nancy Nelson:

Absolutely. You might have an inverted triangle, your sort of regular healthy triangle, or you might have a square <laugh>, right? A rectangle. Another misconception I hear a lot is that RTI is something that you do or don't do. It kind of stems from this idea that it's an intervention, like it's a dichotomous variable with no shades of gray, when really there are, and it's more of a continuum. There's strong implementation, there might be weaker implementation, or no implementation at all, on any one of the features that sort of characterize and comprise a response to intervention approach. And then one last thing that I've been hearing a lot recently, which has really surprised me actually, is that there are some pockets across the country that associate response to intervention with whole language instruction, and sort of a lack of use of data or evidence to inform practice. And that surprises me so much, I think, because of my training in response to intervention at the University of Oregon, but also just the way that RTI is set up and what we know about what works in education. So, you know, we know that the research is very clear that we wanna focus at minimum on all five big ideas of reading, including comprehension, vocabulary, phonics, et cetera, and that explicit, systematic instruction, not whole language instruction, is really what's best for teaching students, especially those who are struggling to learn to read, all of those skills that they need to know.

Susan Lambert:

Hmm. And are you surprised by that? Is that a new misconception, do you think? Or are you just uncovering that misconception?

Nancy Nelson:

You know, it may be something that other people are aware of, or have been aware of, for a while. It's a new uncovering for me. And I think that's, you know, maybe a testament to how we kind of all work in our own spaces and are less aware of what's happening on the other side. But that was relatively new information for me, and it's something I'd certainly like to dispel.

Susan Lambert:

Yeah, for sure. Why don't we talk a little bit, then, about what a strong implementation looks like? You talked about that continuum a little bit, but if I were, you know, a district leader or even a building administrator, and I was saying, I really want this to be strong, but I need to know what it looks like, what would that look like?

Nancy Nelson:

Yeah. There's a lot of information on that and it's something that we could probably talk about for hours, but in broad strokes, and everyone does this a little bit differently within the response to intervention world, which is potentially part of the problem with communicating it effectively to practitioners, there's a focus on screening and progress monitoring, sort of the assessment aspect of response to intervention. There's a focus on instruction: core instruction, supplementary instruction, and intensive intervention, sort of the instructional and intervention piece of things. And then there's also a focus on what I would call infrastructure supports. These are probably the least understood within RTI, but there's still some consensus around them, and they include things like leadership involvement, professional development and coaching, and the use of data-based decision-making to interpret assessment data and apply that to instruction and intervention. So all of those things really need to be in place in a systematic way in order for the RTI system to be implemented well.

Susan Lambert:

That makes sense. There's a lot of variance there, I would imagine, at a contextual level, so from place to place there are going to be different issues with any particular element of it.

Nancy Nelson:

That's right. Yeah. And across the system, you know, there are principles that should guide the work too. The use of data is one of them, for sure, and similarly, the use of evidence-based practices across each of those areas of the system.

Susan Lambert:

Yeah. And again, we've done a lot of work talking about evidence-based practices as they relate to Science of Reading instruction in the lower grades. What I'd love to do is segue a little bit to talk about the use of assessment and how assessment fits into this model in ways that are supported by the research.

Nancy Nelson:

Yeah, sure. That makes sense.

Susan Lambert:

And you mentioned a little bit about screening instruments. So why don't we start there, with the importance of screening to identify risk?

Nancy Nelson:

Yeah, so this is another thing that I hear a lot in the field that is a bit of a misconception, which is the notion that screening assessment should be sort of the be-all, end-all comprehensive assessment to inform teachers about what their instruction should look like. The way that screening is actually intended to be used, the way that screening instruments have been developed, and the way that they're intended to be used within response to intervention systems is really around screening for risk. The idea, and the way this should be implemented in a response to intervention system, is that screeners should be brief measures used within school settings that hopefully do have some instructional relevance, but are intended to sample skills that are highly predictive of where students will end up, say, at the end of a school year as an example. Doing that allows us to make very efficient decisions at the beginning of a school year in determining the level of support that a student will need to access in order to be successful. And it doesn't mean, you know, that those screeners are a hundred percent accurate, right? There's always a little bit of error in a screener. We want them to be as accurate as possible, but because our response to intervention systems aren't high-stakes systems, and because students will move flexibly across tiers of support, it's okay if the placement isn't perfect from day one of the fall of a school year, because you'll have other data sources that you'll be using between fall and winter, at the winter benchmark time point, for example, that allow you to move students to other tiers of support that they might need. And so I think that piece, understanding the risk value, is really critical. I see complaints in the field a lot about screeners that are fluency-based, where teachers say, well, we're not just teaching fluency. And that's absolutely true. Of course we're not teaching just fluency, but fluency is an indicator of overall proficiency. And that's the part that matters, and that's the reason that we measure it as a function of screening. Letter names is a similar example. In some of the screening measures that are available and used in early reading, like DIBELS or easyCBM or AIMSweb, there is a measure of letter names and students' ability in the early grades, particularly kindergarten or first grade, to name letters. And it's not because we want to take data from that measure and go out and just teach students the letters until they have that down, because we know that that's not actually an effective practice. But letter names is highly predictive. Students' ability to recognize letter names early on in elementary school is highly predictive of their proficiency at the end of the school year and later on in their elementary school reading development. And so by assessing that early on, we have an indicator of whether or not a student is likely to meet grade-level goals without any other support provided, essentially. So we screen those students at the beginning of the school year, place them into tiers of support, and then provide them with corresponding instruction and intervention at the intensity level that we think that student needs, on the basis of those screening data. And we don't just screen for letter names. Obviously we screen for other things that are also more instructionally relevant and are actually associated with the particular content students need to learn. But in a screening paradigm, we're not going to, by definition, screen students for all of the skills that they need to learn by the end of the year. That's a different type of assessment, and that's not what screening is intended to do in general or within an RTI system.
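To make the screening-for-risk logic Nancy describes a little more concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is an assumption for illustration: the function name, the category labels, and the numeric thresholds are hypothetical placeholders, not actual DIBELS (or any other published) benchmark values.

    # Purely illustrative: hypothetical thresholds, not actual DIBELS benchmark scores.
    def classify_screening_score(score: float, benchmark: float, cut_point: float) -> str:
        """Map a brief fall screening score to a risk category and a suggested tier.

        benchmark: score at or above which a student is likely on track for end-of-year goals
        cut_point: score below which a student is at elevated risk
        """
        if score >= benchmark:
            return "at or above benchmark: likely on track (tier 1 core instruction)"
        if score >= cut_point:
            return "below benchmark: some risk (consider tier 2 supplementary support)"
        return "well below benchmark: at risk (consider tier 3 intensive support)"

    # Example: a hypothetical fall letter-naming score of 32 against hypothetical
    # thresholds of 40 (benchmark) and 25 (cut point).
    print(classify_screening_score(32, benchmark=40, cut_point=25))

As Nancy notes, a fall placement like this is not high stakes; winter and spring benchmark data can move a student to a different tier of support.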

Susan Lambert:

And I wanna make a connection back to what you talked about early on in the conversation, that RTI was born out of the healthcare model. A related example would be why I go to the doctor and have my blood pressure taken: I don't go in for all of the comprehensive tests I think I need to have, but we have our blood pressure taken as one indicator of a possible further need for tests. Is that right?

Nancy Nelson:

That's absolutely true. And you see that across medicine, in the way that medicine works, and also, you know, just in other aspects of life, right? We drive cars, and if the battery light goes on, or if the engine light goes on, we'd say, oh, that's an indicator. You might take your car to the shop and find out that nothing is wrong, actually, that it's fine. Or maybe it just needs a tune-up, right? Or maybe it's something much more severe. You find that out through additional diagnostic assessment when there is an indicated problem, but that screening system is built in in order to flag those potential problems, so that you can do either follow-up assessment if it's needed, or at least provide some sort of universal-level treatment or targeted treatment when, and if, it's necessary.

Susan Lambert:

Yeah. That's really helpful to understand the purpose of screeners. And I'm gonna ask one sort of extension question on that. One thing we do know is that across the country, built into state requirements, are screeners to identify risk for dyslexia. Can you talk to us about why that's actually really important?

Nancy Nelson:

Yeah. It's something that I struggled with, honestly, when I was a teacher, because I worked with students with a range of disabilities, and, to be perfectly honest, I was concerned that by focusing so specifically on dyslexia, we would lose sight of the needs of students with other disabilities within a system. And what I've found is actually the opposite. That has a couple of different facets. One is that, by emphasizing dyslexia, for whatever reason, schools, families, the educational system, sort of communities at large are more able to understand why screening in general is important, and screening for risk can then help the system know how well it's functioning generally, what kinds of instructional and intervention supports should be provided for students generally, and then really address more comprehensively the needs of all students who are struggling: students who may end up having dyslexia, students who may have other literacy-related disabilities, but also other students who are probably struggling in the system, who aren't getting any supplementary support and might really only need a little bit of it in order to be back on track. The other thing I see that's more dyslexia specific is that dyslexia is a pretty prevalent learning disability. So, you know, there are a number of children that struggle with it. The prevalence estimates range because of the way that dyslexia is identified; basically we're setting up a cut score and looking at who falls below it, and those students who demonstrate particular skill patterns below that cut score will be identified as having dyslexia in research studies, for example. So those prevalence estimates vary across research studies, but in general, even if we're talking about 5 to 15% of the population having dyslexia, that's a pretty large swath of our school-age children. And again, dyslexia isn't a black-and-white kind of condition; it also exists on a continuum, so there are students with mild forms of dyslexia and students with more severe forms, and those students will probably need different instructional supports within school settings. But so often those students with dyslexia, particularly on the mild end, are able to get through kindergarten, first, maybe even second grade, and kind of fool people into thinking that they can read, because, you know, they've been able to memorize words that they're seeing, or they're relying on the strategies that schools are teaching them for reading, because those schools potentially aren't teaching phonics systematically and explicitly. Those students kind of get through those first several years, and then they get into third grade and demonstrate some pretty significant difficulties with reading fluently and decoding multisyllabic words, which then ends up contributing to, you know, a host of difficulties as those students are going through school. And so that early screening for dyslexia specifically is pretty important.

Susan Lambert:

And I would imagine it impacts not just multisyllabic words, but comprehension, which shows up in the fourth grade, like in NAEP scores. So it all sort of builds on each other.

Nancy Nelson:

That's right. So if we look at the simple view of reading, the simple view says that reading comprehension, which is the ultimate goal of reading, is the product of good decoding skills and good listening comprehension skills. But when students get older, the way that we look at reading comprehension is actually through reading text, so if they can't decode, listening comprehension may be fully intact and present, but it's not going to go very far in getting students who have dyslexia to comprehend text, because they can't decode that text.
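In symbols, the simple view of reading that Nancy references (from Gough and Tunmer's work) is usually written as a product rather than a sum:

    \text{Reading Comprehension} = \text{Decoding} \times \text{Listening (Linguistic) Comprehension}

Because it is a product, if decoding is near zero, reading comprehension is near zero no matter how strong listening comprehension is, which is exactly the point Nancy makes about older students with dyslexia.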

Susan Lambert:

Yeah. Yeah. So these assessment tools, in terms of screening to identify this risk, are really important, but how do we know what kind of assessment we should be using for that? Because it seems like there are all different kinds, including computer-based assessments. Can you talk a little bit about effective assessment tools that might help us identify risk?

Nancy Nelson:

Yeah. So again, it kind of goes back to the criteria that we have built around different types of assessment. For screening, because we're trying to administer these to all students within the school system, and because we want to be able to make comparisons between kids, but also, from a systems level, we wanna be able to take a step back and look at overall system health, how the system is functioning, how a school might be functioning compared to another school or a district, we really want these to be standardized assessments. And we want them to have some sort of norm-referenced score that's available, so we can see how students are falling relative to one another and how schools are falling relative to other schools. Going back to these being administered to all kids, they have to be brief. Even in this age of, you know, modern technology, no one has the time to administer really in-depth assessments to all of our kids for the purpose of screening. So we want these assessments to be brief, standardized, norm-referenced, and we want them to be indicators, again going back to this risk issue. We want them to be indicators and predictors of later reading outcomes. We wanna know, by looking at a screening assessment, whether or not a student is on track, so that we can assign appropriate levels of support.

Susan Lambert:

Yeah. That makes a lot of sense. And I know at the University of Oregon, you've done a lot of work around DIBELS, and recently DIBELS 8, I think, has been released. Can you talk about DIBELS 8, what distinguishes it, and why it is an effective dyslexia risk indicator?

Nancy Nelson:

Yeah. Yeah. So DIBELS 8th Edition has been recently released. It's been out for just over a year, and its release has been built on years and years of research. The first edition of DIBELS was released in the early two thousands, another edition, DIBELS Next, was released about 10 years later, and it's been about 10 years since then. As part of being a research university, but also being, you know, a college of education where we're very focused on practice, we have sort of two goals and aims at the Center on Teaching and Learning, which is where I work and where DIBELS was created and is hosted now. DIBELS, because of the features of the assessment, maps onto what I described about screening assessments and the features that need to be there. We are focused on making sure that research evidence, or sort of the support behind good, high-quality screening instruments, gets into the hands of practitioners. So we're simultaneously always working on how we can make sure that the assessments and tools and interventions that we're developing through our research are based on the best and most current research evidence available, and then also working very hard to try and scale those up, to make them accessible and usable in school settings.
And so there are some features that distinguish DIBELS from other similar screening assessments, like other curriculum-based measures. The big one, truthfully, is the way that the scores have been derived and the cut scores and benchmark scores that are available for the assessment. Many assessments that are used for screening in the curriculum-based field, but also more broadly, provide norm-referenced scores, and they've been validated, and in some cases there are end-of-year assessment data being used to determine some risk, but that's not usually the way that the norm-referenced scores for those assessments have been determined. So a lot of the systems will use percentile ranks only, and talk about how a student falls relative to students nationally, or students locally, based on that percentile rank. So if I'm a student at the 40th percentile, I'm scoring at or above only 40% of students within the system, which means I'm scoring below 60% of students, right? I might be, conceivably, sort of at the bottom of what we would consider to be a grade-level threshold. And then we use those scores, generally, to make decisions about instruction and intervention for students in an RTI system. DIBELS is different than that because we still have those percentile ranks, but we set particular scores based on risk thresholds for meeting an end-of-year goal. The way that DIBELS has been researched, the benchmark scores that we've developed have been based on other criteria, which means that the scores that we use for DIBELS are criterion-referenced. That's not as common with other curriculum-based measures or other screening measures, but it gives us a better indicator of risk. So broadly, when we're thinking about RTI and implementation for all kids, we use those benchmark scores in the fall of a particular year, and we have a relatively high degree of certainty that a student who is scoring at benchmark, so on track for being on grade level at the end of the year, is actually going to be on grade level at the end of the year, based on the research and the studies that have been conducted.

Susan Lambert:

Wow, 20 years. It doesn't seem like it's been that long. And I'm assuming that listeners can get more information on that at the University of Oregon? Where's the best place for us to point? I guess I should ask it that way.

Nancy Nelson:

Yeah. If people want comprehensive information about DIBELS specifically, dibels.uoregon.edu is the DIBELS website, and that's a great place to go to get information. We have a customer support arm that is available sort of around the clock to provide answers to questions by email or by phone. They answer phones and wanna make sure that everyone feels equipped to implement, and use the data from, the assessments that are being administered. Which, I do wanna say one thing, and this may be a little out of order, but another thing that I see as really being a pitfall in the way that schools implement RTI is how infrequently they use the data that they collect. It's sort of a misunderstanding. There's this idea, okay, we have to collect the data, but the forgotten notion is that you're collecting data for a particular purpose. You should always be collecting it for a particular purpose, to answer a particular question, and if you're not using it, it's a waste of everybody's time.

Susan Lambert:

Yeah. That makes so much sense.
And I've seen that across the country, just in my work with schools too: an overreliance on data collection, where the data get swept under the rug and never acted on. Well, it's been a great pleasure. I know we've just skimmed the surface of multiple topics that we could probably follow up with more podcasts about, but we'll link our listeners to the couple of resources that you've given so that they can dig in and do some more work. And as we're wrapping up, I would love to just have you think about the one or two things you'd like our listeners to remember, consider, or think about more as it relates to this idea of assessment and/or intervention.

Nancy Nelson:

Yeah. So one of them we were just sort of touching on: this idea of data and the role of data in education, and how critical it is that we use data in a systematic way to support our implementation of anything. I also hear a lot, working with schools, that there's kind of a perception that, you know, teachers know their students well enough to make decisions about whether they're on track or not, and so screening or the use of data to monitor performance aren't really that important; they're just peripheral. And I agree, definitely, that teachers know their students and have a pretty good idea of whether or not a student is doing well or poorly, but there are research studies that have examined teacher judgment versus student performance and show that teachers aren't actually able to accurately predict the rankings of students. So when it comes to making decisions about who would and should potentially receive supplementary support, it's not going to line up perfectly. Relying on data addresses that issue and also allows us to engage in a more systematic process. That's part of really implementing school systems to meet the needs of all kids, to ensure that kids don't fall through the cracks, which is what has, you know, happened for decades when we don't rely on data in school systems.

Susan Lambert:

Yeah. That's very helpful advice. And I really appreciate you bringing us back to this idea of gathering the data that we can, and actually using all that information for the purposes of really helping students in our classrooms. I think every single educator, whether they're a teacher or play another role within the school system, wouldn't deny that that's what we're here to do. So, thank you so much for your time today, Nancy. It's been very instructive and helpful to highlight interventions and the use of assessment. So yeah, thanks again.

Nancy Nelson:

Thank you, Susan.

Susan Lambert:

Thanks so much for listening to that conversation, which we first released in February of 2020. Check out the show notes for resources from the National Center on Improving Literacy. Let us know what you thought about this episode in our Facebook discussion group, Science of Reading: The Community. Thanks so much for listening.