Science of Reading: The Podcast

S1-10. Myths and misconceptions about universal screening: Nancy Nelson

Amplify Education Season 1 Episode 10

Dr. Nancy Nelson, Research Assistant Professor at the Center on Teaching and Learning at the University of Oregon, discusses myths and misconceptions around RTI, Multi-Tiered Systems of Support (MTSS), and universal screening in reading instruction.

Quotes:

“Relying on data allows us to engage in a systematic process to implement systems to meet the needs of all kids.”

Resources: 

DIBELS® at the University of Oregon

Want to discuss the episode? Join our Facebook group Science of Reading: The Community.

Susan Lambert: What if a change in classroom practice could lead to change in reading outcomes? What should reading instruction include to ensure all students have the opportunity to succeed? What does cognitive science tell us about learning to read, and why aren't those learnings applied in our classrooms? Welcome to Science of Reading: The Podcast. I'm your host, Susan Lambert from Amplify Education. Join us every two weeks as we talk with Science of Reading experts to explore what it takes to transform our classrooms and develop confident and capable readers. In today's episode, we talk with Nancy Nelson from the University of Oregon, co-lead of the National Center on Improving Literacy. If you don't know about NCIL, be sure to check out the link in the show notes. They have so many helpful resources. We talk with Nancy a bit about the Center, and then we turn to Response to Intervention, or RTI. Nancy is an expert in the topic, and we talk about where it originated, what it is, and how it relates to MTSS, or the Multi-Tiered Systems of Support. We also talk about misunderstandings and misconceptions about RTI, what a strong implementation looks like, and really, the importance of assessment. Nancy's practical approach is super-helpful; no matter what your current understanding of the topic, I know you'll learn something new. Nancy, welcome. Thank you so much for joining us on today's episode. 

Nancy Nelson: Thanks for having me. It's great to be here. 

Susan Lambert: Yeah, and you know, we always like to start the podcast off by asking our guests to talk about their journey into early literacy and where you ended up, how you became interested in what you're doing now. Would you like to share a little bit of that with us? 

Nancy Nelson: Sure, yeah. Part of the path is linear, but part of it is not. So, out of college, I was a special education teacher in the Bay Area. I taught middle school and high school—math actually, so not reading-related. And I knew that I wanted to get an advanced degree. I love teaching, but it's very, very demanding on a daily basis. And so, I had interest in going back to graduate school and considered staying where I was already, which was at San Francisco State, where I got my master's degree in special education. And they had a joint Ph.D. program with U.C. Berkeley that I considered pursuing in special education, but wanted to pursue a different related field, which was school psychology. And I'm from Oregon originally, so I'm one of the few—born and raised in Oregon. And I applied to University of Oregon's Ph.D program in school psychology and came back then to Oregon. And coming back to graduate school, I really wanted to look at resiliency in education, and in students, and thinking about what they learn about resiliency, what the practices are, the safeguards that they have built up around them that might protect them, act as protective factors as they move through their own schooling experiences and out into the world. And when I came back to Oregon and was enrolled at the University of Oregon, I kept finding myself attracted more to the academic intervention and assessment side of things—I think based on my special education background. And University of Oregon has a very strong legacy in reading, the Science of Reading, implementation of school systems. And so, even though my focus or sort of my niche in graduate school was a little bit more math-related, I received a very well-rounded education in the Science of Reading and school systems, which school systems have, you know, been a large focus of what I've done in my career to date. 
So I always laugh anytime anyone asks me what my area of focus is or my area of research, because it sounds very broad: It's basically, you know, the implementation of Response to Intervention or Multi-Tiered Systems of Support in reading and mathematics, instruction and assessment. So, I have spread the gamut. But I really got into this field with a concern for students and the recognition that education is one of the few things that students experience in life that has the ability to change their trajectory. So, having access to educational opportunity and being successful in education really opens doors for students and provides opportunity and access to choice. And so I wanted to make sure that my work was aligned with supporting an aim that would allow all students, regardless of their background, to have the same access to those opportunities and choices.

Susan Lambert: Very interesting. And I'm just wondering, with that earlier interest in the idea of resilience, if you see any of that on the fringes of what you're doing now.

Nancy Nelson: Yeah, absolutely. I mean, I see them as being related. And so, you know, especially when we talk about reading instruction, the reading wars—sort of the history around whole-language instruction versus systematic, explicit phonics instruction—I see as being intimately linked to resiliency, in the way that explicit, systematic phonics instruction provides the code; it unlocks the ability for students to engage in a host of opportunities, and in doing so, becomes a protective factor for those students. So, I feel very strongly that students should get access to instruction that's delivered through the use of evidence-based practices, because that's what we know works.

Susan Lambert: Yeah, really important. And we've had a lot of conversations on this podcast about the topic of the Science of Reading. But similarly, starting early and strong with that instruction will help kids. I know you're also involved in the National Center for Improving Literacy. Can you share a little bit about the purpose of that organization, the work, and maybe your role there?

Nancy Nelson: Sure. Yeah. So, I am a co-principal investigator on the National Center on Improving Literacy, which means that I'm part of the leadership team with the other directors of NCIL, as we call it. So that's Hank Fien at the University of Oregon; Sarah Sayko at RMC Research Corporation; and Yaacov Petscher at Florida State University. The Center is particularly interesting, I think, because of its inception and authorization, which happened through the Every Student Succeeds Act. And so it's one of the few special-education-focused centers that lives kind of outside of the Office of Special Education and Rehabilitative Services—in particular, the Office of Special Education Programs, which is who we generally work with for research and technical assistance related to special education. So, in 2016, this center was actually funded for the very first time, and so we're in our fourth year of implementation right now. Our overarching goal is very broadly focused, but still really targeting the needs of students with or at risk for disabilities in the area of literacy, which includes dyslexia. So we're focused on identifying and supporting the use of evidence-based approaches for screening, identification, instruction, and intervention for students in kindergarten through grade 12—with potentially some preK focus and some college focus, as well, around teacher training, for example. We're really focused in the space of supporting students with or at risk for literacy-related disabilities, including dyslexia. And so within the center, in addition to being a co-principal investigator, I lead the professional development and technical assistance strand, which focuses on providing universal, targeted, and intensive technical assistance to a range of educator stakeholders.
So, that includes teachers, but also paraprofessionals, school leaders, coaches, and others that might be engaged in actively providing educational services to our target population: students with or at risk for literacy-related disabilities. So the technical assistance and professional development priority is one of five priorities that we have for the center.

Susan Lambert: Interesting. 

Nancy Nelson: Yeah. The first two are focused on the content, really. So, the first is evidence-based approaches for screening and identification. And the second is evidence-based approaches for instruction and intervention. And then the last three are really focused on dissemination and the ways that we get information out to the range of stakeholders that the National Center is intended to reach. So the third strand, that's led by Sarah Sayko, focuses on parents and families. The fourth strand, as I mentioned, is focused more on educators and providing support to them. And then the fifth strand is more sort of our universal strand, our universal technical assistance strand. That's really our website, social media, and other aspects of information that are intended to provide access to evidence-based tools, primarily.

Susan Lambert: Great. Yeah. And what we'll do is, we will be sure to link listeners in the show notes to the website. But I know there's a wealth of resources and information on that site, pretty comprehensively.

Nancy Nelson: There are, yeah. Our dissemination team, that fifth-priority strand that I mentioned, has done a great job collating existing resources that are available. We don't wanna recreate the wheel in what we're doing, so if there are things that follow the Science of Reading or that have been demonstrated to be rooted in strong, researched evidence, we make those available on our website; we connect with those. And then there are some tools and resources that we've been developing ourselves that the dissemination team has been pushing out. So they have done a great job, and I do encourage people to check out that website and our Facebook and Twitter to get access to that information.

Susan Lambert: Great. We'll link everybody to all of those, so they don't have to go out and find them on their own. You talked a little bit about your work as it relates to RTI or MTSS. And you know what I would love to do is for you to just give a little bit of background on RTI. Where did it come from? Why is it important? I think it's one of those topics that in education, we assume everybody knows what we're talking about when we say it. We don't all know. 

Nancy Nelson: Yeah, definitely. So, RTI really grew out of the public health model, in the late ‘90s and early 2000s. This idea of providing increased intensity of support based on patient or client need really took root in education in the early 2000s in a couple of different ways. And Response to Intervention is interesting because it's been implemented on the general education side of education, but also on the special education side, in two ways that I think are important to distinguish. Response to Intervention was sort of the first terminology on the academic side of Multi-Tiered Systems of Support, with really two overarching goals. And one of those is this idea, like the public health model, of using increasing tiers of support and assessment to determine the level of support students need within the school system. And that's really a general education function, right? Like the idea that we set up a system designed to meet the needs of all students within our system is sort of that idea that underpins that general education side of Response to Intervention. And then in addition to that, or within the system—because we have these increasing tiers of support—we wanna make sure that the system is set up to meet the needs of each and every individual student, also. So, there are students that are getting these intensive interventions, potentially in Tier 3 or of that ilk, that are designed to meet individual student needs within the system. And then another overarching goal of RTI that distinguishes RTI, I think, from some of the other types of Multi-Tiered Systems of Support is this notion that Response to Intervention has been invoked in special education law as a mechanism for determining whether or not a student is eligible for special education services under the category of specific learning disability. And so, those two pieces of RTI, I think, are really important to consider. 
We sort of use Multi-Tiered Systems of Support as an overall umbrella term for systems like RTI; Positive Behavioral Interventions and Supports, which also grew out of the University of Oregon, is another example of Multi-Tiered Systems of Support on the behavior side. And I really appreciate the field's use of MTSS as a way of trying to link some of these systems, and system structures, that are intended to do the same things: to meet the needs of each student and all students, whether it be for academics or behavior or social-emotional supports, because those systems have been historically siloed. And being able to connect under one umbrella term, I think, makes some critical points to educators. And I'll talk more about the Response to Intervention piece, but it sounded like you had a question, too, Susan.

Susan Lambert: Actually, I was gonna make a comment, but now I can't even remember what it was. So that's great. But it's great to distinguish MTSS, though, as sort of the overarching umbrella term. Is that what I'm understanding you to say? Is MTSS kind of the overarching term, with other things that fit underneath that? 

Nancy Nelson: Yeah, yeah. I think a lot of people would say that within MTSS, RTI is the academic arm and PBIS is the behavior arm.

Susan Lambert: That makes sense.

Nancy Nelson: Yeah. And I think that's a very good way of looking at it. From a general perspective, the only way RTI is a little bit unique in that regard is that there is this structure that's set up for special education eligibility determination decisions through RTI, and there isn't really a parallel that I'm aware of on the behavior side, for PBIS, in that kind of decision-making.

Susan Lambert: Wow, that's a really helpful sort of visual, to separate those two. Because I often hear RTI and MTSS used interchangeably—like synonyms—so that's a really helpful distinction.

Nancy Nelson: Yeah. Yeah. 

Susan Lambert: When you talk about RTI … specifically, then, I'm curious to know about … it's super-helpful to hear those big ideas in RTI, but [I’m] kind of curious to know about some common misconceptions within RTI.

Nancy Nelson: Yeah. So there are a few of these that come up a lot. Because this is, you know, the area where I work frequently, these misconceptions are perpetuated and I hear them frequently. So, one of those is that RTI is an intervention. In educational research—or even in educational practice—when we're talking about interventions, interventions are relatively discrete, right? They're things that have usually been packaged in a particular way. They're intended to be delivered in a very specific way. And thinking about RTI as an intervention, instead of an approach—which is really what it is—misses the fact that RTI needs to take into account the local context. And so, there are guiding principles and there are certainly components of RTI that need to be in place for RTI as an approach to be implemented or implemented well. But if it's thought about as an intervention, you miss the contextual variables that should shape how some of those aspects or features of RTI are implemented within a setting. So for example, people ask, you know, a lot … in RTI, we have the triangle, that's the visual image, where all students at the bottom in this sort of green zone, these are the students that are on track for meeting grade-level goals. And then there are students that receive supplementary support above and beyond that sort of Tier 1 universal support on the basis of screening data that indicate they need that additional support. And then students that either are so far behind their peers or maybe haven't responded to that supplementary support, that need something more intensive within that system. And if we don't consider the different ways that students get there, we miss part of the aspect of the system. So people will say, for example, how many students should be in Tier 1? How many should be in Tier 2? And how many should be in Tier 3? And what should we do within each of those tiers? 
And the answer to those questions is really contextually relevant. And so there are guiding principles around what we know a healthy system looks like and what a healthy system should do, which is a system where we would say roughly 80% of the students are on track to meet grade-level goals; about 15% of students need supplementary support; and about 5% need intensive support. But it doesn't always look that way. And in fact, it looks that way very rarely, unfortunately, in educational settings. And so schools have to make decisions, often, about how they're going to serve students in Tier 2 and Tier 3, because often they have, you know, 50% of their students that need Tier 2, or more than that, in that system. And sometimes they don't have the resources to provide that level of support, and so they have to change the way that they think about their system slightly. So the contextual relevance of RTI—thinking about it as an approach and focusing on implementation, instead of thinking about it as an intervention—is one major misconception.

Susan Lambert: Yeah, and I often hear people talk about an inverted triangle, but it's not one way or the other. There are ranges of where that triangle lands, depending on your context.

Nancy Nelson: Absolutely. You might have an inverted triangle, your sort of regular healthy triangle, or you might have a square. A rectangle. Another misconception that I hear a lot is that RTI is something that you do or don't do. And it stems from this idea that it's an intervention; it's a dichotomous variable; you know, there's not shades of gray … when really, there are, and it's more of a continuum. So, there's strong implementation, there might be weaker implementation, or no implementation at all, on any one of the features that sort of characterize and comprise the Response to Intervention approach. And then one last thing that I've been hearing a lot recently—and that has really surprised me, actually—is that there's some pockets across the country that associate Response to Intervention with whole-language instruction, and sort of a lack of use of data or evidence to inform practice. And that surprises me so much, I think, because of my training in Response to Intervention at the University of Oregon. But also just the way that RTI is set up and what we know about what works in education. We know that the research is very clear that we wanna focus, at minimum, on all five big ideas of reading, including comprehension, vocabulary, phonics, et cetera. And that explicit systematic instruction, not whole-language instruction, is really what's best for teaching students, especially those who are struggling to learn to read, all of those skills that they need to know.

Susan Lambert: And are you surprised by that? Is that a new misconception, do you think? Or you're just uncovering that misconception?

Nancy Nelson: It may be something that other people are aware of, or have been aware of, for a while. It's a new uncovering for me. And I think that's maybe a testament to how we kind of all work in our own spaces and are less aware of what's happening on the other side. But yeah, that was relatively new information for me. And it's something I'd certainly like to dispel.

Susan Lambert: Yeah, for sure. Why don't we talk a little bit, then, about what does a strong implementation look like? You talked about that continuum a little bit. But if I were a district leader or even a building administrator, and I was saying, you know, “I really want this to be strong, but I need to know what it looks like,” what would that look like?

Nancy Nelson: Yeah. There's a lot of information on that. And it's something that we could probably talk about for hours. But sort of in broad strokes, there are—and everyone does this a little bit differently within the Response to Intervention world, which potentially is part of the problem of communicating it effectively to practitioners—but there's a focus on screening and progress monitoring, or the assessment aspect of Response to Intervention. There's a focus on instruction—core instruction, supplementary instruction, and intensive intervention—so sort of the instructional intervention piece of things. And then there's also a focus on what I would call infrastructure supports. And these are probably the least understood within RTI, but there's still some consensus around these things, which includes things like leadership involvement, professional development and coaching, and the use of data-based decision-making to interpret assessment data and apply that to instruction and intervention. So all of those things really need to be in place, in a systematic way, in order for the RTI system to be implemented well.

Susan Lambert: That makes sense. A lot of variance there, I would imagine, at a contextual level. So from place to place to place, there's going to be different issues with that particular element of it.

Nancy Nelson: That's right. Yeah. And across the system, you know, there are principles that should guide the work, too. The use of data is one of them, for sure. And similarly, the use of evidence-based practices across each of those areas of the system. 

Susan Lambert: Yeah. And we've done, again, a lot of work talking about evidence-based practice as it relates to Science of Reading instruction in the lower grades. What I'd love to do is to segue a little bit to talk about the use of assessment and how that use of assessment fits into this model, in ways that are supported by the research.

Nancy Nelson: Yeah, sure. That makes sense.

Susan Lambert: And you mentioned a little bit about screening instruments. So why don't we sort of start there, about the importance of screening to identify risk.

Nancy Nelson: Yeah. So this is another thing that I hear a lot in the field, that is a bit of a misconception, which is the notion that screening assessment should be sort of the be-all-end-all comprehensive assessment to inform teachers about what their instruction should look like—versus the way that screening is actually intended to be used. The way that screening instruments have been developed, for example, and the way that they're intended to be used within Response to Intervention systems is really around screening for risk. And the way this should be implemented in a Response to Intervention system is that screeners should be brief measures that are used within school settings that hopefully do have some instructional relevance, but are intended to sample skills that are highly predictive of where students will end up, say, at the end of a school year, as an example. In doing that, that allows us to make very efficient decisions at the beginning of a school year in determining the level of support that a student will need to access in order to be successful. And it doesn't mean, you know, that those screeners are a hundred percent accurate; there's always a little bit of error in a screener. We want them to be as accurate as possible, but because our Response to Intervention systems aren't high-stakes systems, and because students will move flexibly across tiers of support, it's okay if the placement isn't perfect from day one of the fall of a school year. Because you'll have other data sources that you'll be using between fall and winter—at the winter benchmark timepoint, for example—that allow you to move students to other tiers of support that they might need. And so I think that that piece is really critical, understanding the risk value. I see complaints in the field a lot about screeners that are fluency-based, where teachers say, “Well, we're not just teaching fluency.” And that's absolutely true. 
Of course, we're not teaching just fluency, but fluency is an indicator of overall proficiency, and that's the part that matters and that's the reason that we measure it as a function of screening. Letter names is a similar example. In some of the screening measures that are available and used in early reading—like DIBELS or easyCBM or aimsweb—there is a measure of letter names and students' ability in the early grades, particularly kindergarten or first grade, to name letters. And it's not because we want to take data from that measure and go out and just teach students the letters until they have that down, because we know that that's not actually an effective practice. But letter names is highly predictive—students' ability to recognize letter names early on in elementary school is highly predictive of their proficiency at the end of the school year, and later on in their elementary school reading development. And so, by assessing that early on, we have an indicator of whether or not a student is likely to meet grade-level goals without any other support provided, essentially. So we screen those students [at] the beginning of the school year, place them into tiers of support, and then provide them with corresponding instruction and intervention at the intensity level that we think that student needs on the basis of the screening data. And we don't just screen for letter names, obviously. We screen for other things that are also more instructionally relevant and are actually associated with the particular content students need to learn. But in a screening paradigm, we're not going to, by definition, screen students for all of the skills that they need to learn by the end of the year. That's a different type of assessment, and that's not what screening is intended to do—in general or within an RTI system.

Susan Lambert: And I wanna make a connection back to what you talked about early on in the conversation. That really, RTI was born out of the healthcare model. 

Nancy Nelson: Exactly. Yeah. 

Susan Lambert: A related example would be why I go to the doctor and have my blood pressure taken: I don't go in for comprehensive tests—all of the tests that I think I need to have—but I have my blood pressure taken as one indicator of a possible need for further tests. Is that right? 

Nancy Nelson: That's absolutely true. And you see that across medicine, and the way that medicine works, also, just in other aspects of life, right? So, we drive cars and if the battery light goes on, or if the engine light goes on, we'd say, “Oh, that's an indicator.” You might take your car to the shop and find out that nothing is wrong, actually—that it's fine. Or maybe it just needs a tuneup. Or maybe it's something much more severe. You find that out through additional diagnostic assessment, when there is an indicated problem. But that screening system is built in order to flag those potential problems so that you can do either follow-up assessment if it's needed, or at least provide some sort of universal-level treatment, or targeted treatment, when and if it's necessary.

Susan Lambert: That's really helpful, to understand the purpose of screeners. And I'm gonna ask one sort of extension question on that. 

Nancy Nelson: Sure. 

Susan Lambert: One thing we do know is that across the country, built into the state requirements are these screeners to identify risk for dyslexia. Can you talk to us about why that's actually really important?

Nancy Nelson: Yeah, it's something that I struggled with, honestly, when I was a teacher, because I worked with students with a range of disabilities, and to be perfectly honest, I was concerned that by focusing so specifically on dyslexia, we would lose sight of the needs of students with other disabilities within a system. And what I've found is actually the opposite. And that has a couple of different facets. One is that what we see by emphasizing dyslexia—for whatever reason—schools, families, the educational system, communities at large are more able to understand why screening in general is important. And screening for risk, then, can help let the system know how well it's functioning generally. What kinds of instructional intervention supports should be provided for students, generally? And then really address more comprehensively the needs of all students who are struggling: Students who may end up having dyslexia, students who may have other literacy-related disabilities, other students who are probably struggling in the system that aren't getting any supplementary support that might really only need a little bit of it in order to be back on track.

Susan Lambert: Hmm.

Nancy Nelson: The other thing I see that's more dyslexia-specific is that dyslexia is a pretty prevalent learning disability. There are a number of children that struggle with it. The prevalence estimates range because of the way that dyslexia is identified. And so, we're setting a cut score and looking at who falls below that cut score. And those students who demonstrate particular skill patterns below that cut score will be identified as having dyslexia in research studies, for example. And so, those prevalence estimates vary across research studies. But in general, even if we're talking about five to fifteen percent of the population having dyslexia, that's a pretty large swath of our school-based children. You know, dyslexia isn't—again, it's not a black-and-white kind of condition. It also exists on a continuum. And so, there are students with mild forms of dyslexia and students with more severe forms, and those students will probably need different instructional supports within school settings. But so often those students with dyslexia, particularly on the mild end, are able to get through kindergarten, first, maybe even second grade, and kind of fool people into thinking that they can read. Because they've been able to memorize words that they're seeing, or to lean on the strategies schools are teaching those students for reading—because those schools potentially aren't teaching phonics systematically and explicitly. Those students kind of get through those first several years, and then get into third grade and demonstrate some pretty significant difficulties with reading fluently and decoding multisyllabic words, which then ends up contributing to a host of difficulties as those students are going through school. And so that early screening for dyslexia specifically is pretty important.

Susan Lambert: Hmm. And I would imagine then it impacts not just multisyllabic words, but we're talking about an impact to comprehension, which shows up in the fourth-grade NAEP scores, so it's sort of all built on each other.

Nancy Nelson: That's right. So if we look at the Simple View of Reading, the Simple View of Reading says that reading comprehension—which is the ultimate goal of reading—is the product of good decoding skills and good listening comprehension skills. But when students get older, if they can't decode … the way that we look at reading comprehension is actually reading text, and so listening comprehension may be fully intact and present, but it's not going to go very far in getting students who have dyslexia to comprehend text because they can't decode that text.

Susan Lambert: Yeah. Yeah. So, these measures then—or these assessment tools in terms of screening to identify this risk—are really important. But how do we know what kind of assessment we should be using for that? Because it seems like there's all different kinds. And there's computer-based assessments … so, can you talk a little bit about effective assessment tools that might help us, then, to identify risk?

Nancy Nelson: Yeah. So, again, it kind of goes back to the criteria that we have built around different types of assessment. And so for screening, because we're trying to administer these to all students within the school system, and because we want to be able to make comparisons between kids, but also from a systems level, we wanna be able to take a step back and look at overall systems’ health or how the system is functioning, how a school might be functioning compared to another school or a district. We really want these to be standardized assessments. And we want them to have some sort of norm reference score that's available, so we can see how students are falling relative to one another and how schools are falling relative to other schools. Going back to these being administered to all kids, they have to be brief. I mean, no one has the time to administer—even in this nature or this time of modern technology, we don't have the time to administer really in-depth assessments to all of our kids for the purpose of screening. And so, we want these assessments to be brief, standardized, norm-referenced. And we want them to be indicators, again, going back to this risk issue, we want them to be indicators of—and predictors of—later reading outcomes. And so we want to know, by looking at a screening assessment, whether or not a student is on track or not, so that we can assign appropriate levels of support.

Susan Lambert: Yeah. That makes a lot of sense. And I know at the University of Oregon, you've done a lot of work around DIBELS. And recently, DIBELS 8, I think, has been released. Can you talk about DIBELS 8, what distinguishes it, and why it is an effective dyslexia-risk indicator?

Nancy Nelson: Yeah. Yeah. So, DIBELS 8th edition has been recently released. It's been out for just over a year. And its release has really been built on years and years of research. The first edition of DIBELS was released in the 2000s. Another edition, DIBELS Next, was released about 10 years later, and it's been about 10 years since then. And so, as part of being a research university, but also being, you know, a college of education where we're very focused on practice, we have two goals and aims. At the Center on Teaching and Learning, which is where I work, and where DIBELS was created and is hosted now, DIBELS, because of the features of the assessment, maps onto what I described about screening assessments and the features that need to be there. And we are focused on making sure that the research evidence, or the support behind good, high-quality screening instruments, gets them into the hands of practitioners. And so we're simultaneously always working on how we can make sure that the assessments and tools and interventions that we're developing through our research are based on the best and most current research evidence available, and then also working very hard to try and scale those up to make them accessible and usable in school settings. And so some features that distinguish DIBELS from other types of even similar screening assessments—like, other curriculum-based measures—the big one, truthfully, is the way that scores have been derived and the cut scores and the benchmark scores that are available for the assessment. And many assessments that are used for screening in the curriculum-based field, but also more broadly, use norm-referenced scores, and they've been validated, and they show, you know, in some cases they have end-of-year assessment data that are being used to determine some risk. But that's not usually the way that the norm-referenced scores for those assessments have been determined.
So a lot of the systems will use percentile ranks only, and talk about how a student falls relative to students nationally or students locally, based on that percentile rank. So if I'm a student at the 40th percentile, I am scoring at or above only 40% of students within the system, which means I'm scoring below 60% of students, right? So I might be, you know, conceivably sort of at the bottom of what we would consider to be a grade-level threshold. And then we use those scores, generally, to make decisions about instruction and intervention for students in an RTI system. DIBELS is different from that, because we still have those percentile ranks, but we have set particular scores based on risk thresholds for meeting an end-of-year goal. And so the way that DIBELS has been researched is that we're actually using—and the benchmark scores that we've developed have been based on—other criteria, which means that the scores that we use for DIBELS are criterion-referenced, and that's not as common with other curriculum-based measures or other screening measures. But that gives us a better indicator of risk. So, broadly, when we're thinking about RTI and implementation for all kids, we use those benchmark scores in the fall of a particular year, and we have a relatively high degree of certainty that a student who is scoring at benchmark—so, on track for being on grade level at the end of the year—is actually going to be on grade level at the end of the year, based on the research and the studies that have been conducted.
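[The distinction Nancy draws between norm-referenced percentile ranks and criterion-referenced benchmark cut scores can be sketched in a few lines of code. Everything below is invented for illustration: the student names, scores, the "flag the bottom 40%" rule, and the cut score of 45 are hypothetical, not actual DIBELS benchmark values.]

```python
# Hypothetical fall screening scores for five students (invented data).
scores = {"Ava": 52, "Ben": 34, "Cal": 61, "Dee": 40, "Eli": 28}

def percentile_flags(scores, bottom_fraction=0.4):
    """Norm-referenced: rank students against each other and flag the
    lowest-scoring fraction, regardless of any external goal."""
    ranked = sorted(scores, key=scores.get)  # lowest score first
    n_flagged = int(len(ranked) * bottom_fraction)
    return set(ranked[:n_flagged])

def benchmark_flags(scores, cut_score=45):
    """Criterion-referenced: compare each student to a fixed cut score
    tied to an end-of-year goal, regardless of how peers perform."""
    return {name for name, score in scores.items() if score < cut_score}

# The two approaches can flag different students: a norm-referenced rule
# always flags the same proportion of the group, while a criterion-referenced
# cut score flags everyone at risk of missing the goal, however many that is.
at_risk_by_rank = percentile_flags(scores)
at_risk_by_benchmark = benchmark_flags(scores)
```

[Note the design difference this illustrates: if a whole classroom scored below the hypothetical cut score, the benchmark rule would flag all of them, while the percentile rule would still flag only the bottom 40%.]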

Susan Lambert: Wow. And 20 years, it doesn't seem like it's been that long!

Nancy Nelson: I know.

Susan Lambert: And more information about … I'm assuming that listeners can get more information on that at the University of Oregon … where's the best place for us to point? I guess I should ask it that way.

Nancy Nelson: Yeah. If people want comprehensive information about DIBELS, specifically, DIBELS.uoregon.edu is the DIBELS website. And that's a great place to go to get information. We have a customer support arm that is available sort of around the clock to provide answers to questions by email or by phone. They answer phones and wanna make sure that everyone feels equipped to implement, and use, the data from the assessments that are being administered. Which I do wanna say one thing, and this may be a little out of order—

Susan Lambert: No! Good! 

Nancy Nelson: But another thing that I see as really being a pitfall to the way that schools implement RTI is how infrequently they use the data that they collect. And so, it's sort of a misunderstanding. There's sort of an idea, "Okay, we have to collect the data," but a forgotten notion that you should always be collecting data for a particular purpose, to answer a particular question. And if you're not using it, it's a waste of everybody's time.

Susan Lambert: Yeah, that makes so much sense. And I've seen that across the country just in my work with schools, too: the overreliance on data collection, and then it gets swept under the rug and it's not acted on.

Nancy Nelson: Yeah. Yeah. 

Susan Lambert: Well, it's been a great pleasure. I know we've just skimmed the surface of multiple topics that we could probably follow up with more podcasts about. But we'll link our listeners to those couple of resources that you've given, so that they can dig in and do some more work. And as we're wrapping up, I would love to just have you think about the one or two things you'd like our listeners to either remember, consider, or think about more, as it relates to this idea of assessment and/or intervention?

Nancy Nelson: Yeah. So, one of them we were just sort of touching on—it's this idea of data and the role of data in education, and how critical it is that we use data in a systematic way to support our implementation of anything. I also hear a lot, working with schools, that there's kind of a perception that teachers know their students well enough to make decisions about whether they're on track or not, and so screening or the use of data to monitor performance aren't really that important. They're sort of peripheral. And you know, I agree, definitely, that teachers know their students and have a pretty good idea of whether or not a student is doing well or poorly. But there are research studies showing that when we compare teacher judgment with student performance, teachers aren't actually able to accurately predict the rankings of students. And so when making decisions about who should potentially receive supplementary support, it's not going to line up perfectly. And so, relying on data addresses that issue, and also allows us to engage in a more systematic process that's part of really implementing school systems to meet the needs of all kids, to ensure that kids don't fall through the cracks. Which is what has, you know, happened for decades when we don't rely on data in school systems.

Susan Lambert: Yeah. That's very helpful advice. I really appreciate you bringing us back to this idea of gathering the things that we can and actually using all that information for the purpose of really helping students in our classrooms. And I think every single educator, whether a teacher or in another role within the school system, would agree that that's what we're here to do. So.

Nancy Nelson: Exactly. 

Susan Lambert: Thank you. Thank you so much for your time today, Nancy. It's been very instructive and helpful to highlight interventions and the use of assessment. So yeah, thanks again. 

Nancy Nelson: Thank you, Susan.

Susan Lambert: We are so grateful to our amazing guests today, and to all of you for making a difference in the lives of students every single day. Be sure to check the show notes for resource links from today's podcast. And we want to hear your stories and successes. Follow us on Facebook, at Science of Reading: The Community or send an email to SoRmatters@amplify.com. Tell us what guests you think we should book or tell us about the research that really excites you, and be sure to hit the subscribe button on your favorite podcast app so you don't miss an episode. Until next time, I'm Susan Lambert from Amplify Education.