Science of Reading: The Podcast

S6 E2: NAEP: What you've always wanted to know with Chester Finn, Jr.

September 21, 2022 Amplify Education Season 6 Episode 2

In this episode, we dive deep into the National Assessment of Educational Progress (NAEP), also known as the Nation's Report Card. Chester Finn, Jr., author of the new book Assessing the Nation's Report Card: Challenges and Choices for NAEP, joins Susan to talk about the NAEP assessment. They discuss how the assessment works, what it is and isn't, and what benefits and opportunities it provides as the achievement gap continues to grow.

Show Notes:



Quotes:

“For this to work, we need both great teachers and great curricula.” —Chester Finn

“The single most important thing NAEP cannot do [is that] it cannot in any definitive way explain why scores are what they are or are rising or falling.” —Chester Finn

Episode Content Timestamps*

1:00: Introduction: Who is Chester E. Finn Jr.?
2:00: The History of NAEP
9:00: What is NAEP and how does it work?
16:00: Long-term trend assessment
23:00: NAEP and achievement gaps
26:00: Next steps with NAEP
29:00: State-level impact of NAEP results
31:00: Why isn't education more front and center in policy today?
34:00: Level of concern and literacy prognosis
37:00: Limitations and opportunities around NAEP
40:00: What does "It's all about the students" mean to Chester Finn?

*Timestamps are approximate, rounded to nearest minute





Susan Lambert:

This is Susan Lambert, and welcome to Science of Reading: The Podcast. In just the past few weeks, you may have heard about the results of a recent study.

News clip 1:

A new federal study shows the setbacks the pandemic dealt kids.

News clip 2:

Now new national data just released ...

News clip 3:

But a new study is showing just how much...

News clip 4:

A new assessment finds math and reading scores ...

News clip 5:

The results come from assessments given to nine-year-olds.

Susan Lambert:

This federal data that's being talked about is part of the National Assessment of Educational Progress, better known as NAEP or the Nation's Report Card. NAEP data is the best we have for measuring national education trends in the United States. And on this episode, instead of delving into the latest results, we're taking a deeper look at the test itself, what it is and why it's so important for educators to know about it. To help me understand NAEP, I spoke with Chester E. Finn, Jr., author of a new book called Assessing the Nation's Report Card: Challenges and Choices for NAEP. Chester Finn, who goes by the nickname Checker, served as an assistant secretary in the U.S. Department of Education, and he shared plenty of wisdom about what NAEP data does and doesn't tell us and how it can be improved. Here's my conversation with Checker Finn. Checker Finn, thank you for joining us on today's episode.

Chester Finn, Jr.:

It's a pleasure.

Susan Lambert:

So, we are here to talk about your book, Assessing the Nation's Report Card, but before we dive into that, I really want you to talk a little bit about your work, and maybe a little bit about how you came to know NAEP so deeply.

Chester Finn, Jr.:

Boy, NAEP has been a part of my life for about half a century, maybe a little bit more. I was a junior staff member in the Nixon White House in 1969, when I heard about NAEP for the first time. And I believe that was the first year that it was testing anybody anywhere. It was brand new in the 1960s, and the federal government was paying for part of it. The foundations were paying for part of it. And the new head of the Education Commission of the States, ECS, came in to see me to tell me about this thing that ECS was running for the government, called the National Assessment of Educational Progress. Later, we called it the Nation's Report Card, but at the time it had no such nickname. And I think the total federal contribution at that time was $2 million a year, something like that. And we're into several hundred million today, by the way. And I think Wendell Pierce was his name. He had been the superintendent of schools in Cincinnati. Anyway, I think he was mostly coming to tell me that the federal support was not sufficient. And I'm not sure I did anything about it. But in any case, NAEP was launched. And then it reappeared in my life in a big way in the mid-eighties, when I was assistant secretary of education for research, et cetera, and that included being responsible for all the statistics stuff in the federal government, education statistics that is, and that included NAEP. Bill Bennett was the secretary, and the governors of a bunch of states, in the aftermath of A Nation at Risk, were yammering for better data on student learning. And this led to a considerable demand to reboot NAEP into something that could help state leaders. 'Cause historically, up until that point in the '80s, the states had wanted no part of it, which is to say they didn't want any data on their states. They didn't want to be able to be compared. So, NAEP, until 1988, was not allowed to produce state-level data. And this was a great political compromise dating all the way back to the anxiety of the education establishment, when NAEP got started, that no comparisons should be able to be made. And so they weren't, until after A Nation at Risk. So all these governors, including my friend Lamar Alexander, the governor of Tennessee, wanted to know, how's their state doing compared to other states, compared to the country, compared to how it used to be doing? So Bennett, with my help, I will say, put together a big kind of commission on the future of NAEP. And it came in with a big report, and then the Congress, Senator Kennedy working with the Reagan administration--those days you could do these things--rewrote the NAEP law to allow for independent governance through a governing board, to allow for state-level data, and to allow for essentially the setting of standards within NAEP. And then, as we're walking out the door of the Department of Education and Bennett has just appointed the brand new governing board, he asked me if I would be chairman of it. So I said yes. And so for the first few years of the National Assessment Governing Board, meaning basically sort of '88 to '92 I think, I was chairman. And this was a formative time for all this state stuff and all this standard setting and so on. And then I've been a kibitzer on NAEP ever since. And then wrote this book about it. So anyway, 50 years, goes back to my childhood.

Susan Lambert:

<laugh> So I'm pretty sure that when you were young, you thought, when I grow up, I just wanna be an expert in the national assessment. Right?

Chester Finn, Jr.:

That's it. That was my goal in life. <laugh>

Susan Lambert:

And for those people that are listening--we have a lot of teachers that are listeners, for sure, other audience members too--I'm thinking, they probably don't remember the '60s, probably don't even remember the '80s, so we're going way back in time for a lot of our listeners.

Chester Finn, Jr.:

Well, we are, because the national assessment, which I continue to call the most important test you've probably never heard of, and I suspect that's true of a lot of your listeners, did start in the '60s, when the U.S. Commissioner of Education, we didn't have a department in those days, the biggest job was commissioner of education. His name was Francis Keppel; he was the former dean of the Harvard ed school. And he suddenly discovered that his office of education had a lot of data about quantity stuff in education, how many teachers, how many schools, how many kids, but it didn't have any information on whether anyone was learning anything. So he said, can we do anything about that? And that was, I believe, '63 or so, when he asked a prominent psychologist at Stanford if there was any way to do this. And that was the birth of NAEP.

Susan Lambert:

Hmm. That's very fascinating. And I know in this book, so again, for our listeners, the name of the book is Assessing the Nation's Report Card, and the subtitle is Challenges and Choices for NAEP, which I think you say in here, NAEP rhymes with ... it's not ape, but you use a rhyming word in here to help everybody.

Chester Finn, Jr.:

Tape, cape ...

Susan Lambert:

Tape. That's the one you use. NAEP rhymes with tape.

Chester Finn, Jr.:

Right. N-A-E-P, National Assessment of Educational Progress. At some point, Lamar Alexander nicknamed it the Nation's Report Card. And that's not a bad way to think about it, 'cause it really is a periodic report card on whether K-12 students in America, specifically grades four, eight, and 12, are learning what they should, or more than they used to, in core subjects. And up to 10 now, up to 10 subjects.

Susan Lambert:

In the work that I do, I often reference, well, it's reading, obviously. I often reference the fourth and eighth grade NAEP reading scores to the audiences that I speak to. And I often wonder, like what you said, if they really understand what it is or even care about it. Because usually we slide past those slides, if you will. Sometimes I call it the elephant in the room, right? Like everybody talks about the NAEP, at least in these reading presentations. And I love this quote that you say in the book: "It's striking how few people really understand what the national assessment is and isn't, what it can and cannot do, how it works, why it's important to American education, why it gets so much attention, and yet why in many ways it's peripheral and ignorable."

Chester Finn, Jr.:

Yep. All true.

Susan Lambert:

All true. And, I mean your entire book is about that. The first part is a lot about the history, but the entire thing is all about the NAEP. And anyway, I'm gonna quit talking in a second, but people that do know about it, it's sort of like, I think you call it furniture. I call it the elephant in the room. We know it's there, but we don't really wanna talk about it. We don't know what to do about it. So I would love if you would just talk about what it actually is and how it works, because I learned a lot by reading your book about the NAEP.

Chester Finn, Jr.:

Well thank you for that. And of course, everybody should read the book, but I also am glad I don't support myself and my family from book royalties.

Susan Lambert:

We'll link our listeners in the show notes to this. So I'm sure they'll all order it just for you.

Chester Finn, Jr.:

Well, the Harvard Education Press will be grateful. And so will I. The reason most people aren't connected to NAEP is 'cause it doesn't tell you anything about your child, your classroom, your school, or in almost every case, your district. For the most part, the smallest unit that it tells you anything about is your state. So what it is is a sample-based test, not a census-based test like the state assessments are, and it's given to the whole country. And it's given in as many as 10 subjects, though the ones everybody focuses on are the reading and math scores, because Congress has required that those be administered every two years in grades four and eight. And therefore, that's the most frequent data we get. But we also periodically, and it's an irregular schedule, but we periodically get data from NAEP, also for 12th grade incidentally, in history and geography and science and even the arts. But again, it's a sample-based test, and it's a very carefully drawn sample. So though it doesn't tell you anything about your school or your district, for the units that it does tell you about--which are states and the country as a whole--it tells you quite a lot. It tells you about subgroups of kids. It tells you about boy versus girl, black versus white, urban versus rural, to a degree rich versus poor. And it tells you, over time, how the scores in 2022, recently administered, are gonna compare with the scores in 2018 or 2008 and so on. So it follows trends. And one of the most important things it does, like a temperature chart on your hospital bed, is it tells you whether your temperature's going up or down. It tells you whether the reading score is going up or down in fourth grade in Tennessee, or in eighth grade in Ohio, or for the country as a whole, or for Hispanic American kids, or for Native Americans, or for girls, and so on. And so, though it doesn't say much to a principal or a teacher or a parent, it tells a governor or a state superintendent a lot about what's going on or isn't going on in their state. And it certainly tells the U.S. Secretary of Education and the Senate education chairman what's going on in the country as a whole in terms of outcomes. And the great frustration is that while it can tell you that reading scores are going up or down, or history, et cetera, it can't tell you why. It is not an experimental design. It cannot answer the why questions. It can give you all sorts of correlations with other things that might be occurring and with other attributes of kids and so on, but it cannot explain why something is getting better or worse. That's the big deal. Yep.

Susan Lambert:

Yep. I think it's really interesting how it's administered, too. So one student doesn't take an entire test from start to finish. Is that right?

Chester Finn, Jr.:

Correct. Basically, four kids add up to one entire student taking the whole test, and the reason is they wanna minimize testing time. So kids don't spend more than an hour or so, and it's sort of a hundred kids in a school, and the school is part of the sample. And most kids only encounter it once, if they ever encounter it at all. And, in order to sample a large enough part of the domain, i.e., enough elements of, let's say, reading, you need to ask a lot of questions in order to get across a bunch of things about skills and knowledge and understanding and so on. If you did that all with one kid, you'd be strapping that fourth grader to either a book or, these days, a piece of technology, for hour after hour after hour, to answer all those questions. So by essentially dividing up the test, and each kid taking essentially a quarter of it, you're able to minimize testing time and yet cover a lot of the domain of the subject.

Susan Lambert:

Hmm. I found that fascinating. That was one thing that I learned, that a kid doesn't sit down and take it from start to finish, that they only have to take sort of a section of it. How is it decided what schools, what districts get the opportunity to take the assessment?

Chester Finn, Jr.:

Or are talked into it, because sometimes they have to be persuaded, precisely because they don't get any direct, immediate value from participating. But the sample, let's say it this way: NAEP is paid for by the federal government and is overseen and managed by the National Center for Education Statistics, part of the U.S. Department of Ed. However, it is administered, and all the heavy lifting is done, by private contractors. For a long time now, the biggest contractor has been ETS, the Educational Testing Service, the same people that give you SATs, for example. But there's another company involved called WestEd, and it is a specialist in sample design. So it's WestEd that figures out, essentially, which kinds of kids at what grade level, in which schools, in which states, need to participate in order for the sample to work out for the state and the country. And they spend a lot of time with this. And incidentally, you never get 100% participation, but I think 85% is the minimum threshold for validity of the data. So schools often have to be talked into this, but they were picked sort of almost anonymously by this sample design firm that is paid by the government to do this.

Susan Lambert:

Hmm. So it's not like, congratulations, you've been selected! And schools are like, yes! We can't wait!

Chester Finn, Jr.:

You try to reward the schools and the kids in all sorts of ways by making 'em feel good about participating in a national thing. It's their patriotic duty after all. But they don't, as I said, get any real information about themselves from participating in this.

Susan Lambert:

Yeah. I think that's a really important point because, well, you know, testing fatigue is a thing. And at least if we could give some feedback to the kids or to the parents or to the schools on how they're doing, that gives some sort of motivation.

Chester Finn, Jr.:

That's why one of the recommendations in the book is for what I call a retail version of NAEP, that teachers could give to their own class or parents to their own child and get an equivalent of some kind.

Susan Lambert:

Wow. That's interesting.

Chester Finn, Jr.:

It wouldn't satisfy all of the metrics of ... fancy psychometrics, okay? But you would get a rough sense of how your kid or your class is doing compared to, let's say, the country or the state.

Susan Lambert:

Hmm. Interesting. Another thing that's in here that I had no idea about was this thing called the Long-Term Trend assessment, the LTT. I think you called it a sidecar. What exactly is that little sidecar?

Chester Finn, Jr.:

Well, as we talk, we're just days away from the release of some new data from the long-term trend part of the assessment. It's gonna be our first before-and-after-COVID national data on nine-year-olds' reading and math. Anyway, there are two NAEPs, to make it overcomplicated for your poor listeners. What's called main NAEP is the one I've been talking about so far: grades four, eight, and 12, state level as well as national, and so on. All that came after 1988. Long before that, going back to the end of the '60s, this national test that was not allowed to test at the state level was giving its tests to kids not by grade level but by age group, ages nine, 13, and 17, because that was the original design. Okay? And so they've kept it going, so that you've got essentially 50 years of data on nine-year-old and 13-year-old reading, and so on. It's the long-term trend, but it's only national. It doesn't have all the bells and whistles of all the subgroups. It doesn't have nearly as broad a portion of the domain being tested. And then the big objection to it is that essentially they're still administering the same test that they were administering in the early 1970s. In order to keep the trend going, they basically give the same test again. It's not that anybody's gaming the test or trying to teach to the test, it's that the subjects have evolved over these decades. And so a lot of people grump that to be asking the same kinds of questions in 2022 that you were asking in 1972 is, in pedagogical and curricular terms, kind of archaic. But that's the only way to keep the trend alive. And, anyway, what the governing board decided to do was reprogram some money so that a group of nine-year-olds could be given the long-term trend assessment in the spring of '21 and compare it with the spring of '19, and get a before-and-after-COVID picture for the first time. We don't have any national before-and-after-COVID data, and this will be the first.

Susan Lambert:

Right. Hmm. Well, we're talking in late August, and this episode isn't going to be released until September. So for our listeners, what we'll do is we'll link our listeners in the show notes to that data that you say is supposed to be coming out any day now.

Chester Finn, Jr.:

Perhaps they will have seen it in a headline along the way.

Susan Lambert:

Yeah. So if we're looking for the headline along the way, will it be released as NAEP data? Or will it be called something else?

Chester Finn, Jr.:

No, it's called NAEP data, and the reporters will be confused because there are these two NAEPs <laugh>, but it will be NAEP data. It will be from the long-term trend, known as LTT, part of NAEP. And it will be nine-year-olds, and that's all that's coming out in the immediate term.

Susan Lambert:

Got it. Got it. That is really helpful because that's another thing that I learned today, Checker, so thank you for that learning.

Chester Finn, Jr.:

On a very narrow set of topics, I know quite a lot. <laugh>

Susan Lambert:

Well, that's why you're here and we appreciate it. So, so far, what we know about the NAEP, and not the LTT that we're talking about, but the plain old NAEP that I usually talk about ...

Chester Finn, Jr.:

It's called main NAEP, by the way.

Susan Lambert:

Okay, the main NAEP. We know that it's given across the country and at the state level. So we know that. We know that a student doesn't take the entire test. I think you also mentioned that just recently there are accommodations now. So we have special ed students that are on IEPs. Is that right? Are special needs accommodations included? How long have they been included in the data?

Chester Finn, Jr.:

Since the mid-'90s, when the U.S. Education Secretary, during the Clinton administration, persuaded the governing board that even though there was some risk to the trend data by adding this population of kids, it was necessary, for equity, to add this population of kids. And so they do make accommodations. Now, there's a limited percentage of severely disabled kids who are legitimately excluded by the states, I think 1%, 2% total, but there is a limit on that.

Susan Lambert:

Mm, okay. So we know that. We also know that we can get information about trends. Is there anything else that we missed in terms of what we wanna say about what it is?

Chester Finn, Jr.:

One other element: since No Child Left Behind came along in 2002, which is to say for 20 years now, two things. One is, Congress, as I mentioned, has required that this be done every two years in reading and math in grades four and eight. So that's obligatory and it's regular. But the other thing that happened then is that a group of big-city school systems asked if they could please be included. So it's known as TUDA, the Trial Urban District Assessment, I think. And last I looked, 27 large cities were participating voluntarily, but at federal expense, in the same two-year cycle as the states are participating in. And so you can now also compare Baltimore with Austin with Cleveland and so on. And you can see whether your city is getting better or worse, at least in reading and math, on this two-year cycle.

Susan Lambert:

Hmm. That's interesting. So not just states, but several large cities now.

Chester Finn, Jr.:

There's a limited number, limited mostly by budget, frankly, 'cause a lot more would like to participate. And it's only urban, so big suburban districts, like, for example, Montgomery County, Maryland, where I live, are not participating, even though their enrollment is larger than that of some of the cities that are participating.

Susan Lambert:

Hmm. Interesting. So you did talk about some things that the NAEP can do in terms of helping us understand trend lines and disaggregating populations to look at what that looks like. Anything else that you'd like to mention that the NAEP can do for us?

Chester Finn, Jr.:

Yeah. It's our single best instrument on achievement gaps. One of the major goals of, let's say, No Child Left Behind, and then the Every Student Succeeds Act, isn't just to raise achievement. It's also to reduce gaps between groups. And our best way of knowing whether that's happening, at least at the national level and at the interstate level, and in these cities I mentioned, is through the NAEP data. So the subgroup reports have become a crucial way, for equity purposes, of knowing whether those gaps that worry us for good reason in the country are getting worse or beginning to narrow, and also for high and low achievers. So one of the ways you can slice and dice the NAEP data is you can see, for example, at the 10th percentile level ... is the 10th percentile, which is to say low-scoring kids, getting better or worse? Is the 90th percentile getting better or worse? So one of the alarming developments over recent years in NAEP is that the 90th percentile scores are rising, but the 10th percentile scores are flat, which is to say that this kind of gap is actually getting worse between, essentially, high achievers and low achievers. And that's quite troubling. You can also tell from the NAEP data who's achieving these three benchmarks that the governing board set, known as basic, proficient, and advanced. So these are three different ways of essentially assessing the adequacy of student learning. These are really benchmarks that have been set, and the middle one, called proficient, is, I call it, aspirational. It's a "should" level. The governing board has said, this is the level that kids should be reading at, and how many of them are? And the answer usually is about 40% are achieving at the proficient level and maybe 8% are achieving at the advanced level. And then also, you can sometimes interconnect NAEP data with international data and say, how's the U.S. comparing with other countries, occasionally even how states are comparing with other countries. It's really very interesting, the number of things you can do here. And then you're gonna ask me about what you cannot do <laugh>, and that's important also.

Susan Lambert:

<laugh> You can go ahead with that before I ask my next questions. <Laugh>

Chester Finn, Jr.:

Well, I already mentioned the single most important thing that NAEP cannot do, which is that it cannot in any definitive way explain why scores are what they are, or are rising or falling. For that you really need a true experimental design with a control group, and so on. NAEP isn't that. It's a look at the whole country, but it can't explain why, for instance, 12th grade scores have been flat for a very long time in the U.S. while fourth grade scores, in math especially, have been rising. And we don't know why. We can speculate. I think it's 'cause the high schools have been neglected by all the reform efforts, but that's my theory. It's not something NAEP is proving to me.

Susan Lambert:

Mm-hmm <affirmative>. Interesting. So we talked about what it can't do. Let's get back to being positive, Checker. Let's go back to what it can do. So, what difference does it make? If you say we can slice and dice and we can see all these things, what does that mean for next steps? What do we do with that? I mean, it's fun for people that like to slice and dice, but what's next?

Chester Finn, Jr.:

No, no. It's certainly important for researchers and analysts and policymakers. It's a hugely important kind of diagnostic tool, equivalent to a CAT scan or a blood pressure device or a thermometer. There are all kinds of medical analogies here. And I take for granted that the more leaders and policymakers know about who is and isn't learning and getting better or worse in core subjects, the more intelligently they'll be able to formulate policy going forward and practice going forward. And you can get into some important nuances within, for instance, the reading assessment, which I know you're especially interested in. They have what are called sub-scores, different aspects of reading, different elements of reading, let's say decoding versus comprehension. And because the kids' responses are not just multiple choice--they also include free response, that is, not just fill-in-the-blank, but little essays that kids write and so on--you can learn a great deal, even within the subject, about what's being learned by large numbers of kids, or isn't, and by groups of kids, or isn't. So, you could look at, for example, white kids in the fourth grade, in the sub-score having to do with decoding versus comprehension. And then you can look at the black kids. You could also look at those who are in the 90th percentile, and so on. Anyway, my point is, if you're a policymaker or a state superintendent, or in some cases a district superintendent, you can say, well, look at this: we are seeing that this goal of the country or a state is not being achieved. We can see that these kids are making progress toward this goal. We can see that this subject is improving, but this one isn't. What does that tell us about our curriculum, our standards, our teachers? You can also look at your own state standards and state assessments and compare them with NAEP standards and NAEP assessments. And in this sense, NAEP functions as a kind of an auditor or truth squad for the state. It can tell you whether your state assessment, and the way it's being analyzed and reported to you, is ... I'm gonna say telling the truth. Because we've had incidents over the years where a state will be saying that 70% of its eighth graders are proficient readers, while in the same state, NAEP says 37% of them are proficient readers. So what do we conclude from that? Well, I conclude the state's expectations are too low and that the state is misleading its population as to how well the kids in that state are reading. So there's an auditor function here that I think is important.

Susan Lambert:

Yeah. That is interesting. In all of your work, and I'm putting you on the spot a little bit, I apologize. Or maybe I don't apologize, I don't know. But in all of your work, can you think of an example of a state that has actually looked at this NAEP data and said, you know what, we need to do something in terms of change at the policy level to really move our kids along?

Chester Finn, Jr.:

Yeah. There have been times when, especially this whistleblower or truth squad or auditor function, yeah, has caused the governor of a state to say, hey, why is there this big discrepancy between how NAEP says our kids are doing and how our state superintendent has been saying our kids are doing? We better look into this discrepancy. What's wrong here? Something is not right in this picture. And so that can mean a change in state standards, in state assessments, in state accountability systems. It could also mean a personnel change is coming at the high level in that state. So, yeah. I can also think of times <laugh> when people have run for office on the basis of their state's NAEP scores, or at least, let's say, cited their state's rising scores for minority kids as a justification for "reelect me." Now, usually that's unwarranted. Usually the person making that claim didn't do it, but nevertheless, they can say things are looking better here in X state, and I've been in office for those four years. So that's a little bit of mischief, but it's also a way of having information about whether things really are getting better or worse in your state or your country or your subgroup.

Susan Lambert:

Hmm. That's fascinating. And, you know, while you were talking, I was thinking, hmm, why isn't education, and maybe this is a little rhetorical, I'm not sure, but why isn't education more front and center in policy right now? Why aren't we talking more about what kids need? I mean, I look at the reading scores, right? And I'm horrified at the percentage of kids that are below proficient when we know we can do better. What's your speculation on that?

Chester Finn, Jr.:

Oh God. Well, first of all, it used to be more front and center than it is today. In the '80s, after A Nation at Risk, and right up through the '90s, the National Governors Association and the president, in that case Bush, and then Clinton, and then another Bush, were all focused on education reform, and it was a big deal. And the first Bush declared he wanted to be the education president. It's the first time in American history that anybody ever said that. And why, in 2022, has it gone down the list? Not so much for the public, but with the politicians and the chattering classes. I think it's 'cause we've allowed ourselves to get distracted by other things that are more immediate, in our face. The education problem is an enduring and challenging problem. It's much more exciting to talk about who's getting even with whom, those kinds of issues. And we've also got some other big issues that have taken the attention. Climate, just for example, or healthcare, or, immediately, abortion. So this kind of enduring issue of educational achievement and gaps between groups has kind of taken a back seat.

Susan Lambert:

Hmm. It's sort of like the weird hum that happens in the background that, over time, you just sort of don't even recognize it's there anymore, but it's still there.

Chester Finn, Jr.:

It is still there. And it's also become a little discouraging, I have to also say, because we've had an awful lot of well-intended and even disruptive education reform efforts over the last couple decades, and people are not satisfied with the outcomes. And NAEP is one of the ways we know that the outcomes are not satisfactory by the way. And so it's a little discouraging to keep trying these reforms and not see the payoff and maybe people have gotten weary of this problem.

Susan Lambert:

It's a good segue to a question I really wanna ask you. And it's a little personal, because I do talk a lot about the fourth grade reading scores, the NAEP scores, and they've been flat for a really long time. And I wonder, should we be concerned about that flat line, and what changes would have to happen to make that trajectory change, and how fast? So if I do something tomorrow, how quickly is that gonna show up on the NAEP, either for positive or negative?

Chester Finn, Jr.:

Well, it's not gonna show up fast. The NAEP is a slow-moving instrument; even changing what's on the test takes five or six or seven years to go through a major overhaul of what kinds of questions are being asked in reading. It's a slow-moving thing. And the fluctuations over these two-year periods, especially, that Congress has mandated, are not usually very large. Over eight years, 10 years, they're much more significant, I think. You're right. Reading has been flat and math's gone up some. And if reading were flat at a satisfactory level, I wouldn't be upset, but reading is flat at way too low a level. And for a lot of kids, and especially some minority groups and poor kids, reading is flat at way too low a level. Which means those kids are not getting the opportunity they need to succeed in America today. I mean, it's almost as simple as that. So yeah, we should be pretty upset about it. If you're not reading well in fourth grade, you already know this, all your listeners already know this, if you're not reading well in fourth grade, you're not gonna be doing well in anything in eighth grade. And you're not gonna be very likely learning much in high school, because you don't have the foundation. And then whatever you do after high school is not gonna be what it could have been if you had been a proficient reader at the end of the early elementary grades. So yeah, we should be worried about this. What to do about it, a lot of people are beating their heads over this. I mean, my own formula includes making sure that kindergarten, first, and second grade are using reading science to teach the initial stages of reading, so that kids become competent decoders of words, and then a knowledge-rich curriculum that gives them the background information that they need to make sense out of what they're reading, and then accumulate, so that the skills and the knowledge and the comprehension build. But for this to work, we need both great teachers and great curricula. And we also need, frankly, some support at home. If kids never see their parents reading, or there's nothing to read and they're spending all their time on video games or being entertained, they're not gonna get much practice reading.

Susan Lambert:

Yeah. Yeah. It's a good reminder that things don't change fast. And what helps us change is like doing the things that we know work in the classroom. So thanks for that. Let's get back to the NAEP. That's such a depressing conversation. <laugh>

Chester Finn, Jr.:

Sorry.

Susan Lambert:

No, it's okay. It's okay. It's the reality. Let's go back to NAEP a little bit. And in this book, you talk a little bit about both the limitations, but also the potential opportunities that we have around NAEP. You wanna talk a little bit about that?

Chester Finn, Jr.:

Sure. I would like NAEP to do more than it currently does by yielding, frankly, more data in more subjects at more grade levels and more units of action. For that to happen, there's a budget issue, but there's also an efficiency issue. NAEP is pretty cumbersome. It's gotten layered over the years with a lot of slow-moving elements, and it's kind of set in its ways. And I'm also not sure that some of the ways the money's being spent today, like these every-two-year assessments of things that don't change much on a two-year basis, is a very good investment of the available budget. I think the kind of retail use that I mentioned, where more people could kind of see how they're doing or how their kid's doing compared to NAEP, would be a valuable addition. Personally, I think NAEP's single biggest gap today is 12th grade state-level NAEP. We have fourth and eighth grade state-level NAEP. For reasons that have never been real clear to me, we don't have 12th grade state-level NAEP. And if I were a governor or a state superintendent of schools, the end of high school is when I would most want to know how the students in my state are doing. And yet NAEP is not helpful in that regard today. It should be. It's one of the sort of additional activities that it should be engaging in. And when it tests history, for example, lately it's just been testing eighth graders. Only eighth graders. Not even fourth graders, and certainly not 12th graders. Again, that's partly budget, it's partly test burden, it's partly sort of system burden, if I can put it that way. And so I see a lot of potential to get more information. I also think that we could do a much better job of analyzing some of the information that we've got and giving people more, I don't know, at least actionable comparisons and what the statisticians call cross-tabs, where one thing relates to another thing.

Susan Lambert:

Mm-hmm <affirmative>. Interesting. That's a lot about the NAEP. <laugh>

Chester Finn, Jr.:

That's what the book's about.

Susan Lambert:

It is. And it's really fascinating. I've shared it with some of my colleagues who said, wow, I also learned something new about the NAEP. So, you know, I appreciate your willingness to get this all down on paper and your willingness to stay involved in this project since what, 1969 or something. So congratulations on that legacy.

Chester Finn, Jr.:

Well, thank you. I've gotten old and gray. It's something I'm sort of proud of, frankly, to have been part of this.

Susan Lambert:

Yeah, you should be. Well in closing, what we do here at the podcast is really try to get information and concepts and ideas out to those folks that are really doing the work closest to the students. And I always say that the heart of the work is really the students. So if I say to you the phrase, it's all about the students, what does that mean to you and your work?

Chester Finn, Jr.:

Well, of course I agree with it, and the students are the end point here, just as the diners in a restaurant are the end point of the restaurant. It's not about the restaurant owner. But I also have to admit that mostly where I work is at the sort of policy level that we hope reverberates down to the classroom and the student and the teacher, and even the parents. I don't have a lot of direct contact, other than family, with the student-level part of this. And yet that's the reason we're doing all of this. And we policy wonks occasionally get all tangled up in our own arguments over, kind of, which side of the bread is the butter on. And we forget that somebody's ultimately supposed to turn that into a sandwich that nourishes a kid for lunch. And so I'm glad you're doing what you're doing, because it gets it to be about the student and the people that are in direct contact with them. I mean, we have 50 million public school kids in America today. That's a whole lot of people. And the future of the country kind of depends on what's happening to them during those 12 grades or 13 grades of school. So, this is why we're doing it. And I know that talk about things like NAEP is remote to practitioners on the ground, and yet it's something that affects them indirectly, and of course that they affect indirectly. But very importantly, if more kids learn to read, for instance, those scores will go up one day.

Susan Lambert:

Yep. For sure. Well, thank you, Checker Finn. Thanks for joining us. We know you're a busy man and we appreciate your time.

Chester Finn, Jr.:

It's been a pleasure. Nice being with you, and all the best to you and your listeners.

Susan Lambert:

Thank you. Thanks so much for listening to my conversation with Checker Finn. Check out the show notes for a link to his book, Assessing the Nation's Report Card: Challenges and Choices for NAEP. Stay connected with us and other listeners by joining our Facebook discussion group, Science of Reading: The Community. Stay tuned there for some upcoming Facebook Lives to talk about the sixth season of the podcast and to answer any questions you might have. Next time on Science of Reading: The Podcast, we're diving deeper into the theme of this season: how to make meaningful changes in education.

Clip from the next show:

We're really good at knowing what to do. We're really bad at implementing it. So what you see around the world is people read books. They go to conferences, they hire consultants, and then nothing happens.

Susan Lambert:

Stay tuned for all that. And thanks so much for listening.