Saturday, July 12, 2008

Conclusions About Reading First’s Effectiveness

Originally published here on May 13, 2008.

The Reading First Impact Study: Interim Report begins with an Executive Summary that includes a bulleted list of general conclusions. The top item on the list is this:
On average, across the 18 participating sites, estimated impacts on student reading comprehension test scores were not statistically significant.

If you stop reading there (after only 175 words of the report), it is easy to walk away with the conclusion that Reading First is ineffective. Of course, there are still over 190 pages of the report left to read at that point. And if you are willing to read further, to even just finish that particular page, you find the following statement right there in the same set of bullets:
Study sites that received their Reading First grants later in the federal funding process (between January and August 2004) experienced positive and statistically significant impacts both on the time first and second grade teachers spent on the five essential components of reading instruction and on first and second grade reading comprehension. Time spent on the five essential components was not assessed for third grade, and impacts on third grade reading comprehension were not statistically significant. In contrast, there were no statistically significant impacts on either time spent on the five components of reading instruction or on reading comprehension scores at any grade level among study sites that received their Reading First grants earlier in the federal funding process (between April and December 2003).

In other words, not all the findings of the report cast a dark shadow over Reading First.

I should take a moment for some self-disclosure before telling you more about what I think the Reading First Impact Study says. I am not a fan of No Child Left Behind - at least not of the accountability provisions. I talked about that last month. And I've discussed my feelings about it in other places, too.

That said, it's hard for me to read the Impact Study and come away with a completely negative view of Reading First's performance. The most obvious question is this: why did schools that got their Reading First grant later in the process see a significant impact when schools that got their awards early in the process did not?

Two obvious (though speculative) answers stand out to me. The first is that there is probably a learning curve with a new federal program of this nature. The early award schools served as guinea pigs to determine what worked and what didn't; the late award schools benefited from the experience of those early award schools.

The second answer is money. Early award schools received an average of $432 per student; late award schools received an average of $574 per student, roughly a third more. As clichéd as it sounds, maybe money really is the answer to everything.

If you're willing to accept the study as scientifically valid, the most reasonable conclusion seems to be that Reading First wasn't very effective in the beginning but became more effective over time. If you're not willing to accept the study as scientifically valid, then there are no reasonable conclusions to draw from it at all.

It should also be pointed out that the study measured two main things: teacher behavior (time spent on highly explicit instruction in the five components of reading) and reading comprehension in students. The study offers no information on whether fluency increased significantly in first graders, none on vocabulary development in the grades studied, and none on either phonics or phonemic awareness at any level. Of the five components of reading, only one was measured in students. To me, that seems incomplete.

There's one last issue that needs to be brought up in any discussion of the effectiveness of Reading First: what impact has Reading First had on schools where no grant was awarded? I'll look at that question in my next blog post.
