Saturday, July 12, 2008

The Reading First Impact Study - What the Press Didn’t Say About It

Originally published here on May 13, 2008.

I've been in and out of some form of journalism since high school - and that's been 30 years ago now. I've worked for rural weekly papers, covered a local beat for a daily paper, blogged niche topics for the New York Times Company (and others), and covered education along the way.

I sympathize with the difficulty involved in creating a fair, meaningful news story about the Reading First Impact Study - one that fits into a limited number of column inches and holds the public's attention. At the risk of sounding arrogant, it doesn't surprise me that the NY Times, USA Today, the Associated Press, and the Washington Post all failed to accomplish that feat.

The media in general reported on what politicians said about the study. If we were to look at the news reports on the study in light of Bloom's Taxonomy, we'd find that the media offered us mostly knowledge (the taxonomy's lowest rung) of the report and perhaps some comprehension (the taxonomy's second rung), but very little in the way of analysis, synthesis, or evaluation - the upper rungs of Bloom's Taxonomy, all of which require higher thought processes. In fact, if you read the news reports closely you'll discover that most of them are not ultimately concerned with the report, but with reaction to the report. The stories quote congressmen and senators saying what they have always said about Reading First.

Here are some things you didn't find in the news coverage:

  • You did not hear that there are concerns about the scientific validity of the study, partly because it was not a randomized study. (The irony: after all the emphasis on scientifically-based reading research, the Department of Education can't come up with a valid scientific study on this.) If the study's science is bad, its conclusions don't mean much.



  • You did not hear that there are concerns about the method used in the study (an interval method) to measure how much highly explicit instruction occurs at Reading First schools. Because an interval method was used, teachers who mentioned some detail of one of the five components of reading (phonemic awareness, phonics, fluency, vocabulary, comprehension) just once in a three-minute interval received the same score as teachers who touched on the components repeatedly.



  • You did not hear that reading scores are up across the board in the US since Reading First was implemented.



  • While you heard generalizations about the Reading First schools used for the study, you did not hear that the Reading First schools in the study were not a homogeneous group. The study distinguished between schools that received Reading First award money early in the program's history and schools that received Reading First award money later. And you did not hear that the Impact Study's conclusions were not the same for the two groups. (We'll talk more about that another day...)


While most news sources mentioned in passing that there is still another report in the works, they tried to leave readers with the impression that this current report was somehow conclusive and final. Neither of those two things seems true after actually reading the Impact Study. There's a reason the words "Interim Report" appear in the study's title.

In my next blog post I'll talk about what conclusions we can draw from the Impact Study.
