About two weeks ago, the journal PLoS ONE published an article titled "Why Most Biomedical Findings Echoed by Newspapers Turn Out to be False: The Case of Attention Deficit Hyperactivity Disorder."
As The Economist noted in a September 22 story titled "Reporting Science: Journalistic Deficit Disorder," this did not turn out to be a popular subject with newspaper reporters themselves. In fact, "as The Economist went to press, a search on Google News suggested that, a week after its publication, not a single newspaper had reported [the] paper."
In fact, the only reason I knew about the paper is that Andy Revkin at The New York Times called my attention to it, both by e-mail and in an excellent DotEarth post yesterday, which puts the study into the thoughtful context of other related research. His post, "From Abstract to News Release to Story, a Tilt to the 'Front Page Thought'," is well worth reading.
But I want to step back for a minute to the PLoS ONE study. The lead author, François Gonon of the University of Bordeaux, and his colleagues began by pointing out that "because positive biomedical observations are more often published than those reporting no effect, initial observations are often refuted or attenuated by subsequent studies." They wondered whether newspapers preferentially pick up on these positive reports and whether they also follow up by reporting on the studies that refute or weaken the initial findings.
After selecting the condition Attention Deficit Hyperactivity Disorder as a focus, the researchers searched databases that archived both medical research and newspaper publications during the 1990s. They found 47 papers on ADHD in high-profile journals and 347 resulting English-language newspaper stories. They then selected the 10 papers that had received the most coverage. Of those papers, seven were based on new hypotheses about ADHD. Gonon's analysis of later papers found that six of those seven hypotheses were either refuted by other scientists or found to be flawed; the seventh received poor reviews. The remaining three were designed to confirm existing theories. Two held up well; one, as the literature showed, did not.
All of this is, of course, part of the normal process of science. But Gonon and his colleagues found that, for the most part, journalists ignored all the work that evaluated the initial studies, which would unfortunately seem to leave readers with the impression that ADHD science was nothing but a giant celebration of positive findings.
As the researchers concluded, "Because newspapers preferentially echo initial ADHD findings appearing in prominent journals, they report on uncertain findings that are often refuted or attenuated by subsequent studies. If this media reporting bias generalizes to health sciences, it represents a major cause of distortion in health science communication."
And as The Economist noted, it's not necessarily surprising that journalists chose not to cover a study that didn't reflect all that well on their own work. I did make a point of doing a Google News search myself and found that the media had mostly looked the other way. I found perhaps three or four other stories about Gonon's work, notably a piece by Peter McKnight in The Vancouver Sun that emphasized the key point that one study never tells the whole story.
And that point underscores why we science writers should actually be paying attention to this kind of research. It doesn't cast us in the most flattering light. But if we want to do this better, to escape what Revkin has sometimes called the single-study syndrome, then we need to pay attention to what we've done wrong so that we can start getting it right.
We don't do ourselves any favors, and in this particular case, we don't do readers trying to understand and deal with ADHD any favors, if we persist in skewing the results.