This week's issue of the Journal of the American Medical Association (JAMA) contains an article (paywall) with the rather non-committal title, "Empirical Evaluation of Very Large Treatment Effects of Medical Interventions."
But as Frederik Joelving, the consistently sharp medical writer at Reuters, points out, that impression is misleading. In Joelving's words, the study makes a clear statement: "next time a research finding leaves you slack-jawed, thinking it's too good to be true, you might just be right."
The study, led by Dr. John Ioannidis of the Stanford University School of Medicine, analyzed some 3,000 research reviews (encompassing more than 200,000 individual studies) done by the Cochrane Collaboration, an organization that evaluates medical evidence. It found that 90 percent of the "very large treatment effects" reported in initial studies either shrank or disappeared entirely as further research was done. Or, as the study itself noted with some understatement, "Well-validated large effects are uncommon."
Why does this matter to science journalists? Well, the news cycle definitely rewards big-effect stories. And because we often report on science as a series of events rather than an ongoing process, the result can be coverage that overstates the findings or importance of a single study. I've raised this issue here before, both in a post about last month's deservedly controversial GMO study in France and in a broader sense. I'm not expecting the news cycle to change, by the way, but I do think that the more aware we are of such complexities, the more likely we are to cover science and medical stories realistically.
Ioannidis is, of course, a fairly high-profile critic of research methodologies, a subject he has been pursuing for some years. One of his better-known treatises, in fact, dates back more than seven years: a 2005 essay in the journal PLoS Medicine titled "Why Most Published Research Findings Are False." This most recent study did not generate a deluge of coverage – I counted maybe 20 stories on Google News – but it did lead to some thoughtful reports, among them:
At the Los Angeles Times, Eryn Brown pointed out that the most striking results tended to come from studies involving very small subject populations, often 100 or fewer, raising the possibility that the findings were a result of statistical chance (more probable in small studies).
HealthDay reporter Serena Gordon (published here in U.S. News and World Report) noted that the JAMA issue also included an accompanying editorial, which emphasized that with one exception – "extracorporeal oxygenation for severe respiratory failure in newborns" – none of the so-called large effects actually involved life-saving outcomes.
California's KQED health blogger Lisa Aliferis nicely quoted an exchange with Dr. Ioannidis:
when I pointed out that this news was likely to be a big bummer to lots of Americans used to the search for the silver bullet, he laughed softly and said, “Yeah, too bad.” He says that a belief in the silver bullet “creates a vicious circle. It also creates an environment where claims for a silver bullet thrive against such overwhelming evidence.”
And at Health News Review, Gary Schwitzer, known for his work on evidence-based medical reporting, not only praised Ioannidis's work but offered some history and supporting context.
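The small-study caveat in Brown's piece is easy to see in a quick simulation. The sketch below (my own illustration, not from the JAMA study) draws many small two-arm trials with a modest true effect, then looks only at the trials whose observed effect is "very large" – the ones most likely to make headlines. Those selected trials systematically overstate the true effect:

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # modest true group difference, in standard-deviation units
N_SMALL = 20        # per-group size of a "small" study
N_TRIALS = 2000     # number of simulated trials

def simulated_effect(n):
    """Run one two-arm trial and return the observed mean difference."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

effects = [simulated_effect(N_SMALL) for _ in range(N_TRIALS)]

# Keep only trials whose observed effect looks "very large" (> 0.5 SD).
striking = [e for e in effects if e > 0.5]

print(f"true effect:             {TRUE_EFFECT}")
print(f"mean effect, all trials: {statistics.mean(effects):.2f}")
print(f"mean effect, striking:   {statistics.mean(striking):.2f}")
print(f"fraction striking:       {len(striking) / len(effects):.1%}")
```

Averaged over all trials, the observed effect hovers near the true value; averaged over only the "striking" trials, it comes out several times larger. That selection effect, applied across thousands of initial studies, is one reason very large reported effects tend to shrink on replication.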
And it's insisting on context that, over the long run, will make all of us look smarter.