18 Apr 2013

'Neuroscience Cannae Do It Cap’n, It Doesn’t Have the Power'

Marcus Munafò

Last week, researchers at the University of Bristol published a study in Nature Reviews Neuroscience in which they report that much of what passes for research in neuroscience is--what's the word I'm looking for?--worthless. 

The researchers, led by Marcus R. Munafò, entitled their study, "Power failure: why small sample size undermines the reliability of neuroscience." In their abstract, they note that "a study with low statistical power has a reduced chance of detecting a true effect," and that it also allows for "statistically significant" results that do not represent real effects.

"Here, we show that the average statistical power of studies in the neurosciences is very low," they write. That means the studies are likely to overestimate the size of any effect they find, and less likely to be reproduced by anyone else. "Unreliable research is inefficient and wasteful. Improving reproducibility in neuroscience is a key priority and requires attention to well-established but often ignored methodological principles." 

In a university press release, one of the researchers, Kate Button, said, “There's a lot of interest at the moment in improving the reliability of science. We looked at neuroscience literature and found that, on average, studies had only around a 20 per cent chance of detecting the effects they were investigating, even if the effects are real."
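To make those numbers concrete, here is a minimal sketch in Python of what 20 per cent power looks like in practice, and of the overestimation problem the authors describe. The effect size and per-group sample size below are hypothetical figures chosen for illustration, not values taken from the paper.

    # A minimal sketch with hypothetical numbers (not taken from the paper):
    # simulate many small two-group experiments in which a modest effect truly
    # exists, and count how often a standard t-test detects it at p < 0.05.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    d, n, alpha, trials = 0.5, 12, 0.05, 10_000  # true effect, per-group n, threshold, runs

    hits = 0
    sig_effects = []  # observed effect sizes among "significant" results
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, n)   # group with no effect
        treated = rng.normal(d, 1.0, n)     # group shifted by the true effect d
        _, p = stats.ttest_ind(treated, control)
        if p < alpha:
            hits += 1
            pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
            sig_effects.append((treated.mean() - control.mean()) / pooled_sd)

    print(f"Estimated power: {hits / trials:.2f}")                   # roughly 0.2 with these numbers
    print(f"Mean 'significant' effect: {np.mean(sig_effects):.2f}")  # well above the true 0.5

With numbers like these, a real effect goes undetected about four times out of five, and the experiments that do clear the significance bar report an inflated effect, well above its true size.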

Much has been made lately of problems with neuroscience research. The science is often weak, critics say, and the news coverage is often way too enthusiastic. Here we have some evidence to back up those criticisms.

I learned about this study from National Geographic Phenomena blogger Ed Yong, from whom I stole my headline, above. In a thorough post on the study, Yong notes that it comes at a bad time, just as Barack Obama has announced a $100 million neuroscience research initiative. He neatly explains what statistical power means, and notes that it is a problem elsewhere in science too, particularly in medical studies, which have likewise come under fire from such critics as John Ioannidis, author of the now-famous 2005 paper, "Why Most Published Research Findings Are False."

Yong also points out that raising statistical power in research studies is "easier said than done. It costs time and money."
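He is right about that, and a rough back-of-the-envelope calculation shows why. Assuming a standard two-group comparison (a simplification; the paper surveys many designs), the sample size needed to reach the conventional 80 per cent power grows steeply as the true effect shrinks.

    # A rough sketch, assuming a standard two-sample t-test at alpha = 0.05:
    # how many subjects per group does 80 per cent power require?
    from math import ceil
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    for d in (0.8, 0.5, 0.2):  # large, medium, small effects, in Cohen's terms
        n = ceil(analysis.solve_power(effect_size=d, power=0.8, alpha=0.05))
        print(f"effect size d = {d}: about {n} subjects per group")
    # Prints roughly 26, 64, and 394 subjects per group, respectively.

Recruiting, testing, and paying hundreds of subjects instead of a dozen or two is exactly the time and money Yong is talking about.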

Kate Button, whom I mentioned above, wrote a very nice article on the study herself, noting that when she began her graduate studies in psychology she "thought that certain psychological findings were established fact. The next four years were an exercise in disillusionment." The story, which appears in The Guardian, does a fine job of explaining statistical issues at some length and unexpectedly concludes with a bit of optimism. "Awareness of these issues is growing and acknowledging the problem is the first step to improving current practices." What, I wonder, are the other 11 steps for scientific research?

That's all I have to say about the study. But I do have one final point to make. This important study was published in Nature Reviews Neuroscience, a British journal. It was covered by Yong, who is based in the UK, and by The Guardian, a British newspaper. Considering the importance of this paper, and the excitement in the U.S. and among the U.S. media about neuroscience, one would have expected it to be widely covered here as well.

But that doesn't seem to be the case. Greg Miller did a nice post at Wired, Bob Grant did a short post at The Scientist, and Gary Stix did a good piece at Scientific American.

But I didn't see much more. Was the trans-Atlantic cable down last week?

-Paul Raeburn