Yet another scandal has broken over bad science, this time in the field of neuroscience. In a new paper published in Nature Neuroscience, researchers from the Netherlands claim that of 314 studies in the field, more than half relied on an erroneous assumption about the independence of the data and were therefore likely to be giving false positive results. This bombshell wasn’t widely covered, but luckily it was explained clearly by Gary Stix at Scientific American. His story, Statistical Flaw Punctuates Brain Research in Elite Journals, is posted on his blog, Talking Back.
The post suggests that the statistical flaw doesn't just punctuate brain research; it punctures much of it.
According to this new exposé, a number of researchers were using a statistical technique that relies on each data point being independent. And yet that's not the case for nerve impulses or other types of data examined in these studies. Stix describes the situation concisely and clearly:
The problem of false positives appears to be rooted in the growing sophistication of both the tools and observations made by neuroscientists. The increasing complexity poses a challenge to one of the fundamental assumptions made in statistical testing, that each observation, perhaps of an electrical signal from a particular neuron, has nothing to do with a subsequent observation, such as another signal from that same neuron.
In fact, though, it is common in neuroscience experiments—and in studies in other areas of biology—to produce readings that are not independent of one another. Signals from the same neuron are often more similar than signals from different neurons, and thus the data points are said by statisticians to be clustered, or “nested.” To accommodate the similarity among signals, the authors from VU University Medical Center and other Dutch institutions suggest that a technique called multilevel analysis is needed to take the clustering of data points into account.
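To make the issue concrete, here is a minimal sketch in Python using statsmodels. The simulation, the variable names (neuron, condition, signal), and all of the parameters are illustrative assumptions of mine, not the paper's data or methods. It generates signals nested within neurons, with no true effect of the experimental condition, and then compares a naive t-test that treats every signal as independent against a multilevel (mixed-effects) model with a random intercept per neuron.

# Sketch only: contrasts a naive independence assumption with a
# multilevel model on simulated nested data. All names and numbers
# here are illustrative assumptions, not the paper's analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)

# Simulate nested data: many signals recorded from each of a few
# neurons, with a per-neuron offset that makes signals from the same
# neuron correlated. There is NO true effect of condition.
n_neurons, n_signals = 10, 50
rows = []
for neuron in range(n_neurons):
    neuron_offset = rng.normal(0, 2.0)   # shared within a neuron
    condition = neuron % 2               # condition assigned per neuron
    for _ in range(n_signals):
        rows.append({
            "neuron": neuron,
            "condition": condition,
            "signal": neuron_offset + rng.normal(0, 1.0),
        })
df = pd.DataFrame(rows)

# Naive approach: treat all 500 signals as independent observations.
# With clustered data, this test's false-positive rate is inflated.
a = df.loc[df.condition == 0, "signal"]
b = df.loc[df.condition == 1, "signal"]
print("naive t-test p =", stats.ttest_ind(a, b).pvalue)

# Multilevel approach: a mixed-effects model with a random intercept
# per neuron, so inference rests on the number of neurons, not the
# number of signals.
model = smf.mixedlm("signal ~ condition", df, groups=df["neuron"]).fit()
print("multilevel p =", model.pvalues["condition"])

Because the condition varies between neurons while the signals within each neuron are strongly correlated, the naive test should, across repeated runs with different seeds, reject far more often than its nominal rate, while the multilevel model's effective sample size reflects the ten neurons rather than the five hundred signals.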
There are two important things this study does. First, the accusers point out a specific problem and a solution. If they are correct, this could have a huge impact, not just in neuroscience but in other fields where the same problem could crop up.
Second, the paper's authors specify that these flawed studies were published in elite journals. Other exposés have pointed out that there's a high fraction of questionable or "false" research in general, but this is less interesting because there are so many journals with little or no impact. I'm told by proponents of cold fusion that there are thousands of papers in the published literature showing cold fusion works, but they don't affect the physics community because the results are published in journals that members of the mainstream community know to ignore.
In this case, we’re talking about Cell, Nature and Nature Neuroscience – journals that get attention not just among scientists but among journalists. For those of us who cover neuroscience, this post and the paper should be required reading.