Category: statistics

Faye Flam

Yet another scandal has broken over bad science, this time in the field of neuroscience. In a new paper published in Nature Neuroscience, researchers from the Netherlands claim that of 314 studies in the field, more than half relied on an erroneous assumption about the independence of the data and were therefore likely to be giving false positive results. This bombshell wasn’t widely covered, but luckily it was explained clearly by Gary Stix at Scientific American. His story, Statistical Flaw Punctuates Brain Research in Elite Journals, is posted on his blog, Talking Back.

The post suggests that the statistical flaw doesn't just punctuate brain research, it punctures much of it.

According to this new...
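To make the independence problem concrete, here is a minimal simulation sketch, not taken from the paper: it generates nested data (say, many cells recorded from each animal) with no true group difference, then runs a t-test that treats every cell as an independent observation. The group counts and variance split are illustrative assumptions, and the false-positive rate comes out well above the nominal 5%.

```python
# Minimal sketch of the nested-data problem: cells clustered within animals,
# no true group difference, analyzed as if every cell were independent.
# All counts and variances below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims = 2000
n_animals, n_cells = 5, 20         # animals per group, cells per animal
animal_sd, cell_sd = 1.0, 1.0      # between-animal vs. within-animal noise

false_positives = 0
for _ in range(n_sims):
    groups = []
    for _ in range(2):             # two groups, null hypothesis is true
        animal_means = rng.normal(0.0, animal_sd, n_animals)
        cells = rng.normal(animal_means[:, None], cell_sd,
                           (n_animals, n_cells)).ravel()
        groups.append(cells)
    # Naive t-test pretending all cells in a group are independent samples
    _, p = stats.ttest_ind(groups[0], groups[1])
    false_positives += p < 0.05

print(f"nominal alpha: 0.05, simulated false-positive rate: "
      f"{false_positives / n_sims:.2f}")
```

With the effective sample size closer to five animals per group than a hundred cells, the naive test rejects the null far more often than 5% of the time, which is the mechanism behind the inflated false positives the paper describes.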

The CDC announced in late February that many Americans are still too fat. Not much eyeball-grabbing news there, but in a clever move by the CDC press office, someone turned the focus to one small blip in the data. In the 2- to 5-year-old category, the obesity rate appeared to fall from about 14% to about 8.5%, which still doesn’t sound exciting until someone turned it into a relative drop and declared that obesity rates fell by 43%.

That gave CDC’s press office a tempting morsel of reporter bait to dangle.

Many news organizations bit on it, though most included the more modest absolute percentage change too. That was the case with the New York Times’ story, Obesity Rate for Young Children Plummets 43% in a Decade, USA Today’s...
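The arithmetic behind the two framings is worth spelling out. The sketch below uses the rounded rates quoted above, which is why it lands a few points short of the reported 43%; the point is how a modest drop in percentage points becomes a dramatic-sounding relative decline.

```python
# Back-of-the-envelope check of absolute vs. relative change, using the
# approximate rates quoted above ("about 14%" to "about 8.5%").
before, after = 14.0, 8.5          # obesity rate among 2- to 5-year-olds, percent
absolute_drop = before - after     # change in percentage points
relative_drop = absolute_drop / before * 100

print(f"absolute drop: {absolute_drop:.1f} percentage points")
print(f"relative drop: {relative_drop:.0f}% of the starting rate")
```

Same data either way; the relative version simply makes the better headline.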

It looked like a great read. The cover story in this week’s issue of The Economist was called How Science Goes Wrong. (In the interest of full disclosure, I was an intern at The Economist, and I still hold it up as proof that one can make complex issues accessible without dumbing things down.)

And so I considered the possibility that this story would uncover new insights into the pitfalls and limitations of science. Instead, it amounted to little more than a remake of a flawed piece that ran in 2010 in the New Yorker, under the headline, The Truth Wears Off / Is there something wrong with the scientific method?

There was certainly room for someone to step up and write a responsible version of that story, which was written by Jonah...

It would be a big surprise if a prominent journal published a study commissioned by the dating website eHarmony showing that people who met through online dating were more miserable than those who met through friends, work, bars, etc. Funny that such results rarely appear.

In fact, this week’s PNAS had a paper on a study showing that online dating (surprise!) allegedly leads to happier marriages. Not only was eHarmony behind the funding, but one author had been a member of the scientific advisory board and another had been the company’s director of research.

The potential conflicts of interest here were at least as interesting as the results of the study, which entailed a large survey of couples married between 2005 and 2012. Here’s how the PNAS press blurb describes the result: “More than one-third of nearly 20,000 Americans surveyed in a study met their...

Last week, researchers at the University of Bristol published a study in Nature Reviews Neuroscience in which they report that much of what passes for research in neuroscience is--what's the word I'm looking for?--worthless. 

The researchers, led by Marcus R. Munafo, entitled their study, "Power failure: why small sample size undermines the reliability of neuroscience." In their abstract, they note that "a study with low statistical power has a reduced chance of detecting a true effect," and it also allows for "statistically significant" results that do not represent real effects.

"Here, we show that the average statistical power of studies in the neurosciences is very low," they write. That means the studies are likely to overestimate the size of any effect they find, and less likely to...

Nate Silver's rational approach to politics seems to provoke highly irrational responses.

At Slate, Daniel Engber writes that Silver, author of the FiveThirtyEight blog at The New York Times, "appears to have hit the mark in every state--a perfect 50 green M&Ms for accuracy." Engber links to a map of Silver's predictions versus a map of the results. Impressive, right? But Engber can't...