It looked like a great read. The cover story in this week's issue of The Economist was called "How Science Goes Wrong." (In the interest of full disclosure, I was once an intern at The Economist, and I still hold it up as proof that one can make complex issues accessible without dumbing things down.)
And so I considered the possibility that this story would uncover new insights into the pitfalls and limitations of science. Instead, it amounted to little more than a remake of a flawed piece that ran in 2010 in the New Yorker under the headline "The Truth Wears Off: Is there something wrong with the scientific method?"
That story was written by Jonah Lehrer, and there was certainly room for someone to step up and write a more responsible version of it. Lehrer's piece came nowhere near supporting its provocative headline. The Economist's version followed the same line of reasoning and even suffered from the same flaws.
The crux of The Economist's contention is that science is not in fact self-correcting, because few studies are replicated and many of the irreproducible ones nevertheless get incorporated into medical practice or, in the case of social science, into the tools of policy wonks. (The piece ran with no byline, as is customary at The Economist.)
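To see why low replication rates matter, here's a back-of-the-envelope sketch in Python (my own illustration, not a calculation from The Economist piece; every number in it is an assumption): even when each study clears a conventional significance bar, a field that tests mostly false hypotheses and rarely replicates will accumulate false positives.

```python
# Illustrative numbers only -- assumed for this sketch, not taken from
# the article. Suppose a field tests 1,000 hypotheses, 10% of which are
# actually true, with 80% statistical power and a 5% significance bar.
n_hypotheses = 1000
true_fraction = 0.10   # assumed share of hypotheses that are really true
power = 0.80           # chance a real effect is detected
alpha = 0.05           # chance a null effect still passes significance

true_effects = n_hypotheses * true_fraction              # 100
true_positives = true_effects * power                    # 80
false_positives = (n_hypotheses - true_effects) * alpha  # 45

false_share = false_positives / (true_positives + false_positives)
print(f"Share of 'positive' findings that are false: {false_share:.0%}")
# -> Share of 'positive' findings that are false: 36%
```

Under those assumed numbers, more than a third of published "discoveries" would be wrong, and without replication nothing weeds them out. That, in a nutshell, is the self-correction failure the piece alleges.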
There wasn't much of a news peg beyond a recent hoax perpetrated by Harvard biologist John Bohannon, who created a make-believe cancer research paper and submitted it to 304 journals, some allegedly peer reviewed. It was accepted by 157. The results of the hoax were published in the journal Science, ironically appearing in the same issue that ran a much-criticized study suggesting that reading snippets of literature improved people's social skills.
But so what? If some journals are failing to undertake proper peer review, can we really conclude that the scientific enterprise is rotten to the core?
Many of us have written about problems in both clinical trials and in social science – questionable statistics, hype, confusion between causation and correlation, and conclusions that are not supported by the data. But when The Economist's author attempts to extend these problems to all of science, the piece starts to contradict its own premise. The physics example it offers in fact shows science working just fine.
It turns out that a few years ago scientists thought they might have detected a new particle – a clump of five quarks appropriately named the pentaquark. The finding was considered a hint, not a discovery, which is why it didn't get the fanfare accorded the top quark and the Higgs boson. It wasn't replicated, and the issue remains inconclusive. Here's a very different take from a publication called Symmetry:
Whether positive or null, each experimental result brings physicists closer to an understanding of the universe. Ruling out the presence of possible subatomic particles is just as important as finding new ones. Without the searches, scientists would never know if these particles were out there, waiting to be discovered.
Even though the pentaquark seems to be illusory, at least in the form physicists have pursued so far, the alley leading toward it has been full of interesting revelations.
So what's the problem here? There are a couple of reasons to think the more recently famous Higgs boson won't go the way of the pentaquark. First, the experiments include a built-in form of replication: two independent teams sifted through data collected by two detectors of different designs.
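Here's a rough sense, in the same sketch style, of why that independence matters (assumed numbers and a deliberately simplified model, not the collaborations' actual statistical analysis): if two independent experiments each see a five-sigma excess, the chance that both are background flukes is the product of two already tiny probabilities.

```python
# Simplified illustration, not the actual LHC analysis: independent
# experiments that both see the same excess multiply their fluke odds.
from scipy.stats import norm

sigma = 5.0             # the conventional "discovery" threshold in physics
p_one = norm.sf(sigma)  # one-sided p-value for a 5-sigma excess
p_both = p_one ** 2     # two independent flukes multiply

print(f"one experiment by chance:   p ~ {p_one:.1e}")   # ~2.9e-07
print(f"both experiments by chance: p ~ {p_both:.1e}")  # ~8.2e-14
```

That's a crude model (real analyses also account for systematic errors and the look-elsewhere effect), but it captures why agreement between two detectors of different designs is so much stronger evidence than a single result.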
Beyond that, physics relies on a tight interplay between theory and experiment. Theoretical physicists can't just make things up – theories are tightly constrained by what has already been observed. So when someone proposes something like the Higgs field, or before that the top quark, these are not wild guesses. Theory constrains chemistry and biology as well.
Social sciences don’t have such rigid paradigms, so it’s harder to pick out which results are extraordinary enough to require extraordinary evidence.
Psychology is where both The Economist and the older New Yorker piece started. Then they moved on to clinical medicine, which suffers from its own suite of problems with statistics and lack of replication. So far, fair enough. But then both pieces leap to the conclusion that these same problems beset all of science, and both authors attempt to give examples from physics. Here's Jonah Lehrer in The New Yorker:
The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001.
For some reason Lehrer's editors didn't think he needed to explain what the weak coupling ratio is. He's allowed to ask readers to trust that he understands it well enough to use it as support for his grand thesis. The New Yorker's editors must have thought Lehrer walked on water. Some months later, he fell in with quite a splash. (Lehrer was caught making up quotes, among various other sins.)
I asked Drexel University physicist Len Finegold about this passage. He said he was puzzled, looked it up, and concluded that the New Yorker's statement was wrong and misleading.
In a Tracker post on the Lehrer story, Charlie Petit also questioned Lehrer's examples from the physical sciences, noting that Lehrer does not offer enough information for readers to draw a conclusion.
Neither the New Yorker nor The Economist made a convincing case that physics, chemistry, or biology fails to correct false results, at least over the long haul. And Lehrer never clearly stated what he meant by the scientific method. He criticized the statistical methods used in some fields of science, but those hardly equate to the scientific method itself.
As for the hoax biology paper noted in The Economist, there's some disagreement over how dire the situation really is. Here's one take by Dan Vergano at NationalGeographic.com. The journals in question were open-access journals, which make money by charging authors to publish papers rather than charging readers for subscriptions. What's not clear is whether the hoax merely exposed a lot of bottom-feeding journals. If the journals that biologists respect and trust did okay, then perhaps the field won't be completely hamstrung by the proliferation of pretenders. That was the contention of this op-ed piece in The Guardian.
There are serious, important problems with the way science is published, and they're worth exploring. There are problems with the way some scientists use statistics, and with the way some PR departments and journalists spin the results. Social science is very different from physics. Human behavior may never conform to the same predictable laws that govern quarks.
It’s too bad The Economist overreached, since God knows we need more critical coverage of science. The public isn’t served by reporters who cheer every new journal article as if it’s a shining nugget of truth. But neither is there support for the contention that the whole of science is beset by some fatal blight.
The truth is in that messy, complicated place in between.
Sachi Pallem says:
I think the link for the National Geographic article doesn't go to the specific article, just to the website in general.
annacass says:
Thank you for finding the broken link. This article was published in 2013, and National Geographic has likely made changes to their website in the meantime. We’ve located the story and repaired the link.