Patterns and Trends of 2013: The Year of Conclusions that Don’t Follow from the Data
Science stories can be quick to seep into popular wisdom, even – and maybe especially – when they’re not quite true. A good illustration came today in Frank Bruni’s column in the New York Times, in which he urges people to lay off the tweeting and spend more time reading fiction. It’s a good column, even though he hasn’t dissuaded me from resolving yet again to tweet more.
He uses science to underline his point. Research, he says, shows that reading makes people more empathetic – “more attuned to what those around them think and feel.” And he links to this story, one of many based on a 2013 study in Science that does not really show that reading fiction makes you more empathetic. The problems with the coverage of the study were detailed in this Tracker post.
You can hardly blame Frank Bruni, considering the way the story was covered. The study showed that after being exposed to a few passages of “literary” fiction, subjects did slightly better on some psychological tests than those who read a few minutes of other fiction (including Gone Girl, which was quite good) or non-fiction. Any effect was necessarily short-term: the subjects read for only a few minutes and took the tests immediately afterwards.
The study says nothing at all about long-term effects of reading fiction and very little about the short-term effects. The differences were slight, and there was some subjectivity involved in choosing which fiction was deemed “literary”. Penn linguistics professor Mark Liberman wrote a detailed critique of the study here, noting a big potential for bias in the way the researchers hand-selected a few bits of text.
This episode was among many in which reporters repeated a conclusion that was not supported by the evidence they presented. Sometimes the unsupported conclusions started with the researchers, sometimes with a statement in a press release, and more rarely with the media. With the fiction study, perhaps reporters gave the claim a pass because it supported something they already believed. I bet people would have scrutinized it more critically if scientists had claimed, say, that marijuana or premarital sex made you more empathetic.
In other cases, stories touted conclusions that extrapolated and generalized far beyond the evidence their authors presented to readers. In The Economist, for example, a cover story called How Science Goes Wrong started with examples of problems in a couple of areas – clinical and behavioral research. That’s fine, but it does not support the dramatic conclusion that something is the matter with the entire scientific enterprise. For more on the problems with this story and others like it, see this Tracker piece.
In a Nature piece called 20 Tips for Interpreting Scientific Claims, the authors likewise extended the methods and problems of clinical research to all of science. The tips could easily be misused to imply that work in evolutionary biology or climatology doesn’t stack up because it doesn’t follow protocols designed for a different field.
Also in the area of overgeneralization, dueling pieces in the New York Times and Wall Street Journal each attempted to use a few studies to trash or prop up the entire field of evolutionary psychology. Neither author could support such an extreme position – the Times writer that the field is all bad, or the WSJ writer that it proves men and women are from different planets. The reality most likely lies somewhere in the middle.
Another common way that science writers draw unsupported conclusions is by making assumptions about cause and effect. For example, the story “Want your Daughter to be a Math Whiz, Soccer Might Help” describes a study showing a correlation between sports participation and some test scores. But nothing presented in the story supports the idea that sports participation causes girls to be better students. See this Tracker post for more.
The New York Times did something similar in describing a study connecting short-term relationships and unsatisfying sex, as reported by the female partner. As explained in this Tracker post, the assumption throughout the piece was that women shouldn’t have short-term relationships because short relationships cause bad sex. The author never considered the possibility that bad sex caused women to cut relationships short.
Sometimes headline-grabbing conclusions are just non sequiturs. In a story on sex differences found with brain scanning, researchers claimed in a press release that the findings might explain why men are better at navigation and various other tasks. The scientists offered no evidence that men were in fact better at navigation or had any superior mental abilities, or that the brain scans had anything to do with such differences, if they exist.
2013 also saw an unsupportable backlash against 2012’s unsupportable accusations that Republicans were anti-science. I read a number of the original stories, and from what I could see, no Republicans were ever quoted in 2012 saying they didn’t like or respect science. Rather, many were simply misinformed, wrong, or mistakenly believed that things such as “intelligent design” were forms of science.
But rather than point out that being wrong or ignorant about some piece of science doesn’t make one “anti-science,” many writers simply leveled the same unsupportable accusations against those on the other side of the political spectrum – though there, too, they offered no examples of anyone who said they didn’t like, support, or respect science.
Reporters sometimes give scientists a pass on logic because of their authority. That happened when people quoted Michio Kaku saying the Higgs boson caused the big bang. In writing this Tracker post, I found that the best support for this claim was indirect: the Higgs boson is evidence of a Higgs field, which is similar in character to a theoretical field called the inflaton field, and that second field is thought to have driven the rapid expansion of space known as inflation, posited to have happened near the birth of the universe.
Near the year’s end, the Huffington Post ran a story on its own survey, claiming that people don’t trust scientists or science reporters. The public probably shouldn’t trust either group without some degree of critical scrutiny. Scientists aren’t necessarily being dishonest when they extrapolate beyond their data; they’re allowed to speculate and ponder. But it’s our job to make sure readers know which statements are based on evidence and which are just guesses. In the end, it shouldn’t matter so much whether people trust us. If we provide evidence to support the claims in our stories, they shouldn’t have to.