The CDC announced in late February that many Americans are still too fat. Not much eyeball-grabbing news there, but in a clever move by the CDC press office, someone turned the focus on one small blip in the data. In the 2- to 5-year-old category, the obesity rate appeared to fall from about 14% to about 8.5%, which still doesn’t sound exciting until someone turned it into a relative drop and declared that obesity rates fell by 43%.
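The same decline can be stated two ways, and the arithmetic is worth spelling out. A minimal sketch, using the rounded figures quoted above (about 14% and about 8.5%) rather than the exact survey values from the JAMA paper; with these rounded inputs the relative figure lands near 39%, and the exact values and rounding choices are what put the reported number at 40% or 43%:

```python
# The same decline, stated two ways. Inputs are the rounded survey
# figures quoted in the text (about 14% in 2003-04, about 8.5% in
# 2011-12), not the exact values from the JAMA paper.
old_rate = 14.0      # obesity rate among 2- to 5-year-olds, percent
new_rate = 8.5

absolute_drop = old_rate - new_rate                 # percentage points
relative_drop = absolute_drop / old_rate * 100      # percent of the old rate

print(f"absolute drop: {absolute_drop:.1f} percentage points")
print(f"relative drop: {relative_drop:.0f}% of the starting rate")
```

An absolute drop of a few percentage points and a relative drop of roughly 40% describe the very same data; only the second one makes a headline.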
That gave CDC’s press office a tempting morsel of reporter bait to dangle.
Many news organizations bit on it, though most included the more modest absolute percentage change too. That was the case with the New York Times story, Obesity Rate for Young Children Plummets 43% in a Decade; USA Today’s Child obesity rates drop 43% in past decade; and The Washington Post’s story, New CDC data shows 43 percent drop in obesity rates among children 2 to 5.
But then the Times, Post and others spent considerable space speculating on the cause of this “dramatic” drop. Perhaps, some declared, preschoolers were picking up on the Michelle Obama fitness campaign.
Does that really seem plausible? Are 3-year-olds toddling off to spinning classes and demanding that their parents go heavy on the broccoli and light on the chocolate pudding? A plausible explanation that wasn’t explored in most of the initial news blitz: it’s just statistical noise.
AP’s Mike Stobbe wrote a more measured piece and made the important point that the data are a little erratic, suggesting the explanation might be nothing but noise or sampling error:
The preschooler obesity numbers fell from 14 percent in 2003-2004 to 10 percent in 2007-2008, then jumped to 12 in 2009-2010, then slipped to 8 in the most recent survey.
So it seems to have been jumping around a little. "We're going to need more" years of data to see if the apparent trend is really nosing downward, said John Jakicic, director of the University of Pittsburgh's Physical Activity and Weight Management Research Center.
Then came a backlash against the initial wave of hype. The first naysayer out of the gate, from what I have seen, was Slate, with a piece titled Obesity Rate for Children Has Not Plummeted. The author, Razib Khan, points out two problems with much of the news coverage. The first is the confusing nature of that 43% (or 40%, depending on rounding); the second, more important one is the possibility that the result is just noise anyway.
With such an extensive body of data, if the researchers start cutting it up in different ways, they are likely to come across a blob of noise they can dress up as a result. He uses a coin-flipping analogy.
A fair coin is unlikely to land on heads nine out of 10 tosses, so such an outcome suggests the coin is probably not fair. Unlikely is not the same as impossible, and if you look long and hard you will inevitably stumble upon random events that seem novel but are just the outcome of chance.
Yes, and if you flip the coin 10,000 times and then comb through your results, you wouldn’t be that surprised to find a run of nine heads in a row somewhere.
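Both halves of that argument are easy to check numerically. A quick sketch (ordinary Python, with a fixed seed so the simulation is reproducible): the exact probability of nine or more heads in ten tosses is about 1%, yet a run of nine straight heads turns up in virtually every sequence of 10,000 tosses.

```python
import random
from math import comb

# Exact probability of nine or more heads in ten fair tosses:
# C(10,9) + C(10,10) out of 2**10 equally likely outcomes.
p_nine_of_ten = (comb(10, 9) + comb(10, 10)) / 2**10
print(f"P(>=9 heads in 10 tosses) = {p_nine_of_ten:.4f}")  # about 0.011

def has_run_of_heads(n_flips, run_len, rng):
    """True if n_flips fair-coin tosses contain a run of run_len+ heads."""
    streak = 0
    for _ in range(n_flips):
        if rng.random() < 0.5:      # heads
            streak += 1
            if streak >= run_len:
                return True
        else:
            streak = 0
    return False

rng = random.Random(0)              # fixed seed for reproducibility
trials = 1000
hits = sum(has_run_of_heads(10_000, 9, rng) for _ in range(trials))
print(f"Fraction of 10,000-flip sequences with 9 straight heads: {hits / trials:.1%}")
```

A roughly one-in-a-hundred event in any single batch of ten tosses becomes a near-certainty once you give chance ten thousand opportunities, which is exactly the trap in slicing a big survey many ways.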
Good statisticians can calculate the right way to adjust the probability of finding such a pattern embedded within a bigger collection of data. But did anyone do that for the obesity results? Khan delves into the JAMA paper where the result was announced. There, he found that the researchers admitted they did not make the proper statistical adjustments.
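What those adjustments guard against can be shown with a back-of-the-envelope calculation. The subgroup counts below are illustrative assumptions, not figures from the JAMA paper: run enough independent tests at the usual 0.05 threshold and the chance of at least one spurious "significant" result grows quickly, which is why corrections such as Bonferroni's shrink the per-test threshold.

```python
# Back-of-the-envelope multiple-comparisons sketch. The subgroup counts
# are illustrative assumptions, not figures from the JAMA paper.
alpha = 0.05                        # conventional significance threshold
for n_tests in (1, 5, 20, 60):
    # Chance of at least one false positive across independent tests
    p_any = 1 - (1 - alpha) ** n_tests
    # Bonferroni correction: divide the threshold by the number of tests
    bonferroni = alpha / n_tests
    print(f"{n_tests:>2} tests: P(>=1 false positive) = {p_any:.0%}, "
          f"Bonferroni threshold = {bonferroni:.4f}")
```

With twenty independent tests, the chance of at least one false positive is already about 64%; an unadjusted subgroup "finding" in a carved-up dataset deserves exactly the skepticism Khan applies.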
In his Language Log blog, Penn linguistics professor Mark Liberman quoted the Slate piece, agreed with it, and then added his own fascinating analysis of how the numbers were derived, which revealed that many of the trends assumed to be changes in obesity might just reflect changes in the way data are sampled. Liberman also noted that if you look at the 2-5 year-old category another way, you could say that the percentage of non-obese kids increased by 6%.
Other critical pieces followed. Most focused just on the misleading nature of the 43% or 40%. Here’s one, by Atif Kukaswadia in a PLOS blog: Childhood obesity drops by 40% in the last decade, Or not really, but who’s checking?
Apparently quite a few people are checking. There was also a critical piece in the Columbia Journalism Review, some of which didn’t make sense to me. The CJR came down quite hard on The Philadelphia Inquirer (disclaimer: I used to work there), saying that a wire story the paper ran had a “misleading” headline. But the headline, U.S. Obesity Rate Shows Signs of Leveling Off, actually seemed to reflect the bigger picture and the boring fact that people are still fat, but not, overall, getting much fatter. So what’s the problem? Here’s CJR:
…Obesity Rate Shows Signs of Leveling Off,” though the body of the article read much more pessimistically, quoting the study’s author in the third graf, saying her data showed “no change in youth or adults.” So much for leveling off.
I’m baffled. If obesity rates were rising and now they’re steady, doesn’t that translate to “leveling off”? I guess there’s a slight ambiguity, since “no change” could be interpreted to say that the rise is as steep as ever, but it seemed pretty clear that the rise had flattened (though not people's stomachs). As news stories went, it was one of the better ones, except for one little goof that CJR missed:
“17% of kids are obese,” followed by “Obesity in kids is defined as a child who has a BMI at or above the 95th percentile for children of the same age and sex.”
How can 17% of kids be in the top 5%? This is one of the mysteries that Liberman detangles in his blog post.
The CJR post went on to praise the USA Today story by Liz Szabo, but that headline, Child obesity rates drop 43% in past decade, seemed not just misleading but wrong. From my reading, the study showed no change in children, except in one subset. There’s a subhead suggesting that some experts warn the numbers are misleading, but then the lede says this:
With so little good news about obesity in the USA, public health advocates are celebrating a rare victory: a sharp decline in obesity rates among young children.
Is it really praiseworthy to say something misleading and then tell people what you just said might be misleading? Why not just refrain from saying it in the first place?
The Philadelphia Inquirer last weekend ran an enlightening follow-up, Teasing out the Numbers in Obesity Study, by staff writer Don Sapatkin. That story added another very important piece of information – there were error bars involved. Big ones.
Or, using confidence intervals at each end – the range of statistical possibilities rather than the most likely point near the middle – the eight-year change comes out somewhere between a 7 percent increase and a 66 percent decrease.
Too bad the original stories didn’t report this, but better late than never.
Sapatkin got in touch with Temple University math professor John Allen Paulos, who is “not convinced” by the alleged improvement. Paulos is famous for his book Innumeracy. He also wrote a book called A Mathematician Reads the Newspaper. Every journalist should read it. It was written over a decade ago, but it covers cases much like this one, in which journalists use misleading relative percentages rather than more straightforward absolute ones. Some things don’t change.