8 Jan 2013

Columbia Journalism Review: Personal-health journalism serves up a daily diet of unreliable information.


In a 4,000-word cover story in the Jan./Feb. 2013 issue of the Columbia Journalism Review, David H. Freedman, a contributing editor at The Atlantic, offers us a comprehensive critique of what he calls "personal-health journalism"--what most of us would call medical writing. "Personal-health journalists have fallen into a trap," he writes, producing stories that "grossly mislead the public, often in ways that can lead to poor health decisions with catastrophic consequences."

The problem is not "the sloppiness of poorly trained science writers looking for sensational headlines," he writes. "Many of these articles were written by celebrated health-science journalists and published in respected magazines and newspapers; their arguments were backed up with what appears to be solid, balanced reporting and the careful citing of published scientific findings."

Freedman's criticism is breathtaking in its expanse and its failure to admit exceptions. The logical problem here is that Freedman's story is itself a piece of personal-health journalism, so the criticism he applies to others dooms his own piece. His story collapses like a liar's paradox: All health journalists are wrong, he writes. But that can't be true: if all health journalists are wrong, then so is Freedman--and if Freedman is wrong, they are not all wrong.

I'm joking here, but I have a point: If Freedman is going to indict stories that are based on solid reporting and careful citation of scientific findings, what is he going to rely on to make his case? If not solid reporting and published findings, what?

Here is a summary of what Freedman has to say.

He leads with a dissection of a Tara Parker-Pope story, "The Fat Trap," that appeared in the Times magazine in December, 2011. The article discusses research that shows that most people who lose weight gain it back, and it includes studies that show why that might be the case--and why the body's systems designed to conserve energy as fat are so difficult to defeat.

Freedman writes that the article is "a well-reported, well-written, highly readable, and convincing piece of personal-health-science journalism that is careful to pin its claims to published research." It's hard to imagine how he could criticize it after such expansive praise, but criticize it he does. "There’s really just one problem with Parker-Pope’s piece: Many, if not most, researchers and experts who work closely with the overweight and obese would pronounce its main thesis—that sustaining weight loss is nearly impossible—dead wrong, and misleading in a way that could seriously, if indirectly, damage the health of millions of people."

Again, we find ourselves in a logical knot. If the article was "well reported," how did it miss the "many experts" who would pronounce its thesis dead wrong? 

Freedman bases his claim that she was wrong on "many readers--including a number of physicians, nutritionists, and mental-health professionals," none of whom he names, and on two doctors whom he quotes. He does not review published studies on this point. This casual survey of critics might be enough if Freedman were arguing that Parker-Pope's reporting had missed something--but it falls far short of what he needs to call her "dead wrong."

Freedman does make some useful points about medical reporting, but they are not new. "The problem isn’t unique to the Times, or to the subject of weight loss," he writes. "In all areas of personal health, we see prominent media reports that directly oppose well-established knowledge in the field, or that make it sound as if scientifically unresolved questions have been resolved." He's right; we do see stories that challenge widely held notions about obesity, and some of them are wrong. And many stories do make scientific questions sound settled when they are not. These criticisms are worth making, but the "celebrated" medical journalists that Freedman is condemning are aware of these concerns and try to avoid these mistakes. Newcomers might find this discussion helpful.

After dispatching Parker-Pope, Freedman broadens his critique. "Indeed, most major Times articles on obesity contradict one another, and they all gainsay the longstanding consensus of the field." I'm not sure what longstanding consensus he is referring to, but Freedman has twisted himself into another knot. The stories cannot all contradict one another and contradict the consensus at the same time. If one story contradicts Freedman's unspecified consensus, the story that contradicts that story must agree with the consensus. Freedman seems to be operating in some multidimensional space where one line can be perpendicular to dozens of others, and they are all perpendicular to each other. That doesn't happen in the three-dimensional medical writing that we're accustomed to. Again, I have a point: Freedman can't dismiss all Times stories. Some of them, even if by happenstance, must turn out to be what he considers correct.

As an example of bad medical writing, Freedman cites "innumerable articles," including Parker-Pope's, that suggest that obesity is largely genetically determined. And then he engages in a little medical writing of his own:

But study after study has shown that obesity tends to correlate to environment, not personal genome, as per the fact that people who emigrate from countries with traditionally low obesity rates, such as China, tend to hew to the obesity rates of their adopted countries. What’s more, global obesity rates are rapidly rising year by year, including in China, whereas the human genome barely changes over thousands of years. And studies clearly show that “obesity genes” are essentially neutralized by healthy behaviors such as exercise.

This could all be true, but these two paragraphs are nowhere near enough to dismiss every story that comes to a different conclusion. And suddenly Freedman is citing published studies, which he has already argued cannot provide the basis for legitimate conclusions. Do we sense an agenda here? Freedman writes that "study after study has shown that obesity tends to correlate to environment, not personal genome." He expresses no doubt on that point. So we wonder: Is it all medical writing that Freedman condemns, or just the stories that emphasize genes over environment?

Freedman then cites the noted critic and scientist John Ioannidis of Stanford, who has convincingly argued that there are many, many problems with the conclusions reported in medical journals. Freedman explains at some length why randomized clinical trials "are plagued with inaccurate findings," but his analysis will not be news to the people who do these studies or those who write about them. Indeed, Freedman is recycling some of what he wrote in a story that he published in The Atlantic in November, 2010. Ioannidis is correct: These studies are not perfect. But what is the alternative? 

Freedman then builds to his close, where we expect him to explain what the alternatives are. How can medical writers do better? "Too many health journalists tend to simply pass along what scientists hand them—or worse, what the scientists’ PR departments hand them," Freedman writes. Of course they do. But not Parker-Pope. How do we solve the Parker-Pope problem--eradicating the "catastrophic consequences" in the work of "celebrated" medical journalists?

Because published medical findings are "more often wrong than right" (a conclusion drawn from Ioannidis), a reporter who quotes studies "is probably transmitting the wrong findings," Freedman writes. "And because the media tend to pick the most exciting findings from journals to pass on to the public, they are in essence picking the worst of the worst. Health journalism, then, is largely based on a principle of survival of the wrongest." 

What should reporters do? Freedman:

Readers ought to be alerted, as a matter of course, to the fact that wrongness is embedded in the entire research system, and that few medical research findings ought to be considered completely reliable, regardless of the type of study, who conducted it, where it was published, or who says it’s a good study.

That sounds like a prescription for refusing to report any medical news, and more--for actively working to shield readers from new medical findings. It is not true that "wrongness is embedded in the entire research system," and anyone who has been cured of cancer, protected from polio, or treated for pneumonia knows it isn't true. 

Health journalists make the situation worse, Freedman writes, by writing about "the exciting, controversial idea that their editors are counting on." For example (and this is my example), a reporter probably couldn't get far with his editors if he pitched a story saying some medical reporters are lousy at what they do. But the "exciting, controversial" idea that even the best medical reporters mislead and threaten the lives of their readers might be what his editors are counting on.

In his closing paragraph, he seems to broaden his critique to all science writers, not just medical writers. This comes as a surprise. He quotes Dennis Overbye of the Times, who does not cover medicine, as saying that scientists' values are honesty, doubt, respect for evidence, and so forth. Freedman's response: "But given what we know about the problems with scientific studies, anyone who wants to assert that science is being carried out by an army of Abraham Lincolns has a lot of explaining to do." Apparently Freedman thinks his criticism applies to the cosmologists and astronomers that Overbye covers, although it's difficult to know how a cosmology story based on overwhelmingly misleading studies would threaten readers' lives, which is where the argument began.

In the end, Freedman offers us no remedy for what he thinks is wrong with medical journalism. He doesn't cite any examples in which he thinks medical reporters got it right; apparently, in his view, there are none. Freedman has succeeded in writing a provocative piece, and it is sure to get more attention than a story that says what is actually the case: Some medical researchers and medical writers are very good at what they do; some are terrible; and most are somewhere in the middle.

Freedman is correct: Few medical research findings should be considered completely reliable. But many medical research findings, and many of the stories the best medical reporters write, have enriched our lives and, to use Freedman's metric, saved readers' lives in some cases, I'm sure. I commend Freedman for reminding medical reporters that they are not beyond criticism. But he offers them no way out. According to his website, Freedman has written mostly about technology, and only occasionally about medicine. Maybe that's what his criticism ultimately comes down to: Medical writers would do best to write about something else.

-Paul Raeburn

Comments

We could all appreciate a little more professionalism. Of course these people need to have some health-education background in order to be able to write news about these medical facts. As a reader I always use the most trustworthy sources. I recently read an interesting article on Nexium; I found it helpful and relevant to my health condition, but before I make any decisions I still need to see a doctor.

Rose,

I appreciate the comment, and although I suffer from a New York bias, I do try to remember that other things are happening across the Hudson. I think I can stand by my criticism of Freedman's story and, without contradiction, acknowledge that much of the reporting being done around the country is poor or, as you point out, nonexistent.

This is particularly ironic in North Carolina, home to Duke, a medical powerhouse. 

Best of luck with North Carolina Health News.

 

I read Freedman's piece with great interest, went back and re-read Parker-Pope's article and then stumbled across your piece here, Paul. I'd like to comment, even knowing that I'm quite late to the dance.

To start, I need to give some context. I'm a health reporter in North Carolina, where all of the state's papers, save the Charlotte Observer, have lost all of their health reporters via layoffs, buyouts, resignations and attrition. When I left my old pub radio job to start NC Health News (with an intention to fill that gap somewhat), my old role was not replaced. So there's no one reporting on pub radio in NC who has any healthcare expertise; actually, there's no healthcare reporting at all.

When I go home to NY (I'm from LI) and tell people that I'm doing what I'm doing because there are only two full-time health reporters in a state of 9.7 million, I get looks like I've come from Mars.

Because I am.

I actually really appreciated Freedman's article. And while I'm tremendously glad there are folks who are deeply concerned about arguing how many angels are dancing on the head of the NY Times health page, most of the people in the country aren't reading the NYT; they're reading their local media outlets, and there's a real trend in personal health reporting out there that's B.A.D. So I think Freedman's point is well taken.

Perhaps he'd have been better served to dissect some of the personal-health reporting from beyond the Tristate area. I'm sure his putting a target on TPP's story was all about saying, "See, even at the Grey Lady, it's this bad," but if you think she does a bad job of parsing the research and reaching a conclusion, and that Freedman does a bad job of parsing TPP's parsing, you really need to read some regional papers... where most of my legislators and fellow citizens get their information.

The reality out here past the Hudson is that the quality of health reporting is pretty goddamned poor, something Gary Schwitzer attests to almost daily. I know a lot of journos out here pay attention to CJR. I'm hoping that the piece will make editors and reporters think twice, call one more researcher, challenge their own assumptions more vigorously and be more careful in the conclusions they reach in the complicated healthcare stories they take on.

 

Charles,

Here we are, two people who do not know a lot about current thinking on obesity, swapping hypotheticals. OK, one more round...

Your hypothetical Parker-Pope lede makes sense if in fact the evidence to support Weight Watchers is overwhelming and the evidence for the contrary position is but a “growing belief.” If there is a lot of evidence on both sides of this question (that’s my guess), or a preponderance on the Parker-Pope side, then she is on safer ground. Freedman doesn’t help us with this. He asserts that she is dead wrong and continues on his way. And so are many other “celebrated” journalists, etc., etc., etc.

But you and I are now editing Parker-Pope, so to speak. We could differ on how to begin the tale, and we might disagree about how to massage her copy, if we were her editors. But we are not talking about the kind of cataclysmic failing that Freedman is warning us about. We’re merely talking about different ways to handle the story. Freedman did not say that a better edit of her piece would have helped; he says she’s (broken record) dead wrong.

Paul,

Thanks for the note. I hope my noting the typo didn't seem snotty -- I was worried that it would seem odd to quote it without some acknowledgement of its existence.

Let me try to suggest where I think Freedman disagrees with you, if only for the purpose of clarity.

Parker-Pope's article begins with a small study by Joseph Proietto and explains its findings. Then her introduction finishes with a nut paragraph that tells readers what the thrust of the article will be:

"For years, the advice to the overweight and obese has been that we simply need to eat less and exercise more. While there is truth to this guidance, it fails to take into account that the human body continues to fight against weight loss long after dieting has stopped. This translates into a sobering reality: once we become fat, most of us, despite our best efforts, will probably stay fat."

Freedman, I think, would argue that Parker-Pope should have begun with something like the following:

"In more than four decades, Weight Watchers has provided a program that has, according to its databases, helped ten zillion people take off weight. Fewer than X% of its members return, which has long been taken as evidence that its members take the weight off for a long time. For Weight Watchers -- and the scores of scientific studies based on the company's work -- such figures are ample evidence that people in the real world can and do lose weight, even if it is difficult.

"Yet at the same time, there is a growing belief among scientists, based on an accumulating number of studies in laboratories, that such weight loss should be next to impossible, because the body contains newly discovered genetic switches that, so to speak, seem to fight against dieting. Indeed, according to Joseph Proietto..."

Because I don't know much about the subject, I would never say that Parker-Pope's original article is wrong. Nor do I have any brief for Weight Watchers, which I know little about -- I made up all that stuff above as a hypothetical example, because Freedman cites them in his article.

My point is only that the second, Freedmanesque approach truly is quite different from the first -- and presents the scientific results in a broader, more skeptical context right from the beginning. Some good journalists do adopt this approach. But it's not something I commonly encounter. And Freedman's article made me want to go back and look at my stuff to see whether I should have worked in this way myself more often in the past.

Hoping that this particular deceased quadruped can be flogged a little more,
CCM

 

Charles,

Thanks for the reasoned comments, and pardon my failed copy editing.

I agree with your point that reporters should begin with what is known and proceed from there. We all agree, don’t we? With regard to the obesity example, yes, there are gigantic clouds of error and confusion. But evidently not so in Freedman’s mind: Given those clouds of confusion, how can he pronounce Parker-Pope dead wrong? He does not come close to making that case; he simply asserts it. That’s one small problem with his piece. My suspicion is that all the reporting one could do in a lifetime regarding obesity would conclude that genes and environment are both important. Freedman disagrees, based on (questionable) “study after study.”

The larger problem is that he wavers between two points of view, one of which is widely accepted and not news, and the other of which is, in my view, unsupportable. The first is that reporters ought to be more skeptical and careful—a point of view we all agree with, and not the kind of news that would make a cover story in CJR. The second is that even the best reporters fall victim to errors in science publications and make mistakes in reporting that make their stories—at least in one example we’re given—dead wrong. If Parker-Pope and you and I and many experienced and skilled reporters reading the Tracker can do our best work and still be found wanting by Freedman, what do we do?

He offers us no way out, except the formulaic advice that we should “openly question findings from highly credentialed scientists and trusted journals.” I believe that’s what we’ve all been taught to do. When your mother says she loves you, check it out, as the gag goes. And if “wrongness is embedded in the entire research system,” as Freedman writes, what do we rely on to write our stories? Opinions? Beliefs? Or, as Freedman proposes, “what we see in clinical practice”? The point of careful studies, despite their problems, is that they can avoid the biases inherent in clinical practice. Without studies, who can say which clinical observation is superior to another? 

@ Paul Raeburn:

You write (in comments):

"What was news in this story was that even the best reporters, such as Parker-Pope, can do everything write [sic] and still be "dead wrong." I don't believe that..."

To me, this seems to be the key difference between yourself and Freedman -- and, in my opinion, the key place in which you have misunderstood Freedman's argument. Freedman's argument is that a) simply taking extra care to report individual scientific findings is pointless when entire fields are enveloped in gigantic clouds of error and confusion; and b) reporters should step back and use as a starting point the few things that are actually known, even when scientists do not.

For example, Freedman talks about the many journalistic articles that accurately report on scientific studies which purport to show that sustained weight loss is next to impossible. In his view, if I understand right, it is pointless to start with those scientific studies and then reach beyond them to set context. Instead, reporters should begin with what is solid: that long-standing, highly successful businesses like Weight Watchers have reams of data showing that in the real world, as opposed to the lab, hundreds of thousands of people have had sustained moderate weight loss. Then they should look at the studies and see whether they fit in with this factual background. Freedman would suggest, I think, that the journalist's subject might be the glaring contradiction between the findings from the lab and the real-world evidence. This would be a strikingly different approach -- and is the kind of journalistic solution that Raeburn is demanding.

I should note that Dave Freedman is a friend, so I'm biased toward giving him a sympathetic hearing. And I don't know anything about Weight Watchers, so take my discussion above as a potential example of the kind of actual knowledge he is talking about, rather than gospel. Still, I do think this is his argument in a nutshell. Moreover, the approach he champions -- proceeding from what little seems to be known, and then seeing where the novel fits into it -- is one that I see good scientists use all the time to evaluate new findings in their own disciplines.

CCM

OK, I pressed "save" on the comment I just wrote, figuring the system would save it while I logged in, but instead the system posted the comment under the name "anonymous." I intended to put my name on it. (Note to webmaster: Maybe "Post Comment" would be a better label for that button? Just a friendly suggestion.)

I'm not going to list the many flaws and distortions in this critique of my article. (I mention a few in a comment I posted in the comment stream on the article at CJR.) Anyone who believes Raeburn has successfully dismissed the concerns I raise (or that science writers have been dealing with them all along--impressively, he makes both claims) is believing what they want to or need to believe, and will not be moved by any evidence or reasoning I offer. (One reason I try not to respond to people who demand I give them easily Googled citations is that they only want me to cite them so they can dismantle them, high-school-debate-style, not learn from them.) The main point is, I think Raeburn is doing exactly the right thing here, and I'm delighted to see someone casting a harsh eye on a work of science journalism (which is, I guess, what my piece could be considered), even if it's my piece going under the nasty microscope. So in that regard, good for him, and I'm happy to see him have his say. I hope people read both pieces with an open mind.

My real complaint is that Raeburn and those science writers who applaud his effort to discredit my piece rarely give their own work this sort of treatment. If they did, I wouldn't have a leg to stand on. As long as they keep turning out the same credulous glorification of published science, and saving their tough criticism for anyone who dares to question the enterprise, nothing will improve. So bravo for taking on my piece, and shame on all of you for routinely giving yourselves a pass.

There are plenty of pros and cons to be brought up about Freedman's article. But is the main point here being missed? The Tracker is properly a place to critique individual articles. But (in my opinion) it should also be addressing issues of science journalism directly, using articles like Freedman's as a springboard for discussion.

So what is the important issue here? I would say that from the point of view of a reader of scientific news the most important take-away from a new piece of news is not "What is the latest research report saying?" Instead it should be: "What are the best minds in the relevant field of science saying about the latest research?"

I mean, seriously, new research reports are a dime a dozen. (Well, a little more than that these days, perhaps.) Nature, Science, PNAS, etc. serve up dozens every week. If the results are really novel (and why would they be in one of those journals if not?), they may also be controversial. If I'm not an expert in the particular field of an article, but still very interested in the results, what I most want to know is what the genuine experts think. Digging up that kind of information is what I think a good science writer really ought to be doing, not simply restating the results in, perhaps, somewhat less technical language. Or trying to critique the result on his/her own.

Then, as to this: "Freedman seems to be talking about a simple thing: whether obesity is caused by environment or genes." Stop right there. It isn't so simple at all, is it? Many real world phenomena have multiple causes - especially phenomena involving complex human diseases. So, tell me, what "causes" cancer? Is it a choice between either X or Y? Why not both? And what about Z? I think the situation with obesity is similar, if not quite as complex as cancer. Right off the bat "environment or genes" is not the only possibility at all. What about "lifestyle factors" (amount of exercise, addictive behaviors, etc.)? And within each of those causation categories, there are many subtypes. (Which genes, for example - there are hundreds of possibilities.) And all the possible causes may interact with each other.

What I as a reader of science writing want is not simply the latest results. I want as much of the context as possible (within time/space constraints). If someone has found a new gene that may affect obesity I want to know where it fits in the whole picture. I want to know what other genes are relevant - in the opinion of other experts in the field. And how does the gene interact with factors of environment and lifestyle?

Instead of simply reporting that yet another study says that people who lose a lot of weight almost always gain it back, can we talk about what factors affect the result? What are the factors that appear relevant to whether or not weight losses are sustained? If the latest study doesn't address that question, was it really worth doing or reporting? What is known about such factors from other similar research?

Think about it. How is it useful to a lay reader of a story on obesity research to know if some new gene related to leptin response has been found, unless that's connected with factors of environment or lifestyle that the reader may have more control over?

Perhaps science writers need to focus as much as real estate professionals do. Except instead of location, location, location, the key ingredient is context, context, context.

I agree with Curtis and Mary Beth that skepticism is crucial, and that too many medical reporters are not skeptical enough. But that's not news, at least among those of us who practice and think about science and medical reporting. What was news in this story was that even the best reporters, such as Parker-Pope, can do everything write and still be "dead wrong." 

I don't believe that, and I don't believe the best reporters pander to the public's desire for a simple answer. That, among other things, is what makes them the best reporters.

Curtis is correct that new findings should be put in context, of course. And while Freedman doesn't say explicitly that all health reporters are wrong, he doesn't offer examples of good stories. If Freedman believes that two-thirds of the medical literature is wrong, then one can only assume he does not want that news reported, and that the best reporters would do well by not reporting it. 

 

As a former biotech rep, I'll make a final point: manipulation of data is commonplace from a marketing/PR perspective. That holds true for selling journals, advertising link-throughs and products. We live in a society that rewards winning at any cost. The fact is that average readers, consumers and medical professionals each have a set of values for determining validity and trust in any publication, corporation or person. At some point, individuals have to become responsible for making their own educated choices.

"Freedman then builds to his close, where we expect him to explain what the alternatives are. How can medical writers do better? "Too many health journalists tend to simply pass along what scientists hand them—or worse, what the scientists’ PR departments hand them," Freedman writes. Of course they do. But not Parker-Pope. How do we solve the Parker-Pope problem: Eradicating the "catastrophic consequences" in the work of "celebrated" medical journalists?"

My answer is skepticism and genuine investigative journalism. If one point looks too good to be true, it's guaranteed there are five average and five sub-par results that no one takes the time to dig for. The diamond data mine is loaded, filled with synthetics and a few genuine natural carbon sparklers.

 

[Curtis Brainard of the Columbia Journalism Review sent me this email earlier today. I'm reprinting it with his permission.]

Hey Paul,

I assigned and was one of the editors on the piece. Thanks for the tough and thorough critique. I think you make a couple of fair points, while others are off the mark.

We recognized the peril of citing studies to shoot down other studies, and perhaps I let Dave drift too far from a debate about journalism to a debate about medical science, but I think he uses the studies in a different way than Parker-Pope and Taubes did. He's using them to call tentative theories into question, not to hold them up as reliable medical advice. There's just way too much "you-can't-lose-weight-even-if-you-try-oh-no-you-can-but-only-if-you-eat-a-high-protein-diet" coverage out there. And we didn’t accept Freedman's piece simply because going after the best medical reporters is exciting and controversial (that was a low blow on your part), but rather because we believed that it's important to point out that even the best writers sometimes pander to the public's craving for simple answers and the medical industry's desire to provide them. I'll concede that we could've done a better job deconstructing Parker-Pope and Taubes's work, giving them more credit for the nuance in their articles, but this wasn’t, as you imply, a hollow attempt to go after them simply for the sake of doing so.

When it comes to Dave's discussion of health reporting in general, I think you mischaracterize a couple of his points. He never, ever says that "all health reporters are wrong," and it's really unfair to say he does. Nor is he--in any way, shape, or form--issuing "a prescription for refusing to report any medical news" or "for actively shielding readers from new findings." I'm shocked you would think so. I think it's pretty clear that all he's saying is that when covering health/medicine, journalists should be far, far more skeptical. I'd say that the vast majority of articles I read treat the latest papers published in Science and Nature as gospel and don't deliver any scientific context or any caveats. It's deplorable. And like Freedman, I think it's dangerous. That's why we didn't bother to highlight a lot of examples of reporters doing it right. Sometimes you have to shake a stick at people. You can't just point to the good Samaritans out there and hope all the bad actors will fall in line.

Best,

Curtis

 

Michelle,

As you point out, there is a serious need for better attribution in this piece, especially in the example you mention. There are other spots where attribution would help--but to what does one attribute these claims if, like Freedman, one largely dismisses the medical literature as wrong? As I pointed out, he gets himself in a box over and over again.

And you're right, of course; personal experience does not trump all. The reason people do studies is because often things that seem to be true, that we think must be true--turn out not to be true.

Thanks for taking the time to share your thoughts.

 

Paul - So glad that you wrote this analysis of the CJR piece. You didn't mention the two things that bothered me the most (though you raised a number of excellent points that didn't occur to me).

1) "Many programs and studies routinely record sustained weight-loss success rates in the 30-percent range." No attribution here. NO ATTRIBUTION HERE. The sentence is so poorly worded that it's not clear if the weight loss is in the 30 percent range per person (ie I lose 30 percent of my body weight) or if the study/program leads 30 percent of participants to achieve some undefined measure of "sustained weight loss." With the exception of surgery or a Dean Ornish-like program, I can't think of anything that fits this criteria. I would be shocked if there is a way to do this with a "satisfying lifestyle change." If it's true, I have no idea why millions of Americans are overweight and obese. Seems that the author like statistics and studies when the findings fit his thesis.

2) "Many, if not most, researchers and experts who work closely with the overweight and obese would pronounce its main thesis—that sustaining weight loss is nearly impossible—dead wrong, and misleading in a way that could seriously, if indirectly, damage the health of millions of people." You have to acknowledge the conflict of interest here. These researchers and experts have built their entire lives, certainly professional and likely personal since this is how they support themselves and their families, on this topic. They are believers and God bless them for it. I agree with them! But while they have valuable insight, their personal experience doesn't trump everything else.

And finally, I loved this: "Scientific research needs to square with what we see in clinical practice." What do you see in clinical practice: that once we become fat most people stay fat? - or - that most people who go to Weight Watchers pick up behavioral modification tips and drop 30 percent of their body weight? Is it just me?

Paul, you are dead right that the issue here is that THIS is a piece of personal health journalism and THIS should be the example given to newcomers on what not to do. Normally I wouldn't care that much. Bloomberg almost never writes personal health stories. But I thought the Columbia Journalism Review was supposed to be ... I don't even know what to say. Certainly better than this.

Charles,

Good points. You are right about many dimensions, but Freedman seems to be talking about a simple thing: whether obesity is caused by environment or genes.

And you're right about physics being precise--in some circumstances--but I remember physics teachers telling me things were more-or-less equal if they were "within a factor of 2," or if they were the same order of magnitude. Biology usually does better than either of those approximations.

Thanks for the comment.

"Freedman seems to be operating in some multidimensional space where one line can be perpendicular to dozens of others, and they are all perpendicular to each other. That doesn't happen in the three-dimensional medical writing that we're accustomed to. "

I wouldn't be so sure that this "doesn't happen". Scientists of all kinds, not infrequently, have to deal with a multi-dimensional reality (dozens of dimensions, sometimes). That's what long-used statistical techniques like factor analysis are all about. Such techniques are being refined all the time, including use of sophisticated mathematical ideas from topology. But that's a complicated story in itself. The short summary is that of two studies (medical or otherwise) that contradict each other, it's not at all clear that either one or the other has to be "correct", unless they are fairly simplistic, such as "Y is the main cause of disease X". A finding that "Z is the main cause" is contradictory, yet both could be wrong. There may be many "causes", including some more significant than either Y or Z.
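
To make that concrete, here's a toy simulation -- my own sketch, not drawn from any real study; every variable name and effect size below is invented. One outcome is driven by several interacting factors, and each of several hypothetical single-factor "studies" finds a respectable correlation for its own candidate cause, so each could headline a different "main cause" while all of them miss the interaction:

```python
# Toy illustration only: all variables and effect sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

genes = rng.normal(size=n)     # hypothetical genetic score
diet = rng.normal(size=n)      # hypothetical dietary environment
exercise = rng.normal(size=n)  # hypothetical activity level

# The "true" outcome depends on all three factors plus a gene-diet interaction.
risk = (0.3 * genes + 0.3 * diet - 0.3 * exercise
        + 0.4 * genes * diet
        + rng.normal(scale=0.5, size=n))

# Each single-factor "study" measures only its own variable.
for name, factor in [("genes", genes), ("diet", diet), ("exercise", exercise)]:
    r = np.corrcoef(factor, risk)[0, 1]
    print(f"Study measuring only {name}: r = {r:+.2f}")
```

Each study sees a real effect of roughly the same size, so a "genes are the main cause" headline and a "diet is the main cause" headline contradict each other while both are wrong: no single factor dominates, and the interaction term belongs to neither study.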

I'd suggest that one conclusion that can be drawn from Freedman's critique (whatever flaws it may have) is that proper medical reporting is a lot harder to do than is actually done most of the time. To do a proper job, the reporter needs to be aware of just what other lines of research have already been done that paint a very different picture from some "new" finding. And if a particular reporter lacks substantial awareness of the topic, it's even more essential to contact other scientists in the topic area who can provide the necessary perspective.

Editors need to insist on this kind of depth. It's more than just a he-said/she-said issue of balance - there needs to be enough perspective to be able to offer a useful assessment as to how much consensus does, or doesn't, exist for a particular topic.

This situation is very different from scientific fields such as physics, which have a tradition of requiring very high levels of statistical significance. A field that tolerates a 1 in 20 chance of statistical error, or even 1 in 50, is very different from a field that insists on roughly one chance in a few million, as the particle physicists' five-sigma standard does.
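
To put numbers on that gap, here's a minimal back-of-the-envelope sketch of my own (assuming scipy and the usual normal approximation) that converts each threshold into a tail probability and an equivalent z-score:

```python
# Illustration only: thresholds chosen to match the figures discussed above.
from scipy.stats import norm

thresholds = {
    "common medical threshold (1 in 20)": 0.05,
    "stricter threshold (1 in 50)": 0.02,
    "particle physics five-sigma standard": float(norm.sf(5)),  # one-tailed
}

for label, p in thresholds.items():
    z = norm.isf(p)  # equivalent one-tailed z-score
    print(f"{label}: p = {p:.2e}, about 1 in {1 / p:,.0f}, ~{z:.1f} sigma")
```

The five-sigma line works out to a tail probability of about 3 in 10 million--one false alarm in roughly 3.5 million tries--four to five orders of magnitude stricter than p = 0.05.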

I stand by what I commented on Freedman's story. A major part of the problem is that p-values are too loose in medical research. No, I'm not looking for, and know we can't have, the five-sigma standard of physics. But, at least no looser than 3 percent, if not 2 percent, across the board?

We'd probably get rid of some of the he-said/she-said right there.

That said, I agree that his extension of the critique to science journalism in general was off base.
