"For the first time, a test that detects 10 types of lipids, or fats, circulating in a person's blood has been shown to predict accurately whether he or she will develop the memory loss and mental decline of Alzheimer's disease over the next two to three years," Melissa Healy writes at the Los Angeles Times.
The question, of course, is: How accurately?
And how soon will it be ready? Healy writes–we're still in the lede here–that "a screening test based on the findings could be available in as little as two years."
I'm not a betting man, but if I were, I'd take all comers: If you think this will be available in two years, slap your cash on the barrel, pal.
It's not until her seventh graf that Healy reports that the test could "sort the subjects" with and without Alzheimer's disease "with over 90% accuracy."
The accuracy of a medical test is usually considered in two respects. First, how sensitive is it? How good is it at picking up people who have the illness in question? How many does it miss?
And second, how specific is it? How good is it at correctly clearing people who do not have the ailment? How often does it wrongly flag them as positive?
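The distinction is easy to show with made-up numbers. Here's a minimal sketch in Python; the 200-person sample and the counts in it are invented for illustration, since the coverage gives only a single "over 90% accuracy" figure, not the underlying sensitivity and specificity.

```python
# Invented counts for illustration: 100 people who develop the disease, 100 who don't.
def sensitivity(true_positives, false_negatives):
    """Of the people who really have the condition, what fraction does the test catch?"""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives, false_positives):
    """Of the people who don't have the condition, what fraction does the test correctly clear?"""
    return true_negatives / (true_negatives + false_positives)

print(sensitivity(true_positives=90, false_negatives=10))   # 0.9: the test misses 10 of the 100 sick people
print(specificity(true_negatives=90, false_positives=10))   # 0.9: it wrongly flags 10 of the 100 healthy people
```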
Healy doesn't make those distinctions. And she also surprises us in paragraph eight: The accuracy is "on a par with tests that measure the cerebrospinal fluid for evidence of abnormal proteins that are a hallmark of Alzheimer's disease."
You mean we already have a test as good as this? If so, that should be in the lede–or the second graf. A blood test, as she points out, "would be cheaper and less invasive than a spinal tap," and that's a good thing. Healy doesn't say so in her lede, but I suspect many readers will come away from it thinking that this is the first test for Alzheimer's disease. Which, according to Healy, it isn't.
To Healy's credit, she was far more measured than the Georgetown University Medical Center's press release, which began, "For the first time in decades of research aimed at cracking Alzheimer’s disease, a team led by Georgetown University Medical Center investigators has developed and validated a simple blood test that can predict which individuals will experience cognitive decline or Alzheimer’s disease in two or three years."
I wouldn't say that the researchers "cracked" Alzheimer's disease. Neither did the press release–but it certainly implied as much.
Georgetown's Howard Federoff, the senior author on the report in Nature Medicine, says in the release that the test is important because it will allow "advances in Alzheimer's disease research and treatment," not because it will help patients any time soon.
And I'm sorry to dwell so long on Healy, because while her story could have been better, others were awful.
CNN might take top honors for worst story with this lede: "In a first-of-its-kind study, researchers have developed a blood test for Alzheimer's disease that predicts with astonishing accuracy whether a healthy person will develop the disease." It goes on to quote Federoff, who tells senior medical correspondent Dr. Elizabeth Cohen that this "is a potential game-changer." She quotes somebody else saying that if this test works in people in their 40s and 50s, "that would be the 'holy grail.'"
We pause here to reflect on the use of superlatives. If you use them too soon, you run out of things to say. By which I mean: If a test for middle-aged adults is the holy grail, what would we call an Alzheimer's cure? A really, really holy grail?
At ABC News, Dr. Danielle Krol of DailyDoseMD wrote a nicely understated lede that said, "A blood test for early Alzheimer’s disease may be on the horizon, according to a new small study that links substances found in blood to mental decline three years later."
At The Washington Post, Tara Bahrampour checked with the Alzheimer's Association, which said the test was "intriguing," but "preliminary." That caution appeared near the end of the story; it should have been higher.
The best piece I saw was by John Gever, a dependable debunker at MedPage Today, who led by saying the test's "accuracy fell short of what would normally be acceptable for a screening test."
That's a perspective I hadn't seen anywhere else. And Gever reports that the test has a "positive predictive value" of 35%, which means that "nearly two-thirds of positive screening results would be false." In other words, out of every 10 people who tested positive, six or seven would be told they were going to develop Alzheimer's disease when that was not the case.
If Gever's math is correct–I can't vouch for it, but he has a long track record–then the test is nowhere close to being useful for screening patients.
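For readers who want to see where a number like that comes from, here is a minimal sketch of the arithmetic. The 90% sensitivity and specificity are taken from the reported accuracy; the 5% prevalence, the share of screened people who actually go on to develop the disease within a few years, is my own assumption for illustration, not a figure from the study.

```python
def positive_predictive_value(sens, spec, prevalence):
    """Fraction of positive results that are true positives, via Bayes' rule."""
    true_positives = sens * prevalence
    false_positives = (1 - spec) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Assumed inputs: 90% sensitivity and specificity, and 5% of the screened group
# actually headed for cognitive decline.
ppv = positive_predictive_value(sens=0.90, spec=0.90, prevalence=0.05)
print(f"{ppv:.0%}")  # about 32%, in the same ballpark as the 35% Gever cites
```

The point is the structure of the calculation, not the exact inputs: when the condition is uncommon in the group being screened, even a test that is right 90% of the time produces mostly false positives.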
We might not all learn to calculate positive predictive values, but we can learn to ask about them.
And we should have already learned, with claims like this, that they might turn out to be unholy grails.
-Paul Raeburn