Two consecutive stories on NPR’s “Morning Edition” Friday took very different approaches to the issue of medical risk and scientific proof. First Snigdha Prakash reported on a new study showing that heart problems from Vioxx can last up to a year after you stop taking the drug. She quoted only critics of Vioxx and gave no indication that there might be another side to the story. She noted that Merck has won three trials and lost three so far; she didn’t remind us of the famous quote from Merck’s highest‐profile loss:
Jurors who voted against Merck said much of the science sailed right over their heads. “Whenever Merck was up there, it was like wah, wah, wah,” said juror John Ostrom, imitating the sounds Charlie Brown’s teacher makes in the television cartoon. “We didn’t know what the heck they were talking about.” (Merck Loss Jolts Drug Giant, Industry, August 22, 2005, The Wall Street Journal)
In the next story Joanne Silberner reported on concerns that four California women “had died after taking the two‐drug abortion pill combination, Mifepristone, sometimes called RU486, and Misoprostol.…The deaths appeared to be a horrific side effect of the drugs.” But Silberner immediately noted that “it’s not likely to be that simple.” She quoted experts who cautioned against jumping to conclusions. She noted that the numbers were small. We would need to know much more before we could assume there was a problem with these abortion drugs.
It was a good example of careful, cautious reporting. But why are journalists seemingly much more cautious in reporting medical risks involving abortion than in reporting other kinds of risks? There are plenty of critics of the “junk science” involved in the Vioxx litigation; why aren’t they interviewed in Vioxx stories? The numbers were small in the Vioxx study, as in the case of the abortion drugs, but that fact was dismissed in one report and emphasized in the other.
Cato’s Jerry Taylor noticed something similar in a Wall Street Journal column 11 years ago (January 3, 1995; not online). He noted that the Journal of the National Cancer Institute
caused quite a stir by publishing an epidemiological study suggesting that women who have abortions are 50% more likely to develop breast cancer than women who do not.…“Not so fast,” countered epidemiologists; a 1.5 risk ratio (as epidemiologists put it) “is not strong enough to call induced abortion a risk factor for breast cancer.”
Taylor agreed that a 1.5 risk ratio is below the appropriate level of concern. But he wondered why “the same risk ratio that was so widely pooh‐poohed by scientists as insignificant and inconclusive when it comes to abortion was deemed by the very same scientists an intolerable health menace when it comes to secondhand smoke. Actually, that’s not quite true. The 1.3 risk factor for a single abortion was significantly greater than the really hard to detect 1.19 risk ratio for intensive, 40‐year, day‐in‐day‐out pack‐a‐day exposure to secondhand smoke (as figured by the EPA).”
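For readers unfamiliar with the term, a risk ratio (or relative risk) is just the incidence of a condition among the exposed group divided by the incidence among the unexposed; a ratio of 1.5 is what the study meant by “50% more likely.” Here is a minimal sketch in Python, using made-up incidence numbers purely for illustration:

```python
def risk_ratio(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Relative risk: incidence among exposed / incidence among unexposed."""
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

# Hypothetical numbers: if 15 of 10,000 exposed women develop the
# condition versus 10 of 10,000 unexposed women, the risk ratio is 1.5,
# i.e. the "50% more likely" figure the study reported.
print(risk_ratio(15, 10_000, 10, 10_000))
```

A ratio near 1.0 means no detectable difference between the groups, which is why epidemiologists treat modest ratios like 1.19, or even 1.5, as weak evidence, easily produced by confounding factors or chance in small samples.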
Taylor worried that too many people fail to understand statistical probabilities or assume that correlation equals causation. He also wondered whether even scientists are susceptible to a political bias against smoking or for a woman’s right to choose. How much more true that must be for journalists.