For years I have ranted about the flaws of medicine, especially when it comes to mental illness and cancer. But my complaints are mild compared to those of Jacob Stegenga, a philosopher of science at the University of Cambridge.
In Medical Nihilism, published by Oxford University Press, Stegenga presents a devastating critique of medicine. Most treatments, he argues, do not work very well, and many do more harm than good. Therefore we should “have little confidence in medical interventions” and resort to them much more sparingly. This is what Stegenga means by medical nihilism. I learned about Medical Nihilism from economist Russ Roberts, who recently interviewed Stegenga on the popular podcast EconTalk.
Skepticism toward medicine, sometimes called “therapeutic nihilism,” was once widespread, even among physicians, Stegenga notes. In 1860 Oliver Wendell Holmes, dean of Harvard Medical School, wrote that “if the whole materia medica, as now used, could be sunk to the bottom of the sea, it would be all the better for mankind—and all the worse for the fishes.”
Such cynicism faded with the advent of anesthesia, antiseptic surgical techniques, vaccines and truly effective treatments, notably antibiotics for infectious disease and insulin for diabetes. Stegenga calls these latter two “magic bullets,” a phrase coined by physician/chemist Paul Ehrlich to describe treatments that target the cause of a disease without disrupting the body’s healthy functions.
Researchers have labored mightily to find more magic bullets, but they remain rare. For example, imatinib, brand name Gleevec, is “an especially effective treatment” for one type of leukemia, Stegenga says. But Gleevec has “severe adverse effects, including nausea, headaches, severe cardiac failure and delayed growth in children.”
Most other forms of cancer, as well as heart disease, Parkinson’s, Alzheimer’s, arthritis, schizophrenia and bipolar disorder, lack cures or reliable treatments. Many “widely consumed” medications are “barely effective and have many harmful side effects,” Stegenga writes. Examples include drugs for high cholesterol, hypertension, type 2 diabetes and depression.
Stegenga warns readers not to stop taking prescribed medications without medical supervision, because abrupt cessation can be risky. But our health will improve and our costs shrink, Stegenga contends, if we resort to treatments much less often. As Hippocrates once said, “to do nothing is also a good remedy.”
Anticipating objections to this thesis, Stegenga emphasizes that he is not anti-science or anti-medicine. Quite the contrary. His goal is to improve medicine, aligning it with what rigorous research actually reveals about the pros and cons of treatments. His thesis should not hearten advocates of “alternative” medicine, which has even less empirical standing than the mainstream. He writes:
There is no place I would rather be after a serious accident than in an intensive care unit. For a headache, aspirin; for many infections, antibiotics; for some diabetics, insulin—there are a handful of truly amazing medical interventions, many discovered between seventy and ninety years ago. However, by most measures of medical consumption—number of patients, number of dollars, number of prescriptions—the most commonly employed interventions, especially those introduced in recent decades, provide compelling warrant for medical nihilism.
Here are key points:
Medical research is slanted toward positive results. The core of Stegenga’s book is his critique of clinical trials. Everybody wants positive results. Patients are desperate to be cured and prone to the placebo effect. Journals are eager to publish good medical news, journals and mass media to publicize it and the public to read it. Researchers can gain grants, glory and tenure by showing that a treatment works.
Most importantly, biomedical firms, which sponsor the bulk of research, can earn billions from a single approved drug, like Prozac. John Ioannidis, a Stanford statistician who has exposed flaws in the scientific literature and whom Stegenga cites repeatedly, contends that “conflicts of interest abound” in medical research. Most clinical research, Ioannidis asserted bluntly in 2016, “is not useful,” meaning it does not “make a difference for health and disease outcomes.”
Randomized controlled trials, the gold standard for medical research, are supposed to minimize bias. Typically, subjects are randomly assigned to two groups, one of which receives a potential treatment and the other a placebo. Researchers and subjects are “blind,” meaning that they do not know who is getting the drug or placebo.
But as Stegenga points out, researchers must make many judgment calls as they design, implement and interpret trials. Randomized controlled trials are thus far less rigorous and objective and more “malleable,” or subject to manipulation, than they seem. The same is true of meta-analyses, which assess data from multiple trials.
This malleability explains why the results of different trials vary widely, and why industry-sponsored research is far more likely to show benefits than independent investigations. Meta-analyses of antidepressants carried out by researchers with industry ties are 22 times less likely to mention negative effects than independent analyses. According to another analysis, company-sponsored comparisons of hypertension treatments are 35 times more likely to favor the sponsor’s treatment over alternatives.
More rigorous studies show fewer benefits. Researchers eager for positive results can engage in p-hacking, which involves testing many hypotheses against the data after a study is carried out and reporting only those the data happen to support. P-hacking is a form of cherry-picking, which allows researchers to attribute significance to what may be random correlations. One way to prevent p-hacking is to make researchers pre-register studies, spelling out their hypotheses and methods in advance.
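To make the mechanics concrete, here is a toy simulation (my own sketch, not from Stegenga's book) of trials of a drug that does nothing, in which researchers happen to measure 20 outcomes. A pre-registered analysis tests only the one outcome specified in advance; a p-hacked analysis scans all 20 and reports whichever looks best:

```python
import math
import random

random.seed(42)

def two_sample_p(xs, ys):
    """Two-sided p-value from a normal-approximation z-test
    (adequate here, since each group has 30 subjects)."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

N_SIMS, N_OUTCOMES, N_PER_ARM = 1000, 20, 30

honest_hits = 0   # pre-registered: test only the first outcome
hacked_hits = 0   # p-hacked: scan all 20 outcomes, keep the best

for _ in range(N_SIMS):
    # The drug is pure placebo: every outcome is noise in both arms.
    pvals = [
        two_sample_p([random.gauss(0, 1) for _ in range(N_PER_ARM)],
                     [random.gauss(0, 1) for _ in range(N_PER_ARM)])
        for _ in range(N_OUTCOMES)
    ]
    honest_hits += pvals[0] < 0.05
    hacked_hits += min(pvals) < 0.05

print(f"pre-registered false-positive rate: {honest_hits / N_SIMS:.2f}")
print(f"p-hacked false-positive rate:       {hacked_hits / N_SIMS:.2f}")
```

The pre-registered analysis declares a useless drug effective about 5 percent of the time, by construction; the outcome-scanning analysis does so most of the time, which is why pre-registration matters.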
A 2015 study examined the effect of pre-registration on federally funded trials of heart-disease interventions. Of trials carried out before 2000, when pre-registration went into effect, 57 percent showed benefits from interventions, compared to 8 percent of the later trials, which were also designed with less input from industry and more from independent researchers. Stegenga notes that on average post-2000 interventions “did not help.”
Meta-analyses by the Cochrane Collaboration, a group of independent researchers with high standards of evidence, are half as likely to report positive findings as meta-analyses by other groups. The disturbing implication of these studies, Stegenga says, is that “better research methods in medicine lead to lower estimates of effectiveness.” In general, and this is worth highlighting, the rigor of research on medical treatments is inversely proportional to the benefits it finds.
Drugs’ harmful effects are underreported. Stegenga accuses the FDA, which has close ties to industry, of setting the bar too low in approving drugs. He quotes a senior FDA epidemiologist complaining that the agency “consistently overrated the benefits of the drugs it approved and rejected, downplayed or ignored the safety problems.”
Research generally under-reports adverse effects. Preliminary “safety” trials almost always go unpublished, as do many later trials that show largely negative effects. Moreover, published studies often provide no data on patients who withdraw from a study because of adverse reactions to a drug. Medications’ harmful effects often come to light only after approval by regulatory agencies. One study found that harms are underestimated by 94 percent in post-approval surveillance.
Drugs recently withdrawn after approval include (these are generic names; Google them for brand names) valdecoxib, fenfluramine, gatifloxacin and rofecoxib. Those that remain on the market in spite of increased safety concerns include celecoxib, alendronic acid, risperidone, olanzapine and rosiglitazone.
This last drug, marketed as Avandia for type 2 diabetes, increased risk of heart disease and death in early studies. The manufacturer claimed that a new trial showed much lower risks, but the trial excluded subjects most likely to react adversely, according to Stegenga.
Health-care providers engage in “disease-mongering.” Stegenga faults physicians and drug companies for expanding their markets by inventing disorders and pathologizing common conditions. He calls this practice “disease-mongering.” Dubious disorders include restless leg syndrome, erectile dysfunction, premenstrual dysphoric disorder, halitosis, male balding, attention deficit hyperactivity disorder, osteoporosis and social anxiety disorder.
Stegenga points out that the FDA recently approved flibanserin for “female sexual dysfunction” after aggressive lobbying by a supposed patient-advocacy group, “Even the Score.” The group accused the FDA of “gender bias” because it had “approved drugs for erectile dysfunction but had not yet approved a drug for female sexual desire.” The lobbying was reportedly organized and funded by the manufacturer of flibanserin, which a meta-analysis has shown to have marginal benefits and significant adverse effects.
Similarly, physicians keep “discovering” disorders in new populations. An especially disturbing example is the diagnosis of mental illness in infants. The New York Times reported that in 2014 physicians wrote 83,000 prescriptions for antidepressants and almost 20,000 prescriptions for antipsychotic medications for infants two years old and younger.
Screening doesn’t save lives. Although he focuses on treatments, Stegenga disparages tests, too. A premise of preventive care is that screening asymptomatic people for disease leads to earlier diagnosis and better outcomes. Unfortunately, Stegenga writes, screening can lead to “false positive diagnoses, overdiagnosis and overtreatment.” (Overdiagnosis occurs when a test detects a small tumor or other anomaly that if left alone would never cause harm.)
Most evaluations of screening examine whether a test for a disease—such as a mammogram for breast cancer or a PSA test for prostate cancer—reduces deaths from that disease compared to untested controls. Although the disease-specific method seems reasonable, it might unduly favor the test by erroneously excluding deaths resulting from the disease, its treatment or the test itself (such as a perforated colon caused by colonoscopy). Hence some researchers have argued that tests should be evaluated by counting all deaths, no matter what the designated cause, in screened and unscreened groups.
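To see why the accounting matters, here is a toy calculation with invented numbers (the populations and death counts are hypothetical illustrations of mine, not figures from Stegenga or any trial):

```python
# Hypothetical trial: 10,000 screened, 10,000 unscreened people.
# All counts below are invented for illustration.
screened_disease_deaths = 30      # deaths attributed to the target cancer
screened_other_deaths = 12        # includes deaths from biopsies, surgery, treatment
unscreened_disease_deaths = 40    # more cancer deaths without screening
unscreened_other_deaths = 0       # no test- or treatment-induced deaths

# Disease-specific accounting: screening looks like a clear win (30 vs 40).
print("disease-specific deaths:",
      screened_disease_deaths, "vs", unscreened_disease_deaths)

# All-cause accounting: the advantage disappears (42 vs 40), because deaths
# caused by the test and its downstream treatments are no longer excluded.
screened_total = screened_disease_deaths + screened_other_deaths
unscreened_total = unscreened_disease_deaths + unscreened_other_deaths
print("all-cause deaths:", screened_total, "vs", unscreened_total)
```

A test can thus "reduce cancer deaths" on paper while leaving total mortality unchanged or worse, which is the case for counting all deaths.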
A 2015 review examined popular tests for four major killers: cancer, heart disease, diabetes and respiratory disorders. The study found that few screening methods reduced disease-specific mortality and none reduced all-cause mortality. The authors conclude that “expectations of major benefits in mortality from screening need to be cautiously tempered.”
Modern medicine is overrated. Modern medicine gets too much credit for boosting average life spans, according to Stegenga. He cites evidence compiled by scholar/physician Thomas McKeown in the 1970s that increased longevity results less from vaccines, antibiotics and other medical advances than from improved standards of living, nutrition, water treatment and sanitation.
McKeown’s work remains influential in spite of criticism. Moreover, health-care providers routinely violate the Hippocratic injunction to do no harm. A 2013 study estimated that more than 400,000 “preventable hospital-caused deaths” occur in the U.S. every year, and as many as 8 million patients suffer “serious harm.”
Stegenga acknowledges that “medical nihilism” sounds grim. Some readers might prefer his more upbeat phrase “gentle medicine,” which calls for less emphasis on cures and more on care, including pain management (although the current opioid epidemic shows that pain management poses risks too). Some physicians who espouse reductions in treatment call themselves “medical conservatives.”
But I like “medical nihilism” because it stings. It delivers a much-needed slap across the face of health-care providers and consumers, a slap we need to rouse us from our acceptance of the abysmal status quo. If more of us accepted medicine’s limits and acted accordingly, our health would surely improve and our costs plummet.
Stegenga’s book isn’t perfect. He’s a bit repetitive, and overly fond of Bayesian analysis. (To my mind, his Bayesian calculations simply affirm the common-sense conclusion that we should be wary of alleged breakthroughs in fields with a long history of failure.) He is a little stingy in giving medicine credit for certain advances, notably vaccines.
Like EconTalk’s Russ Roberts, I wish Stegenga had dwelled more on cancer treatments, which have occupied me lately. Would Stegenga advise friends diagnosed with, say, breast or prostate cancer to forgo treatment? Would he forgo it himself? (I plan to present his replies to these and other questions in a subsequent post.)
I nonetheless applaud Stegenga for his important, timely, brave book. It complements other tough critiques of medicine, such as Gilbert Welch’s Less Medicine, More Health, Marcia Angell’s The Truth about Drug Companies, Ben Goldacre’s Bad Pharma, Elisabeth Rosenthal’s An American Sickness and Robert Whitaker’s Anatomy of an Epidemic. I hope that Medical Nihilism is widely read and discussed, and that it helps bring about reforms in medical practice, research and communication, reforms we desperately need.
For objective assessments of treatments by medical experts, see websites of the Cochrane Collaboration (which has recently been roiled by controversy), The NNT (which stands for “number needed to treat,” that is, the number of people who must take a treatment for one person to benefit from it) and Rxisk.org (which also provides patients’ reports on drugs’ effects).
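The NNT, for what it's worth, is simple arithmetic: the reciprocal of the absolute risk reduction a treatment achieves. A quick worked example with invented rates:

```python
# Invented example rates, for illustration only.
control_event_rate = 0.04   # 4% of untreated patients have the bad outcome
treated_event_rate = 0.02   # 2% of treated patients do

absolute_risk_reduction = control_event_rate - treated_event_rate  # 0.02
nnt = 1 / absolute_risk_reduction

print(f"NNT = {nnt:.0f}")  # 50 patients must be treated for one to benefit
```

The other 49 get the side effects and the bill without the benefit, which is why NNT is such a clarifying statistic for the drugs Stegenga criticizes.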