October 27, 2012
Doctors use different standards to judge scientific research depending on who funded it. They rate industry-funded research as less rigorous, express less confidence in its results, and are less willing to prescribe the drugs it studies than when the funding source is the NIH or undisclosed – even when the apparent quality of the research is the same.
Those were the results of a study published by Harvard researchers Dr. Aaron Kesselheim and colleagues in the New England Journal of Medicine last month. The story has received a fair amount of coverage since then, including being analyzed by the Scientific American Guest Blog, the Los Angeles Times, and the New York Times.
There’s a question of ethical and practical relevance embedded in this: is it justifiable to judge a paper by its author or funding source – even when you cannot discern a difference in quality?
The perspective from much of the medical side seems to be a definite yes. The divide between doctors and so-called “Big Pharma” is nothing new. Pharma has a bad reputation in the medical community, and there is history to back it. One of the most notorious scandals ended with Vioxx being pulled from the market in 2004 after Merck admitted it had withheld information about the drug’s known cardiovascular risks, a delay linked to tens of thousands of deaths. In 2008, Marcia Angell, a physician and former editor-in-chief of the New England Journal of Medicine, wrote, “Bias in the way industry-sponsored research is conducted and reported is not unusual and by no means limited to Merck.” In 2011, Harriet Washington published a piece in The American Scholar highlighting some of the ways industry has misled readers and manipulated data, including: comparing a new drug against a placebo rather than against another treatment option, comparing drugs to competitors at the wrong dosages, pairing a drug with one already known to work well, ending a trial prematurely at “clues that the trial is going south,” and cherry-picking only positive findings to report. This type of behavior can and should be called out as scientific misconduct, and those who commit it must be held accountable.
But if there’s something just a bit unsavory about judging a paper solely by who wrote it, there’s good reason for that discomfort. The scientific world prides itself on judging the content of ideas, not the presumed integrity of authors. It’s the rationale behind the widespread practice of research journals blinding reviewers to authors’ names. Using any criterion other than quality in scientific evaluation is, admittedly, a kind of bias – something we are usually quite wary of in science. As the authors of the study succinctly put it, “The methodologic rigor of a trial, not its funding disclosure, should be a primary determinant of its credibility.” Moreover, if we’re comfortable using authorship as a proxy for quality, it’s not an absurd leap to extend that approach to authors outside industry. It’s not uncommon to hear accusations of industry bias rooted in financial self-interest; but imagine if we started hearing sweeping accusations that young researchers, say, should be trusted less because of their self-interest in advancing their careers. Industry is not alone in being capable of bias. The publication of only positive results, for instance, is a problem that has been recognized and discussed across the scientific community at large for years.
There are also practical risks in being overly dismissive of industry. Amid the history of manipulation and fraud, there are genuine medical contributions too. In the New York Times, surgeon and author Pauline Chen cited data showing that industry was responsible for nearly 60 percent of the more than $100 billion spent on research in 2007. Dismissing research on the basis of authorship ties means possibly overlooking work of real value to patients.
So why not just judge on quality, removing the need to probe researchers’ backgrounds, affiliations, and motivations? Unfortunately, letting the data speak for themselves is not always possible: the flaws may lie in what never makes it to print. Of the forms of misconduct Washington’s article described, ending a trial prematurely and failing to report negative results would not be apparent from the published paper alone. Similarly, failing to report side effects, as in the Vioxx scandal, is another way relevant data can be hidden. That is conscious, explicit manipulation, but there is evidence for unconscious distortion too. Numerous studies have found that “funding bias” – the tendency for a study’s conclusions to align with its sponsor’s aims – is a real phenomenon. While unconscious bias is again not unique to industry, there is something to be said for heightened awareness where the trend has been clearly documented.
At the end of all this, we are left with two competing facts: 1) Industry sometimes produces valuable research that contributes to patient care. 2) There is also a significant history of manipulation. Is it possible to reconcile these two facts, in a way that is vigilant against misconduct but doesn’t pass over potentially valuable findings?
I think the last point – that quality is not always transparent – is the critical one. Given that a study can appear methodologically sound yet still be flawed in ways the paper itself conceals, holding research by authors with a dubious track record to extra scrutiny seems justifiable. Should you dismiss industry across the board? Probably not. The authors’ caution against the dangers of excessive skepticism is sensible, and I agree that more “fundamental strategies,” such as increased protocol and data transparency, would make the whole process of determining quality easier. But as it stands, the skepticism those doctors voiced about the conclusions of industry-sponsored research is understandable. A critical eye, and waiting for others to replicate findings before embracing new conclusions, is probably a good approach to research in general, no matter who the initial authors are.