
Unofficial Prognosis


Perceptions and prescriptions of a medical student

Doctors versus “Big Pharma”: is it justifiable to judge research by its authors?

The views expressed are those of the author and are not necessarily those of Scientific American.


Doctors use different standards to judge scientific research depending on who funded it. They judge research funded by industry as less rigorous, have less confidence in the results, and are less likely to prescribe new drugs than when the funding source is either the NIH or unknown – even when the apparent quality of the research is the same.

Those were the results of a study by Dr. Aaron Kesselheim and colleagues at Harvard, published in the New England Journal of Medicine last month. The story has received a fair amount of coverage since then, including analyses by the Scientific American Guest Blog, the Los Angeles Times, and the New York Times.

There’s a question of ethical and practical relevance embedded in this: is it justifiable to judge a paper by its author or funding source – even when you cannot discern a difference in quality?

The perspective from much of the medical side seems to be a definite yes. The divide between doctors and so-called “Big Pharma” is nothing new. Pharma has a bad reputation in the medical community, and there is history to back it. One of the best-known scandals involved Vioxx, which Merck pulled from the market in 2004 after admitting it had withheld information about the drug’s known cardiovascular risks, a delay implicated in tens of thousands of deaths. In 2008, physician and former Editor-in-Chief of the New England Journal of Medicine Marcia Angell wrote, “Bias in the way industry-sponsored research is conducted and reported is not unusual and by no means limited to Merck.” In 2011, Harriet Washington published a piece in The American Scholar highlighting some of the ways industry has misled and manipulated data, which include: comparing a new drug against a placebo rather than against another treatment option, comparing drugs to competitors at the wrong dosages, pairing a drug with one known to work well, ending a trial prematurely upon seeing “clues that the trial is going south,” and cherry-picking only positive findings to report. This type of behavior can and should be called out as scientific misconduct, and those who commit it must be held accountable.

But if there’s something just a bit unsavory about judging a paper solely by who wrote it, there’s good reason for that instinct. The scientific world prides itself on judging the content of ideas, not the presumed integrity of authors. It’s the rationale behind the widespread practice of research journals blinding reviewers to authors’ names. Using any criterion other than quality in scientific evaluation is admittedly a kind of bias – something we are usually quite wary of in science. As the authors of the study succinctly put it, “The methodologic rigor of a trial, not its funding disclosure, should be a primary determinant of its credibility.” Moreover, if we’re comfortable using authorship as a proxy for quality, it’s not an absurd leap to start extending that approach to authors outside of industry. It’s not uncommon to hear accusations of industry bias rooted in self-interest in financial gain; but imagine if we started hearing sweeping accusations that young researchers, say, should be trusted less because of their self-interest in advancing their careers. Industry is not alone in being capable of bias. The problem of publishing only positive results, for instance, has been recognized and discussed in the scientific community at large for years.

There are also practical concerns with being overly dismissive of industry. Amidst the history of manipulation and fraud, there are real medical contributions too. In the New York Times, surgeon and author Pauline Chen cited data showing that industry was responsible for nearly 60 percent of the more than $100 billion spent on research in 2007. Treating industry ties as grounds for dismissal means possibly overlooking research of genuine value to patients.

So why not just use quality, removing the need to probe into researchers’ backgrounds, affiliations, and motivations? Unfortunately, letting the data speak for themselves is not always possible. Low quality can hide in what never makes it to print. Of the forms of misconduct Washington’s article described, ending a trial prematurely and failing to report negative results would not be transparent from a paper alone. Similarly, failing to report side effects, as in the Vioxx scandal, is another way relevant data can be hidden. That’s conscious and explicit manipulation, but there’s evidence for unconscious manipulation too. Numerous studies have found that “funding bias,” in which the conclusions of research are more likely to agree with the sponsor’s aims, is a real phenomenon. While unconscious bias is again not unique to industry, there’s something to be said for awareness of the trend where it has been clearly tracked.

At the end of all this, we are left with two competing facts: 1) Industry sometimes produces valuable research that contributes to patient care. 2) There is also a significant history of manipulation. Is it possible to reconcile these two facts in a way that is vigilant against misconduct but doesn’t pass over potentially valuable findings?

I think the last point – that quality is not always transparent – is the critical one. Given that it’s entirely feasible for a study that appears to be of good quality to actually be flawed, holding research conducted by authors with a dubious history to extra scrutiny seems justifiable. Should you dismiss industry across the board? Probably not. I think the authors’ caution against the dangers of excessive skepticism is sensible. I also agree that more “fundamental strategies,” such as increased protocol and data transparency, would make the whole process of determining quality easier. But as it stands, it is understandable that the doctors in the study voiced skepticism about the conclusions of industry-sponsored research. In any case, keeping a critical eye and looking to others to replicate findings before embracing new conclusions is probably a good approach to research in general, no matter who the initial authors are.

Ilana Yurkiewicz About the Author: Ilana Yurkiewicz is a fourth-year student at Harvard Medical School who graduated from Yale University with a B.S. in biology. She was an AAAS Mass Media Fellow, and her work has appeared in the New England Journal of Medicine, Aeon Magazine, Science Progress, The News & Observer, and The Best Science Writing Online 2013. She has an academic interest in bioethics, currently conducting ethics research at Harvard after previously interning at the Presidential Commission for the Study of Bioethical Issues. She is going into internal medicine and is also interested in quality and systems improvement. Follow on Twitter @ilanayurkiewicz.



Comments

  1. ecstatist 3:58 am 10/28/2012

    Trust me when I say that the most valuable characteristic that can be instilled is curiosity. And curiosity’s most valuable component is skepticism. Skepticism does no harm to the honest. (Admittedly, it may delay some.)

  2. N49th 11:19 am 10/29/2012

    Good blog. If I may suggest further reading? There is a book, ‘White Coat, Black Hat’, by Carl Elliott.
    Secondly, when it comes to drugs, availability, and cost, contact someone at ‘Nurses without borders’ and get their point of view.

  3. greenhome123 5:47 pm 10/30/2012

    People should also use different standards when judging research funded by companies like Monsanto. (Monsanto makes the weed killer RoundUp, as well as genetically modified, RoundUp-resistant corn and soybeans.) I believe the FDA should conduct long-term GMO animal studies instead of trusting Monsanto’s 90-day studies. Fortunately, Proposition 37 in California will be voted on in a few days, and if it passes, food sold in California will have to be labeled if it contains genetically modified organisms.

  4. tucanofulano 3:50 pm 11/1/2012

    So far Monsanto, DuPont, and others with an axe to grind have spent over $40 million to try to persuade California voters to vote against their own best interests and support Monsanto’s ‘spin’: “nothing to see here, move along.” Studies over 5 or 6 generations of mutations attributable to GM ‘frankenfoods’ are required; certainly 90 days’ worth of doctored data put out by the party most likely to benefit can only be considered a clear and present danger to the citizens of the USA, and indeed of the world.

  5. StevedeBurque 8:59 am 02/18/2014

    Having started in the world of basic science, I remember a relatively robust structure of independent scientists who were plentiful, and remorselessly aggressive in putting the test of fire to all new ideas. The NSF and the NIH were responsible for funding research.
    In our journal club, we would read medical journals – such as the NEJM and JAMA – for low comedy and as practice for the young ‘uns to shred a weak article. In Real Science, if you assert anything that is later proven false – even if the assertion comes from credible data – you are a pariah, a Yahoo, a Cold-Fusionist. In medical research, though, one needed only to hit the paper, not the bullseye, and one could scoot that document onto one’s resume.
    Since the disappearance of the primacy of the NSF – many years ago – it has worsened immeasurably. I feel like medical practice now is more like the Dark Ages than the New Millennium. Gossip and inference take the place of rational inquiry.


