Can the Source of Funding for Medical Research Affect the Results?



Many clinical research studies are funded by pharmaceutical companies, and there is a general perception that such industry funding could skew the results in favor of a new medication or device. The rationale underlying this perception is fairly straightforward. Pharmaceutical companies or device manufacturers need to increase the sales of newly developed drugs or devices in order to generate adequate profits, so it would be in their best interest to support research that favors their corporate goals. Even though this rationale makes intuitive sense, it does not by itself prove that industry funding influences the results of trials. There are, however, data suggesting that the funding source does correlate with the outcomes of clinical trials.

One such study was conducted by Paul Ridker and Jose Torres and published in 2006 in JAMA (Journal of the American Medical Association). Ridker and Torres analyzed randomized cardiovascular trials published in leading peer-reviewed medical journals (JAMA, The Lancet, and the New England Journal of Medicine) between 2000 and 2005 in which one treatment strategy was directly compared with a competing treatment. They found that 67.2% of studies funded exclusively by for-profit organizations favored the newer treatment, whereas only 49.0% of studies funded by non-profit organizations (such as non-profit foundations and state or federal government agencies) did so. This contrast was even more pronounced for pharmaceutical drugs: 65.5% of the industry-sponsored studies showed benefits of the newer treatment, while only 39.5% of the non-profit-funded studies favored the new treatment.

One argument that is repeatedly mentioned in defense of the high prevalence of positive findings in industry-funded studies is the publication bias of journals: editors and peer reviewers may give preference to articles that show positive findings with new therapies. However, the analysis by Ridker and Torres demonstrated that these journals did publish a substantial number of “negative studies”, in which the new therapy was not superior to the established standard of care.




Studies such as the one by Ridker and Torres, the recognition that pharmaceutical companies provide various kinds of incentives for physicians to promote newer therapies, and the realization that industry sponsors may perform selective analyses of clinical trial data to exaggerate the benefits of certain drugs have all contributed to the perception that industry funding could skew the results in favor of a drug or device made by the sponsor.

This is precisely why most leading medical journals now require an exact description of the funding sources and any potential financial interests that the authors of a research article may have. The disclosures are usually described in detail towards the end of the full-length article, but some journals even indicate funding sources in the brief abstract of an article. This allows readers to consider the funding source and the authors’ potential financial interests when evaluating the results and conclusions of a clinical trial.

How this information about funding sources affects physicians’ perception of the validity of the data in a research study has not been thoroughly investigated. A recent study published in the New England Journal of Medicine by the Harvard Medical School researcher Aaron Kesselheim and his colleagues may help address this question. Kesselheim et al identified 503 physicians who were internists certified by the American Board of Internal Medicine. These internists were sent abstracts of research studies describing the results obtained with three hypothetical drugs: lampytinib to lower cholesterol, bondaglutaraz to improve glucose control and lipid metabolism, and provasinab to limit the progression of coronary artery blockages.

Of note, the physicians did not all receive the same three abstracts, but various permutations of the abstracts. Some physicians received abstracts describing studies with high methodological rigor, whereas others reviewed abstracts with lower methodological rigor. The physicians were informed that these were hypothetical drugs and were asked to assume that the drugs had been approved by the FDA, that the drugs were eligible for insurance coverage, and that the studies had been published in reputable medical journals. They were then asked to assess the studies based on the abstracts they reviewed, using a scale of 1 to 7. For example, in the case of lampytinib, they had to respond to the following questions or instructions:

How likely would you be to prescribe lampytinib?

How confident are you in the validity of the conclusion that the authors draw about lampytinib in this abstract?

Rate the overall rigor of the study methodology:

Rate the importance of the study:

Are you interested in reading the full article for the study described in this abstract?

Here is an example of the "Methods" section from the abstract of a hypothetical study with the imaginary LDL-cholesterol-lowering drug lampytinib that was shown to some of the participating physicians:

METHODS

We conducted a randomized, double-blind, active-comparator controlled trial in 12 centers across the US to assess the efficacy and safety profile of lampytinib in patients who had coronary heart disease (CHD) or at least 3 major risk factors for CHD. Eligible patients could not tolerate any statin, had an LDL cholesterol >160 mg/dL, and an HDL cholesterol <40 mg/dL. Patients were assigned to receive 10 mg of lampytinib or 10 mg of ezetimibe (Zetia) daily for 36 months. The primary end point was a combined cardiovascular disease endpoint (cardiovascular death, myocardial infarction, or stroke). We also assessed change from baseline in LDL and HDL cholesterol at 24 weeks, and the safety and side-effect profile of lampytinib.

The reasons why this study would be considered to have high methodological rigor include that it was randomized and double-blind, that it was conducted in multiple centers, that there is a clearly defined treatment group (10 mg of lampytinib) and a control group (10 mg of ezetimibe), and that there are "hard" endpoints, such as cardiovascular death, myocardial infarction, and stroke, in addition to the cholesterol levels.

Here is a different version of this abstract with less methodological rigor that was presented to another subset of physicians:

METHODS

We conducted a randomized, open-label controlled trial to assess the utility of lampytinib in men with a history of familial hypercholesterolemia and no other medical problems. Eligible patients could not tolerate any statin, had an LDL cholesterol >160 mg/dL, and had an HDL cholesterol <40 mg/dL. Patients were assigned to receive 10 mg of lampytinib or usual care for 4 months. The primary end points were the change from baseline in LDL and HDL cholesterol.

The reasons why this version should be considered "low rigor" include the fact that it was open-label, that there was only a comparison to "usual care" instead of a defined control intervention, and that the endpoints included only changes in cholesterol levels but no "hard" endpoints, such as myocardial infarction or death.

In addition to varying the methodological rigor of the described studies (high, intermediate or low), Kesselheim et al. also varied the description of the funding source. At the end of each abstract there was either 1) no mention of the funding source, 2) a statement that the study was funded by the federal government’s National Institutes of Health (NIH), or 3) a statement that it was funded by industry. The industry funding disclosure read as follows:

This study was funded by a grant from [company]. The lead author reports serving as a consultant to [company].

Using these combinations, Kesselheim et al. generated 9 abstract versions for each hypothetical drug: 3 (levels of methodological rigor) x 3 (types of funding disclosure). Each participating physician was randomly assigned to receive one of the nine abstract versions for each hypothetical drug. This made it possible to determine whether the disclosure statement had an impact on the physicians’ perceptions of the strength of a study. Of the surveyed physicians, 269 responded. Kesselheim et al evaluated the scores the respondents gave and used a statistical model to convert the responses on the 1-7 scale into an odds ratio (OR), which indicates how much more likely physicians in one group were to assign a higher score to an abstract than physicians in another group.
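To make this more concrete, here is a minimal Python sketch of how such a 3 x 3 design can be enumerated and how 1-7 ratings can be converted into odds ratios with a proportional-odds (ordinal logistic regression) model, a standard approach for ordinal survey data. The simulated data, effect sizes, and variable names are illustrative assumptions, not the actual data or code of Kesselheim et al.:

import itertools

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# The 3 x 3 factorial design: three levels of rigor crossed with
# three funding disclosures yields nine abstract versions per drug.
rigor_levels = ["low", "intermediate", "high"]
funding_levels = ["no disclosure", "NIH", "industry"]
versions = list(itertools.product(rigor_levels, funding_levels))
print(len(versions))  # 9

# Simulate hypothetical ratings for 269 respondents: higher rigor and
# NIH funding nudge the latent score upward (made-up effect sizes).
rng = np.random.default_rng(0)
n = 269
rigor = rng.integers(0, 3, size=n)  # 0 = low, 1 = intermediate, 2 = high
nih = rng.integers(0, 2, size=n)    # 1 = NIH funding disclosed
latent = 0.9 * rigor + 0.7 * nih + rng.logistic(size=n)
score = np.digitize(latent, [-1, 0, 1, 2, 3, 4]) + 1  # ordinal 1-7 rating

# Fit a proportional-odds model; exp(coefficient) is the odds ratio for
# assigning a higher score per one-step increase in the predictor.
df = pd.DataFrame({"score": score, "rigor": rigor, "nih": nih})
result = OrderedModel(df["score"], df[["rigor", "nih"]],
                      distr="logit").fit(method="bfgs", disp=False)
print(np.exp(result.params[["rigor", "nih"]]))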

The results of the study are quite encouraging and indicate that the participating physicians appropriately recognized the methodological rigor of the various abstracts. Physicians who evaluated abstracts with high methodological strength were nearly 8 times more likely to assign a higher rigor score than physicians who received abstracts with low methodological strength! They were also 5-6 times more likely to assign higher confidence scores and nearly 5 times more likely to assign higher scores for prescribing the drug in question. The second key finding was that the source of funding did have a significant impact on the physicians’ willingness to assign higher scores. Physicians who read abstracts indicating NIH funding were roughly twice as likely to assign higher scores for a study’s rigor, for confidence in the study results, and for willingness to prescribe the hypothetical drug, when compared with physicians who read abstracts indicating that the study was funded by industry.
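Because odds ratios are easy to misread as simple probability ratios, here is a toy calculation of what an odds ratio of 8 actually implies; the 30% baseline is a made-up number for illustration, not a figure from the study:

# Toy example (assumed baseline): what an odds ratio of 8 implies.
p_low = 0.30                          # assumed rate of high scores for low-rigor abstracts
odds_low = p_low / (1 - p_low)        # baseline odds, about 0.43

odds_high = 8 * odds_low              # apply the odds ratio of 8
p_high = odds_high / (1 + odds_high)  # convert odds back to a probability

print(round(p_high, 2))  # about 0.77, i.e. roughly 77% under these assumptions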

This means that the primary determinant of the physicians’ decision to assign high scores was the methodological strength of the study, but that the type of funding also factored into their confidence in the study results, albeit to a lesser degree. I find this quite reassuring, because it means that the participating physicians did not ignore the funding disclosure. This is probably due to the above-mentioned perception that industry sponsorship could bias the results, a point that Kesselheim et al discuss in detail.

However, in addition to this perception, there may be another reason why the NIH-sponsored studies were assigned substantially higher scores. As most researchers know, budget constraints at the NIH have resulted in very low funding rates for grant proposals. In many NIH grant review panels, only the top 10% of proposals are funded. For a study to qualify for NIH grant funding, it has to go through a very thorough and arduous review process. In some ways, therefore, NIH funding can also be seen as a “badge of excellence”. It is quite possible that the physicians who reviewed the hypothetical abstracts were reassured by the indication of NIH funding, because it implied that the study had passed a bar of scientific excellence to even receive NIH funding in the first place.

A second reason why the abstracts with industry sponsorship may not have achieved scores as high as the ones with NIH sponsorship is that the disclosure statement also included the line “The lead author reports serving as a consultant to [company].” Since most consultants receive an honorarium, this indicates a potential financial incentive for the author to promote a drug manufactured by a sponsor that pays the honorarium. Hopefully, most clinical researchers are aware of their potential bias and financial conflict of interest, but it is not unreasonable for the reader of a research abstract to be somewhat concerned about bias upon learning that the study author has such a conflict of interest. The question that remains unanswered is how much this information about industry sponsorship and financial conflicts of interest should affect the confidence of the reader. Since the influence of industry sponsorship is so hard to gauge and objectively quantify, there may be no good answer for how to factor in the funding source.

One also has to remember that the situation in the study by Kesselheim et al was highly artificial. The participating physicians knew that these were hypothetical drugs, and they were asked to comment on the rigor of a study and their willingness to prescribe a drug merely based on reading a short abstract. Most physicians I work with would not change their clinical practice based on a short abstract. Instead, they would want to read the full article, derive more detailed information about the study design, perhaps read some accompanying commentaries and critiques of the article in question, and discuss the findings with their colleagues before they would feel comfortable passing judgment on the rigor and importance of a study. In spite of the limitations of this artificial situation, the study by Kesselheim et al is still very important and valuable: it confirms the ability of internists in the US to discern high methodological rigor from weak methodological rigor, and it indicates that practicing internists do consider the funding source when evaluating clinical research studies.

The study by Kesselheim et al was accompanied by an editorial written by Jeffrey Drazen, the editor-in-chief of the New England Journal of Medicine.

The title of the editorial “Believe the Data” already gives away the core message:

A trial’s validity should ride on the study design, the quality of data-accrual and analytic processes, and the fairness of results reporting. Ideally, these factors — not the funding source — should be the criteria for deciding the clinical utility.

The editor suggests that the funding source should not factor into the evaluation of the clinical relevance and significance of studies. This view from the editor of one of the leading medical journals comes as somewhat of a surprise, because it implies that one should ignore the possibility of hidden biases. Why, then, do reputable medical journals such as the New England Journal of Medicine publish details about financial disclosures and conflicts of interest for all their articles, if readers are supposed to ignore the funding source when evaluating the articles?

The editor also writes:

Is this lack of trust justified? The argument in favor of its justification — that is, the pharmaceutical industry has a financial stake in the outcome, whereas the NIH does not — supports the conclusion that reports from industry-sponsored studies are less believable than reports from NIH-sponsored ones. This reasoning has been reinforced by substantial press coverage of a few examples of industry misuse of publications, involving misrepresentation of the design or findings of clinical trials.

The editorial in part implies that our concerns about industry bias may be overblown due to “substantial press coverage”, but it does not mention the scientific research that has repeatedly pointed out the potential for bias in industry-sponsored research, such as the article by Ridker and Torres and numerous other studies that have documented this bias.

Ben Goldacre, the British physician who wrote the insightful best-selling book “Bad Science”, in which he debunked pseudo-scientific claims, has now written a new book entitled “Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients” on how the pharmaceutical industry can manipulate clinical trials and the data derived from them. The book is scheduled to be released in the US in January of 2013, but an extract was pre-published in The Guardian. In this excerpt, Goldacre gives multiple specific examples of how industry funding can affect the published data and how the pharmaceutical industry maligns researchers who point out that sponsors can interfere with how clinical data are analyzed and published.

This influence of the pharmaceutical industry on clinical research is pervasive and well-documented. It is not just “press coverage” hype, yet the editor of the New England Journal of Medicine seems to gloss over this interference by pharmaceutical companies. Instead of discussing the bias that industry funding can introduce into clinical research, Dr. Drazen points out that research funded by the government is also not immune to bias:

However, investigators in NIH-sponsored studies also have substantial incentives, including academic promotion and recognition, to try to ensure that their studies change practice.

Nobody would deny that investigators in NIH-sponsored studies also face incentives, such as academic promotion and recognition. However, the concern about industry-sponsored research is that clinical researchers who serve as consultants for pharmaceutical companies may have additional financial incentives. The lead authors of most major industry-sponsored trials are usually academics who want to be promoted and recognized (just like the NIH-funded researchers), but in addition to these academic incentives, they receive personal monetary compensation and grants from pharmaceutical sponsors. If a study shows that a new drug is superior, there is a significant likelihood that the sponsor will continue to provide funding for further research, whereas a negative result can sometimes even shut down future research support from the sponsor because of the anticipated loss of profit.

The editorial ends with this comment:

Patients who put themselves at risk to provide these data earn our respect for their participation; we owe them the courtesy of believing the data produced from their efforts and acting on the findings so as to benefit other patients.

This again seems like a call to set aside concerns about bias in industry-sponsored research, because such concerns would be unfair to the patients who participated in the trials. However, the data from Kesselheim et al show that the internists did not disregard research sponsored by industry. They primarily assessed the value of a study by its methodological rigor, but they also considered the funding source. Dr. Drazen is correct in pointing out that one should respect the patients who participated in the trials and not ignore the data that they helped generate.

However, one also needs to respect the safety and health of patients who may be inappropriately prescribed new treatments based on studies that could be skewed by the financial biases of the sponsor and the authors. The safeguards that are now required for publishing clinical trials, such as registering all trials at their onset and ensuring complete and accurate reporting, are definitely a step in the right direction and will help improve the rigor of clinical trials, independent of the funding source. On the other hand, one should still not ignore the possibility of hidden biases that can evade such monitoring. Combining a rigorous analysis of the methods of clinical studies with a careful evaluation of potential financial biases is probably the most appropriate way to assess clinical research studies.

Images: LadyofProcrastination and Zzubnik on Wikimedia Commons.

Jalees Rehman, MD is a German scientist and physician. He is currently an Associate Professor of Medicine and Pharmacology at the University of Illinois at Chicago and a member of the University of Illinois Cancer Center. His laboratory studies the biology of cardiovascular stem and progenitor cells, with a focus on how cell metabolism may direct the differentiation and self-renewal of regenerative cells. He can be followed on Twitter: @jalees_rehman and contacted via email: jalees.rehman[at]gmail.com. He has a blog about stem cell biology at Scilogs called The Next Regeneration. Some of his other articles related to literature or philosophy can be found on his personal blog Fragments of Truth.