
Misconduct, not error, is the source of most retracted papers



There's a new study in the Proceedings of the National Academy of Sciences that should make the scientific community sit up and do a little pondering. Researchers from the University of Washington, the Albert Einstein College of Medicine and the firm MediCC! analyzed retracted papers from 1977 onwards and investigated the reasons for their retractions. The authors focused on papers in the life sciences, and they found that about 67% of the 2,047 retracted papers owed their retraction to plain old misconduct; only 21% or so could be traced back to error and honest mistakes. The misconduct comes in three forms: outright fraud, plagiarism and duplication. The other piece of bad news is that among these three, fraud contributes the most to retractions, with plagiarism and duplication trailing behind.

The study is disturbing, not just for the amount of misconduct it unearths but for the various trends it observes and for what it has trouble finding. The authors look at parameters such as the time to retraction, the impact factor of the journal in which the retracted papers were published and the geographic distribution of the papers. In each of these categories they discover something thought-provoking.

The first thing the study discovers is that fraud is not as easy to find as we think. There are several instances where retracted papers are still available and cited. Not only does this defeat the purpose of the retraction by allowing the papers to keep misleading other researchers, but it also depressingly means that, as the authors themselves concede, the study likely underestimates the extent of misconduct. In at least a few of these cases the continued citation makes partial sense, since scientists seem to find something of value in the paper in spite of the retraction. Also opaque are the cases where the retraction is ambiguously communicated, giving the impression of error rather than misconduct. The problem is that the retraction notice is often written by the original authors, who are understandably reluctant to provide any hint of fraud. Thus, the notice for one particular study says that "results were derived from experiments that were found to have flaws in methodological execution and data analysis". What this note does not say, and what was confirmed later by an independent Office of Research Integrity inquiry, is that the "flaws" were actually a result of manipulation and willful fabrication. Retraction notices can thus hide evidence of wrongdoing, and it seems that journals are not always correcting or modifying the original notices where necessary.


An analysis of the geographic origins of retractions is also quite interesting, since these seem to split along the nature of the misconduct. The US, Germany, Japan and China accounted for 75% of cases of fraud or suspected fraud, while China and India accounted for a large percentage of plagiarism and duplication. Why this may be so takes us to the next step of the analysis: retraction by impact factor. The authors found that fraud contributed the most to retractions in high-impact journals like Science and Nature, while plagiarism and duplication were responsible for more retractions in lower-impact journals. This is consistent with the geographic trends, since the US, Germany and Japan tend to publish in higher-impact journals than India or China.

Sadly and perhaps not surprisingly, this is also a case where a few bad apples give the whole field a bad name. Scientific controversies over the last few years are littered with the corpses of multiple papers published by the same author or the same lab. As the authors found, a small number of authors were responsible for multiple retractions, and nearly all of the multiple retractions from a single author could be traced to outright fraud rather than plagiarism or duplication. A table in the paper offers a rogues' gallery of serial retractors, ranging from the infamous vaccination denialist Andrew Wakefield to Anil Potti, the Duke researcher who falsified data in multiple papers and then hired an "image consultant" to craft airbrushed personal websites with misleading information. Papers from these people have been cited a depressingly large number of times, and one factor the study cannot gauge is the immense negative impact these papers must have had on other researchers' honest work before they were retracted.

The authors also look at time-to-retraction and citation frequency of retracted articles and find both hope and disappointment. Over the years there has been a general, gradual trend toward shorter time-to-retraction that is independent of impact factor. As for citation frequency, while many retracted articles stop being cited right away, several continue to be cited for various reasons, ranging from a lack of transparency in the retraction notice to researchers still finding something of value in the papers.

What do we make of all this? The responsibility of journals and funding agencies to scrutinize papers and grants and detect fraud is obvious, but the issue runs deeper. As with any large-scale, endemic scientific problem, this one says more about scientific culture than about technical details, although the technical details are important. It's worth noting that all these papers are from biological or medical fields, fields that routinely try to analyze and present complex, multifactorial data sporting error bars as tall as skyscrapers. Even honest, non-retracted work has a hard time being replicated in these fields. One only needs to look at the recent controversy about sirtuins, caloric restriction and lifespan extension to appreciate how messy biological research can get. In a field as messy as biology, it's easy to make stuff up and move a few gel lanes in Western blots here and there amid what is typically a mountain of data involving multiple techniques and experiments. Biomedical research is likely to continue to invite fraud because of its sheer and growing complexity. In addition, the monetary benefits, fame and visibility associated with potentially important results in fields like caloric restriction are immense, creating even more opportunities for fraud and wishful thinking.

However, the emphasis on biomedical research should not blind us to the greater problem the study points to: the pervasive culture of publish-or-perish and grantsmanship that often views research as a zero-sum game. Science has very much turned into a high-stakes competitive business. More than ever, researchers are competing against each other for funding, resources, grants and publicity. Competition has now extended into the international arena, with researchers in certain countries treating it as a matter of patriotic pride to score points against their counterparts abroad. There are cases where developing countries allocate vast sums of money to their prized scientific fighters to catapult themselves into the big league of top scientific nations. It should go without saying that these kinds of pressures will do nothing to dissuade researchers from tweaking a few experimental parameters here or adding a data point to a graph there, all for the sake of getting their papers into high-impact journals and ensuring tenure, awards and fame.

But sadly this culture is here to stay, at least for the foreseeable future. What can we do to curb its worst excesses? The paper points to encouraging measures like the Office of Research Integrity and regular courses on ethics and data presentation. There are few counterparts to these measures in developing countries, and their governments need to take the problem seriously. More importantly, researchers, journal editors and referees need to be taught both the value and the methodology of honest research from day one of their careers. They also need to be taught how to detect fraud and plagiarism, a task that has been made somewhat easier by software for detecting duplication and possible manipulation. The authors also suggest the creation of a centralized database of scientific misconduct; it's worth noting that, in its own way, Retraction Watch has admirably served this purpose. Finally, the study points to a great responsibility on the part of journals, and unfortunately one that not all of them are fulfilling. As indicated above, many journals still publish ambiguous retraction notices that give no hint as to the cause of retraction, and some still host retracted papers on their websites without clearly flagging them. Retractions will continue to be underestimated if journals don't tell us about them. One very valuable service journals can provide in this regard is to make as many papers open-access as possible, or at least to allow comments that might point to suspicious data.

Ultimately, though, the study asks what all of us (scientists, editors, educators, readers and the lay public) can do to stem this growing tide of scientific fraud. Retraction Watch is just one example of what we can collectively do to keep everyone honest. In the past few years, blogs have shown that they can be in the front ranks of detecting and reporting fraud; there have been at least some retractions in which the offending paper was first discussed on a blog. It's quite clear that bloggers and journalists can often sniff out suspicious papers even before official sources do, and the official sources need to pay attention to these first responders and act on their analyses and recommendations. The quick and extensive exposure of unworthy science will also, one hopes, provide a disincentive to scientists tempted by fraud and manipulation. Let's hope that the existence of a non-anonymous jury of thousands of eagle-eyed peers will nip any temptation to fraud in the bud. And in one sense the presence of scientific frauds keeps us all awake; paraphrasing Jefferson, perhaps it's best that "the tree of scientific honesty be refreshed from time to time by the words of wrongdoers and vigilantes of peer review".

Ashutosh Jogalekar is a chemist interested in the history, philosophy and sociology of science. He is fascinated by the logic of scientific discovery and by the interaction of science with public sentiments and policy. He blogs at The Curious Wavefunction and can be reached at curiouswavefunction@gmail.com.
