Scientists travel on the edge of the known world, exploring nature through observation, experimentation and repeated testing. Our building blocks are pieces of experimental evidence that are linked into coherent models of the universe. An essential part of the scientific process is the critical analysis of research results by scientists with expertise in the discipline. Because of this peer-review process, mistakes are supposed to be caught before they propagate in the literature.
Yet despite careful pre-publication scrutiny, some reports are later retracted or, worse, widely suspected to be erroneous but never corrected. One recent examination of 53 landmark medical studies found that only six could be replicated by further research.
How does this happen? After all, peer review is the heart of the scientific process. Why are peer-reviewed studies sometimes wrong? For scientists and non-scientists, the retraction of a published result is unsettling.
Over the past year, I experienced first-hand the long, grueling process that ensues when a prominent published result turns out to be flawed.
Professor John Ioannidis, an expert on the credibility of medical research, concludes that a more common reason for errors is that scientists sometimes turn a blind eye to unwanted results. He estimates that even for the most carefully controlled clinical trials, the rate of false results is around 10 percent.
Another reason for error is that some scientists, in their haste to avoid being scooped by another team (and thereby losing credit for being “first,” a mark of prestige), rush their results to publication without carefully checking for errors.
It is also common for scientists to move on to new projects after a result is published and consequently not repeat previous experiments. When scientists do build on their own discoveries, however, they can often quickly uncover any errors that were made. For example, in a 1991 Nature paper, researchers reported the discovery of a planet outside our solar system. A few months later, however, they found an error in their calculations and retracted the paper.
In an article describing this case, astrophysicist Mario Livio points out that blunders are sometimes hard to correct. This is especially true if the study took years to carry out or if key scientists have left the lab. It is virtually impossible to obtain funding to repeat results that have already been published.
If errors are ignored, they propagate in the literature and in the media, which can slow scientific progress and sometimes directly harm human health.
In one egregious example, a researcher proposed a link between the administration of the measles, mumps, and rubella (MMR) vaccine and autism. Although this claim was widely discredited, movie stars and many newspapers continued to promote it. Many parents chose not to vaccinate their children for fear that their children would become autistic. The result has been a worldwide outbreak of preventable disease. For example, Marin County, California, home to a wealthy, educated populace, recently experienced the largest outbreak of whooping cough in the nation. Health care workers descended on Marin as if it were a third-world country to reeducate parents about the importance of vaccination.
My own story involves nothing as flagrant as hasty publication or outright fraud, but I hope that a discussion of the errors that led to the retraction of our papers will be instructive.
How Plants and Animals Respond to Disease-causing Bacteria
My laboratory has spent many years studying how plants respond to disease. Plants, in the wild and on the farm, can only defend themselves against infection if they carry specialized immune receptors that sense the invading microbe. This type of immune response is not restricted to plants. Animals, too, carry immune receptors with striking similarity to the plant receptors, which are likewise vital for survival.
In 1995, we isolated a rice immune receptor, called XA21. Rice plants carrying XA21 are resistant to diverse strains of the bacterial pathogen Xanthomonas oryzae pv. oryzae, which causes a serious disease of rice in Asia and Africa. The structure of the XA21 protein suggests that XA21 can detect the bacterium and quickly mobilize its defenses to mount a potent immune response. This discovery explained why most rice plants are virtually defenseless against bacterial attack, except for those that carry the XA21 or other immune receptors.
But isolation of the rice XA21 immune receptor was only half of the story. Just as both a lock and key are required to open a door, the rice XA21 protein is only activated when it detects a specific signal from the bacterium. We hypothesized that the bacterium secreted a molecular key that fit into the XA21 lock. In 2009 and 2011 we published two papers showing that the activator of Xa21-mediated immunity (named Ax21) was a small, modified bacterial protein that binds to the rice XA21 receptor to activate the immune response. We were able to mimic the biological response by treating the XA21 rice plants with a synthetic version of a modified Ax21 peptide (a short string of amino acids modified at one particular amino acid). We also showed that Ax21 was important in the bacterium’s growth and development.
We were ecstatic to identify this bacterial protein because it provided an important piece to the puzzle of how the rice plant is able to respond to infection. News of the discovery rippled through our community of researchers and drew others into the investigation.
For example, shortly after we reported our findings at a small meeting of Xanthomonas researchers, another laboratory rushed to publish a paper reproducing key aspects of our results in a human pathogen. Soon after, three more papers reproducing our results were published.
Yet, in a bizarre twist, just as these other laboratories were validating our research, we were discovering reasons to question our own findings.
How we uncovered errors
One of the practices of my laboratory, and in most laboratories, is to test the integrity of research materials before beginning each experiment. For example, when a new member joins the lab, the researcher first validates the integrity of the bacterial strains and rice seed stocks by standard DNA genotyping.
In this way, new members of my laboratory uncovered two major errors in our previous research. First, we found that one of the bacterial strains we had relied on for key experiments was mislabeled1.
Second, another key conclusion of our original finding - that we could trigger activation of resistance by treating plants with a synthetic Ax21 peptide - relied on a test that turned out to be highly variable. When former and new team members tried to repeat this experiment, they were not able to consistently reproduce the specific effects that we had originally reported2.
Because key parts of the work depended on the integrity of the strains and the robustness of the plant treatments, I decided to retract our PLoS One and Science papers. The Science retraction was published online today.
How to decide when to retract
For me, there was never any question that I would correct the scientific record if we had made a mistake. The bigger challenge was confirming that we really had made mistakes and generating sufficient evidence to warrant full retraction of the papers.
There were two important factors to consider. A delay in the retraction would waste the time of my colleagues who wanted to replicate our assays and build on our work. But publishing a hasty retraction with insufficient evidence would be equally counterproductive. I therefore needed to figure out how to strike the right balance between getting the answer right and getting the word out that our research may be flawed. I discussed these issues with the journal editors early in the process. They agreed that caution was important and graciously allowed us ample time to carry out further experimentation. Although at first I planned to repeat all 25-plus experiments in the papers, it soon became clear that this would be impossible to accomplish within a reasonable time frame. In April, at an international plant immunity meeting, I announced that we had made errors, that we were questioning some of our conclusions, and that I was working with the journal editors to publish a statement as soon as possible.
Sorting out the situation was personally and professionally painful for all involved. Instead of advancing the science in new directions as we’d planned, it was necessary to backtrack: re-isolate bacterial strains, optimize greenhouse conditions, and repeat experiments. Former lab members who had begun new positions as professors in Korea and Thailand were devastated to learn that the researchers in my lab could not repeat their work. Junior scientists in the laboratory worried their careers would be tarnished, and understandably did not want to spend too much time on the “clean-up” operation. I therefore assigned the bulk of the work of replication to experienced, highly qualified staff scientists, which took time away from their own projects but helped the junior scientists move to projects that were not affected by errors. It took persistence, courage and confidence to stick together as a team throughout this challenging year. I am proud that we were able to do this.
The process even touched our families. In response to my frequent expressions of concern at home, my 6th grade daughter sent me an email with this subject line: “Everyone makes misktakes” and linked to a relevant story she had found on the internet: Helium, not so super after all!
We continue to investigate the role of Ax21, and at some point we may be in a position to republish some of the originally reported findings. We are beginning again with fresh approaches, reconstructed bacterial strains, and new team members. Soon, we will report on our latest results in the Ax21 story.
How to clean up the scientific process
How can the scientific community do better at avoiding published errors and correcting them more quickly when they are discovered? A growing group of scientists is addressing this question. Their concerns and suggestions are eloquently summarized by Gary Marcus, a professor of psychology at New York University, in a recent New Yorker article. The researchers argue that we need to reward scientists who get it right rather than those who simply get published. We “need to do a better job of conveying to our students and to the media that no matter how strong the evidence presented in any one study, any new result needs to be confirmed by other laboratories.” They suggest incentives that will reward scientists to a greater degree for producing solid, trustworthy research that others are able to replicate successfully and then extend.
Paradoxically, the very qualities that are key to a productive and harmonious laboratory environment, trust and teamwork, can also foster an informality that allows errors to propagate. No amount of friendly feeling or trust within the lab makes verifying the integrity of research materials unnecessary. The environment in which that balance is struck is up to the principal investigator. Science is as much about getting the process right as it is about the people behind the process, if not more so. Not that all people are interchangeable; they aren’t. But science resides first and foremost in the process, and it is the process that the people must commit themselves to.
Despite the importance of retractions in correcting the scientific record, there are few guidelines as to how they should be handled or how fast self-correction should occur. To this end, medical journalists Ivan Oransky and Adam Marcus created the web log Retraction Watch, which catalogs retractions as a window into the scientific process and explores the causes of each one; it has been called “one of the most important recent developments in science journalism” by former Scientific American editor in chief John Rennie.
My lab and I have learned some very hard lessons in the past year. By telling our story, my hope is to help foster the ongoing improvement of the scientific process.
October 12 update: In response to this post, I have received many kind words of support. Thank you.
For more on this story, please see:
Footnotes with technical details for those in the discipline:
1The first hint of trouble was the discovery by a new postdoc in the lab that the Ax21 insertion mutant did not cause disease on Xa21 rice plants. This result directly contradicted a key conclusion of our previous report: that strains carrying an insertion mutation in the Ax21 gene can overcome resistance conferred by the rice XA21 receptor (Figure 2 of the Science paper). What could explain the discrepancy?
The only explanation I could think of was that previous lab members had mixed up the Ax21 insertion mutant strain with the raxST mutant strain (the only strain in our collection that was highly virulent on Xa21 plants). However, this scenario seemed highly unlikely. After all, in lab meetings before submission of the data for publication in 2009, I, as well as others in my laboratory, had carefully scrutinized each experiment. The DNA profile of the original Ax21 insertion mutant had been accurate, and the mutant strain had clearly formed long lesions and grown to high levels on Xa21 plants. The results were statistically significant, and the experiments had been repeated many times by several lab members. Such an error would mean that over the last few years, laboratory members had unknowingly used the raxST mutant strain instead of the Ax21 insertion mutant.
To test this possibility, a team of hardworking postdocs grew each of the 12 strains in our collection that were labeled as Ax21 insertion mutants. By careful sleuthing, they found that two of the 12 strains formed long lesions on Xa21 plants, as reported in the paper. However, these strains did NOT carry the Ax21 insertion mutant genotype; instead, both carried an insertion in the raxST gene. In other words, the strains forming long lesions on Xa21 plants were mislabeled. They were not Ax21 insertion mutants; they were simply raxST insertion mutants.
Because we keep track of the use of each strain in our collection, we were able to roughly reconstruct how the mislabeling might have occurred. A team of three scientists in my laboratory had worked closely on this project for several years. One person created the bacterial strain, verified its DNA profile, and then passed it on to the other two. I believe the group members trusted each other so completely that they did not check the integrity of the strain before using it. The mislabeling may have occurred at this handoff, but I still do not know for certain. They also made mistakes in their complementation tests of the Ax21 insertion mutant with the wild-type Ax21 gene.
At the same time that we were revalidating each of the strains in our collection, two of the postdocs also investigated the processing and secretion of Ax21 using alternative approaches and more sensitive methods. They found that Ax21 was not processed and secreted as we had hypothesized (manuscript in prep).
Because other groups had validated our findings, we decided to test their strains directly before publishing our retractions. Once we received the appropriate import permits, one of our colleagues generously provided a bacterial strain generated in his lab that carried a deletion mutation in Ax21. My laboratory, as well as the laboratory of a former postdoc, carried out DNA tests on the strain and inoculated it, with appropriate controls, onto rice plants. We found that although the mutants were correctly generated, the strains were not infectious. We have informed our colleague of our results, and he is remaking the strains.
2When laboratory members first established the pretreatment assay years ago, they included diverse controls to optimize the assays. However, in subsequent experiments, some of the controls were dropped to reduce the size of the experiments. In the more recent experiments we found that although the modified (sulfated) Ax21 peptide did induce resistance in Xa21 plants, it also induced resistance in plants lacking the Xa21 immune receptor, an important control. Furthermore, results of the pretreatment test were highly dependent on greenhouse conditions.
Over the last year, we repeated the pretreatment tests with numerous additional controls. We synthesized new peptides because we hypothesized that the original peptide may have lost its essential modification. We worked with another laboratory to meticulously map the peptide sequence and modification. We shared our rice genetic materials and plasmid with laboratories in Korea, France and China and asked them to independently repeat the experiments. In my own lab and in the lab of the first author, we included additional controls, tested plants at different ages, and grew and inoculated plants under different conditions. Sometimes the results were encouraging but they were always highly variable.
Crumpled Paper: Pablo Bisoglio, Wired.co.uk
Scientific Progress Goes 'Boink': Bill Watterson
Mouse and rice in mirror: Andrzej Krauze, The Scientist