
Introducing Registered Reports, a New Way to Make Science Robust

To encourage the replication of findings, we should ask researchers to describe their methods before they conduct experiments




Imagine a young, brave scientist tasked with a seemingly impossible venture. In order to gain the favor of scientific royalty, she would have to prove her worth by doing what very few scientists have done before: replicate a previously reported finding. Now, the valiant scientist knew the journey ahead was long, perilous and fraught with challenges. Respected elders of the scientific society scoffed at the idea, others expressed severe bias, and finally, grant money ran out.

The replication was not working at all. In this sad state, our scientist wandered into a pub and enthusiastically described the significance of her replication idea to a few colleagues. Her peers delightedly penned it in a new leather-bound tome entitled Registered Reports, and after inspecting it for accuracy, they guaranteed its printing. After a few months, the young scientist’s report was announced in the latest scroll of Publish or Perish.

This isn’t a fairy tale. Replication, or arriving at similar results when collecting new data, is an ongoing challenge that primarily plagues the life and social sciences. Sometimes known as the replication challenge, this obstacle should not be confused with the reproducibility challenge, which would mean, for example, our brilliant scientist struggling to arrive at the same results when reanalyzing the original data with the original methods. Although these two horrors affect many disciplines, brain-imaging research is particularly sensitive to them, given the large amount of data that each project yields. Yet very few neuroscience journals explicitly encourage authors to submit replication studies. Replicability and reproducibility challenges, along with low statistical power and publication and researcher biases, make the young scientist’s journey truly treacherous.


Let’s take a closer look at (what can only be termed) this grueling odyssey to publication.

Throwing down the gauntlet first: Low statistical power—that is, a low probability, oftentimes much lower than the 50 percent chance of landing heads in a fair coin toss, of detecting a true effect as statistically significant—makes it difficult to find an effect even when it exists. In many fields, including neuroscience, statistical power tends to be so low that our young scientist would have to repeat the same experiment many times to actually find it. Our young scientist is busy, though. And poor. She doesn’t have time for that. Heck, ain’t nobody got time for that! Even worse, low statistical power, in combination with the next obstacle, publication bias, inflates reported effects, which further imperils replicability.
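
To make this concrete, here is a minimal simulation sketch in Python, using illustrative numbers that are not from the article: assume a small true effect (Cohen’s d = 0.3) and 20 participants per group. A standard two-sample t-test then detects the effect only about 15 percent of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Illustrative assumptions (not from the article): a small true effect
# (Cohen's d = 0.3) and 20 participants per group.
d, n, alpha, n_sims = 0.3, 20, 0.05, 10_000

significant = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n)    # no effect in the control group
    treatment = rng.normal(d, 1.0, n)    # true effect of size d
    _, p = stats.ttest_ind(treatment, control)
    significant += p < alpha

print(f"Estimated power: {significant / n_sims:.2f}")  # about 0.15
```

At roughly 15 percent power, she would need to run the experiment about 15 times just to have a 90 percent chance of a single significant result (since 0.85^15 ≈ 0.09).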

Publication bias is the tendency to report only positive findings, though its severity varies from journal to journal. Most journals mainly publish findings that are statistically significant, which vexingly leads to yet another problem called the file drawer effect: studies that yield nonsignificant findings (so-called null findings) are less likely to be submitted for publication than those that produce statistically significant results. Further, of the few studies with null findings that are published, most are likely too low in power to detect the phenomenon they are investigating. As such, they remain inconclusive and thus uninformative.
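
The inflation that low power and publication bias produce together can be sketched the same way. Under the same hypothetical numbers as above (true effect d = 0.3, 20 per group), if only significant results escape the file drawer, the average published effect comes out at well over twice its true size:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Same hypothetical numbers as above: true effect d = 0.3, n = 20 per group.
d, n, alpha, n_sims = 0.3, 20, 0.05, 10_000

all_effects, published_effects = [], []
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(d, 1.0, n)
    # Observed effect size (Cohen's d) for this one experiment
    pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
    d_obs = (treatment.mean() - control.mean()) / pooled_sd
    all_effects.append(d_obs)
    _, p = stats.ttest_ind(treatment, control)
    if p < alpha:  # the file drawer: only significant results get "published"
        published_effects.append(d_obs)

print(f"True effect:                 {d:.2f}")
print(f"Mean effect, all studies:    {np.mean(all_effects):.2f}")        # ~0.30
print(f"Mean effect, published only: {np.mean(published_effects):.2f}")  # ~0.8
```

Anyone who then tries to replicate the published, inflated effect will expect it to be far larger than it really is, and will power the follow-up study accordingly: a recipe for failed replications.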

Properly powered studies that yield null findings are crucial for informing the precise design of future experiments and for helping scientists replicate previous work. Granted, brain-imaging studies are very costly, and investigators may exercise their discretion to get something publishable out of all the time and effort that goes into an experiment. Some therefore rightly worry that the published literature shows us only the tip of the iceberg.

And just when our young scientist thought her journey couldn’t get more dangerous, she looked in the mirror and encountered her own biases! Some investigators unintentionally publish only the information that fits their story line (cherry-picking), thereby omitting pertinent information necessary for replication. Others, unintentionally or intentionally, tweak their data analyses until they yield significant findings, or they hypothesize after the results are known. Brain-imaging data, in particular, provides plenty of opportunity for tweaking, because its analysis involves many steps, and at each step scientists can select from a variety of methods and corrections. Obviously, trying to replicate results that came about through these questionable research practices is difficult, if not impossible.
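
The cost of this analytic flexibility is just as easy to sketch. In a purely hypothetical setup with no real effect at all, where each experiment offers ten analysis choices (modeled crudely here as ten independent outcome measures) and the analyst reports whichever comes out significant, the false-positive rate balloons:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Purely hypothetical setup: no true effect anywhere, but each "experiment"
# offers ten analysis choices (modeled crudely as ten independent outcome
# measures), and the analyst reports whichever comes out significant.
n, alpha, n_sims, n_choices = 20, 0.05, 10_000, 10

false_positives = 0
for _ in range(n_sims):
    for _ in range(n_choices):
        control = rng.normal(0.0, 1.0, n)
        treatment = rng.normal(0.0, 1.0, n)  # identical populations
        _, p = stats.ttest_ind(treatment, control)
        if p < alpha:
            false_positives += 1  # one "significant" pipeline is enough
            break

print(f"False-positive rate: {false_positives / n_sims:.2f}")  # ~0.40, not 0.05
```

Ten forking paths turn a nominal 5 percent error rate into roughly 40 percent, which is one reason pre-declared analysis plans matter.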

But what’s this? Out of nowhere, registered reports (RRs) ride in—on a white horse, clad in shining armor. This brand-new publication format can address all of these thorny issues, because it asks scientists to declare their methods before collecting data. Scientists state their hypotheses and how they want to test them, and those statements are then assessed by a panel of peer reviewers, who check whether the study’s rationale is sound and whether the design, proposed methods and statistical tests are appropriate.

The best part? Once submissions check all the boxes, authors are guaranteed to publish their work regardless of the results. With the aid of RRs, our young scientist can finish this perilous journey safely. While the concept of registered reports is nascent, its potential impact on the replication challenge (if employed properly) is substantial on many levels.

The guarantee to publish is one main benefit that RRs offer. Recent preliminary findings suggest that the format combats publication bias effectively: more than 60 percent of null findings reach the surface, as compared with only 5 to 20 percent in the traditional literature. RRs also require clear documentation and thus encourage the use of other open-science practices, such as sharing data, computer code and materials, which allows others to replicate and reproduce findings. They are thus valiant servants restoring faith in the scientific endeavor.

But can RRs also solve the publish-or-perish dilemma? As with all new initiatives, time will tell. And time is certainly the key challenge in the process, because the initial planning requires scientists to meticulously study the effect they want to investigate before they get to perform their actual experiment. The format also demands high standards (for example, transparent documentation, archiving and quality control), so individual projects take longer, and fewer can be completed.

Being a brave (and, don’t forget, shrewd!) pioneer, our young scientist wonders whether she might miss her chance at a fellowship competition, which often calls for a long publication track record. RRs remain pioneering work, setting higher standards for transparent reporting and quality control. Until these standards are more widely required by journals, and until the extra effort is sufficiently credited by treasurers (that is, funding bodies), scientific publishing will remain a numbers game.

With these caveats in mind, the registered reports concept seems to be a promising initiative that may improve the transparency, validity and credibility of all research—and of brain-imaging studies in particular—thereby improving the quality of published science. Best of all, RRs can transform the fairy tale of publication into a reality for any young scientist. And who wouldn’t want to live the academic life happily ever after?