What Is Peer Review For?

There is a lot of back and forth right now amongst the academic technorati about the "future of peer review". The more I read about this, the more I've begun to step back and ask, in all seriousness:

What is scientific peer review for?

This is, I believe, a damn important question to have answered. To put my money where my mouth is, I'm going to answer my own question in my own words:


The scientific peer-review process increases the probability that the scientific literature converges--over long time scales--upon scientific truth via distributed fact-checking, replication, and validation by other scientists. Peer-reviewed publication gives the scientific process "memory".

(Cover of the first scientific journal, Philosophical Transactions of the Royal Society, 1665-1666. Source: Wikipedia)

Note that publication of methods and results in a manner that allows them to be peer-reviewed is a critical component here. Given that, let's entertain a hypothetical regarding my field (neuroscience) for a moment.

In some far-distant future where humanity has learned every detail there is to know about the brain, what does the scientific literature look like in that world? Is it scattered across millions of closed, pay-walled, static manuscripts as much of it is now? Does such a system maximize truth-seeking?

And, given such a system, who is the megamind who manages to read through all of those biased (or incomplete, or incorrect) individual interpretations, extract the scientific truths, and distill a correct model of the human brain and behavior (whatever that might entail)?

I am hard-pressed to imagine that this future scientific literature looks like what we currently possess. In fact, there are many data-mining efforts underway designed to overcome some of the limitations introduced by the current system (such as, for cognitive neuroscience, NeuroSynth and my own brainSCANr).

The peer-review system we have now is incomplete.

I'm not attacking peer review itself; I'm attacking peer review based on journal editors hand-picking one to three scientists who then read a biased presentation of the data without being given the actual data used to generate the conclusions. Note that although I am only a post-doc, I am not unfamiliar with the peer-review process: I have "a healthy publication record" and have acted as a reviewer for a dozen "top journals".

To gain a better perspective, I read this peer-review debate published in Nature in 2006.

In it, there were two articles of particular interest, titled:

* What is it for?

* The true purpose of peer review

These articles, in my opinion, fail to answer the questions posed by their own titles.

Main points from the first article:

* "For authors and funding organizations, peer review provides an important veneer of respectability."

* "For editors, peer review can help inform the decision-making process... Prestigious journals base their reputation on their exclusivity, and take a 'top-down' approach, creaming off a tiny proportion of the articles they receive."

* "For readers with limited time, peer review can act as a filter, reducing the amount they think they ought to read to stay abreast."

The first two points are issues of reputation management, which ideally have nothing to do with actual science (note, I say ideally...). The second point also presupposes that publishing results in journals is somehow the critical component, rather than the experiments, methods, and results themselves. The final point may have been more important before the advent of digital databases, but text-based search lessens its impact.

Notably, none of these mention anything about science, fact-finding, or statements about converging upon truth. (Note, in the past I've gone so far as to suggest that even the process of citing specific papers is biased and flawed, and that we would be better off giving aggregate citations of whole swathes of the literature.)

The second article takes an almost entirely economic, cost-benefit perspective on peer review, again focused on publishing results in journals. Only toward the end does the author directly address peer review's purpose in science, saying:

...[T]he most important question is how accurately the peer review system predicts the longer-term judgments of the scientific community... A tentative answer to this last question is suggested by a pilot study carried out by my former colleagues at Nature Neuroscience, who examined the assessments produced by Faculty of 1000 (F1000), a website that seeks to identify and rank interesting papers based on the votes of handpicked expert 'faculty members'. For a sample of 2,500 neuroscience papers listed on F1000, there was a strong correlation between the paper's F1000 factor and the impact factor of the journal in which it appeared. This finding, albeit preliminary, should give pause to anyone who believes that the current peer review system is fundamentally flawed or that a more distributed method of assessment would give different results.

I strongly disagree with his final conclusion here. A perfectly plausible explanation for this result is that scientists rate papers in "better" journals more highly simply because they appear in journals perceived to be better. That would be a source of bias and a major flaw of the current peer-review system. Rather than giving me pause as to whether the system is flawed, that result could easily be interpreted as evidence of the flaw.
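To make the confound concrete, consider a minimal toy simulation (my own illustrative sketch, not drawn from the original post or any cited study): if raters score papers mostly on journal prestige, their ratings will correlate strongly with impact factor even when the impact factor carries no information about a paper's actual quality.

```python
import numpy as np

rng = np.random.default_rng(0)
n_papers = 2500  # same sample size as the F1000 pilot study quoted above

# Each paper's intrinsic quality, unobserved by raters in this toy model.
quality = rng.normal(size=n_papers)

# Impact factor assigned independently of quality (the worst-case scenario).
impact_factor = rng.lognormal(mean=1.0, sigma=0.5, size=n_papers)

# Raters score papers mostly on journal prestige, only weakly on quality.
prestige_weight = 0.8
ratings = (prestige_weight * np.log(impact_factor)
           + (1 - prestige_weight) * quality
           + rng.normal(scale=0.3, size=n_papers))

r = np.corrcoef(np.log(impact_factor), ratings)[0, 1]
print(f"correlation between impact factor and ratings: r = {r:.2f}")
# Prints a strong positive correlation even though, by construction,
# impact factor here carries no information about paper quality.
```

A prestige-driven rating process and a quality-driven one can thus produce the same correlation, so the correlation alone cannot tell them apart.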

The most common response I encounter when speaking with other scientists about what they think peer review is for, however, is some form of the following:

Peer review improves the quality of published papers.

I'm about to get very meta here, but post-doc astronomer Sarah Kendrew recently wrote a piece in The Guardian titled, "Brian Cox is wrong: blogging your research is not a recipe for disaster".

This was followed by a counter post in Wired by Brian Romans titled "Why I Won’t Blog Unpublished Results". In that piece, Brian also says that peer-review improves papers:

First of all, the formal peer-review process has definitely improved my submitted papers. Reviewers and associate editors can catch errors that elude even a team of co-authors. Sometimes these are relatively minor issues, in other cases it may be a significant oversight. Reviewers typically offer constructive commentary about the formulation of the scientific argument, the presentation of the data and results, and, importantly, the significance of the conclusions within the context of that particular journal. Sure, I might not agree with every single comment from all three or four reviewers but, collectively, the review improves the science. Some might respond with ‘Why can’t we do this on blogs! Wouldn’t that be great! Internets FTW!.’ Perhaps someday. For now, it’s difficult to imagine deep and thorough reviews in the comment thread of a blog.

(emphases mine)

Although Brian concedes (but dismisses) the fact that none of these aspects of peer review need happen in formal journals, he argues that because his field doesn't use arXiv and currently has no equivalent, journals are still necessary.

We also see an argument in there about how reviewers guide statements of significance for a particular journal, and the conclusion that somehow these things "improve the science". But even the narrative that peer review improves papers can be called into question:

Smith R. Peer review: a flawed process at the heart of science and journals. J R Soc Med, 99(4); 2006

Another answer to the question of what is peer review for is that it is to improve the quality of papers published or research proposals that are funded. The systematic review found little evidence to support this, but again such studies are hampered by the lack of an agreed definition of a good study or a good research proposal.

Rothwell PM and Martyn CN. Reproducibility of peer review in clinical neuroscience. Brain, 123(9); 2000

Peer review is central to the process of modern science. It influences which projects get funded and where research is published. Although there is evidence that peer review improves the quality of reporting of the results of research, it is susceptible to several biases, and some have argued that it actually inhibits the dissemination of new ideas.

To reiterate: the purpose of peer review should be to maximize the probability that we converge on scientific truths.

This need not happen in journals, nor even require a "paper" that needs improvement by reviewers. Papers are static snapshots of one researcher's or research team's views and interpretations of the results of their experiments.

Why are we even working from these static documents anyway?

Why--if I want to replicate or advance an experiment--should I not have access to the original data and analysis code on which to build? These two changes would drastically speed up the scientific process. Almost any argument against implementing a more dynamic system seems to return to "credit" or "reputation". To be trite about it: if everyone has access to everything, however will they know how clever I am? Some day I expect a Nobel Prize for my cleverness!

But a "Github for Science" would alleviate even these issues. Version tracking would allow ideas to be traced back to the idea's originator with citations inherently built into the system.

I'm not saying publishing papers is bad. Synthesis of ideas allows us to publicly establish hypotheses for other scientists to attempt to disprove. But most results that are published are minor extensions of current understanding that don't merit long-form manuscripts.

But the current system of journals, editors who act as gatekeepers, one to three anonymous peer-reviewers, and so on is outdated--built before technology provided better, more dynamic alternatives.

Why do scientists--the heralds of exploration and new ideas in our society--settle for such a sub-optimal system that is nearly 350 years old?

We can--we should--do better.

Wager, E. (2006). What is it for? Analysing the purpose of peer review. Nature. DOI: 10.1038/nature04990

Jennings, C. (2006). Quality and value: the true purpose of peer review. Nature. DOI: 10.1038/nature05032

Editors (2005). Revolutionizing peer review? Nature Neuroscience, 8(4), 397. DOI: 10.1038/nn0405-397

Smith, R. (2006). Peer review: a flawed process at the heart of science and journals. Journal of the Royal Society of Medicine, 99(4), 178-182. DOI: 10.1258/jrsm.99.4.178

Rothwell, P. M., & Martyn, C. N. (2000). Reproducibility of peer review in clinical neuroscience: is agreement between reviewers any greater than would be expected by chance alone? Brain, 123(9), 1964-1969. DOI: 10.1093/brain/123.9.1964

Bradley Voytek (@bradleyvoytek) is a post-doctoral researcher at UCSF. He's got some strange interests. He earned his PhD in neuroscience from Berkeley in 2010, where he researched how brain regions communicate to give rise to cognition in normal health and after brain injury. His research and blogging have appeared in The Washington Post, Wired, and The New York Times. With his wife, Jessica, he runs brainscanr.com. His non-academic... uh... interests include examining the zombie brain as part of his science outreach. He's also a consultant for The National Academy of Sciences – Science & Entertainment Exchange. He blogs at blog.ketyov.com. In 2006 he split the Time Person of the Year award.
