Guest Blog

Commentary invited by editors of Scientific American

What Is Peer Review for?

The views expressed are those of the author and are not necessarily those of Scientific American.


There is a lot of back and forth right now amongst the academic technorati about the “future of peer review”. The more I read about this, the more I’ve begun to step back and ask, in all seriousness:

What is scientific peer-review for?

This is, I believe, a damn important question to have answered. To put my money where my mouth is, I'm going to answer my own question, in my own words:

The scientific peer-review process increases the probability that the scientific literature converges, over long time scales, upon scientific truth via distributed fact-checking, replication, and validation by other scientists. Peer-reviewed publication gives the scientific process “memory”.

(Cover of the first scientific journal, Philosophical Transactions of the Royal Society, 1665-1666. Source: Wikipedia)

Note that publication of methods and results in a manner that they can be peer-reviewed is a critical component here. Given that, let's entertain a hypothetical regarding my field (neuroscience) for a moment.

In some far-distant future where humanity has learned every detail there is to know about the brain, what does the scientific literature look like in that world? Is it scattered across millions of closed, pay-walled, static manuscripts as much of it is now? Does such a system maximize truth-seeking?

And, given such a system, who is the megamind that manages to read through all of those biased (or incomplete, or incorrect) individual interpretations to extract the scientific truths to distill a correct model of the human brain and behavior (whatever that might entail)?

I am hard-pressed to imagine that this future scientific literature looks like what we currently possess. In fact, there are many data-mining efforts underway designed to overcome some of the limitations introduced by the current system (such as, for cognitive neuroscience, NeuroSynth and my own brainSCANr).

The peer-review system we have now is incomplete.

I'm not attacking peer-review; I'm attacking peer-review based on journal editors hand-picking one to three scientists who then read a biased presentation of the data without being given the actual data used to generate the conclusions. Note that although I am only a post-doc, I am not unfamiliar with the peer-review process: I have “a healthy publication record” and have acted as a reviewer for a dozen “top journals”.

To gain a better perspective, I read this peer-review debate published in Nature in 2006.

In it, there were two articles of particular interest, titled:

* What is it for?
* The true purpose of peer review

These articles, in my opinion, fail to answer the questions posed by their own titles.

Main points from the first article:

* “For authors and funding organizations, peer review provides an important veneer of respectability.”
* “For editors, peer review can help inform the decision-making process… Prestigious journals base their reputation on their exclusivity, and take a ‘top-down’ approach, creaming off a tiny proportion of the articles they receive.”
* “For readers with limited time, peer review can act as a filter, reducing the amount they think they ought to read to stay abreast.”

The first two points are issues of reputation management, which ideally have nothing to do with actual science (note, I say ideally…). The second point also presupposes that publishing results in journals is somehow the critical component, rather than the experiments, methods, and results themselves. The final point may have been more important before the advent of digital databases, but text-based searching lessens its impact.

Notably, none of these mention anything about science, fact-finding, or statements about converging upon truth. (Note, in the past I’ve gone so far as to suggest that even the process of citing specific papers is biased and flawed, and that we would be better off giving aggregate citations of whole swathes of the literature.)

The second article takes an almost entirely economic, cost-benefit perspective on peer-review, again focused on publishing results in journals. Only toward the end does the author directly address peer-review's purpose in science, saying:

…[T]he most important question is how accurately the peer review system predicts the longer-term judgments of the scientific community… A tentative answer to this last question is suggested by a pilot study carried out by my former colleagues at Nature Neuroscience, who examined the assessments produced by Faculty of 1000 (F1000), a website that seeks to identify and rank interesting papers based on the votes of handpicked expert ‘faculty members’. For a sample of 2,500 neuroscience papers listed on F1000, there was a strong correlation between the paper’s F1000 factor and the impact factor of the journal in which it appeared. This finding, albeit preliminary, should give pause to anyone who believes that the current peer review system is fundamentally flawed or that a more distributed method of assessment would give different results.

I strongly disagree with his final conclusion here. A perfectly plausible explanation for this result would be that scientists rate papers in “better” journals higher because those papers are published in journals perceived to be better. That would be a source of bias and a major flaw of the current peer-review system. Rather than giving me pause as to whether the system is flawed, one could easily interpret that result as proof of the flaw.

The most common response that I encounter when speaking with other scientists about what they think peer-review is for, however, is some form of the following:

Peer-review improves the quality of published papers.

I’m about to get very meta here, but post-doc astronomer Sarah Kendrew recently wrote a piece in The Guardian titled, “Brian Cox is wrong: blogging your research is not a recipe for disaster”.

This was followed by a counter post in Wired by Brian Romans titled “Why I Won’t Blog Unpublished Results”. In that piece, Brian also says that peer-review improves papers:

First of all, the formal peer-review process has definitely improved my submitted papers. Reviewers and associate editors can catch errors that elude even a team of co-authors. Sometimes these are relatively minor issues, in other cases it may be a significant oversight. Reviewers typically offer constructive commentary about the formulation of the scientific argument, the presentation of the data and results, and, importantly, the significance of the conclusions within the context of that particular journal. Sure, I might not agree with every single comment from all three or four reviewers but, collectively, the review improves the science. Some might respond with ‘Why can’t we do this on blogs! Wouldn’t that be great! Internets FTW!.’ Perhaps someday. For now, it’s difficult to imagine deep and thorough reviews in the comment thread of a blog.

(emphases mine)

Although Brian concedes (but dismisses) the fact that none of these aspects of peer-review need be done in formal journals, he argues that because his field doesn't use arXiv and currently has no equivalent for it, journals are still necessary.

We also see an argument in there about how reviewers guide statements of significance for a particular journal, and the conclusion that somehow these things “improve the science”. But even the narrative that peer-review improves papers can be called into question:

Smith R. Peer review: a flawed process at the heart of science and journals. J R Soc Med, 99(4); 2006

Another answer to the question of what is peer review for is that it is to improve the quality of papers published or research proposals that are funded. The systematic review found little evidence to support this, but again such studies are hampered by the lack of an agreed definition of a good study or a good research proposal.

Rothwell PM and Martyn CN. Reproducibility of peer review in clinical neuroscience. Brain, 123(9); 2000

Peer review is central to the process of modern science. It influences which projects get funded and where research is published. Although there is evidence that peer review improves the quality of reporting of the results of research, it is susceptible to several biases, and some have argued that it actually inhibits the dissemination of new ideas.

To reiterate: peer-review should be to maximize the probability that we converge on scientific truths.

This need not happen in journals, nor even require a “paper” that needs improvement by reviewers. Papers are static snapshots of one researcher's or research team's views and interpretations of the results of their experiments.

Why are we even working from these static documents anyway?

Why, if I want to replicate or advance an experiment, should I not have access to the original data and analysis code on which to build? These two changes would drastically speed up the scientific process. Almost any argument against implementing a more dynamic system seems to return to “credit” or “reputation”. To be trite about it: if everyone has access to everything, however will they know how clever I am? Some day I expect a Nobel Prize for my cleverness!

But a “GitHub for Science” would alleviate even these issues. Version tracking would allow ideas to be traced back to their originators, with citations inherently built into the system.
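As a minimal sketch of what that built-in attribution could look like, here is how plain git (the version-control system underlying GitHub) records provenance; the repository path, file names, author names, and commit messages below are all hypothetical:

```shell
# Hypothetical two-researcher workflow: git records who contributed what.
set -e
rm -rf /tmp/sci-demo && mkdir /tmp/sci-demo && cd /tmp/sci-demo
git init -q

# The original researcher deposits the raw data.
git config user.name "Original Author"
git config user.email "orig@example.org"
echo "raw reaction-time data" > data.csv
git add data.csv
git commit -q -m "Deposit original dataset"

# A second researcher extends the analysis on a branch.
git config user.name "Second Researcher"
git config user.email "second@example.org"
git checkout -q -b reanalysis
echo "outlier-corrected analysis" > analysis.txt
git add analysis.txt
git commit -q -m "Re-analyze with outlier correction"

# The full history names the originator of each contribution.
git log --format="%an: %s"
```

Every commit (and, via `git blame`, every line of every file) permanently carries its author, so credit survives forks, re-analyses, and merges without any extra citation machinery.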

I’m not saying publishing papers is bad. Synthesis of ideas allows us to publicly establish hypotheses for other scientists to attempt to disprove. But most results that are published are minor extensions of current understanding that don’t merit long-form manuscripts.

But the current system of journals, editors who act as gatekeepers, one to three anonymous peer-reviewers, and so on is an outdated system built before technology provided better, more dynamic alternatives.

Why do scientists–the heralds of exploration and new ideas in our society–settle for such a sub-optimal system that is nearly 350 years old?

We can–we should–do better.

Wager, E. (2006). What is it for? Analysing the purpose of peer review. Nature. DOI: 10.1038/nature04990
Jennings, C. (2006). Quality and value: the true purpose of peer review. Nature. DOI: 10.1038/nature05032
Editors (2005). Revolutionizing peer review? Nature Neuroscience, 8(4), 397. DOI: 10.1038/nn0405-397
Smith, R. (2006). Peer review: a flawed process at the heart of science and journals. Journal of the Royal Society of Medicine, 99(4), 178-182. DOI: 10.1258/jrsm.99.4.178
Rothwell, P.M. and Martyn, C.N. (2000). Reproducibility of peer review in clinical neuroscience: is agreement between reviewers any greater than would be expected by chance alone? Brain, 123(9), 1964-1969. DOI: 10.1093/brain/123.9.1964

About the Author: Bradley Voytek (@bradleyvoytek) is a post-doctoral researcher at UCSF. He's got some strange interests. He earned his PhD in neuroscience from Berkeley in 2010, where he researched how brain regions communicate to give rise to cognition in normal health and after brain injury. His research and blogging have appeared in The Washington Post, Wired, and The New York Times. With his wife, Jessica, he runs His non-academic… uh… interests include examining the zombie brain as part of his science outreach. He's also a consultant for The National Academy of Sciences – Science & Entertainment Exchange. He blogs at In 2006 he split the Time Person of the Year award. Follow on Twitter @bradleyvoytek.


Comments (16)

  1. bigbopper 1:40 pm 11/2/2011

    Seeing as how we’re talking about science, what scientific testing of the peer-review system has been done? Is there any empirical evidence to support the assertion that peer-review increases the likelihood that published scientific papers will converge on the “truth”? Or is this just a piece of “established wisdom”, as likely as any other piece of “established wisdom” to turn out to be wrong?

  2. criener 3:05 pm 11/2/2011

    This is an interesting idea. But even if we do away with publishing paper copies, isn’t there something to be said for honing how one presents the research for wider consumption (even if it is only among other, let’s say, neuroscientists of the hippocampus), rather than simply a stream of data? I think improving the rhetoric and argument of scientific papers before broader consumption is a worthy, yet quite difficult to measure, goal.

    I am all for an ecology of journals and manuscripts, with more places for a diversity of studies (one-off cool findings, 5 experiment rigorous modeling, etc) but is this really a problem with peer review?

    Finally, this problem seems to be different in different subdisciplines. Raw data and code may be useful in some domains, but not nearly as useful as, say, a video of a participant’s eye view, in another domain.

  3. Daniel Mietchen 9:47 pm 11/2/2011

    A grant proposal for a GitHub for Science is now up at as part of the #SciFund Challenge, an attempt at crowdsourcing research funding.

  4. JDahiya 2:56 am 11/3/2011

    A very good article, with cogently expressed points.

    Using technology better would make science and research more accessible, more interesting and more fun for all. Given the fear that many people have towards science, this, too, is important. It would raise the cognitive level in society as a whole.

  5. Jerzy New 5:25 am 11/3/2011

    Actually, tests of peer review have been done, and they give surprisingly ambiguous results. So peer review often does not fulfill its purpose.

    One study checked the peer-review history of the most-cited papers published in a year. It turned out that these papers had repeatedly been turned down beforehand for being not important enough.

  6. Jerzy New 5:42 am 11/3/2011

    I think academia cannot change itself from within. The main reason is carefully avoided: peer review allows influential scientists to block new ideas as uninteresting or as contradicting published papers, and to block competing groups with bad reviews.

    There is a well-known saying that science progresses by the deaths of old scientists who believe old ideas.

    However, progress in peer review may come from another direction: the internet. The internet imposes no limits on publishing space. Scientific journals want peer review because they have a constraint on printed pages and want the best articles in them. The internet allows publishing everything, even those uncomfortable materials and methods which are missing from top journals.

    Peer review will die when the majority of scientists realize that they themselves can rate the credibility of an online publication no worse than a journal reviewer (who is often a Ph.D. student or a postdoc on whom professors dump this work anyway).

  7. bradleyvoytek 2:47 pm 11/3/2011

    @bigbopper: that’s a great point. In writing this piece I tried to break the peer-review scientific process into its constituent elements. Given the relative (long-term) successes in the sciences I assume that issues get worked out over time. But I guess one could make an actual, empirical analysis of that supposition.

    @criener: I said this to a friend who asked a similar question… I’m not advocating getting rid of papers to share results, but the format of intro>methods>results>discussion(>conclusion) is unnecessary for many publications. It’s a waste. Not every paper needs an introduction or discussion when they’re minor extensions of existing work. So allowing publication of just results and methods (along with data) would be perfect. If we used a github system, then data could be forked, (re)analyzed, extended, with commentary associated with the branch. Pending communal approval/peer-review then the branch could be merged back into the main project. Full histories are recorded, so credit would still be given. These aren’t issues with peer-review, but rather with the limited manner in which peer-review is interpreted.

    @Daniel: Thank you SO MUCH for pointing me to that.

    @Jerzy: Could you point me to that paper, please? People often comment about how scientists “block” ideas. Maybe I’m consorting with the wrong scientists, but I’ve never known anyone to do that.

  8. gmperkins 4:35 pm 11/3/2011

    The current system could definitely use some updating.

    Some further points: 1) we have more PhDs now than the sum total of all dead PhDs; 2) universities and business research DEMAND constant publication; 3) funding has become more political than ever before.

    This has created a plethora of shoddy papers, mostly due to publication of preliminary results, very weak results, or not-quite-thought-out ideas. It is simply impossible to properly peer-review all the work that gets pumped out across the globe.

    Fortunately, ground-breaking/field-extending work almost always gets checked (always, in most fields). A good example is the latest “faster-than-light neutrinos” result, which created a lot of buzz in the media; physicists were quite skeptical, and further analysis has shown holes in the claims of that paper/work. That is good science at work.

  9. Jerzy New 7:12 am 11/4/2011

    “Maybe I’m consorting with the wrong scientists, but I’ve never known anyone to do that.”

    Of course it is not done openly. But, for example, manuscripts that go against the reviewer's line are reviewed more sharply and asked for more additional tests (which can consume significant time), and small gaps in the proof are either accepted (a paper need not be perfect) or rejected.

    “Fortunately, ground-breaking/field extending work almost always gets checked (always in most fields).”

    You are confusing two things: peer review and checking results later. Good examples are frauds, like a certain Korean scientist working on stem cells. They were indeed uncovered, but years later. They passed peer review first.

    “A good example is the latest ‘faster than light neutrinos’ which created alot of buzz in the media but physicists were quite skeptical and upon further analysis has shown holes in the claims of that paper/work. That is good science at work.”

    But only after peer review, isn't it? This example highlights a common situation in current science: results often depend on a large amount of calculation, which peer reviewers cannot test.

    In my opinion, peer review in real life (not the theoretical, ideal concept presented by Bradley) provides surprisingly little help beyond filtering out obviously poor science (things of yeti and Loch Ness monster quality).

    Second, there are lots of unrepeatable results and conflicting results, especially in fields like molecular biology. I think many of them are artifacts which passed peer review but were not conclusively disproved later.

    Third, it is wrong, as is commonly done, to rely on the opinions of established scientists in discussions about peer review. It's a basic human thing: they win under the current system, so of course they believe it's best. Nobody is a good judge in his own case.

    It would be like asking top athletes whether the current system of picking winners is best, and whether perhaps they would prefer to change it.

    And the result, perhaps the clearest result, is dissatisfaction in society: where are those flying cars and that cure for cancer which scientists 20 years ago believed they could deliver by now?

  10. gmperkins 3:37 pm 11/4/2011


    Ah, correct, I did extend peer-review into checking. I consider them one and the same since, as you argue, you can't possibly re-perform all the steps that went into the paper (at least not usually).

    I think the key is not to over-hype initial papers/results, and to have a retraction SOP. Going all-digital would make that easy: you simply remove the paper from the archive. I am for all-digital for other reasons as well, as pointed out in the article: “I’m not saying publishing papers is bad. Synthesis of ideas allows us to publicly establish hypotheses for other scientists to attempt to disprove. But most results that are published are minor extensions of current understanding that don’t merit long-form manuscripts.” So very true, but the fact is that universities, businesses, egos… all like a big, long paper-count list on a website…

  11. PTripp 9:55 pm 11/4/2011

    I think peer review is important, but it's not an absolute truth detector. In the rest of the world, ‘publishing’ is no longer restricted to pulp and ink with a limited audience. Of course, any time there is a commercial aspect, ‘things’ get skewed. The pressure to publish to get funding to justify your existence definitely skews things. Some papers are published just to get published. Some are not published for various reasons, including not even being submitted or being deemed ‘not ready’.

    One of the reasons the internet was created was to enable the free flow of information between researchers. In other words, peer review. Now it is controlled by commercial interests for the most part and the ‘free flow’ isn’t free any more.

    I submit for peer review that we are human and fallible. (All humans must recuse themselves as they may be biased.)

    There is no ‘magic bullet’. In any venture we (humans) use ‘sounding boards’ and ‘bounce ideas’ off friends and colleagues. Sometimes just vocalizing an idea is enough to point out ‘logic holes’ or give inspiration for further investigation.

    Peer review is a legitimate tool, but it's not the only one in the tool shed. All tools should be utilized, and new ones will be added. An ax still works and does its job, but a chain saw does too. Personally I use both, but I'm only human…

  12. pschleck 10:13 am 11/5/2011

    Modern peer review has serious potential conflict-of-interest issues, as fields increasingly sub-sub-specialize, so that papers aren’t really anonymous anymore (peers can often tell the authors from the subject and writing style), and reviewers may be motivated to exclude competitors from limited space in prestigious journals.

    See this article from a prominent physicist:

  13. bradleyvoytek 12:56 pm 11/6/2011

    @pschleck that’s a great article. I’m going to assign that for reading in my future classes. Thank you.

  14. sengupso 1:22 am 11/7/2011

    Good article, and good suggestions!
    2-3 investigators assigned to review your work.Since the authors are blind about the reviewers, reviewers should also be blind about the authors , – work coming from so and so famous scientists lab, from x university and y country…. ijust do not believe it. secondly impact factors. say i work on an obscure pathogen, most probably work regarding it will get published in some low impact factor journal. The work should speak for itself- not be trapped with the baggage of when , where it was conducted and especially WHO conducted it.
    Thanks for giving this space for expressing personal views.

  15. Jerzy New 5:51 am 11/7/2011

    We had an interesting discussion with colleagues over the weekend. A good analogy is that scientists talk about fraud in peer review the way pro athletes talk about doping and match-fixing in sport. The answer is “I don't do it,” “none of my mates do,” “it's best left to sportsmen to regulate internally.” Of course, the reality is very different.

    In the case of sport, it is well known that individual sportsmen and sports clubs will not police themselves. What works is draconian intervention by international sports leagues, which in turn are prompted by criticism from viewers and sponsors. This at least takes a bite out of the problem.

    By analogy, individual scientists, universities, or organizations cannot police fraud themselves. Regulation must come from outside. One way might be free-for-all internet publishing.

    Another way could follow how the criticism of championship viewers forces sports federations to supervise individual clubs. In science, criticism from taxpayers who want the best results for the money they spend on science might theoretically push big funding organizations, like the NIH, to establish checks on grant-receiving institutions. Individual universities, research institutes, and science journals would then be forced to watch for fraud, including fraud in peer review. If not, whole universities or journals, not just single scientists, would face punishment in the form of withdrawn grant money.

    It would be interesting to see whether this system works, but I think it would always face great fear and criticism from scientists.

  16. marcosmm 1:42 pm 01/25/2012

    Hello Bradley,

    We are a Canadian company that offers specialized solutions to the research community (awards/grants management software and science and ethics compliance review). If you are interested in peer review, feel free to visit our website and check our newsletter database at You will find news about research administration, ethics review, grants management, awards/compliance review, and advanced semantic search.


    eVision team
