
An Introduction to Open Peer Review




Last post we talked about traditional peer review, which is at least single-blinded. This time we will focus on Open Peer Review (OPR). The narrowest way to describe OPR is as a process in which the names of the authors and reviewers are known to one another. Beyond this narrow definition, OPRs can be classified into a number of categories (Ford, 2013):

  • Signed – when reviewers’ names appear alongside the published article or the reviews that are passed to the authors have the reviewers’ names on them.

  • Disclosed – the author and the reviewer are aware of each other’s identity and engage in discussion during the review process.

  • Editor-mediated – when an editor facilitates part of the peer-review process, by pre-selecting articles for review and/or making the final decision regarding an article.

  • Transparent – the entire review process is “out in the open,” including the original manuscripts, the reviewers’ comments and the authors’ replies.

  • Crowdsourced – anyone can comment on the article (before its official publication).

  • Pre-publication – any public review occurring before the article’s publication (say, an article submitted to a preprint server).

  • Synchronous – review supposed to happen at the exact time of an article’s publication. In reality, it doesn’t exist. As Ford notes: “In the literature, synchronous review is approached only theoretically, as part of a novel and completely iterative publishing model.”

  • Post-publication – happens after an article’s publication (e.g. a blog post, an F1000Prime review).

In 2006, Nature undertook an OPR trial lasting four months. Authors of each article submitted to Nature that wasn’t rejected outright (about 60% of the articles sent to Nature are rejected without review) could choose whether, in addition to traditional peer review, they wanted their article to be displayed online for the public’s comments. Out of the 1,369 articles Nature reviewed during that time, the authors of 71 (5%) chose to undergo OPR. The trial was well publicized ahead of time by Nature, but comments were rather scarce. The most commented-upon article received ten comments, and 33 articles received no comments at all. As Nature put it: “Despite enthusiasm for the concept, open peer review was not widely popular, either among authors or by scientists invited to comment.” Although the Nature experiment was considered a failure, other OPR experiments have proven more successful.




The editors of the journal Biology Direct (BD) published an article last year titled “Biology Direct: celebrating 7 years of open, published peer review” (Koonin, Landweber & Lipman, 2013). In BD, “the signed reviews and the author responses are published as an integral part of the final version of each article.” During its first seven years, BD published 365 research articles. Though both the Nature and the BD approaches are called OPR, it seems Nature chose the crowdsourced route, while BD took the transparent one.

Of course, there are many other differences between the Nature experiment and BD: Nature is hardly a new, unestablished journal; the Nature experiment lasted only four months, and OPR was optional for authors rather than mandatory; BD is committed to OPR (“we strived to establish a new system of peer review that we hoped would avoid the all too obvious pitfalls of anonymous peer review”), while Nature has no intention of replacing its current system; and Nature is multidisciplinary, while BD focuses on core areas in biology.

Launched in 2001, the journal Atmospheric Chemistry and Physics (ACP) adopted a multi-stage OPR process. Submitted articles first go through a fast screening and, if deemed appropriate for the journal, are published as “discussion papers” in the journal’s discussion forum. They stay there for eight weeks, during which the designated referees’ reviews, any other comments, and the authors’ replies are published in the forum as well. Designated referees can decide whether or not to sign their reviews, but other comments are signed (only registered readers can comment). The manuscripts are then revised if necessary and, if required, peer-reviewed again by the designated referees. Manuscripts that are accepted at the end of this process are officially published in the journal. ACP publishes about 800 articles a year.

ACP Editor Ulrich Pöschl (2012) noted the advantages of the process:

1. Rapid dissemination of results and uncensored communication regarding said results.

2. A way to officially document controversy and discussion regarding the articles, as well as to reduce the chance of plagiarism and improve the chance of detecting an article’s flaws.

3. A final product that is based both on the designated referees’ reviews and on interested readers’ comments.

 

The ACP peer-review process (Source: Pöschl, 2012).

Note that in none of these journals has crowdsourced peer review replaced the designated referees; it is regarded as supplementary material. That is because, first of all, many articles don’t receive comments at all, and even when they do, we cannot tell for sure whether the commenter is indeed a “peer.” The general lack of comments is unsurprising, considering that researchers are overworked as it is, and most wouldn’t bother writing reviews they have not been specifically asked for. Another issue is that of open criticism. Scientists often avoid publicly criticizing their peers, and usually ignore work they consider irrelevant or flawed (Cole & Cole, 1971). van Rooyen et al. (1999) found that designated referees who were asked to sign their reviews were 12% more likely to decline to review an article.

Biologist and blogger Zen Faulkes (2014) accurately noted:

“When PLOS ONE launched in 2006, one of its prominent innovations was to provide tools for users to comment upon and rate papers very easily. These largely went unused. I don’t know of any journal that has a thriving online community discussing papers within the journal.”

Faulkes believes that journals’ attempts to create special discussion forums for their articles, as well as designated social media sites for scientists and post-publication peer review sites, are all less effective than regular social media when it comes to science evaluation. He considers post-publication peer review in social media not as peer review in its traditional sense, but as “just the biggest research conference in the world.”

To quote Stevan Harnad: “The refereed journal literature needs to be freed from both paper and its costs, but not from peer review, whose ‘invisible hand’ is what maintains its quality.”

Traditional peer review (with designated referees) will not go out of fashion any time soon, but it is easier than ever to supplement it with further discussion, whether before or after official publication.

 

References

Cole, J., & Cole, S. (1971). Measuring the quality of sociological research: Problems in the use of the Science Citation Index. American Sociologist, 6(1), 23–29.

Editorial (2006). Peer review on trial. Nature, 441(7094), 668. DOI: 10.1038/441668a

Faulkes, Z. (2014). The Vacuum Shouts Back: Postpublication Peer Review on Social Media. Neuron, 82(2), 258–260. DOI: 10.1016/j.neuron.2014.03.032

Ford, E. (2013). Defining and Characterizing Open Peer Review: A Review of the Literature. Journal of Scholarly Publishing, 44(4), 311–326. DOI: 10.3138/jsp.44-4-001

Greaves, S., Scott, J., Clarke, M., Miller, L., Hannay, T., Thomas, A., & Campbell, P. (2006). Overview: Nature’s peer review trial. Nature.

Harnad, S. (2000). The invisible hand of peer review. Exploit Interactive, 5(April).

Koonin, E. V., Landweber, L. F., & Lipman, D. J. (2013). Biology Direct: celebrating 7 years of open, published peer review. Biology Direct, 8, 11.

Pöschl, U. (2012). Multi-stage open peer review: scientific evaluation integrating the strengths of traditional peer review with the virtues of transparency and self-regulation. Frontiers in Computational Neuroscience, 6. PMID: 22783183

van Rooyen, S., Godlee, F., Evans, S., Black, N., & Smith, R. (1999). Effect of open peer review on quality of reviews and on reviewers’ recommendations: A randomised trial. British Medical Journal, 318(7175), 23–27.