The Future of Peer Review

It’s very far from perfect, but major changes for the better are underway

Virtually all new scientific research that reaches the attention of the public has been peer-reviewed—the process through which experts are commissioned by an editor, often anonymously and almost always unpaid, to cross-examine a manuscript, look for flaws and recommend improvements.

This process, begun by the Philosophical Transactions of the Royal Society of London in the 18th century, is central to our ability to trust scientific research. The tradition of peer review has become ingrained in science over the centuries because it is, despite its flaws, the best system we have to evaluate research.

The volume of peer-reviewed research articles has grown rapidly, to well over two million published each year. The phenomenal increase in published research is driven in part by the rise of China, which has grown into a global research powerhouse with article submission volumes that now rival U.S. output. Growth has also been driven by new operating models like open access, which has made publishing available to more potential authors than ever before.


Peer review is now operating at a truly global scale, which means its flaws are, too.

We are increasingly seeing scientific publication slowed and threatened by issues that have reached a breaking point in the last few years. It now takes 180 days to publish a typical peer-reviewed research article, and the process can often drag out to a year. In many cases editors simply can’t find expert reviewers who are willing and able to examine the deluge of manuscripts. Sometimes reviewers are too busy. Sometimes it’s simply too hard to find a way to contact them. Whatever the reason, science suffers.

Even when scientists do take on a review, they have rarely received any training in how to carry it out. The first experience usually comes when an early-career researcher is handed a manuscript by their supervisor and asked to evaluate it on the supervisor’s behalf.

Meanwhile, recent cases of fraudulent review and subsequent retractions have crystallized the issue. These are cases where authors are so desperate to publish that they impersonate (or pay) reviewers to provide favorable feedback in order to have their work approved for publication.

This may all seem a bit academic, but it’s not. It threatens our ability to trust and understand science—wasting public and private research funds, slowing the pace of discovery, undermining the ability to create sustainable “knowledge economies” and, ultimately, having a negative impact on society at large.

The problems are exacerbated by the anonymous nature of peer review. Because reviews are anonymous and siloed within individual journals, editors do not have the tools to deal with the deluge. Publishers are scrambling to save peer review.

What can we do about this? A recent report from the 2016 SpotOn conference on “What Might Peer Review Look Like in 2030” made a number of recommendations, which we summarize as follows:

As an essential first step, we need to turn peer review into a rewarding activity. Incentives matter, so it is critical that tenure and grant committees consider researchers for their expertise as reviewers (amongst other things), not just as publication machines. With the proper incentives we will see a renewed focus on peer review, which will get research published and available to the world faster. Giving reviewers a way to showcase their contributions is, for example, the idea behind Publons (of which I am CEO and co-founder), which was recently acquired by Clarivate Analytics.

The second step is to find ways to expand and improve the pool of available reviewers. Training needs to be built into PhD and postdoctoral programs, and supervisors need to recognize their role in stewarding the next generation of reviewers. Meanwhile, editors need to be willing to look beyond their trusted inner circles and the corresponding authors on cited papers. Change starts at the top.

To that end, we need better tools to help editors identify, qualify and contact reviewers. What we have now is risky and inefficient. For example, the current approach that editors use to contact reviewers—asking authors for a reviewer’s e-mail address or searching for it online—has proved easy to game. We need to add a layer of verification to the process and coordinate between journals and across publishers. With even a base level of transparency we can significantly reduce the risk of fraud and streamline how reviewers are contacted. Imagine a world where an editor can see that a researcher is already working on a review assignment; that researcher won’t need another four review requests today.
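
To make that idea concrete, here is a minimal sketch, in Python, of what such a shared lookup could look like. Everything in it is hypothetical: the ReviewerRegistry class, its methods and the workload cap are invented for illustration and do not describe any existing service or API.

```python
from dataclasses import dataclass


@dataclass
class ReviewerStatus:
    """What a shared registry might expose about one verified reviewer."""
    orcid: str                   # verified identifier, not a self-reported e-mail
    active_assignments: int = 0  # reviews in progress, across all journals
    max_concurrent: int = 2      # illustrative per-reviewer workload cap


class ReviewerRegistry:
    """Hypothetical cross-publisher lookup an editor could query before inviting."""

    def __init__(self) -> None:
        self._reviewers: dict[str, ReviewerStatus] = {}

    def register(self, status: ReviewerStatus) -> None:
        self._reviewers[status.orcid] = status

    def can_invite(self, orcid: str) -> bool:
        """True only for verified reviewers who still have spare capacity."""
        status = self._reviewers.get(orcid)
        if status is None:
            return False  # identity not verified: flag for manual checking
        return status.active_assignments < status.max_concurrent


# An editor checks the registry instead of trusting an author-supplied address.
registry = ReviewerRegistry()
registry.register(ReviewerStatus(orcid="0000-0002-1825-0097", active_assignments=2))
print(registry.can_invite("0000-0002-1825-0097"))  # False: already at capacity
```

The specific fields matter less than the principle: identity verification and workload information live in one shared place rather than in each journal’s inbox.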

Progress in this area will require drastic improvement of the monolithic systems that editors use to manage the peer review process. These systems have hardly changed since they were first built in the early 2000s, partly because journal editors often insist on heavy customizations for their specific journals, but also because the systems have not received enough investment. Their age is showing. It should be simple to plug in tools that facilitate reviewer discovery and transfer reviews among journals.
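
As an illustration of what a transferable review might look like, here is a minimal sketch assuming a hypothetical shared format; the field names are invented for this example and do not correspond to any existing standard.

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class PortableReview:
    """Hypothetical journal-independent review record.

    If submission systems could emit and ingest something like this, a
    rejected manuscript's reviews could travel with it to the next journal
    instead of being discarded and redone from scratch.
    """
    manuscript_doi: str       # or a preprint identifier, before a DOI exists
    reviewer_orcid: str       # verified identity rather than a bare e-mail address
    originating_journal: str
    recommendation: str       # e.g. "accept", "minor revision", "reject"
    report: str               # the full text of the review itself


# Serialize for hand-off to another journal's (equally hypothetical) import tool.
review = PortableReview(
    manuscript_doi="10.1234/example.5678",
    reviewer_orcid="0000-0002-1825-0097",
    originating_journal="Journal A",
    recommendation="minor revision",
    report="The methods are sound, but the statistics in Table 2 need rechecking.",
)
print(json.dumps(asdict(review), indent=2))
```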

Finally, we need to start experimenting with new models of review, particularly those that increase transparency and the speed of dissemination. Automated forms of peer review, like the online statistics checker StatCheck, could help. Various forms of open review, such as those practiced by journals like GigaScience and PeerJ, are showing positive results, as is a collaborative review experiment in the journal Synlett. There is perhaps a role for preprint servers to play as well: if we can get the base research out quickly, then we’ll have a little more time to do the peer review well. F1000 has taken this approach in its partnerships with the Wellcome Trust and the Bill & Melinda Gates Foundation.
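
To give a flavor of what automated checking involves, here is a minimal sketch of the kind of consistency test StatCheck performs: recomputing a reported p-value from the reported test statistic and degrees of freedom. This is a simplified Python stand-in for illustration, not StatCheck’s actual implementation (which is an R package).

```python
from scipy import stats


def check_t_test(t_value: float, df: int, reported_p: float,
                 tolerance: float = 0.005) -> bool:
    """Recompute the two-tailed p-value for a reported t(df) statistic and
    compare it against the p-value the authors reported."""
    recomputed_p = 2 * stats.t.sf(abs(t_value), df)
    return abs(recomputed_p - reported_p) <= tolerance


# "t(28) = 2.20, p = .04" is internally consistent; "p = .01" would not be.
print(check_t_test(t_value=2.20, df=28, reported_p=0.04))  # True
print(check_t_test(t_value=2.20, df=28, reported_p=0.01))  # False
```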

With these steps peer review will be able to scale to the next million articles. A central theme in these solutions is the need to break out of the silo, coordinate and invest in peer review. This is critical if we want to be able to trust the next generation of published research. We hope the Clarivate acquisition of Publons signals our intent to play our part, and we call on all researchers, publishers, funders and research institutions to join us in supporting the sentinels of science on their crusade to defend the ever-expanding sphere of human knowledge.