A popular refrain in the recent Stand Up for Science Marches has been: “What do we want?”

“Evidence-based policy!”

“When do we want it?”

“After peer review!”

It may not be pithy, but it strikes a chord with scientists the world over, who have been trained that every discovery should be peer-reviewed by one or two others before publication in a scientific journal, and who expect no less rigor from policymakers. Various methods of peer review have existed for over 350 years, and journals have used them formally since the 1960s. Peer review plays a pivotal role in validating research results, usually before publication but also after it; it also presents one of the greatest opportunities for advancing discovery.

However, the peer review system as it currently stands has many challenges. Peer review is often accused of being slow (sometimes taking months), inefficient, biased and open to abuse.

We often hear from scientists like Elodie Chabrol, a researcher in neuroscience at UCL London, who says: “I’m not an expert on peer review or an editor. I’m just a frustrated scientist. Getting published is essential to building a career and it’s not easy. It is frustrating to know that my research won’t be published for months. I know that some of the papers I read are old news by the time they reach me. No-one would read the news with months of delay, but that’s what we scientists do.”

While there have been a number of advances in peer review in recent times—including new models such as open peer review, bringing greater transparency—truly transformative change has not been widely adopted.

At BioMed Central we introduced open peer review back in 1999, and we continue to experiment with new models such as results-free peer review (where peer review focuses on the rationale and methods, in a bid to remove the bias towards positive results). We are still exploring ways to improve the process of peer review and, in some cases, to effect radical change to methods, processes and supporting systems. But of course we are not the only ones, and publishers will have to proactively partner with the wider community if we are to see real industry-wide improvements. Today we are publishing a report with Digital Science, What might peer review look like in 2030?, which examines some of the challenges and opportunities facing peer review and makes recommendations to improve the system.

Why focus on 2030? The scientific publishing landscape has changed enormously in the last 15 or so years, with the advent of digital journals and open access. We envisage that by 2030 we may see another revolution in research publishing, one that could bring huge benefits to academics.

As a result of the report, we’re making a number of recommendations. Firstly, one of the challenges is that the pool of peer reviewers is overstretched, with experienced academics overburdened and early career academics struggling to get a foot in the door. We need to find new ways of identifying, verifying and inviting peer reviewers, closely matching their expertise to the research under review to increase uptake. Artificial intelligence could be a valuable tool in this.

In order to widen the reviewer pool and reduce the perception of bias we should encourage more diversity (including early career researchers, researchers from different regions, and women). Publishers in particular could raise awareness and investigate new ways of sourcing female peer reviewers.

When it comes to peer review, there is no one-size-fits-all and different research disciplines often prefer different models. We need to experiment with different and new models of peer review, particularly those that increase transparency.

Too often reviewers receive little training or guidance from their institutions and mentors. We need to invest in reviewer training programs to make sure that the next generation of reviewers is equipped to provide valuable feedback within recognized guidelines.

Many publishers are experimenting with peer review innovation, but we should be working towards cross-publisher solutions that improve efficiency and benefit all stakeholders. Portable peer review, where the peer review report travels with the paper to a second journal if not accepted for publication in the first, has not taken off at any scale, but could make the publishing process more efficient for all involved.

Peer review is mostly unpaid and unrecognized. There are mixed views on whether researchers should be paid for peer review (many feel this would introduce a perverse incentive to green-light a manuscript) but academics are largely united in calling for more recognition. Funders, institutions and publishers must work together to identify ways to recognize reviewers and acknowledge their hard work.

When it comes to preventing mistakes and research misconduct, we need to improve our use of technology to support and enhance the peer review process, including finding automated ways to identify inconsistencies that are difficult for reviewers to spot.
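To illustrate the kind of automated check we have in mind, consider the GRIM test: for integer-scale data (such as Likert responses), a reported mean multiplied by the sample size must land close to a whole number, so impossible means can be flagged mechanically. A minimal sketch (the function name and rounding details are our own, not from the report):

```python
def grim_consistent(mean: float, n: int, decimals: int = 2) -> bool:
    """Check whether a reported mean (rounded to `decimals` places)
    could have arisen from n integer-valued observations."""
    total = mean * n
    # The true sum of n integer observations must itself be an integer;
    # test the two integers nearest the implied total.
    for candidate in (int(total), int(total) + 1):
        if round(candidate / n, decimals) == round(mean, decimals):
            return True
    return False

# A mean of 3.57 from 28 integer responses is arithmetically possible
# (sum 100 gives 3.5714... -> 3.57), but 3.44 from 17 responses is not.
print(grim_consistent(3.57, 28))  # True
print(grim_consistent(3.44, 17))  # False
```

A check like this costs nothing to run on every submission, whereas a human reviewer would almost never verify the arithmetic behind every reported mean.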

We want to start a conversation with the ambitious aim of ultimately improving peer review for millions of working academics, and we’re calling on the research community to take part and take on the challenge. Whether you’re a frustrated scientist, a peer reviewer, an editor, a publisher or a librarian, we would love to hear your views. Please do tweet us using #SpotOnReport, email us (spoton@biomedcentral.com) or comment online.