Information Culture

Thoughts and analysis related to science information, data, publication and culture.

Introduction to Traditional Peer Review

The views expressed are those of the author and are not necessarily those of Scientific American.





Peer review was introduced to scholarly publication in 1731 by the Royal Society of Edinburgh, which published a collection of peer-reviewed medical articles. Despite this early start, in many scientific journals the editors alone decided whether an article would be published, and this remained true until after World War II. “Science and The Journal of the American Medical Association did not use outside reviewers until after 1940” (Spier, 2002). The Lancet did not implement peer review until 1976 (Benos et al., 2007). After the war and into the fifties and sixties, articles became more specialized and competition for journal space increased. Technological advances (photocopying!) made it easier to distribute extra copies of articles to reviewers. Today, peer review is the “gold standard” for evaluating everything from scholarly publications to grants to tenure decisions (in this post I will focus on scholarly publication). It has been “elevated to a ‘principle’ — a unifying principle for a remarkably fragmented field” (Biagioli, 2002).

Peer review takes time and effort, and it can delay publication considerably; it is one of the bottlenecks of scholarly publishing. We can distribute articles very quickly these days, but every field has only a limited number of expert reviewers, and their time is taken up by the other tasks of academic life. Blaise Cronin, editor of JASIST, estimated that the journal needs about 1,000 peer reviews a year (one person, of course, can review more than once) for 400 articles, and that it approaches about 3,000 researchers to find those 1,000 reviewers.
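To put Cronin’s numbers in perspective, here is the back-of-the-envelope arithmetic behind them (a quick sketch in Python; the figures are the rough estimates quoted above, not official JASIST statistics):

```python
# Rough arithmetic on Cronin's JASIST estimates (approximate figures).
reviews_needed = 1000           # peer reviews needed per year
articles = 400                  # articles handled per year
researchers_approached = 3000   # researchers asked in order to secure the reviews

print(f"Reviews per article: {reviews_needed / articles:.1f}")                    # 2.5
print(f"Invitations that become reviews: {reviews_needed / researchers_approached:.0%}")  # 33%
```

In other words, roughly two out of every three requests to review go nowhere.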

Single or double blind?

Traditional peer review is usually single or double blind. Single-blind review, where the reviewers are aware of the authors’ identities but the authors are not aware of the reviewers’ identities, is the most common. In double-blind peer review, neither the authors nor the reviewers are aware of the others’ identities. Authors tend to believe that double-blind review is, in principle, better and less biased than single-blind review, but they also doubt whether true blinding is possible (see the review in Lee et al., 2013). A survey of editors-in-chief, editors, and editorial board members of 590 chemistry journals found that 97% of the journals didn’t offer double-blind peer review. Most respondents considered double blinding needless because the content and references could not be truly masked; they thought it would make the detection of fraud harder and considered the system satisfactory as it was (Brown, 2006).

An example: at my new workplace we hold an annual conference on e-learning in higher education. When we receive proposals, one of my jobs is to anonymize them: remove the authors’ names and make sure that wherever the authors wrote their institute’s name, it is replaced with simply “the institute”. Unfortunately, the Israeli e-learning community is so small that the process is pretty useless: everyone knows everyone, and everyone knows what goes on in which institute. A study by Justice et al. (1998) reached similar conclusions. They masked the authors’ identities in manuscripts submitted to five prominent medical journals before sending them out for review, but about 30% of the reviewers were able to identify the authors regardless (perhaps because self-references in the text weren’t removed). In small research fields this number can go even higher.
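For illustration, the kind of naive masking described above can be sketched in a few lines of Python (the names, helper function, and example text are hypothetical; as the Justice et al. results suggest, literal substitution leaves plenty of identifying context behind):

```python
import re

def anonymize(text, author_names, institute_names):
    """Naively mask literal mentions of authors and institutes.

    Self-citations, writing style, and contextual clues (who works on
    what, where) survive this masking, which is why reviewers in small
    fields can often identify the authors anyway.
    """
    for name in author_names:
        text = re.sub(re.escape(name), "[author]", text, flags=re.IGNORECASE)
    for inst in institute_names:
        text = re.sub(re.escape(inst), "the institute", text, flags=re.IGNORECASE)
    return text

# Hypothetical proposal text:
proposal = ("Dana Levi (Open University) extends our earlier "
            "study of MOOC dropout at the Open University.")
print(anonymize(proposal, ["Dana Levi"], ["Open University"]))
# -> [author] (the institute) extends our earlier study of MOOC dropout at the institute.
```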

Peer review, fraud, and mistakes

Whenever fraud or significant flaws are revealed in a published article, it seems natural to wonder, “where were the peer reviewers?” The researcher Jan Hendrik Schön, for example, published over a hundred articles in four years (1998-2002), and the reviewers failed to detect the 16 cases of misconduct later found in his articles. Sometimes it’s simply a matter of chance. The blog Retraction Watch recently reported that a group of authors retracted a paper from Physical Review Letters after a colleague pointed out an (honest) error in it. Had this colleague been their pre-publication peer reviewer, the article wouldn’t have been published to begin with.

In one study, eight weaknesses were inserted into an article that had already been accepted for publication, and the article was sent to JAMA reviewers (200 responded); on average, each reviewer found fewer than two of the weaknesses. Sixteen percent didn’t find any, and only 10% found more than four (Godlee et al., 1998). Callaham et al. (1998) sent a fake manuscript containing 23 deliberate flaws to the editors of Annals of Emergency Medicine and to all of the journal’s peer reviewers who had reviewed at least three manuscripts before the study. On average, the reviewers detected 3.4 of the 10 major flaws in the manuscript and 3.1 of the 13 minor ones. Peer reviewers are the “gatekeepers” of science, but their gatekeeping is far from perfect.
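To see what detection rates like these mean in practice, assume (as a simplification) that each reviewer independently spots a given major flaw with probability 0.34, the average rate implied by the Callaham et al. figures. A short sketch of how slowly adding reviewers helps:

```python
# Probability that a major flaw escapes every reviewer, assuming each
# reviewer independently detects it with probability p = 3.4/10 = 0.34
# (the average from Callaham et al., 1998). Independence is an assumption.
p_detect = 3.4 / 10

for n_reviewers in (1, 2, 3):
    p_escape = (1 - p_detect) ** n_reviewers
    print(f"{n_reviewers} reviewer(s): flaw escapes with probability {p_escape:.0%}")
# 1 reviewer(s): 66%, 2 reviewer(s): 44%, 3 reviewer(s): 29%
```

Even with three independent reviewers, under these assumptions nearly a third of major flaws would still slip through.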

Inter-rater reliability of peer review

Peer review tends to have low levels of inter-rater reliability between reviewers (0.2-0.4). This, at least from a statistical point of view, makes reviewers’ judgments pretty unreliable. However, this might not be a bad thing: “Too much agreement is in fact a sign that the review process is not working well, that reviewers are not properly selected for diversity, and that some are redundant” (Bailar, 1991). It could be that reviewers give different weight to different qualities of the reviewed article, or that the article’s subject has not yet reached scientific consensus (e.g., altmetrics: useful tools for evaluation, or rubbish?).
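The figures quoted here are chance-corrected agreement statistics such as Cohen’s kappa. A minimal sketch of how kappa is computed for two reviewers making accept/reject calls (the decisions below are invented purely for illustration):

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(ratings_a)
    # Observed agreement: share of items both raters judged the same way.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected chance agreement, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Invented accept/reject decisions on ten manuscripts:
reviewer_1 = ["accept", "reject", "accept", "accept", "reject",
              "reject", "accept", "reject", "accept", "reject"]
reviewer_2 = ["accept", "accept", "accept", "reject", "reject",
              "accept", "accept", "reject", "reject", "reject"]
print(f"kappa = {cohen_kappa(reviewer_1, reviewer_2):.2f}")  # kappa = 0.20
```

The two reviewers agree on six of ten manuscripts, yet kappa is only 0.20, at the bottom of the range reported for peer review: raw agreement overstates reliability once chance agreement is discounted.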

I’m well aware that this post covers only a small part of the discussions and arguments about peer review (note that it is called “Introduction to Traditional Peer Review”), and I hope to discuss the topic again in the future.

Bailar, J. (1991). Reliability, fairness, objectivity and other inappropriate goals in peer review. Behavioral and Brain Sciences, 14 (1), 137-138. DOI: 10.1017/S0140525X00065705

Biagioli, M. (2002). From Book Censorship to Academic Peer Review. Emergences: Journal for the Study of Media & Composite Cultures, 12 (1), 11-45. DOI: 10.1080/1045722022000003435

Benos, D. J., Bashari, E., Chaves, J. M., Gaggar, A., Kapoor, N., LaFrance, M., Mans, R., Mayhew, D., McGowan, S., Polter, A., Qadri, Y., Sarfare, S., Schultz, K., Splittgerber, R., Stephenson, J., Tower, C., Walton, R. G., & Zotov, A. (2007). The ups and downs of peer review. Advances in Physiology Education, 31 (2), 145-152. PMID: 17562902

Bornmann, L. (2008). Scientific Peer Review: An Analysis of the Peer Review Process from the Perspective of Sociology of Science Theories. Human Architecture: Journal of the Sociology of Self-Knowledge, 6 (2)

Brown, R. (2006). Double Anonymity and the Peer Review Process. The Scientific World Journal, 6, 1274-1277. DOI: 10.1100/tsw.2006.228

Callaham, M. L., Baxt, W. G., Waeckerle, J. F., & Wears, R. L. (1998). Reliability of editors’ subjective quality ratings of peer reviews of manuscripts. JAMA, 280 (3), 229-231. PMID: 9676664

Godlee, F., Gale, C., & Martyn, C. (1998). Effect on the Quality of Peer Review of Blinding Reviewers and Asking Them to Sign Their Reports. JAMA, 280 (3). DOI: 10.1001/jama.280.3.237

Lee, C. J., Sugimoto, C. R., Zhang, G., & Cronin, B. (2013). Bias in Peer Review. JASIST, 64 (1), 2-17

Spier, R. (2002). The history of the peer-review process. Trends in Biotechnology, 20 (8), 357-358. PMID: 12127284

About the Author: Hadas Shema is an Information Science graduate student at Bar-Ilan University, Israel. She studies the characteristics of online scientific discourse and is a member of the European Union’s Academic Careers Understood through Measurement and Norms (ACUMEN) project. Hadas tweets at @Hadas_Shema.







Comments (11)

  1. Carborundum 5:38 pm 04/19/2014

    Keep up the good work. Lax standards in any field tend to drag all science down. Reviewers who are not diligent need to be weeded out of the process.

  2. anumakonda.jagadeesh 11:49 am 04/20/2014

    Excellent article on Traditional Peer Review.
    Drummond Rennie, deputy editor of the Journal of the American Medical Association and an organizer of the International Congress on Peer Review and Biomedical Publication (held every four years since 1986), remarked:
    There seems to be no study too fragmented, no hypothesis too trivial, no literature too biased or too egotistical, no design too warped, no methodology too bungled, no presentation of results too inaccurate, too obscure, and too contradictory, no analysis too self-serving, no argument too circular, no conclusions too trifling or too unjustified, and no grammar and syntax too offensive for a paper to end up in print.
    Richard Horton, editor of the British medical journal The Lancet, said:
    The mistake, of course, is to have thought that peer review was any more than just a crude means of discovering the acceptability—not the validity—of a new finding. Editors and scientists alike insist on the pivotal importance of peer review. We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong.
    The interposition of editors and reviewers between authors and readers may enable the intermediators to act as gatekeepers. Some sociologists of science argue that peer review makes the ability to publish susceptible to control by elites and to personal jealousy. The peer review process may suppress dissent against “mainstream” theories. Reviewers tend to be especially critical of conclusions that contradict their own views, and lenient towards those that match them. At the same time, established scientists are more likely than others to be sought out as referees, particularly by high-prestige journals/publishers. As a result, ideas that harmonize with the established experts’ are more likely to see print and to appear in premier journals than are iconoclastic or revolutionary ones. This accords with Thomas Kuhn’s well-known observations regarding scientific revolutions. A theoretical model has been established whose simulations imply that peer review and over-competitive research funding push mainstream opinion toward monopoly. A marketing professor argued that invited papers are more valuable because papers that undergo the conventional system of peer review may not necessarily feature findings that are actually important.
    Peer review failures occur when a peer-reviewed article contains fundamental errors that undermine at least one of its main conclusions. Many journals have no procedure to deal with peer review failures beyond publishing letters to the editor.
    Peer review in scientific journals assumes that the article reviewed has been honestly prepared and the process is not designed to detect fraud.
    Dr. A. Jagadeesh, Nellore (AP), India

  3. Carborundum 5:48 pm 04/20/2014

    Together with the growing tendency of researchers & research institutions to block access to the data they claim supports their conclusions, the scientific community needs to excise these ideological tumours & get back to basic integrity.

  4. tuned 6:37 pm 04/20/2014

    “Oh, the humanity!”

  5. Jerzy v. 3.0. 5:25 am 04/22/2014

    One more problem: reviewers receive little or no compensation for a review, so they are likely to do the job superficially. In fact, articles for review are often given to Ph.D. students to do in their spare time, and so on.

    Speaking as the guy whose paper once received a positive review in which the reviewer discussed a different species of study animal than the one in the paper!

  6. Jerzy v. 3.0. 5:30 am 04/22/2014

    Most important: peer review is a product of the times when science journals were few but the number of articles grew quickly. Journals had to filter articles strictly to fit within a limited number of printed pages. This space limit ceased to exist as electronic publishing became dominant (today even print journals are usually accessed online).

    So pre-publication peer review should be replaced by post-publication peer review, or some sort of community voting, or similar.

  7. Jerzy v. 3.0. 7:16 am 04/22/2014

    The most important novelties in science may be completely fresh ideas and paradigm shifts. So how might peer review evaluate those? Review is about technical details. Faced with a totally new idea, a reviewer might ‘get it’ or not. Most likely, he/she will be a conservative, established member of his/her research discipline and will reject it. Perhaps this is the reason there are so few truly novel discoveries, despite the exponential growth of published papers.

  8. Publons 12:01 am 04/23/2014

    I agree with Jerzy — the issues we have with peer review being slow, inefficient, and/or ineffective largely stem from the lack of reward reviewers get for their work.

    The reward doesn’t need to be financial; we just need a way to turn reviews into an acceptable research output that can be added to one’s resume. If each review is recognized (and appropriately rewarded) as a valuable contribution to science, with some measure of the speed and quality of the reviewer, we would expect to see the speed and quality of peer review increase across the board.

    That’s our theory at Publons anyway.

  9. enusetaxel 5:49 am 04/23/2014

    Thanks.

  10. jgrosay 10:01 am 04/26/2014

    Hi! I’d say the first peer-reviewed scientific assessment, and also the first clinical trial ever, was when Louis XVI of France commissioned Benjamin Franklin, then US ambassador in Paris, along with Antoine Lavoisier, Joseph-Ignace Guillotin and Jean Sylvain Bailly, to ascertain whether the so-called ‘animal magnetism’ proposed by Mesmer was a true phenomenon or a fake. The team’s conclusion was that this ‘instant hypnosis’, or ‘will’, did not exist.

  11. Hadas Shema 11:00 am 04/26/2014

    Well, I was focusing on journal peer review. Can’t cover everything about peer review in one article!
    Thank you for the comment.
    Hadas

