Part of what makes the process of science so interesting is that scientists invite criticism. It's right there in the scientific method, in the step where we see whether other experiments confirm original results. Or, as David Ng states in his "Introduction to the Scientific Method, by way of Chewbacca," this is where other folks get to dump on you.
Peer review is one of the formalized processes that allow scientists to criticize and dump on each other's work. The big goal of peer review is to make science better, although there are lots of other smaller goals. The Action Potential blog over at the Nature Blog network just posted a case study in how this can happen.
First, a brief intro to peer review for those unfamiliar with the details: Authors submit their work to a journal, saying "publish my article!" The editors of the journal need a way to figure out if the science is any good, so they ask other scientists. The original article is sent to other experts to have a look. These reviewers are weighing two big questions: is the science sound, and is the work important enough to publish? Reviewers looking for sound science will evaluate the original authors' methods, examine how they analyzed their data, and make sure that their conclusions are supported by that data. Reviewers also try to determine if the results are sufficiently original or interesting to be published in that particular journal. They then make a recommendation to the editor saying "Yes! Publish this now!" or "Maybe. Can the authors clarify some things?" or "Nope." For a much better look at what happens during peer review, see this excellent Boing Boing post.
Most of the time, we don't get to see this process. Reviewers are typically anonymous, and their comments are not meant for public consumption (I'll talk about new things like open review in my next post).
The Action Potential story shows us how constructive and detailed comments from reviewers can keep scientists on their toes and at the top of their game. A manuscript was submitted and sent out for review. The reviewers weren't overly enthusiastic about the paper. They suggested some improvements to the methodology used, including the addition of various controls. They were also concerned that this study wasn't original enough to be published in Nature, and provided several reasons why.
Although the reviewers originally rejected the paper, their comments provided the authors with a guide to revising their results: collecting additional data, strengthening their conclusions, and improving their analysis.
So the authors got to work. After several (probably very busy) months, the authors resubmitted the paper and it was successfully published in the May 13 issue of Nature.
This is why peer review has become the standard in scientific publications: by allowing work to be criticized thoroughly at least once before publication, we are more likely to get sound scientific results that will hold up.
Of course, it isn't all butterflies and daisies. Peer review isn't always helpful. Sometimes it can be downright obstructionist. And as a result, many folks are working very hard to find new methods that allow for the same kind of critical review of scientific work at some point in the process. In my next post, I'll discuss some of the challenges with traditional peer review and some of the things being proposed to make it better.