The Curious Wavefunction

Musings on chemistry and the history and philosophy of science

Solomon Snyder on academic publishing: ask for adequate, not exhaustive, documentation

The views expressed are those of the author and are not necessarily those of Scientific American.

Image: Corpus Callosum

Renowned neuropharmacologist Solomon Snyder has a thought-provoking take on one of the two evils that have plagued modern academia: publishing (the other being the job market). I have previously blogged about the increasing conservatism of academic publishing, and in this case “conservatism” also translates to “excessive rigor”.

Snyder starts by lamenting the startling fact that the average American biomedical scientist now takes about as long to launch an academic career as a neurosurgeon or cardiovascular surgeon takes to begin practicing, and surgery is usually considered the top tier of the medical profession; the difference, of course, is that a cardiovascular surgeon starts making $500K right off the bat while a new assistant professor starts at $80K and almost never goes beyond $200K or so. The long trudge begins with graduate education, whose average duration has stretched over the last three decades (these days, a five-year Ph.D. is considered relatively quick). Every step of the academic process, from landing a postdoctoral position to a first job to a first grant, has turned into a war of attrition. The “winners” who emerge at the end are often demoralized academics in their early 40s whose best years may be behind them. And the situation only seems to be getting worse.

But the article is really about publishing papers. Snyder hits the nail on the head when he says that academic publishing has become so rigorous in demanding exhaustive experimentation and documentation that it dissuades many authors from publishing their best ideas – ideas that are interesting and valid but may not have been completely fleshed out. He points to reviewers’ insistence that authors perform a comprehensive set of experiments – often stretching over several months – before their manuscript qualifies for publication. Anyone who has tried to publish biomedical papers knows how tedious and demoralizing the experience can be. This drawn-out process significantly impedes the progress of science:

“Why does it take so much longer to move from test tube to the printed page? One element is a journal review process that is substantially lengthier, especially in terms of experiments required to address the concerns of referees. To anticipate such referee responses, scientists preemptively carry forward experimentation more exhaustively than is necessary to document their assertions. Yet, we can clone genes in a couple of days. Shouldn’t we be able to complete experiments to satisfy reviewers in a few weeks rather than the 7–12 months typically consumed in revision, not to mention the many years devoted to developing the original manuscript? If one spends 5 years accumulating the data for a manuscript and another year revising it to satisfy referees, benefits to the public are delayed for years.”

In contrast, Snyder points to his postdoctoral advisor, Julius Axelrod, another legendary scientist at the NIH who churned out discovery after discovery in short order and won a Nobel Prize (the Axelrod scientific dynasty is nicely charted in Robert Kanigel’s book “Apprentice to Genius”). Snyder’s point is that the reviewing process was much quicker in those days, yet the quality of the science does not seem to have suffered from the speedier turnaround. What has gone wrong since then?

Snyder partially places the blame at the feet of Cell founder Benjamin Lewin, who wanted Cell to showcase papers that were essentially complete stories, from hypothesis to final product. But Lewin also made the process highly streamlined: reviewers were warned to stay away from insults, stick to succinct criticism, and suggest adequate but not unrealistic experiments and follow-up studies. The objective was to get the best science out in a form interesting enough to spark further inquiry but not necessarily meant to be the last word.

Lewin understood the piecemeal nature of science, in which researchers build on each other’s discoveries. This understanding of the scientific process has since been subverted by academic reviewers, partly to cull a flood of proposals and ideas and partly to satisfy their own whims. Sometimes old boys’ networks conspire to put sound science in a straitjacket. Expecting every research project to tell a complete, final story not only imposes unrealistic and demotivating standards on scientists but also ignores the always incomplete and provisional nature of science. Snyder asks that the expectations for accepting papers be changed and points to recent developments like the journal eLife, which incorporates some of his thinking. Blogger SciCurious suggests her own system of peer review in which a paper is simultaneously sent to a group of journals with different standards; after hearing back from reviewers, the authors can decide whether to push ahead with further experiments to satisfy the top-tier journals or to publish the paper in a lower-tier journal right away. But Snyder’s perspective suggests that all journals – top tier or otherwise – should have a reviewing system that allows for rapid dissemination of results.

Reviewers and authors need to seriously contemplate Snyder’s recommendations. Academic research has already turned into a long slog, with an uncertain job market and a draconian grant-approval process, and does not need to face additional difficulties in the form of glacial and unrealistic reviewing standards. Let’s remember that the purpose of science is to generate ideas, not products. And it shouldn’t take very long for ideas to see the light of day.

Ashutosh Jogalekar About the Author: Ashutosh (Ash) Jogalekar is a chemist interested in the history and philosophy of science. He considers science to be a seamless and all-encompassing part of the human experience. Follow on Twitter @curiouswavefn.

Comments (4)

  1. zstansfi 4:46 pm 03/6/2013

    Completely disagree. Snyder harkens back to the days of yore when it was appropriate to publish partial results in the absence of clear confirmation. That model is no longer valid today (at least not for the influential journals). The big issue is the enormous growth in the volume of results, combined with a strong bias to ignore negative data, which makes it far too easy to publish ambiguous studies and then never actually follow up on their validity. This is how entire fields get built on conjecture and circumstantial data until someone comes along and disproves the whole idea with a well-constructed study.

    A better model might be to rely upon different streams of scientific research. Restrict the number of “independent scientists” and encourage the development of large, well-funded research groups that can more effectively study ideas from the comprehensive perspective required to produce a complete story. These are the groups that should publish the “complete picture” papers the big journals now accept.

    Small science can continue on, but in a modified format: independent researchers might be able to eke out major publications through collaborations with other researchers, but their bread and butter of partial results will need to rely upon specialized journals willing to publish incomplete stories, including both positive and negative findings.

    Frankly, this is already something of a reality today, except with two major differences: i) independent scientists who are struggling for funds haven’t accepted the reality that small science is on the decline, and ii) there is really no legitimate outlet for the partial, negative results that are so desperately needed for independent science to continue on – even if only in a limited form.

  2. curiouswavefunction 5:02 pm 03/6/2013

    I don’t quite agree. Restricting the number of individual scientists may only serve to encourage herd mentality and conventional thinking and keep out novel ideas. I would argue that big science continues to wax and wane while small science continues to quietly hum along using its limited resources. Plus the distinction is getting blurred since there are projects like the Sloan Digital Sky Survey and FoldIt where the smallest possible scientists (individual citizens) contribute to big scientific projects. I would also argue that partial results will always have their place in science since the goal of science is not to be complete but to be interesting.

  3. penn.taylor 11:07 pm 03/7/2013

    Why haven’t more fields tried the arXiv model? It allows quick and easy dissemination of research to interested parties. Articles frequently appear there 8-12 months before being published in a major journal. Many physicists now publish in major journals mainly to meet institutional requirements for tenure. A large share of research communication and “moving the ball down the field” is happening through arXiv. By the time a paper hits print in a major journal, it’s already old news in the active research community.

  4. curiouswavefunction 11:04 am 03/8/2013

    arXiv has always been a great idea. I wish someone would do something like it for chemistry, biology and medicine. I do think, however, that chemists and biologists are less open to the idea than more theoretical scientists, partly because of the commercial potential of their discoveries. I really hope they can think beyond this, though.

