The Curious Wavefunction


Musings on chemistry and the history and philosophy of science

The varieties of scientific bashing




The discussion in the comments section of a recent post on yesterday’s Nobel Prize in Chemistry reflects one of the more unseemly strains of discourse in the chemistry blog world that I have seen in a while. The exchange was unusual, especially for a very prominent blog whose comments section is widely considered to be among the most civil in the science blogging world. The comments are rife with turf wars and bickering, made easier by the possibility of anonymity. Although they are not the worst that I have seen, they do a pretty good job of showcasing how bad internal divisions among scientists can get.

Much of the rancor stems from a simple inability or refusal to really delve into and appreciate other chemists’ specialties, in this case biomolecular modeling and simulation. What’s more interesting, though, is that the comments demonstrate rather typical fallacies and problems that permeate discussion among scientists when they are talking about fields other than their own. In this particular case the field is computational chemistry, but you will find somewhat similar abuse when it comes to total synthesis or chemical biology. The comments demonstrate many of the common fallacies that have been part of human dialogue since we scampered down from the trees; these range from confirmation bias to motivated reasoning to pure ad hominem.

Here’s a brief list of the most common ones that I spotted in that wonderful thread:

1. Cherry picking: Definitely the most common fallacy. It usually starts with “I once had a bad experience with a model prediction….” When people are criticizing other fields, the failures inevitably stand out and the successes are ignored or downplayed. Anecdotal evidence of failure is held up as the general rule, while minor but important successes are the ones most readily ignored.

2. Conflating the messenger with the message (“I once knew a bad modeler…”): They say you shouldn’t shoot the messenger, but here the problem is with shooting the message. It’s pretty clear that bad scientists don’t translate to bad science. Almost every field gets periodically hyped or abused by its practitioners, but that is no reason to stop trying to gauge what the discipline is actually about. Which leads us to the next related point.

3. Straw men: This involves holding up a particular field, technique or theory to an unrealistic standard and then trying to beat the hell out of it when it fails to meet your inflated expectations. Sometimes it’s because practitioners in the field have exaggerated its utility (and this of course does happen), sometimes it’s because non-practitioners overstate it, and sometimes it’s because your own perception is skewed by occasional big successes.

In my own experience when it comes to molecular modeling, there exist two kinds of chemists: the ones who think it will bring about world peace and (more commonly) the ones who think it’s the devil’s invention. How about having more of the ones who have actually tried to understand its inner workings, its pitfalls and possibilities? How about having more chemists who want to work with modelers so that modeling can actually help them address their problems? How about understanding the proper place of modeling as one tool supplementing the armamentarium of theoretical and experimental tools used by chemists? Really, this is not a zero-sum game and it’s certainly not “Mortal Kombat”.

You simply can’t announce that you expect a technique to accomplish X when it’s actually supposed to accomplish Y, and then kick it around when it fails to accomplish X. If you fail to properly appraise the goal and utility of any scientific method (experimental or theoretical), that’s really your fault. The classic case involves molecular dynamics (MD) simulations, which comprised only a small part of the citation for this year’s prizewinning work and yet dominated most of the discussion. In most cases MD simulations are meant to address specific, local problems of interest, and I have seen my share of such successes in my own career. You simply can’t say “I expected MD to lick protein folding, and now I am going to bash it for its failure to do so”.

4. Ad hominem: (“You’re a modeler, so by definition anything you say must be useless. What’s your day job, anyway?”). Not much needs to be said about this kind of comment, except that ignoring it with silent contempt (‘mokusatsu’) is the most appropriate response.

5. Considering prediction to be the only thing in science that matters: (“Any technique that merely explains must be useless”). There’s much, much more to be said about this in another post. For now let me point out that prediction, while undoubtedly very important, is no guarantee that you understand the inner workings of a system. You could predict what a magician is going to do next by watching him for a long time, but you would still be left with no understanding of how he does it. Science proceeds through good understanding, and prediction may or may not be a perfect vehicle for achieving it. Anyone who thinks that good explanations are either easy or merely a prelude to scientific understanding has not taken a close look at the history of science, populated as it is by thinkers like Darwin and Einstein.

As a chemist I am inclined to say that with friends like these we don’t need enemies, and it’s depressing to see chemists themselves unable to present a unified front even as they rightly bemoan the lack of public appreciation and funding for their discipline. This has to change.

About the Author: Ashutosh (Ash) Jogalekar is a chemist interested in the history and philosophy of science. He considers science to be a seamless and all-encompassing part of the human experience. Follow on Twitter @curiouswavefn.

The views expressed are those of the author and are not necessarily those of Scientific American.






Comments (5)

  1. larkalt 1:30 pm 10/10/2013

    In which it is demonstrated that nasty online places are not due to the fact that uneducated people can join – but rather to the medium (anonymity, drive-by postings, etc.)

  2. Chryses 10:11 pm 10/12/2013

    “This has to change.”

    Such behavior will not change until after human beings change.

  3. bucketofsquid 1:04 pm 10/14/2013

    Moderation of posts by the blog owner can have quite an impact. Too bad that rarely happens.

  4. Ali TT 6:53 am 10/15/2013

    It’s not just online that computational chemistry gets a kicking at times. I was at an AGM of a chemical organisation a couple of years back. Each year the recipient of a PhD scholarship from that organisation is invited to give a talk. It happened that this year, the talk was on computational chemistry. And not just any computational chemistry, but method development. The speaker made it clear that he was a long way from actually finalising and testing the code, being as he was only halfway through his PhD. At the end, one of the aged audience members got up and stated that as the speaker was not expecting to generate any “new” results during his studies, then his University would be dishonest in awarding him a PhD!

  5. curiouswavefunction 8:04 am 10/31/2013

    #4: People seem to forget that science is an incremental process.

