October 10, 2013
The discussion in the comments section of a recent post on yesterday’s Nobel Prize in Chemistry reflects one of the more unseemly strains of discourse in the chemistry blog world that I have seen in a while. The exchange was unusual, especially for a very prominent blog whose comments section is widely considered to be among the most civil in the science blogging world. The comments are rife with turf wars and bickering, made easier by the possibility of anonymity. Although they are not the worst that I have seen, they do a pretty good job showcasing how bad internal divisions among scientists can get.
Much of the rancor stems from a simple inability or refusal to really delve into and appreciate other chemists’ specialties, in this case biomolecular modeling and simulation. What’s more interesting, though, is that the comments demonstrate rather typical fallacies and problems that permeate discussion among scientists when they are talking about fields other than their own. In this particular case the field is computational chemistry, but you will find somewhat similar abuse when it comes to total synthesis or chemical biology. The comments exhibit many of the fallacies that have been part of human dialogue since we scampered down from the trees; these range from confirmation bias to motivated reasoning to pure ad hominem.
Here’s a brief list of the most common ones that I spotted in that wonderful thread:
1. Cherry picking: Definitely the most common fallacy. It usually starts with “I once had a bad experience with a model prediction….” When people criticize other fields, the failures inevitably stand out while the successes are downplayed or ignored. Anecdotal evidence of failure is held up as the general rule, and modest but important successes go unmentioned.
2. Conflating the messenger with the message (“I once knew a bad modeler…”): They say you shouldn’t shoot the messenger, but here the problem is shooting the message because of the messenger. It’s pretty clear that bad scientists don’t translate to bad science. Almost every field gets periodically hyped or abused by its practitioners, but that is no reason to stop trying to gauge what the discipline is actually about. Which leads us to the next, related point.
3. Straw men: This involves holding up a particular field, technique or theory to an unrealistic standard, and then trying to beat the hell out of it because it fails to live up to your inflated expectations. Sometimes it’s because practitioners in the field have exaggerated its utility (and this of course does happen), sometimes it’s because non-practitioners overstate it, and sometimes it’s because your own perception is skewed by occasional big successes.
In my own experience when it comes to molecular modeling, there exist two kinds of chemists: the ones who think it will bring about world peace and (more commonly) the ones who think it’s the devil’s invention. How about having more of the ones who have actually tried to understand its inner workings, its pitfalls and possibilities? How about having more chemists who want to work with modelers so that modeling can actually help them address their problems? How about understanding the proper place of modeling as a tool supplementing the armamentarium of the myriad theoretical and experimental tools used by chemists? Really, this is not a zero-sum game, and it’s certainly not “Mortal Kombat”.
You simply can’t announce that you expect a technique to accomplish X when it’s actually supposed to accomplish Y, and then kick it around when it fails to accomplish X. If you fail to properly appraise the goal and utility of any scientific method (experimental or theoretical), the fault is really yours. The classic case involves molecular dynamics (MD) simulations, which comprised only a small part of the citation for this year’s prizewinning work, yet most of the discussion revolved around them. In most cases MD simulations are meant to address specific, local problems, and I have seen my share of their successes in that regard in my own career. You simply can’t say, “I expected MD to lick protein folding, and now I am going to bash it for its failure to do so.”
4. Ad hominem: (“You’re a modeler, so by definition anything you say must be useless. What’s your day job, anyway?”). Not much to be said about this kind of comment, except that ignoring it with silent contempt (‘mokusatsu’) is the most appropriate response.
5. Considering prediction to be the only thing in science that matters: (“Any technique that merely explains must be useless”). There’s much, much more to be said about this in another post. For now let me point out that prediction, while undoubtedly very important, is no guarantee that you understand the inner workings of a system. You could predict what a magician is going to do next by watching him for a long time, but you would still be left with no understanding of how he is doing it. Science proceeds through good understanding, and prediction may or may not be the vehicle for achieving it. Anyone who thinks that good explanations are either easy to come by or merely a prelude to scientific understanding has not taken a close look at the history of science, populated as it is by thinkers like Darwin and Einstein.
As a chemist I am inclined to say that with friends like these we don’t need enemies. It’s depressing to see chemists unable to present a unified front even as they rightly bemoan the lack of public appreciation and funding for their discipline. This has to change.