October 22, 2013
When the purported discovery of the now infamous “arsenic DNA” bacteria was published, a friend of mine who was studying astrobiology could not stop praising it as an exciting scientific advance. When I expressed reservations about the discovery, based mainly on my understanding of the instability of biomolecules containing arsenic, she gushed, “But of course you will be skeptical; you are an organic chemist!”
She was right. As chemists, my colleagues and I could not help but zero in on what we thought was the most questionable aspect of the whole discovery: the fact that somehow, contrary to everything we understood about basic chemistry, the “arsenic DNA” inside the bacteria was stably chugging along, replicating and performing its regular functions.
It turned out that the chemists were right. Measurements on arsenic DNA analogs made by researchers several months later found that the arsenic analogs differed in stability from their phosphate counterparts by a mind-boggling factor of 10^17. Curiously, physicists, astronomers, geologists and even biologists were far more accommodating about the validity of the discovery. For some reason the standards used by these scientists were different from those used by chemists, and in the end the chemists’ standard turned out to be the “correct” one. This is not to score a point for chemists against the other sciences, since there could well be cases where another discipline’s standards turn out to be the correct ones for nailing down the truth or falsehood of an unprecedented scientific finding.
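To get a feel for what a factor of 10^17 means in practice, here is a back-of-the-envelope sketch. The roughly 30-million-year half-life assumed for uncatalyzed hydrolysis of a DNA phosphodiester bond is my own illustrative assumption for scale, not a figure from the article:

```python
# Back-of-the-envelope sketch of what a 1e17 stability gap implies.
# ASSUMPTION: ~30-million-year half-life for uncatalyzed hydrolysis of a
# DNA phosphodiester bond; this number is illustrative, not from the article.
SECONDS_PER_YEAR = 3.156e7

dna_half_life_s = 3.0e7 * SECONDS_PER_YEAR  # ~30 million years, in seconds
stability_ratio = 1e17                      # reported gap vs. arsenate analogs

arsenate_half_life_s = dna_half_life_s / stability_ratio
print(f"Implied arsenate-ester half-life: {arsenate_half_life_s:.3f} s")
```

On these assumptions an arsenate backbone would fall apart in about a hundredth of a second, which is precisely why the chemists balked at the idea of arsenic DNA stably replicating inside a living cell.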
The arsenic DNA fiasco thus illustrates a very interesting aspect of modern cross-disciplinary science – the need to reconcile what can be differing standards of evidence or proof between different sciences. This aspect is the focus of a short but thought-provoking piece by Steven Benner, William Bains and Sara Seager in the journal Astrobiology.
The article explains why standards of proof that were, to varying degrees, acceptable to geologists, physicists and biologists were unacceptable to chemists. The answer pertains to what we call “background knowledge”. In this case, chemists were compelled to ask how DNA with arsenic replacing phosphorus in its backbone could possibly be stable given everything they knew about the instability of arsenate esters. The latter had been studied for several decades, and while arsenic DNA itself had not been synthesized before, simpler arsenate esters were known to be highly unstable in water. The chemists were quite confident in extrapolating from these simple cases to questioning the stable existence of arsenic DNA; if arsenic DNA indeed were so stable, then almost everything they had known about arsenate esters for fifty years would have been wrong, a possibility that was highly unlikely. Thus for chemists, arsenic DNA was an extraordinary claim. And as Carl Sagan said, they needed to see extraordinary evidence before they could believe it, evidence that was ultimately not forthcoming.
For geologists, however, it was much easier to buy into the claims. That is because, as the article points out, there are several cases where elements in minerals are readily interchanged for other elements in the same column of the periodic table. Arsenic in particular is known to replace phosphorus in rocks bearing arsenate and phosphate minerals. Unlike chemists, geologists found the claim of arsenic replacing phosphorus quite consistent with their experience. Physicists too bought readily into the idea. As the authors say, physicists are generally tuned to distinguishing two hypotheses from one another; in this case, the hypothesis that DNA contains arsenic versus the hypothesis that it does not. The physicists thus took the many tests apparently indicating the presence of arsenate in the DNA as support for one hypothesis over the other. They did not appreciate that the key question to ask was about the stability of arsenic DNA.
Like chemists, biologists were also skeptical. Biologists usually check the validity of a claim for a new form of life by comparing it to existing forms. In this case, when the genetic sequence and lineage of the bacteria were inspected, they were found to be very similar to those of garden-variety, phosphate-containing bacteria. The biologists’ background knowledge thus compelled them to ask how a bacterium that was otherwise similar to existing bacteria could suddenly survive on arsenic instead of phosphorus.
In the end, of course, none of the replication studies found arsenic in the DNA of the GFAJ-1 bacteria, an outcome that was probably least surprising to chemists. The GFAJ-1 case thus shows that different sciences can have different standards for what they regard as “evidence”. What may be suitable for one field may be controversial or unacceptable for others. This fact helps answer at least one question about the GFAJ-1 paper: Why was it accepted by a prestigious journal like Science? The answer almost certainly concerns the shuttling of the manuscript to planetary scientists rather than chemists or biologists as reviewers. These scientists had different standards of evidence, and they enthusiastically recommended publication. One of the key lessons here is that any paper on a cross-disciplinary topic should be sent to at least one specialist from each discipline comprising the field. Highly interdisciplinary fields like astrobiology, drug discovery, and social psychology are prime candidates for this kind of policy.
Discipline-dependent standards of proof not only explain how bad science occasionally gets published or how promising results get rejected; they also bear on the deeper issue of what in fact constitutes “proof” in science. This question reminds me of the periodic debates about whether psychology or economics is a science. The fact is that many times the standard of proof in psychology or economics might be unacceptable to a physicist or statistician. As a simple example, it is often impossible to get correlations of better than 0.6 in a psychological experiment. And yet such standards can be accepted as proof in the psychological community, partly because an experiment on human beings is too complex to yield more accurate numbers; after all, most human beings are not inclined planes or balls dropped from a tower. In addition, one may not always need accurate correlations to discern valuable trends and patterns. Statistical significance may not always track real-world significance (researchers running clinical trials would be especially aware of this fact).
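The gap between statistical and real-world significance is easy to demonstrate: with enough subjects, even a trivially small correlation becomes overwhelmingly “significant”. A minimal sketch, where the sample size and effect size are made up purely for illustration:

```python
import numpy as np
from math import erfc, log, sqrt

rng = np.random.default_rng(0)
n = 100_000                        # a huge (hypothetical) sample
x = rng.normal(size=n)
y = 0.03 * x + rng.normal(size=n)  # true correlation is only ~0.03

r = float(np.corrcoef(x, y)[0, 1])
# p-value for H0: rho = 0, via the Fisher z-transform (normal approximation)
z = 0.5 * log((1 + r) / (1 - r)) * sqrt(n - 3)
p = erfc(abs(z) / sqrt(2))

print(f"r = {r:.3f}, p = {p:.2e}")
```

The p-value here is vanishingly small even though the correlation itself is negligible for any practical purpose, which is exactly why a physicist’s or statistician’s threshold for “proof” cannot simply be transplanted into another field.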
The article by Benner, Bains and Seager concludes by asking how conflicting standards of proof can be reconciled in highly cross-disciplinary sciences, a question that will only grow more important in an age of inherently cross-disciplinary research.
I think the GFAJ-1 fiasco itself provides one answer. In that case the most “obvious” objection was raised by chemists based on decades of experience. In addition it was a “strong” objection in the sense that it raised the stakes for their own discipline; as noted before, if arsenic DNA existed then much of what chemists know about elementary chemical reactivity would have to be revised. In that sense it was the kind of falsifiable, make-or-break test advocated by Karl Popper. So one cogent strategy might be to first consider these strong, obvious objections, no matter what discipline they arise from. If a finding passes the test of these strong objections, it could then be subjected to the less obvious and more relaxed criteria provided by other disciplines. If it passes every single criterion across the board, then we might actually be able to claim a novel discovery, of the kind that rarely comes along and advances an entire field.