Guest Blog

Commentary invited by editors of Scientific American

The Replication Myth: Shedding Light on One of Science’s Dirty Little Secrets

The views expressed are those of the author and are not necessarily those of Scientific American.





Credit: Creative Commons/Jared Horvath

In a series of recent articles published in The Economist (“Unreliable Research: Trouble at the Lab” and “Problems with Scientific Research: How Science Goes Wrong”), the authors warned of a growing trend of unreliable scientific research. These authors (and certainly many scientists) view this pattern as a detrimental byproduct of the cutthroat ‘publish-or-perish’ world of contemporary science.

In actuality, unreliable research and irreproducible data have been the status quo since the inception of modern science. Far from being ruinous, this unique feature of research is integral to the evolution of science.

At the turn of the 17th century, Galileo rolled a brass ball down a wooden board and concluded that the acceleration he observed confirmed his law of falling bodies. Several years later, Marin Mersenne attempted the same experiment and failed to achieve similar precision, leading him to suspect that Galileo had fabricated his experiment.

Early in the 19th century, after mixing oxygen with nitrogen, John Dalton concluded that the combinatorial ratio of the elements confirmed his law of multiple proportions. Over a century later, J. R. Partington tried to replicate the test and concluded that “…it is almost impossible to get these simple ratios in mixing nitric oxide and air over water.”

At the beginning of the 20th century, Robert Millikan suspended drops of oil in an electric field, concluding that the electron carries a single, discrete charge. Shortly afterwards, Felix Ehrenhaft attempted the same experiment and not only failed to arrive at an identical value, but also observed enough variability to support his own theory of fractional charges.

Other scientific luminaries have similar stories, including Mendel, Darwin and Einstein. Irreproducibility is not a novel scientific reality. As noted by contemporary journalists William Broad and Nicholas Wade, “If even history’s most successful scientists resort to misrepresenting their findings in various ways, how extensive may have been the deceits of those whose work is now rightly forgotten?”

There is a larger lesson to be gleaned from this brief history.  If replication were the gold standard of scientific progress, we would still be banging our heads against our benches trying to arrive at the precise values that Galileo reported.  Clearly this isn’t the case.

The 1980s saw a major upswing in the use of nitrates to treat cardiovascular conditions. With prolonged use, however, many patients develop a nitrate tolerance. With this in mind, a group of drug developers at Pfizer set about creating sildenafil, a pill intended to deliver therapeutic benefits similar to nitrates without the declining efficacy. Despite early promise, a number of unanticipated drug interactions and side effects, including penile erections, caused doctors to shelve sildenafil as a cardiovascular treatment. Instead, the drug was re-trialed, re-packaged and re-named Viagra. The rest is history.

This tale illustrates the true path by which science evolves. Although sildenafil failed at its original purpose, the results generated during its development were still wholly useful and applicable to several different lines of scientific work. Even if the initial researchers had massaged their data into publishable results that later proved irreproducible, this would not have changed the utility of a subset of their results for the field of male potency.

Many are taught that science moves forward in discrete, cumulative steps; that truth builds upon truth as the tapestry of the universe slowly unfolds. Under this ideal, when scientific intentions (hypotheses) fail to manifest, scientists must tinker until their work is replicable everywhere at any time. In other words, results that aren’t valid are useless.

In reality, science progresses in subtle degrees, half-truths and chance. An article that is 100 percent valid has never been published. While direct replication may be a myth, there may be useful information or bits of data among the noise. It is these bits of data that allow science to evolve. In order for that utility to emerge, we must accept the publication of imperfect and potentially fruitless data. If scientists were to maintain the ideal, the small percentage of useful data would never emerge; we’d all be waiting to achieve perfection before reporting our work.

This is why Galileo, Dalton and Millikan are held aloft as scientific paragons, despite strong evidence that their results are irreproducible. Each of these researchers presented novel methodologies, ideas and theories that led to the generation of many useful questions, concepts and hypotheses. Their work, though ultimately invalid, proved useful.

Doesn’t this state of affairs lead to dead ends, misused time and wasted money? Absolutely. It is here, I believe, that the majority of current frustration and anger resides. However, it is important to remember two things: first, nowhere is it written that all science can and must succeed. It is only through failure that the limits of utility can be determined. And second, if the history of science has taught us anything, it is that with enough time all scientific wells run dry. Whether due to the achievement of absolute theoretical completion (a myth) or, more likely, the evolution of more useful theories, all concepts will reach a scientific end.

Two reasons are typically given for not openly discussing the true nature of scientific progress and the importance of publishing data that may not be perfectly replicable: public faith and funding. Perhaps these fears are justified. It is possible that public faith will dwindle if it becomes common knowledge that scientists are too often incorrect and that science evolves through a morass of noise. However, it is equally possible that public faith will decline each time this little secret leaks out in the popular press. It is possible that funding would dry up if, in our grant proposals, we openly acknowledged the large chance of failure and replaced gratuitous theories with simple unknowns. However, it is equally possible that funding will diminish each time a researcher fails to deliver on grandiose (and ultimately unjustified) claims of efficacy and translatability.

Many of my colleagues worry that honesty and full disclosure will tarnish the reputation of science.  I fear, however, that dishonesty will accomplish this much faster.  In the end, we must trust that the public and granting bodies can handle the truth of our day-to-day reality.  The story of legitimate science may not live up to the ideal—but at least it is the truth. Isn’t that what science purports to be all about?

Jared Horvath About the Author: Jared Cooney Horvath is a PhD candidate in cognitive neuroscience at the University of Melbourne and co-president of The Education Neuroscience Initiative (TENI).







Comments (42)

  1. David Cummings 4:05 pm 12/4/2013

    Excellent analysis. It reminds me of Stephen J Gould writing that “scientists as disinterested collectors of pure fact” is a myth.

  2. SigmaEyes 6:58 pm 12/4/2013

    I can accept that scientific achievements can be challenged successfully and challenged unsuccessfully; that even when wrong, unintended consequences can be useful; that even unsuccessful science can establish or define what is a dead end in a useful way.

    What made me feel sullen, though, as I read through this article, was the statement,

    “if the history of science has taught us anything, it is that with enough time all scientific wells run dry. Whether due to the achievement of absolute theoretical completion (a myth) or, more likely, the evolution of more useful theories, all concepts will reach a scientific end.”

    This is disheartening to someone who believes in science. Admittedly, perhaps I could be accused of putting my “faith” in science. To think that each scientific avenue will eventually end shakes that faith to the point where I mentally try to reject it as a premise.

    I am curious whether others think this is true of all of science, or whether others believe, as I do, that scientific discovery usually produces many new questions even when successful in the original objective; that science can be thought of as expanding, ever-widening roads, not as dead-end roads.

  3. Cooney5431 7:31 pm 12/4/2013

    SigmaEyes: I think one of the major reasons scientific endeavors come to an end is exactly the reason you describe: expansion and evolution.

    One way to imagine it is not as dead-end roads, but as 4-way stops. Rather than roads absolutely ending, they spawn multiple additional avenues – each made possible only because the original road existed and was pushed to its limits.

  4. howwhyifthen 1:16 am 12/5/2013

    Thanks Jared for this great piece. Now you have to make sure your students (including me) don’t get too disillusioned. Duane Merchant.

  5. Bee 4:51 am 12/5/2013

    We live in the 21st century, not in the 17th. Science today is an organized community enterprise. Sloppy studies and non-reproducible but published findings are plainly an inefficient use of time, human and financial resources. Your historical argument fails to take into account that science has, as a matter of fact, changed dramatically. It is much more connected today, it moves faster, there are more people. Due to increasing specialization and technological advances, it is considerably harder today to check the results of others. We necessarily have to rely much more on each other. You should reconsider your attitude; it’s not helpful.

    The main problem here isn’t so much one of publish or perish, it’s a lack of commitment. It is by far easier to get funding for little crappy studies than for decent large ones. Unfortunately, a dozen little crappy studies will never replace a decent large one. I fault funding agencies for that, not the researchers.

  6. hatemnajdi 7:02 am 12/5/2013

    “if the history of science has taught us anything, it is that with enough time all scientific wells run dry. Whether due to the achievement of absolute theoretical completion (a myth) or, more likely, the evolution of more useful theories, all concepts will reach a scientific end.”
    Does this apply to Heisenberg Uncertainty and Gödel Incompleteness?

  7. Chryses 7:12 am 12/5/2013

    hatemnajdi (6),

    All Science concepts are equal, but some are more equal than others.

  8. hatemnajdi 7:16 am 12/5/2013

    “…. all concepts will reach a scientific end.”
    Is it an end or a dead end? Max Planck once said: “Science cannot solve the ultimate mystery of nature. And that is because, in the last analysis, we ourselves are a part of the mystery that we are trying to solve.”

  9. jtdwyer 8:01 am 12/5/2013

    My impression was that the discussion regarding the irreproducibility of research results was primarily directed at the fields of social sciences and psychology. Have the clever psychiatrists redirected the discussion towards all fields of science? How does that make you feel?
    <%)

  10. Jerzy v. 3.0. 8:50 am 12/5/2013

    @jtdwyer
    Certainly not. Cancer research suffers the same irreproducibility problem.

    Given how many people die of cancer, the author should think harder about his “don’t worry, it will all be resolved sometime, after years or decades” attitude.

  11. Shoshin 10:36 am 12/5/2013

    As one of my research advisors told me, “No one expects you to be right, we just don’t want to see any hanky panky.” Nothing wrong if you are wrong. Lots wrong if you always claim to be right, but no one gets the same results.

    Another one told me “Unless you publish all of your raw data, your methodology and your computer code, your publications are infomercials, not science”.

    These two understood science. They have stood as my yardsticks in how I evaluate the veracity of scientific claims. For example, the “97% consensus” struck me immediately as being more akin to old-school communist “elections”, which boasted “97% turnout” and “97% support” for the candidate. What utter garbage. Not even that many people would claim to like apple pie in a properly constructed survey.

    I’m amazed that people who claim to be scientists do not get these two basic principles of BS detection.

  12. JPGumby 10:54 am 12/5/2013

    Shoshin, I also always heard “data without a really good thorough assessment of error bounds is meaningless”, something I also think is often overlooked today.

    BTW, I think good science never dies, it just turns practical.

  13. rloldershaw 11:02 am 12/5/2013

    One wants to avoid the use of absolutes in science.

    You say “… all concepts will reach a scientific end.”

    But do you think that we will one day discard the concept that matter is composed of atoms, or that on larger scales matter is lumped into galaxies?

    Perhaps it would be better to say that many concepts will be replaced by improved conceptual frameworks, but not all. Some conceptual principles seem to have an open-ended shelf-life.

    Robert L. Oldershaw
    http://www3.amherst.edu/~rloldershaw
    Discrete Scale Relativity/Fractal Cosmology

  14. evosburgh 11:06 am 12/5/2013

    11. Shoshin: you and your advisors have summed up my beliefs about the nature of science and reporting the findings thereof. I was given very similar advice and when I published my thesis I placed all of my raw data in the Appendix so that if someone were willing to pick up where I left off they could confirm what I had done and then move forward.

    I do agree that not all scientific endeavors end as intended, and sometimes the unexpected results we get lead us down other paths.

    My single biggest complaint is when a statement is made that the science is settled and anyone who questions that statement must be a crackpot. There is very little that is certain in science, and if the people presenting the results cannot present the appropriate uncertainty associated with their results, they are not being completely forthright or they do not know any better.

    On that line of thought: if I produce an analysis with a deterministic result along with a range of probable outcomes, which is then tested, and we get a result that is outside of that range, then I have failed. However, if we get a result which is within that range then I did not fail (no matter whether it is the P50, P99, P1 or anywhere in between). However, if my deterministic result greatly diverges from the actual result then I have probably done something wrong and I need to go back and revisit my methodologies. In a perfect world my deterministic result, P50 and the actual result would all coincide within the measurement error of the tools used to collect the data. In real life this is rarely the case, which is why we do probabilistic analyses. This is a cover-your-behind technique to convey that we cannot ever get THE ANSWER when it comes to scientific endeavors, and that there are uncertainties that we cannot resolve no matter how hard we try.
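    A minimal sketch of the P1/P50/P99 idea above. Everything here is an invented toy example: the two input distributions, the measured value and the function names are assumptions for illustration only, not any real analysis.

```python
import random

random.seed(42)  # reproducible illustration

def simulate_outcomes(n_trials=100_000):
    """Toy Monte Carlo: the predicted quantity is the sum of two uncertain
    inputs; both distributions here are invented for illustration."""
    return sorted(random.gauss(50, 5) + random.gauss(25, 10)
                  for _ in range(n_trials))

def percentile(sorted_vals, p):
    """Nearest-rank percentile of an already-sorted list (p in [0, 100])."""
    k = round(p / 100 * (len(sorted_vals) - 1))
    return sorted_vals[min(max(k, 0), len(sorted_vals) - 1)]

outcomes = simulate_outcomes()
p1, p50, p99 = (percentile(outcomes, p) for p in (1, 50, 99))

measured = 81.0  # a hypothetical lab measurement
if p1 <= measured <= p99:
    print(f"measured {measured} is inside P1..P99 "
          f"[{p1:.1f}, {p99:.1f}]; P50 = {p50:.1f}")
else:
    print("measured value is outside the predicted range: revisit methodology")
```

    The check mirrors the rule in the comment: a measurement inside the P1..P99 band is consistent with the probabilistic analysis even when it sits well away from the deterministic (P50) estimate.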

  15. Jerzy v. 3.0. 11:24 am 12/5/2013

    11. Shoshin
    Scientific journals, including the most prestigious ones like Nature and Science, don’t publish full raw data and methods. Yes, these top journals break the classical principles of science.

    It probably started when journals wanted to squeeze many papers into a limited number of printed pages. So they referred to earlier papers and to websites, left off the supposedly obvious bits of protocols, etc. Now that online supporting material of almost unlimited size is possible, this is inexcusable.

  16. jtdwyer 11:54 am 12/5/2013

    Jerzy v. 3.0.,
    I forgot to mention – my initial comment was just intended to be a little joke…
    <%)

  17. Jerzy v. 3.0. 2:04 pm 12/5/2013

    @16
    I like a good joke as much as anybody. But here, hundreds of millions in research grants were wasted. Hundreds of young scientists broke their careers trying to follow cutting-edge discoveries that did not exist. Companies trying to develop drugs found that the basic research was wrong.

    Tens of thousands of people die every year waiting for new drugs. That science “will resolve matters over the years” will be too late for them. It is easy to forget this when looking from the perspective of 19th-century physicists, for whom a decade or two didn’t matter.

  18. come.together.seth 3:09 pm 12/5/2013

    @Jerzy
    I think you’re missing the point: there are two types of problems: (1) bad science, where the investigator misleads the field about what they have performed, and (2) reproducibility of results. What Mr. Horvath is suggesting is that (2) is not really that bad, as long as one accurately reports one’s findings (i.e. does not commit 1).

    One research finding will not immediately lead to a novel cancer therapeutic. Before a drug goes into a clinical trial, there are mounds of evidence that point to the drug’s mechanisms of action and potential safety. It is not “it will all be resolved sometime, so let’s sing Kumbaya and love science,” but rather the nature of science. Exact reproducibility, as Mr. Horvath suggests, is a fool’s errand. Convergence of evidence from multiple lines of investigation, however, is how science comes to reveal truths.

    So, “chasing” a single published result is not in ANY researcher’s best interest, but rather, skillful experimental design to attack similar questions in novel ways is. It is not wasted time, it is gaining valuable knowledge.

  19. jtdwyer 5:20 pm 12/5/2013

    Jerzy v. 3.0.,
    That’s fine, but you took exception to my assertion that the reproducibility issue was first identified in the fields of psychology and sociology – an assertion supported by the references provided in comment #16.

  20. Cooney5431 5:50 pm 12/5/2013

    Bee: I agree that the current atmosphere of professional science leads to a lot of complacency – which, in turn, leads to a lot of unverifiable research.

    However, the fact that irreproducibility existed before contemporary professional science suggests, at least in part, that there’s something deeper in this issue worth exploring.

    In my opinion, irreproducibility seems to be integral to science – yet science continues to successfully evolve. This suggests reproducibility (on the exact scale) may not be the important arbiter many claim it to be.

    Per things coming to an “end” – it is worth noting I do not mean a “dead end”. Rather, scientific paths seem to end when they evolve into several “new” lines of thinking. For instance, we all still agree on atomism – but the study of atoms evolved into the study of subatomic particles, then into the study of quarks. I think an interesting question (with very interesting ramifications) is whether this evolution will itself ever end or continue ad infinitum.

  21. Shoshin 6:00 pm 12/5/2013

    True, sometimes practical publication limitations preclude full publication of raw data and methodology, and that is fine as long as the researcher freely communicates and shares the data and methodology on request.

    However, when researchers refuse to share data, claim the data are lost but claim the results are valid anyway, or hire lawyers to fight Freedom of Information Act requests, one can be certain that the research is fatally flawed and it is very safe to ignore the data and subsequent claims.

  22. rloldershaw 6:24 pm 12/5/2013

    Of course then there are white holes, extra-dimensions, Boltzmann brains, the multicurse, anthropic pretzel logic, strings, branes, axions, …

    Perhaps papers on these topics should come with a disclaimer stamped prominently in red at the top: PURE SPECULATION (handle with tongs).

  23. come.together.seth 9:23 pm 12/5/2013

    “This suggests reproducibility (on the exact scale) may not be the important arbiter many claim it to be.”

    I think when non-scientists learn of science, they expect that experiments will work out the same way every time, because that is what we are taught by doing 4th grade science experiments… baking soda and vinegar will always make carbon dioxide erupt from volcanoes. In this form, we are taught that science is a recipe for baking.

    But in academic science, very little is learned from repeating an experiment: the findings have already been established in the eyes of the field (which, as Mr. Horvath points out, may not actually be fully established). So one builds upon the previous experiment: if it is not possible to reproduce the result, it is not sufficient to just report that failure to reproduce; a scientist must investigate the reasons why the result won’t reproduce. The scientist usually uncovers why the original result was found, and why it was flawed. In this way, reproducibility is not a one-to-one repeat, but an exhaustive search of the underlying rules governing the field. Science has a way of righting itself, because multiple investigators will approach the same problem from many different angles and methods. When there is a convergence, then truth may be established.

    In this way, academic science is like a chef cooking a meal, tasting as one goes, making mistakes that are sometimes delicious and sometimes inedible. Having burned one chicken dish, a chef doesn’t just quit cooking chicken forever.

  24. Cooney5431 11:16 pm 12/5/2013

    Sorry to keep commenting, but I thought of another interesting thing with regard to replication:

    What happens when replication goes bad? Take, for example, N-rays. Although the rays were non-existent, literally hundreds of scientists replicated and reported this form of radiation. Or cold fusion – in the brief time people thought this was possible, a number of international journals carried research corroborating Pons & Fleischmann’s work.

    It may seem tangential – but I think this speaks to the broader concept of replicability not dictating the evolution of scientific thought.

  25. Heteromeles 12:22 am 12/6/2013

    I’d like to point out that, in many fields, precise replication of older experiments is frowned upon. It’s been done, there are limited resources available, and if you get it right, you’ve simply proved the first guy right. If you get it wrong, perhaps it’s your screwup. Who wins from making this mess?

    Replication happens in areas where there is a lot of money (e.g. medicine), where there is a lot of controversy (e.g. climate science), and most often, when failure to replicate an older experiment leads to something new and interesting.

  26. Jerzy v. 3.0. 6:21 am 12/6/2013

    @18
    I think both principles are violated regularly in current science, and it is very damaging to society.

    “So, “chasing” a single published result is not in ANY researcher’s best interest…” – in practice, there is no other way.

    Often a big experiment in molecular biology needs over a year of difficult setup and design (say, breeding mice with some disease, or establishing a protocol to grow a particular type of human cells in a dish), easily takes over a year to perform, costs tens of thousands of dollars in equipment and chemicals, and consumes a substantial part of a Ph.D. or a postdoctoral stipend.

    So every wrong result that prompts several or more such follow-up experiments is a big waste of time and money. For practical reasons, the scientific community must reduce such events as much as possible.

    The question related to (1): which are conscious fraud, and which are artifacts, coincidences, well-meaning over-optimism, or cases of people forced to skip repeated validation because of time and monetary constraints? It is usually impossible to tell. I wish to believe that few scientists are conscious fraudsters.

    The fact is that two big pharma companies reported that they could not reproduce most published landmark experiments in cancer biology. I think even the most cynical person will not say that the majority of scientists are fraudsters. So most errors must come from artifacts, badly described methods etc. But practically the damage is the same.

  27. Jerzy v. 3.0. 6:44 am 12/6/2013

    @jtdwyer
    Yes, perhaps it was first reported in social sciences. However, it is present in most disciplines of science.

    I talk about molecular biology because the stakes are very high and the constraints very tight. And there is a moral obligation to help patients with new therapies.

  28. come.together.seth 9:11 am 12/6/2013

    @Jerzy
    Publication bias will always lead to results that won’t replicate. Inherent in statistics is the Type I error that cannot be avoided.

    The fear should be that in only publishing highly reproducible results, that we limit our scientific understanding. I’m not arguing that we shouldn’t improve publication standards, but science self-corrects for sloppy researchers (i.e. they lose funding).

    Making publication harder should not be a goal of science – it will increase the incentive to get the one result that everybody wants to see, but that isn’t true.

    In cancer, I’d be more concerned that there probably are a number of labs that similarly fail to replicate results, but leave those findings as unresolved – only by increasing communication between researchers will these incorrect findings be identified. All results should be published, so that investigators can look at the corpus of published work, not just single high-impact papers.
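    A toy simulation makes the Type I error point above concrete. All numbers here are illustrative assumptions, not taken from any real study: when the true effect is zero, roughly 5 percent of studies will still clear p < 0.05, and an exact re-run of any such “finding” will again come up significant only about 5 percent of the time.

```python
import random

random.seed(0)  # reproducible illustration

N = 30         # measurements per 'study' (illustrative choice)
Z_CRIT = 1.96  # two-sided 5% significance threshold

def study_is_significant():
    """One study of a population with NO real effect: draw N values from
    N(0, 1) and call the result 'significant' if the z-statistic of the
    sample mean exceeds the threshold (i.e. p < 0.05)."""
    sample_mean = sum(random.gauss(0, 1) for _ in range(N)) / N
    z = sample_mean * N ** 0.5  # standard error of the mean is 1/sqrt(N)
    return abs(z) > Z_CRIT

n_studies = 20_000
false_positives = sum(study_is_significant() for _ in range(n_studies))
# Exact re-runs of each 'finding' (the underlying effect is still zero):
replications = sum(study_is_significant() for _ in range(false_positives))

print(f"null studies reaching 'significance': {false_positives / n_studies:.1%}")
print(f"re-runs that are again significant:   {replications / false_positives:.1%}")
```

    Both printed rates hover around the nominal 5 percent, which is why a lone p < 0.05 result, published through the filter of significance, is so unlikely to survive an exact re-run when the underlying effect is absent.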

  29. jh443 9:22 am 12/6/2013

    While reading this article, I couldn’t help but think of the movie “A Few Good Men,” and what Jack Nicholson’s character said about the operations at Gitmo – and how his statement is likely to apply to the public’s reaction to the “scientific method” described in the article: “You can’t HANDLE the truth!”

  30. Bee 12:23 pm 12/6/2013

    @Cooney5431: You’re jumping to conclusions. Just because something has been practiced in the past doesn’t mean it’s good, and certainly not that it is necessary.

    What you should ask is not whether there are some cases in which irreproducibility was part of somebody’s hunch that eventually turned out to be correct and entered history, but how many irreproducible ‘hunches’ there have been in the history of science that just wasted time – a waste that would have been avoidable.

    I am vaguely thinking of writing a piece about this on my blog if I find time on the weekend, in which case I’ll certainly link to this piece.

  31. macgupta 1:55 pm 12/7/2013

    Aha! Galileo apparently had the wrong value for the acceleration due to gravity, but not the wrong law of accelerated motion.

    http://home.thep.lu.se/~henrik/fyta13/litteratur/Koyre1953.pdf

  32. CS Shelton 12:08 am 12/8/2013

    nonsense-by-famous-smart-person-quote@8-

    What if there are answers about humans, and those answers are predictably material, soulless, and boring? That’s how I feel about the mystery of the self. Meatsacks gonna meatsack. If that is the answer, it seems we’re well on the way to reaching it, and it’s not going to thrill when we get there.

    As to the original article, I join a few other commenters in feeling replicability as a standard shouldn’t be so easily dismissed. Is it useful, when used in conjunction with other standards and practices? If yes, then what’s the problem? Are we attacking a straw scientist who advocates that as the end-all be-all, or did someone at The Economist actually say that?

    It is key in weeding out some serious bullshit, you have to admit. Well… I suppose you aren’t obligated to make any sense at all, if you don’t want to.

  33. Cooney5431 7:16 am 12/8/2013

    Cs Shelton: LOL – straw-scientist, I like that.

    Clearly reproducibility, used in concert with other methods, is useful. I don’t think that’s the point being made. Rather, what is being discussed is whether reproducibility serves as a make-or-break tenet of successful scientific evolution.

    It seems perfect reproducibility is an ideal – a great one which doubtless inspires much work – but it is not as solid a backbone as many believe (or as we were taught).

    Please don’t misinterpret this story as a call to “sack reproducibility” in science – that is swinging the pendulum much too far. Rather, see this as simply contextualizing the concept in order to place it in a more honest light (and one certainly more reflective of our day-to-day experience of it). That science continues to work with a less-than-stellar track record of pure replication suggests, perhaps, we need not hold this concept as the apex of our profession (or, at least, we need not publicize it as such).

  34. Jerzy v. 3.0. 8:24 am 12/8/2013

    @32
    “replicability as a standard shouldn’t be so easily dismissed. Is it useful, when used in conjunction with other standards and practices?”

    Reproducibility is the key to science. Otherwise you end up with an airplane which works in only 75% of cases.

    @28
    “In cancer, I’d be more concerned that there probably are a number of labs that similarly fail to replicate results, but leave those findings as unresolved – only by increasing communication between researchers will these incorrect findings be identified. ”

    There are usually several or a dozen such labs!

    Unfortunately, voluntary communication is unlikely in the current system. Researchers are pitted against one another, competing tooth and nail for grants and very scarce tenure positions. They are unlikely to help rival labs.

    Changes needed perhaps include encouraging the publication of negative results (e.g. in online journals), forcing labs to disclose all previous experiments (as pharma companies are forced to do), and forcing exhaustive description of materials and methods.

    And changing the mad science system, of course.

  35. Heteromeles 11:31 am 12/8/2013

    I happen to agree with the need for publishing more negative results. I had a case like that with my PhD, where an observational study and two experiments all came up with the same (surprising) negative results. The details don’t matter, but it was an informative negative, because no one had expected it. Still, I faced a lot of friction in getting it published.

    Negative results do have problems: after all, they can result from simple technical errors. Unfortunately, positive results can also result from technical errors, and therein lies a bias that researchers do need to think about.

  36. come.together.seth 9:55 am 12/10/2013

    “Unfortunately, voluntary communication is unlikely in the current system. Researchers are pitted to compete tooth-and-nail for grants and very scarce tenure positions. They are unlikely to help rival labs.”

    On one level, yes, the competition is fierce. However, in many fields, attendance at a conference brings out this information: usually from offhand comments about failures to find the same result, either from the PI or from the grad student/postdoc. Secondly, scientists love to critique, and what better critique than to say that someone else’s work can’t be reproduced. In my experience, more information is passed around the posters than appears on the posters themselves.

    If you look at it from a grant perspective, by applying for a grant on a failed procedure, you can bet there will be a reviewer who knows of such a failure. It makes little sense to hold onto failures to reproduce as a ‘trade secret.’

  37. Andreas Johansson 4:10 pm 12/10/2013

    Jerzy wrote:
    “Reproducibility is the key to science. Otherwise you end up with an airplane which works in only 75% of cases.”

    I dunno if that’s a good example. We do aim, of course, for more than 75% reliability in aircraft design, but we don’t aim for 100% reliability (which is a practical impossibility and, more importantly, prohibitively expensive).

  38. Dr. Strangelove 9:49 pm 12/10/2013

    Cooney
    Reproducibility was never a requirement of science. It is required in experimental sciences like physics and chemistry but not in biology, geology, astronomy, anthropology, economics, sociology, etc. You cannot replicate the big bang, evolution of species, formation of mountains, history, ancient culture, etc.

    The myth is all of science is experimental physical science. Truth is a lot of it is theory that exists only in the mind. Take multiverses and superstrings. Is that science? IMO it’s more mathematics than science.

  39. Dr. Strangelove 10:23 pm 12/10/2013

    BTW in experimental science, it is not reproducibility that is key. Failure to replicate is the key. Why did it fail? If due to experimental error or statistical variability, other experiments will confirm it. Galileo et al.’s success is not because others replicated their experiments. It’s because more careful experiments did not fail to replicate them. The theory has not been falsified.

    Einstein said no amount of observations can prove my theory. But only one contrary observation is needed to disprove it.

  40. Cooney5431 12:35 am 12/11/2013

    Dr. Strangelove,

    Excellent points – I think you’re correct on all counts!

  41. marz62 5:58 pm 12/12/2013

    Apart from the general shallowness of this analysis, it is woefully misguided; ‘reproducibility’ (or replication of studies) is not a ‘myth’; it is an ideal, without which there is little upon which to base scientific progress, or the search for scientific ‘truth’.

    This is not to deny that “chance and half truths” play some role in Scientific discovery…but to claim (topsy-turvy-like) that these are what propel all Science is absurd. The example of Pfizer’s unintentional and serendipitous creation of Viagra (from a failed pharmacotherapeutic) is a poor one…one cannot base an application for research funds on the possibly fortuitous by-product of this (destined to fail) research.

    Further, without the standard of reproducibility (see: the Reproducibility Initiative) to strive for, untold millions (or billions) in money will be wasted, and, let us not forget, hundreds or thousands of lives may be put at stake through failed, advanced-stage drug trials that cannot replicate initial in vitro (or animal model) results in actual humans. Would you take a medicine that showed “great efficacy” in pre-clinical trials (data for which was “massaged”) but failed to reproduce this efficacy in follow-up studies?

    Replication of results is the foundation of all publicly funded bio-medical research.

    The fact that some famous scientists (from the “wild west” days of experimental science) may have “fudged” some of their experiments (to fit their hypotheses) does not justify/rationalize the argument made here. These scientists are famous for the body of their work and the pioneering contributions they made in the earlier days of Science.

    But that speaks to the entire point of scientific progress: the mistakes (or falsified experiments) of the past are not sufficient for the present times, nor the future. We have learned, and will continue to learn, and in so doing, will no longer accept sloppy (or dishonest) science as the basis of what we call ‘Scientific truth’. We are progressing beyond this, thankfully.

    To believe Mr. Horvath’s argument, one must disbelieve in the idea (and ideal) of Scientific Progress, and the notion that, as we progress, we adopt clearer, more coherent, and more rigid standards of scientific proof and evidence.

  42. Cooney5431 7:41 pm 12/12/2013

    marz62:

    Thanks for your sound analysis and comments. I agree 100% that replication is an ideal and, as with all ideals, is unattainable but still inspirational. In this instance I used the term ‘myth’ because the majority of scientists – myself included – do not present reproducibility as an ideal when we commonly discuss our work in the public sphere (rather, we tend to present it as a foundational given of science).

    I think your arguments reflect sound insight into the matter – but I do not think they accurately reflect modern science (or, at least, not my experience of it).

    As the Economist articles (to which this piece was written in response) pointed out, it is largely within the biomedical sciences that claims of irreproducibility are emerging at an alarming rate. In fact, many drugs and drug classes have been on the market for decades and new research is simply no longer supporting their efficacy. As such, it’s now common to acknowledge that millions of dollars and untold numbers of lives HAVE been wasted due to this problem.

    That irreproducibility is seen by many (including the authors of The Economist pieces) as an emerging blight reflective of today’s scientific environment suggests, to me at least, that many think science has taken a step backwards in some respects. I think your choice of words (‘scientific progress’) highlights this well: to many, science is pushing closer to an idealized goal (or ‘truth’), and concerns of irreproducibility are counter-productive to this goal.

    By contextualizing the irreproducibility phenomenon and showing that it is not novel but, in fact, has been present since the beginning, I was hoping simply to suggest that science continues to evolve despite this: ergo, what mistakes have we made concerning our conceptions of science and its evolution?

    WARNING: the next paragraph is highly speculative and will likely ruffle some feathers; but let’s consider it and see what happens.

    In your final paragraph you suggest that “…as [science] progress[es], we adopt clearer, more coherent, and more rigid standards of scientific proof and evidence.” It might be possible to rephrase this as “…as science evolves, we adopt different, more contemporary, and more technical standards of scientific proof and evidence.” In the former, it is implied that things are getting better and, somehow, closer to some end-goal. In the latter, it is implied that things are simply changing, reflecting our more advanced technologies and methodologies: but that our relative position (with regard to whatever the end-goal may be) has not changed all that significantly.

