
An Experiment That Didn't Work

My PhD thesis research was a dead end, but that’s why it was important

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American


Knowledge is a big subject. Ignorance is bigger. And it is more interesting.—Stuart Firestein, Ignorance: How It Drives Science

Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.—Samuel Beckett in Failure: Why Science Is So Successful

My first week in the lab, my boss plopped a book with the bold title Ignorance: How It Drives Science onto my desk. And now, as I wrap up writing my dissertation, she has given me its sequel, Failure: Why Science Is So Successful. Preternatural optimist that she is, she did not gift these books out of pessimism or wry passive aggression. Rather, she believed they contained important lessons, lessons that perfectly bookend my Ph.D. career.


My time in the lab began with ignorance—not the wide-eyed, first-year graduate student variety, but the rigorous brand that embraces an open question. A great conundrum in modern biology is how life’s great diversity stems from four letters—A, C, G, and T—arranged in a near-infinite array to compose life’s blueprint molecule: DNA. Now, consider that every cell in your body contains the exact same complement of DNA. Yet a heart cell looks and acts completely different from a brain cell which looks and acts completely different from a skin cell. So how did a heart cell, a brain cell, and a skin cell arrive at such different biological fates when given the exact same set of molecular blueprints?

To deploy the blueprint’s directions, instructions must first be transcribed to an intermediate molecule—the RNA—which then delivers them to the cellular machinery for execution. So understanding the dynamics of RNA, smack at the front lines of cellular activity, can help us understand how diversity emerges from the same DNA blueprint.

RNA is similarly composed of a four-letter alphabet: A, C, G, and U. That alphabet can be expanded upon with a library of over 100 chemical tweaks to fine-tune RNA function—a small M added to an A or a chemical S to a U. Of these alphabetical adornments, one stands out as the most ubiquitous: a subtle structural change in the genetic letter U to a pseudo-U, or pseudouridine (Ψ). Here, ignorance comes to play.

While Ψ was first discovered in the 1950s, we still don’t know much about its precise biological function today, except that without Ψ, cells die. We do, however, have some clues—one that particularly piqued my interest. Introducing Ψs into a set of instructions that dictate how a protein is made changed the way those instructions were interpreted by the cell. Ψ unexpectedly recoded RNA’s message beyond the mandates of the genetic code—a code considered fully cracked in the 1960s.

So in Ψ, I found a candidate for how diversity arises from DNA’s hard-coded instructions. But that study was undertaken in an artificial system, which left open the question: where does Ψ naturally lie? By understanding where Ψs are, we might begin to uncover what exactly they do to affect how cells behave. When I wound my way to this question, we still had no methods to map Ψs beyond a few varieties of RNA. So, with the power of next-generation sequencing technologies that first emerged to map the human genome, I went Ψ-hunting.

Meanwhile, the allure of Ψ had entered the zeitgeist, drawing researchers from around the world to embark on the same Ψ-charting quest. I was beaten to the punch when four methods, three of which were released back-to-back-to-back, were published spotting Ψs in a whole host of RNAs. I decided to make the best of being quadruply scooped and compared each group's Ψ maps, partly out of curiosity, but mostly because I was asked to review the techniques as an objective fifth party. All four methods were based on the same principle, so their results should have overlapped well with one another. But they did not. And here enters failure.

Of the hundreds to thousands of Ψs catalogued by each method, only a small fraction of sites were found by them all. I was genuinely surprised by the result. So I hunkered down and thought through a host of technical and biological caveats that were not detailed in the original publications. I then tried to apply one of those methods to map Ψs in African trypanosomes, the single-celled parasites that cause African sleeping sickness. But, try as I might, I could not get the method to work. And so, more failure.

Failure is the natural product of risk, and there's nothing riskier than the pursuit of ignorance—asking those big bold questions that probe the unknown. But while the practice of science is riddled with failures—from the banal failures of day-to-day life at the bench to the heroic, paradigm-shifting failures that populate the book called Failure—many scientists are uncomfortable with the idea. We publish our innovations, the stories of how our ignorance led to success. Where the "publish or perish" mantra prevails, these stories are essential to making a name for ourselves and securing grant money. So there is little incentive to replicate the work of others or to report experimental failure. In fact, there is barely a medium to publish these sorts of efforts, which are relegated to the bottom of the file drawer.

But the scientific method hinges on self-correction, which requires transparent reporting of positive (or negative) data and corroboration (or contradiction) of previous experiments. And so I wanted to share my work, to open it up to comment, to transform my failure into something productive. If I couldn’t get these Ψ mapping methods to work in my hands, that’s a problem worth sharing because chances are, I’m not alone. This is how we avoid chasing false leads, how we improve our practices, how we move science forward. These tenets lie at the heart of the “open science” movement, which I have come to embrace as I have ventured to share the failed fruits of my doctoral work.

Of course, open science is easier said than done. The increasing competitiveness of certain scientific fields has disincentivized transparency and collaboration. There is also a value judgment that comes with sharing experimental failure—a vulnerability that your peers will view your efforts as sloppy, rather than earnest and honest. So distributing negative or non-confirmatory data comes with an extra burden of proof.

Still, policy reforms and open science advocates are working to incentivize practices that foster open collaboration. Open-source platforms like the Open Science Framework now exist for collaborative sharing of data and data-processing workflows. Peer-reviewed publications like F1000Research now accept negative or non-confirmatory data of the sort I generated during my thesis. And preprint servers, which allow complete manuscripts to be uploaded directly without formal peer review (though open to comment) and have long been embraced by the physics community, are gaining steam in the life sciences thanks to the work of advocacy groups like ASAPbio.

It’s now been a year since I defended my dissertation, and I’ve taken up the open science call as an AAAS Science & Technology Policy Fellow. My day job is to think about how we can move science towards a culture of sharing. While I haven’t uncovered any mysteries in the world of RNA biology, I have learned that science needs to fail better. Because in science, things often don’t work out the way we think they should, and we are left with our ignorance. But the narratives we form around failure—transparently, openly, and together—can be just as valuable as those we form around success.