Science in the Abstract: Don't Judge a Study by its Cover

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American


A competition for attention lies at the heart of the scientific enterprise. And the abstract is its "blurb."

A scientific abstract is a summary used to attract readers to an article and to get a piece of research accepted for a conference presentation. Other than the title, it's the part of an article that is looked at by the most people.

But should we take a scientific abstract literally? Is it likely to tell us the most important things we need to know about a study?




Let's unpick those words first, because the answers become clearer when you keep the science production process in mind.

People often call a scientific article a study: "Have you read that study in the Journal of Exciting Results?" But the article is not actually a study: it's a report about a study.

A study is a complex activity. All sorts of artifacts are produced along the way - like grant applications, descriptions for ethics committees, data sets, and reports to funders or regulatory agencies.

Out of all the things observed, thought or recorded, some are chosen to go into one or more articles. There could be dollops of wishful thinking as a narrative is formed. Usually right at the end, and usually with an eye for the "juicy" bits, the abstract is written - and sent out into the battle for attention and citations.

It wasn't always like this. Scientists used to communicate about their work in person, in dissertations, monographs and books, and in letters among peers. They had meetings where they read out papers. Peer review began before scientific journals did.

Journals grew out of summaries written by others, not the scientist who did the research. The first two publications regarded as the precursors of the science journal started in 1665: the Journal des Sçavans in Paris, and the Royal Society of London's Philosophical Transactions. The first journals, full of news and book reviews, writes David Kronick, "were very much like the newspapers which provided their models."

Then came the explosion of journals in Germany in the 1800s, particularly in medicine. Although the phrase "publish or perish" is a creature of the 20th century, academic competition had started in the 19th. According to Kronick, it can be credited to the decentralization of German universities. The number of journals is estimated to have gone from 900 in 1800 to almost 60,000 in 1901.

The structure of a scientific article that we're familiar with now, including an abstract, developed early in the 20th century. By then, the publicity "blurb" for books and movies had arrived on the scene, too. Along the way, scientists absorbed "PR" techniques in their work.

The abstract's writing style follows the style of scientific articles, though. It's a format called IMRAD, for Introduction, Methods, Results And Discussion. So abstracts begin with the back story, and bring the "news" in at the end. That's the opposite direction of the inverted pyramid writing style of journalism, which cuts to the chase right at the start.

In 1969, Ertl criticized abstracts for not providing the information readers need, proposing a tabular form for summarizing scientific work. That didn't take off. In 1987, the move to structured abstracts - with formal sections like "objectives," "methods," and "results" - began in clinical research.

Early in this century, reporting standards for abstracts of clinical trials were developed - and they seem to have improved reporting. Last year, this was joined by standards for abstracts of systematic reviews. (Interest declaration: I'm one of the authors.)

What problems are we trying to resolve? And why does Ivan Oransky from Retraction Watch say it's "journalistic malpractice" to report on a study after only reading an abstract or press release?

It comes down to inaccuracy and academic spin, both of which are too common in abstracts. Inaccuracies have been shown in studies of abstracts in a chemistry journal (a quarter inconsistent with the article), medical journals (18% to 68%) and pharmacy journals (61%).

To be fair, though it's a worry, not every inaccuracy is critical. But the spin is serious. It's not just about language that exaggerates or massages results. It's also about choosing the most exciting results, even when that's misleading.

Spin in press releases and media coverage often reflects the spin in abstracts. And until more research is open access, that will be all that most people can get to read.

It's possible that reading only abstracts might influence clinical decisions. Isabelle Boutron, a leading researcher of academic spin in clinical research, has done a randomized trial of whether abstracts with or without spin change decisions. There's no article about that trial yet, so there's not even an abstract to go by. Boutron reported at a conference I posted about here last year that experts who read abstracts with spin believed treatments were more beneficial than those who got the de-spun versions.

Of course, when there's spin in the abstracts, there's spin in the full article too. It's less likely to be wall-to-wall, though. Most of the time, less impressive information will in fact be there in the full text. The potential conflicts of interest are more likely to be there, too.

You can get a more realistic picture. But first you have to resist jumping to conclusions before you've read the fine print.

~~~~

The cartoons are my own - this time with apologies to the great René Magritte for the clumsy homage to The Treachery of Images. (More cartoons at Statistically Funny.)

The original blurb is from the Library of Congress, via Wikimedia Commons.

Articles I relied on heavily in the section on the early history of scientific journals were by Jack Meadows and David Kronick.

If you're wondering about the origin of not judging a book by its cover, according to the Flexners, the specific phrase, "You can never tell a book by its cover," first appeared in 1946 in the mystery novel, Murder in the Glass Room by Edwin Rolfe - a writer blacklisted by McCarthy's House Un-American Activities Committee - with co-author, Lester Fuller.

* The thoughts Hilda Bastian expresses here at Absolutely Maybe are personal, and do not necessarily reflect the views of the National Institutes of Health or the U.S. Department of Health and Human Services.

Hilda Bastian was a health consumer advocate in Australia in the '80s and '90s. Controversies riddled with ideology and vested interests drove her to science. Epidemiology and effectiveness research have kept her hooked ever since.

More by Hilda Bastian