Absolutely Maybe

Evidence and uncertainties about medicine and life

Academic spin: How to dodge & weave past research exaggeration

The views expressed are those of the author and are not necessarily those of Scientific American.





Cartoon: at a medical journal happy hour

Yesterday’s uplifting emphasis at the quadrennial medical editors’ scientific meeting was bad research (“Bad research rising”). This morning’s motivational agenda focused on measuring some of the main techniques for jazzing up research results.

It started with making research look more independent by not declaring authors’ commercial conflicts of interest (COI). Kristine Rasmussen told us that in Denmark, doctors have to apply for permission to collaborate with industry. That enabled Rasmussen and her colleagues to study whether doctors who ran clinical trials were declaring their commercial relationships with drug manufacturers.

Out of 171 trial authors, 11% did not disclose a COI with the trial sponsor or drug manufacturer, and another 26% did not disclose that they had a commercial relationship with the manufacturer of another drug for the same use. Lively discussion ensued. From the audience, Leslie Citrome remarked that some academic departments are involved in so many industry trials that they should now be regarded as contract research organizations rather than academia.

Later, we heard from Serina Stratton that out of 313 trials studied, 36 required sponsor/manufacturer approval of the text or publication, and 6 had gag orders. That leads to some inevitable questions: why aren’t all academic institutions protecting researchers and trial participants from industry restrictions on academic freedom, and why aren’t potential participants being warned about this before they agree to be in a trial?

Isabelle Boutron, on 9 September 2013 in Chicago at the 7th International Congress on Peer Review and Biomedical Publication

On to what data are reported: Jamie Kirkham found that 77% of the systematic reviews examined had identified at least one trial suspected of holding back data on harms. Scary.

Soon it was Isabelle Boutron’s turn, tackling researchers’ spin about their own trials. She defines spin as using techniques to make the intervention studied look more beneficial than it really was. You can do this by distracting people from non-significant results on important outcomes with data on less important outcomes, or by using words that make exaggerated claims the data don’t actually support.
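
To see how easy the first of those tricks is, here’s a rough simulation sketch, purely illustrative and not from anything presented at the congress. It assumes (my numbers, made up for the example) a trial that measures 20 outcomes in 100 people per arm for an intervention that truly does nothing: chance alone will still hand the authors at least one “statistically significant” result to headline about two times out of three.

```python
# Illustrative sketch only: how often a completely ineffective intervention
# still produces at least one "significant" outcome when many outcomes are
# measured. The trial size and outcome count are invented for this example.
import math
import random
import statistics

def two_sample_p_value(a, b):
    """Two-sided p-value from a two-sample z-test (normal approximation)."""
    mean_diff = statistics.mean(a) - statistics.mean(b)
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = abs(mean_diff / se)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

random.seed(1)
n_per_arm = 100         # participants in each arm
n_outcomes = 20         # outcomes measured per trial
n_simulated_trials = 1000

trials_with_a_chance_hit = 0
for _ in range(n_simulated_trials):
    # The intervention has no real effect on any outcome.
    for _ in range(n_outcomes):
        control = [random.gauss(0, 1) for _ in range(n_per_arm)]
        treated = [random.gauss(0, 1) for _ in range(n_per_arm)]
        if two_sample_p_value(treated, control) < 0.05:
            trials_with_a_chance_hit += 1
            break  # one spurious "significant" outcome is enough to spin
print(f"Simulated trials with at least one chance 'significant' outcome: "
      f"{100 * trials_with_a_chance_hit / n_simulated_trials:.0f}%")
```

The back-of-the-envelope math agrees: with 20 independent outcomes and a 5% false-positive rate on each, the chance of at least one spurious “hit” is 1 - 0.95^20, roughly 64%. That’s exactly the sort of fine print an abstract can quietly skip over.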

This kind of academic spin is common in abstracts: in a past study, she and her colleagues found spin somewhere in the abstracts of 40% of trials. And it follows through into about half of press releases and subsequent media coverage.

Today she told us about a trial they did to see whether readers get caught out by the spin. They re-wrote abstracts to take out the spin and randomized 300 people to get either the spun or cleaned-up version. Not surprisingly, they found that the spin was successful in leading readers to believe the intervention was more beneficial than it was.

So how can we as readers protect ourselves from this fate? Understanding more about concepts like statistical significance is important: it’s one of the main tools of the trade. I’ve written some more tips here: “They would say that, wouldn’t they?” Don’t rely on an abstract: you really do need to check the data and the fine print. Check whether there’s a systematic review on the subject: that can help you see how the result fits in with other research. And be wary if the authors don’t tell you what the range of other opinions on the topic is, or if most of the references they cite come from their own team. “Even if I say so myself” isn’t usually a good basis for supporting someone’s interpretation.

We’re about halfway through this conference now, and it’s been keeping a cracking pace. Research on the accessibility of research and post-publication discussion about research lie ahead.

~~~~

Post on previous day: “Bad research rising”

Post on following sessions: “Opening a can of data-sharing worms”

As you would expect from a congress on biomedical publication, there’s a whole lot of tweeting going on. Follow on #PRC7

The cartoon is by the author, under a Creative Commons, non-commercial, share-alike license. The photo of Isabelle Boutron was taken by the author.

The thoughts Hilda Bastian expresses here are personal, and do not necessarily reflect the views of the National Institutes of Health or the U.S. Department of Health and Human Services.

About the Author: Hilda Bastian likes thinking about bias, uncertainty and how we come to know all sorts of things. Her day job is making clinical effectiveness research accessible. And she explores the limitless comedic potential of clinical epidemiology at her cartoon blog, Statistically Funny. Follow her on Twitter @hildabast.







Comments

1. jtdwyer 7:57 pm 09/9/2013

    It’s just business!
    <%)

2. ultimobo 4:43 am 09/10/2013

    if this relates to recent articles about Harvard students cheating – then yes unfairness may lead to short-term advantage – but karma tends to bite you in the bum eventually – misleading research destroys people’s lives ? – see how you sleep at night …

3. Hilda Bastian in reply to ultimobo 6:54 am 09/10/2013

    No, it doesn’t relate to that. Most of the time people wouldn’t see themselves as attempting to mislead people.

