
Bad research rising: The 7th Olympiad of research on biomedical publication


This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American.


What do the editors of medical journals talk about when they get together? So far today, it's been a fascinating but rather grim mixture of research that can't be replicated, dodgy authorship, plagiarism and duplicate papers, and the general rottenness of citations as a measure of scientific impact.

We're getting to listen and join in the editors' discussion this week in Chicago. They assemble once every four years to chew over academic research on scientific publishing and debate ideas, a tradition started by JAMA in Chicago in 1989. The international congress still goes by the name of its original pre-eminent concern, "peer review and biomedical publication," but the academic basis for peer review is only a small part of what's discussed these days.

The style hasn't changed in all these years, and that's a good thing. As JAMA editor Drummond Rennie said this morning, most medical conferences go on and on, "clanking rustily forward like a Viking funeral." Multiple concurrent sessions render a shared ongoing discussion impossible.


Not this one. This congress is a three-day plenary with lots of time for discussion. And just as well, because the work presented is provocative - and fascinating to anyone concerned with what's happening in medical science and the science behind editing. The abstracts will be loaded online progressively each day.

The congress hurtled off to an energetic start with John Ioannidis, epidemiologist, agent provocateur, and author of "Why most published research findings are false." He pointed to the very low rate of successful replication of genome-wide association studies (not much over 1%) as an example of deep-seated problems in discovery science.

Half or more of replication studies are done by the authors of the original research: "It's just the same authors trying to promote their own work." Industry, he says, is becoming more concerned with replicability of research than most scientists are. Ioannidis cited a venture capital firm that now hires contract research organizations to validate scientific research before committing serious funds to a project.

Why is there so much irreproducible research? Ioannidis points to the many sources of bias in research. He and Chavalarias trawled through more than 17 million articles in PubMed and found discussion of 235 different kinds of bias. There is so much bias, he said, that it makes one of his dreams - an encyclopedia of bias - a supremely daunting task.

What would help? Ioannidis said we need to go back to considering what science is about: "If it is not just about having an interesting life or publishing papers, if it is about getting closer to the truth, then validation practices have to be at the core of what we do." He suggested three ways forward. First, we have to get used to small genuine effects and stop expecting (and falling for) excessive claims. Second, we need to have - and use - research reporting standards. Third, we should register research, from protocols through to datasets.

Isuru Ranasinghe, in a team from Yale, looked at uncited and poorly cited articles in cardiovascular research. The proportion isn't changing over time, but the overall quantity is rising dramatically as the biomedical literature grows: "1 in 4 journals have more than 90% of their content going un-cited or poorly cited five years down the track." Altogether, about half of all articles have no influence at all - if you judge influence by citation.

Earlier, though, there was a lot of agreement from the group on the general lousiness of citation as a measure of, and influence on, research. Tobias Opthof, presenting his work on journals pushing additional citations of their own papers, called citation impact factors "rotten" and "stupid." Elizabeth Wager pulled no punches at the start of the day, reporting on analyses of overly prolific authors: surely research has to be about doing research, not just publishing a lot of articles. Someone who publishes too many papers, she argued, could be of even more concern than someone who does research but publishes little. Incentives and expectations around authorship no longer serve us well - if they ever did.

~~~~

Second post: "Academic spin"

Third post: "Opening a can of data-sharing worms"

As you would expect from a congress on biomedical publication, there's a whole lot of tweeting going on. Follow along at #PRC7.

The cartoon is by the author, under a Creative Commons non-commercial share-alike license. The photo of John Ioannidis is by the author.

The thoughts Hilda Bastian expresses here are personal, and do not necessarily reflect the views of the National Institutes of Health or the U.S. Department of Health and Human Services.

Hilda Bastian was a health consumer advocate in Australia in the '80s and '90s. Controversies riddled with ideology and vested interests drove her to science. Epidemiology and effectiveness research have kept her hooked ever since.
