Absolutely Maybe


Evidence and uncertainties about medicine and life

Bad research rising: The 7th Olympiad of research on biomedical publication






Cartoon: research team gold medal

What do the editors of medical journals talk about when they get together? So far today, it’s been a fascinating but rather grim mixture of research that can’t be replicated, dodgy authorship, plagiarism and duplicate papers, and the general rottenness of citations as a measure of scientific impact.

We’re getting to listen and join in the editors’ discussion this week in Chicago. They assemble once every four years to chew over academic research on scientific publishing and to debate ideas. The tradition was started by JAMA in Chicago in 1989, and the international congress still goes by the name of its original pre-eminent concern, “peer review and biomedical publication.” But the academic basis for peer review is now only a small part of what’s discussed.

The style hasn’t changed in all these years, and that’s a good thing. As JAMA editor Drummond Rennie said this morning, most medical conferences go on and on, “clanking rustily forward like a Viking funeral.” Multiple concurrent sessions render a shared ongoing discussion impossible.

Not this one. This congress is a three-day plenary with lots of time for discussion. And just as well, because the work presented is provocative – and fascinating to anyone concerned with what’s happening in medical science and the science behind editing. The abstracts will be loaded online progressively each day.

Photo: John Ioannidis, 8 September 2013, in Chicago at the 7th International Congress on Peer Review and Biomedical Publication

The congress hurtled off to an energetic start with John Ioannidis, epidemiologist and agent provocateur author of “Why most published research findings are false.” He pointed to the very low rate of successful replication of genome-wide association studies (not much over 1%) as an example of very deep-seated problems in discovery science.

Half or more of replication studies are done by the authors of the original research: “It’s just the same authors trying to promote their own work.” Industry, he says, is becoming more concerned with replicability of research than most scientists are. Ioannidis cited a venture capital firm that now hires contract research organizations to validate scientific research before committing serious funds to a project.

Why is there so much un-reproducible research? Ioannidis points to the many sources of bias in research. He and Chavalarias trawled through more than 17 million articles in PubMed and found discussion of 235 different kinds of bias. There is so much bias, he said, that it makes one of his dreams – an encyclopedia of bias – a supremely daunting task.

What would help? Ioannidis said we need to go back to considering what science is about: “If it is not just about having an interesting life or publishing papers, if it is about getting closer to the truth, then validation practices have to be at the core of what we do.” He suggested three ways forward: first, we have to get used to small genuine effects and stop expecting (and falling for) excessive claims; second, we need to have – and use – research reporting standards; third, we need to register research, from protocols through to datasets.

Isuru Ranasinghe, part of a team from Yale, looked at un-cited and poorly cited articles in cardiovascular research. The proportion isn’t changing over time, but the overall quantity is rising rather dramatically as the biomedical literature grows: “1 in 4 journals have more than 90% of their content going un-cited or poorly cited five years down the track.” Altogether, about half of all articles have no influence – if you judge influence by citation.

Earlier, though, there was a lot of agreement from the group on the general lousiness of citation as a measure of, and influence on, research. Tobias Opthof, presenting his work on journals pushing additional citations of their own papers, called citation impact factors “rotten” and “stupid.” Elizabeth Wager pulled no punches at the start of the day, reporting on analyses of overly prolific authors: surely research has to be about doing research, not just publishing a lot of articles. Someone who publishes too many papers, she argued, could be of even more concern than someone who does research but publishes little. Incentives and expectations around authorship no longer serve us well – if they ever did.

~~~~

Second post: “Academic spin”

Third post: “Opening a can of data-sharing worms”

As you would expect from a congress on biomedical publication, there’s a whole lot of tweeting going on. Follow on #PRC7

The cartoon is by the author, under a Creative Commons, non-commercial, share-alike license. Photo of John Ioannidis by the author.

The thoughts Hilda Bastian expresses here are personal, and do not necessarily reflect the views of the National Institutes of Health or the U.S. Department of Health and Human Services.

About the Author: Hilda Bastian likes thinking about bias, uncertainty and how we come to know all sorts of things. Her day job is making clinical effectiveness research accessible. And she explores the limitless comedic potential of clinical epidemiology at her cartoon blog, Statistically Funny. Follow on Twitter @hildabast.

The views expressed are those of the author and are not necessarily those of Scientific American.






Comments

  1. Dr.MS 7:19 pm 09/9/2013

    Just as older GREs or graduate degrees may reflect real intelligence or ability in a field better than newer ones (acquired in the last fifteen years) – which may reflect more of the skills developed through chronic and intense “test preparation, test-taking, test-coaching & test preparedness” than real logic, analytical ability, research integrity, rigor or creativity – “kids who cheat in universities to get the right grades or pass” may be the same “researchers who do shoddy, sloppy work and merely try to move on”.

    It is also possible that “market influences” in research, which can sometimes motivate and stimulate, can go overboard and cause people to take shortcuts that are dangerous (in the field of biomedical research), or to do what is easy, sexy and sells…not what is useful, authentic, reliable and of long-term value.

    We need to go back to treating Ph.D. dissertations as “seminal work” that takes years of hard work, honest work, ethical work, creative work, brain work, trial work and slog work, with lots of hard, harsh but fair peer review.

    Our quantitative brain has outstripped the qualitative elements in our lives and research, thereby creating “many small publishable units” from “one large or solid dataset” over and over again.

    As a friend once said to me, “They don’t want to try or replicate anymore. Everybody wants their first research to be spectacular and ‘it’. The idea of doing hard clinical research, with many trials, ups and downs, corrections and changes, and many reliable replications, is no longer important. And funders are not researchers, or do not appreciate research. They just want results.”

    We need funders who treat the methodology of research as an important aspect of research work…irrespective of the results. They need to stop worrying exclusively about quick product development and expecting exotic or breakthrough research from every test and trial.

    Real innovation, solid research and transformative academic work take years. This means we need funders to remember that “50% to 80% of research may not go anywhere…but it provides some path for future research that might result in useful, innovative, unique results”.

    “R & D”, as I joke now, “is all ‘Rig & Dig’” which is equivalent to “Get something done now and sell it fast”.

    Truly unfortunate. Science research needs more social scientists…to evaluate objectives, goals, theories, methodologies, ethics and values.

  2. Hilda Bastian in reply to Dr.MS 8:02 pm 09/9/2013

    Well said – yes, that would help. If you haven’t seen it, you might like the Slow Science manifesto.

