January 4, 2012
Big clinical trials—to test new drugs or procedures—generate reams of important data about safety and efficacy. Only a fraction of that information sees the light of day, a publishing practice that could put patients at risk, according to a special report published this week in the British Medical Journal (BMJ).
Even though scientific and medical journals are loaded each year with what might seem like endless reports (and lengthy methodology descriptions) from clinical trials, about half of clinical trial results go unpublished, An-Wen Chan, of the University of Toronto’s Women’s College Research Institute, noted in one of the seven new papers in the BMJ special section. Published results typically lack details about how studies were conducted and about outcomes for individual participants. Although these particulars might seem negligible or dull to a casual reader, “the overall result is that the published literature tends to overestimate the efficacy and underestimate the harms of a given intervention,” he noted.*
Even major trials for drugs submitted to the U.S. Food and Drug Administration (FDA) contained substantial data holes in their published reports, according to one new analysis. A San Francisco- and Denmark-based team performed meta-analyses that included previously unpublished data for nine drugs that had been submitted for FDA approval. With this newly included data, they found that 38 of 41 meta-analyses of these drugs were off: 19 of them had overestimated the efficacy of the drug, and 19 had underestimated it. It can be difficult to get a journal to publish (or a drug company to support publication of) harmful or null, so-called “negative,” results from a trial. But, as that paper’s authors noted, “when unfavorable results of drug trials are not published, meta-analyses and systematic reviews that are based only on published data may overestimate the efficacy of the drugs.”
Individual trials are important building blocks for later meta-analyses, which pool previous studies of the same treatment and are fundamental for determining an intervention’s safety and effectiveness. These important, policy-changing meta-analyses often start with a simple search of the literature through a database such as Medline. But as L. Susan Wieland, of the University of Maryland School of Medicine, and her colleagues reported in their new paper, many of the randomized controlled trials in Medline were not tagged as such. That means reviewers and researchers searching for this key type of trial could miss hundreds of potentially important studies published each year, skewing the final meta-analysis.
In an editorial in the same issue of BMJ, Richard Lehman, of the University of Oxford, and Elizabeth Loder, an epidemiology editor at the journal, called for reform of the “current culture of haphazard publication and incomplete data disclosure.” They advocated for a retroactive disclosure of all clinical trial data as “an important first step towards better understanding of the benefits and harms of many kinds of treatments.”
The U.S. government has taken a step toward making more of these data public, requiring publicly funded trials to publish full data sets within a certain time frame. But according to new findings from Joseph Ross, of the Yale University School of Medicine, and his colleagues, and from Andrew Prayle, of the Queen’s Medical Centre in the UK, and his co-authors, a hefty chunk of trial findings is still awaiting release, long past publication deadlines.
“When the word ‘mandatory’ turns out to mandate so little, the need for stronger mechanisms of enforcement becomes very clear,” Lehman and Loder argued. Data from older studies that didn’t face these requirements can be even harder to come by, Lehman and Loder noted, making it nearly “impossible” for researchers and government agencies to do a thorough study of findings. Instead of a simple online search, those seeking a full picture of clinical research must go “searching over hill and dale and among the paperwork of regulatory bodies and drug companies to put together pieces of data.”
And the responsibility should not lie with regulators alone, Lehman and Loder concluded. Although scientists might have methodological reasons or publication constraints for excluding some of their data, those practices should change, they argued. “Researchers or others who deliberately conceal trial results have breached their ethical duty,” Lehman and Loder wrote in their editorial. “Patients will have to live with the consequences of the failures for many years to come.”
*Correction (1/10/12): This sentence was edited after posting to correct the gender of the researcher An-Wen Chan.