Work That’s Worth Repeating

Published in Dome - November 2016

“Big science is broken,” one recent headline declared. “Many scientific ‘truths’ are, in fact, false,” proclaimed another. Over the past few years, many similar articles have described the “replication crisis” in biomedicine.

The problem came to the fore roughly five years ago, when a few pharmaceutical companies published their concerns. Drugmakers do their homework before they invest large sums in potential new treatments. But in trying to reproduce the findings from academic studies, they were getting inconsistent results. For instance, Bayer reported in 2011 that its scientists could replicate the original results of scholarly papers less than a quarter of the time.

Leonard Freedman, head of the Global Biological Standards Institute, put a price tag on the problem last year. He published an analysis of past studies estimating that more than half of preclinical research is irreproducible, representing roughly $28 billion per year in U.S. research spending that cannot be trusted. That got the attention of the public—and the purse-string holders in Washington.

It made physician-scientists take heed as well. We felt confident that widespread fraud was not to blame, but clearly it was time to re-evaluate our processes and incentive systems.

Many complex factors feed into this phenomenon. Much of the research conducted by academic scientists is so highly technical that it can be difficult to re-create the precise methodology and conditions of the original study. Many published reports do not provide raw data or the exact details of the experimental design, making replication a challenge. Some degree of human error is inevitable too.

Unfortunately, there are other, more troubling forces at play. For instance, today’s hypercompetitive environment in science can put intense pressure on researchers. With heightened competition to secure grants and publish in high-impact journals, some scientists may cut corners and produce work that is not of the highest caliber.

Francis Collins, director of the National Institutes of Health, recently argued in Nature that our system lacks the proper checks and balances and needs restructuring. Among the major contributing factors, he cited “poor training of researchers in experimental design” and “increased emphasis on making provocative statements rather than presenting technical details.”

At Johns Hopkins Medicine, we are taking aim at these issues.

For instance, the school of medicine has a grant from the National Institute of General Medical Sciences to develop a 10-part course aimed at teaching the do’s and don’ts of study design and data handling. Our Department of Medicine is working to devise a manageable system for banking the primary data that feed into its faculty’s computations, as well as tools to improve the accuracy of data without adding burden to investigators. Moreover, we are designing a system for auditing 1 to 3 percent of our lab research protocols in-house, rather than waiting for others to do our fact-checking for us.

The School of Medicine Research Council has formed a subcommittee on reproducibility to develop new institutional guidelines that encourage more open data sharing and best practices in experimental design.

Finally, as reviewers and editors of scientific journals, we must be mindful of the signals we send with what we choose to publish. When it comes to research, negative results can be, and often are, more important than positive results. Not every new treatment, diagnostic method or paradigm is better than the old ones. The problem is that the most prestigious journals tend to publish only positive results. If we can reduce such publication bias, it might tamp down the temptation to take liberties with the data.

Such reforms will go far toward ensuring a solid foundation for the future of medicine and preserving the trust of the American public.