
Can You Repeat That?

The scientific community is serious about fixing the replication problem in research.

[Illustration of watering plants by Sherrill Cooper]

“Big science is broken,” one headline declared. “Many scientific ‘truths’ are, in fact, false,” proclaimed another. As a person who reads this magazine, you probably have encountered similar articles about the “replication crisis” in biomedicine.

The problem came to the fore roughly five years ago, when a few pharmaceutical companies published their concerns. Drug makers do their homework before they invest large sums in potential new targets. But in trying to reproduce the findings from academic studies, they were getting inconsistent results. For instance, Bayer claimed in 2011 that its scientists could replicate the original results of scholarly papers less than a quarter of the time.

Leonard Freedman, head of the Global Biological Standards Institute, put a price tag on the problem last year. He published an analysis of past studies showing that more than half of bench research cannot be reproduced, representing roughly $28 billion per year in U.S. research spending that cannot be trusted. That got the attention of the public, and of the purse-string holders in Washington.

It made us take heed as well. We felt confident that widespread fraud was not to blame, but clearly it was time to re-evaluate our processes and incentive systems.

In fact, many complex factors feed into this phenomenon. A large portion of the research conducted by academic scientists is so highly technical that it can be difficult to re-create the precise methodology and conditions from the original study. Many published reports do not provide raw data or the exact details of the experimental design, making duplication a challenge. Some degree of human error is inevitable too.

Unfortunately, there are other, more concerning forces at play. For instance, today’s hypercompetitive environment in science can put intense pressure on researchers. With heightened competition to secure grants and publish in high-impact journals, some scientists may cut corners and produce slipshod work.

Francis Collins, head of the National Institutes of Health (NIH), recently addressed the issue in Nature, arguing that our system lacks the proper checks and balances and needs restructuring. In his editorial, he pinpointed the major contributing factors, including “poor training of researchers in experimental design” and “increased emphasis on making provocative statements rather than presenting technical details.”

At Johns Hopkins Medicine, we are taking aim at these issues.

For instance, the school of medicine got a grant from the National Institute of General Medical Sciences to develop a 10-part course aimed at teaching the do’s and don’ts of study design and data handling. Our Department of Medicine is working to devise a manageable system for banking the primary data that feed into its faculty’s computations, as well as tools to improve data hygiene without adding burden to investigators. Moreover, we are designing a system for auditing 1 to 3 percent of our lab research protocols in-house, rather than waiting for others to do our fact-checking for us.

As part of the School of Medicine Research Council, we formed a subcommittee on reproducibility to monitor these issues and keep us accountable. Led by neuroscience professor Alex Kolodkin, the group plans to deliver new institutional guidelines that will facilitate more open data sharing and best practices in experimental design.

Finally, as reviewers and editors of scientific journals, we must be mindful of the signals we send with what we choose to publish. If we can reduce the publication bias against studies with negative results—and NIH is working on this—it might tamp down the temptation to take liberties with the data.

These reforms must start now to ensure a solid foundation for the future of medicine and to preserve the trust and faith of the American public. After all, we scientists pride ourselves on being truth-seekers; the truth should not vary from seeker to seeker.