According to McGill University researchers, cancer scientists often believe that preclinical study results can be replicated more often than they actually are. The findings, which may reflect optimism on the part of cancer scientists, were published in the journal PLOS Biology.
Nearly 200 participants, including both cancer experts and novices, were surveyed and asked to predict the reproducibility of six preclinical cancer studies conducted in mouse models. The studies were being repeated by the Reproducibility Project: Cancer Biology, an initiative that seeks to replicate results from important research papers published between 2010 and 2012.
On average, the researchers polled predicted a 75 percent chance that the studies' statistical significance would be successfully reproduced, and a 50 percent chance that the replications would generate the same effect size. However, none of the six studies replicated so far by the Reproducibility Project has shown the same results as the original research.
In the past decade, the biomedical sciences have been facing a so-called 'reproducibility crisis', whereby promising preclinical results fail to be confirmed in early clinical trials. Experts have pointed to commonly used techniques that may provide inaccurate evidence, inflating a drug candidate's apparent potential.
The fact that cancer researchers could not readily identify preclinical studies whose results would be difficult for an independent lab to confirm suggests a potential weakness in the scientific method. Irreproducible preclinical findings could lead cancer scientists down the wrong developmental path, wasting time and resources.
“If the research community believes a finding to be reliable, it might start building on that finding only to later discover the foundations are rotten,” said Dr. Jonathan Kimmelman, Associate Professor in the Biomedical Ethics Unit/Social Studies of Medicine at McGill. “If scientists suspect a claim to be spurious, they are more likely to test that claim directly before building on it.”
Kimmelman and his colleagues emphasize that they are not suggesting the cancer researchers had lost touch with their field; rather, forecasting is a challenging task, and some respondents even excelled at it. They suggest that additional training could help scientists interpret results more accurately, so that only truly promising compounds are investigated in further studies.
“This is the first study of its type, but it warrants further investigation to understand how scientists interpret major reports,” said Kimmelman. “I think there is probably good reason to think that some of the problems we have in science are not because people are sloppy at the bench, but because there is room for improvement in the way they interpret findings.”