

Poorly-Designed Preclinical Studies Preventing Ethical Review of Experimental Drugs

Regulatory bodies like the US Food and Drug Administration (FDA), as well as ethics committees, rely on well-reported results from preclinical animal studies to decide whether a drug’s risk-benefit profile is favorable enough to advance it to first-in-human trials. However, insufficient reporting of these results can impede review boards’ ability to make risk-based decisions about which drugs should enter human clinical trials, according to researchers from Germany’s Hannover Medical School and McGill University in Canada.

Institutional review boards often use “investigator brochures” containing preclinical findings to determine whether the potential benefits of a prospective human trial would outweigh the risks to participants. But after reviewing over 100 investigator brochures approved by institutional review boards at multiple German medical centers between 2010 and 2016, the researchers found that the vast majority of the reports lacked key information.

“Our analysis shows that the vast majority of these documents lack the information needed to systematically appraise the strength of evidence supporting trials,” said Dr. Daniel Strech, professor of bioethics at Hannover Medical School. Strech and his colleagues published their findings in the journal PLOS Biology.

In total, the 109 investigator brochures contained findings from 708 preclinical efficacy studies. The research team found that over 95 percent of these studies failed to report bias-reducing design elements such as randomization procedures and blinded outcome assessment. What’s more, only 11 percent of the studies had been published in a peer-reviewed journal, and just 6 percent reported neutral findings in which the drug under investigation produced no measurable effect.

“With a median group size of 8 animals, these studies had limited ability to measure treatment effects precisely,” said Susanne Wieschowski, a postdoctoral fellow on Strech’s team. “Chance alone should have resulted in more studies being negative – the imbalance strongly suggests publication bias.”
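Wieschowski’s point can be made concrete with a quick power calculation. The sketch below is illustrative only and not taken from the study: it assumes each experiment amounts to a two-sample t-test with 8 animals per group at alpha = 0.05, and generously assumes a large true effect (Cohen’s d = 0.8); neither the test design nor the effect size comes from the brochures themselves. Even under these favorable assumptions, statistical power is only about 31 percent, so roughly two thirds of such studies should come up negative by chance, far more than the 6 percent of neutral results actually reported.

    # Illustrative sketch (assumed parameters, not data from the study):
    # a two-sample t-test with 8 animals per group, alpha = 0.05,
    # and an assumed large true effect of Cohen's d = 0.8.
    from statsmodels.stats.power import TTestIndPower

    power = TTestIndPower().power(effect_size=0.8, nobs1=8, alpha=0.05)
    print(f"Power with n = 8 per group: {power:.2f}")              # ~0.31
    print(f"Expected share of negative studies: {1 - power:.0%}")  # ~69%

If anything, d = 0.8 is optimistic for a preclinical drug effect; with smaller, more typical effect sizes, power falls further and the near-universal positive findings become even harder to explain without publication bias.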

Since the majority of preclinical studies reported positive findings, Strech and his team are concerned that investigators may not be adequately guarding against bias when designing studies and reporting their results. In light of this, they’re urging regulatory agencies to establish guidelines that standardize the conduct and reporting of preclinical efficacy studies.

“Why do regulatory agencies and other bodies involved in risk-benefit assessment for early human research accept the current situation?” asks Strech. “Why do they not complain about the lack of information needed to critically appraise the rigor of the preclinical efficacy studies and about the concerning lack of efficacy studies demonstrating no effects?”

The researchers do acknowledge that their study had some limitations. In particular, their sample of investigator brochures was not random, which they attribute to the difficulty of obtaining the documents.

“Future studies need to evaluate how improved reporting for preclinical data presented in [investigator brochures] influences risk–benefit analysis during ethical review,” wrote the study authors. “However, better reporting alone is unlikely to solve problems related to risk of bias in preclinical evidence.”