
Reducing Human Error In Clinical Trials: Improve Data Entry For More Robust Clinical Trial Data

The clinical trials industry has changed significantly in the past decade, with sponsors favoring increasingly large and complex studies. There was a time when most clinical research was conducted at a single study site by a team of ever-present clinical professionals. Today, multi-center, multinational clinical trials are common, with complex treatment protocols, large groups of clinical staff and huge data sets complicating the clinical trials process.

With years of development time and millions in R&D already invested in an investigational new treatment before it ever reaches the clinical trials stage, there is incredible pressure on clinical investigators to produce quality clinical trial data once the study is complete. Consider this alongside the US Food and Drug Administration's (FDA) continuous demand for more compelling and robust clinical trial data, and it becomes clear why accurate clinical trial data is of the utmost importance for sponsor companies.

With the increase in personnel working on any given clinical trial, the risk of human error in data entry inevitably grows. In many cases, these human errors are entirely preventable, yet they can end up costing the sponsor company millions of dollars and months of extended trial time. Error-free data is essential to reach accurate conclusions about a treatment's safety and efficacy, and is a necessary requirement before investigational drugs can move through to late-stage clinical trials.

So what can a sponsor company do to mitigate the risk of human error and optimize their clinical data management practices across all data-collecting sites? Here’s a look at some of the most common sources of human error during the course of a clinical trial, and how sponsors and contract research organizations (CROs) can implement proactive measures aimed at reducing these risks, and improving the quality of the resulting clinical trial data.

Inconsistent Data Entry

As the number of study sites involved in a multi-center clinical trial increases, so too does the number of data entry personnel needed. Data entry professionals may be localized to each study site, or they may all work from a central location – usually the lead study site.

In an effort to reduce costs and boost productivity, some clinical trials have opted for the centralized entry of data into the clinical trial management software. While efficient, this data entry style can make it harder to resolve data-related issues and relies on the timely submission of data from multiple separate study sites.

Though it requires more on-site personnel, local data entry does have its advantages. Shortening the time between data collection and data entry reduces the risk of errors once the data is submitted. In addition, resolving data-related errors – such as omissions and illegible writing – is relatively quick and easy as all staff are on-site.

There are two common types of human error in data entry: transcription and transposition errors. Transcription errors occur when the wrong data is entered into the clinical data management system; typos and duplicate entries are both examples of this type of error. Transposition errors occur when the digits in a sequence are accidentally rearranged during data entry – for example, a blood pressure of 120 keyed in as 210.

One way to mitigate the risk of these errors is to implement a double entry process, whereby two data entry professionals independently enter the same information. A verification program is then used to identify any differences between the two files. Where discrepancies exist, they are flagged and corrected against the source documents, and the process is repeated until the two files match.
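The comparison step at the heart of double entry can be sketched in a few lines. This is a minimal illustration, not any particular vendor's verification program, and the field names (`subject_id`, `systolic_bp`, `weight_kg`) are hypothetical:

```python
def find_discrepancies(entry_a: dict, entry_b: dict) -> dict:
    """Compare two independent entries of the same record and return
    the fields where they disagree, with both conflicting values."""
    fields = entry_a.keys() | entry_b.keys()
    return {
        f: (entry_a.get(f), entry_b.get(f))
        for f in fields
        if entry_a.get(f) != entry_b.get(f)
    }

# Two operators key in the same case report form (hypothetical fields):
operator_1 = {"subject_id": "1042", "systolic_bp": "120", "weight_kg": "71.8"}
operator_2 = {"subject_id": "1042", "systolic_bp": "210", "weight_kg": "71.8"}

# The transposition error in systolic_bp is flagged for review:
print(find_discrepancies(operator_1, operator_2))
# {'systolic_bp': ('120', '210')}
```

In practice the flagged fields would be resolved against the original source documents and the comparison rerun until the result is empty.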

As any type of manual data entry has an inherent error rate, it’s important that clinical trials sites minimize this process as much as possible. Electronic data capture has become an increasingly common practice whereby data is collected using an electronic system, and subsequently transmitted to the main database. Eliminating the need for paper record keeping and transcription of this data into the system effectively removes one of the most error-prone processes in clinical trials data management.

Missing Signatures And Authorizations

Missing signatures in the trial master file can spell disaster for even the most well-controlled clinical trial. Whether the omission is the result of inadequate training of new staff, or a simple oversight on the part of the clinical trials professional, a missing signature – especially one documenting the informed consent of a clinical trial participant – seriously compromises clinical trial data integrity.

Since the trial master file is open to regulatory review, it’s essential that all the documentation is organized and complete in anticipation of an FDA inspection. Once again, electronic data capture can help ensure that all necessary fields – including electronic signature fields – are complete before the data is submitted. What’s more, many clinical data management programs are able to spot entries containing potential errors, prompting data entry staff to review them.
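The completeness check described above can be illustrated with a short sketch: a record is held back from submission until every required field, including signature fields, is filled in. The required-field list here is hypothetical and would in practice come from the study's data management plan:

```python
# Hypothetical required fields for one case report form:
REQUIRED_FIELDS = [
    "subject_id",
    "visit_date",
    "investigator_signature",
    "informed_consent_signature",
]

def missing_required(record: dict) -> list:
    """Return the required fields that are absent or blank, so the
    record can be held back until they are completed."""
    return [
        f for f in REQUIRED_FIELDS
        if not str(record.get(f, "")).strip()
    ]

record = {
    "subject_id": "1042",
    "visit_date": "2024-03-01",
    "investigator_signature": "Dr. A. Smith",
    "informed_consent_signature": "",  # omitted by oversight
}

print(missing_required(record))
# ['informed_consent_signature']
```

A real electronic data capture system would surface this as a prompt to the data entry professional rather than a silent rejection, but the gate is the same: no submission while the list is non-empty.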

Optimizing Clinical Trials Data Collection

When planning for a clinical trial and designing the study protocol, it’s tempting for sponsors and CROs to collect as much study data as possible, regardless of whether it is directly related to the outcome of the trial. As excessive data collection can be a burden on participants and can magnify the risk of human error, study investigators should be prudent when selecting the types of data to be collected.

Once these datasets have been established, study investigators should follow these five best practices for maintaining the integrity of clinical trial data:

  1. Capture only the data required as per the study protocol. Once it has been established, the study protocol should not be deviated from unless it’s absolutely necessary.
  2. Ensure standardized data collection. All study coordinators across all clinical sites should collect data from each participant using the same methods, techniques, forms and formats.
  3. Organize data capture to allow for ease of data entry and analysis. All data points should be recorded at the time the measurement is taken to avoid errors and streamline further data manipulation.
  4. Allow ample time for data entry and review. Data entry can be one of the most frenzied and rushed aspects of the clinical trial as sponsors are eager to extract some meaning from the trial data. With redundant data entry procedures in place, sponsors can ensure that the data and analyses are robust.
  5. Avoid unnecessary data collection. These additional data points can complicate analyses and prolong the study period.
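Best practices 1–3 above imply a rule set derived from the study protocol: only protocol-defined fields are accepted, and each value is checked at the time of capture. A minimal sketch of such entry-time validation, with entirely hypothetical fields and limits:

```python
# Hypothetical validation rules derived from the study protocol:
# each field has an expected type and an allowed range or value set.
PROTOCOL_RULES = {
    "systolic_bp": {"type": float, "min": 50, "max": 250},
    "visit": {"type": str, "allowed": {"screening", "baseline", "week_4"}},
}

def validate_point(field: str, value):
    """Check one data point against the protocol rules at the time of
    capture; return an error message, or None if the value passes."""
    rule = PROTOCOL_RULES.get(field)
    if rule is None:
        # Best practices 1 and 5: reject data the protocol doesn't call for.
        return f"{field}: not defined in the protocol"
    try:
        value = rule["type"](value)
    except (TypeError, ValueError):
        return f"{field}: expected {rule['type'].__name__}"
    if "allowed" in rule and value not in rule["allowed"]:
        return f"{field}: not an allowed value"
    if "min" in rule and value < rule["min"]:
        return f"{field}: below protocol minimum"
    if "max" in rule and value > rule["max"]:
        return f"{field}: above protocol maximum"
    return None

print(validate_point("systolic_bp", "820"))  # flags an out-of-range value
print(validate_point("heart_rate", 70))      # flags a field the protocol omits
```

Checking each point as it is recorded, rather than during a later cleaning pass, is what makes best practice 3 pay off: the person who took the measurement is still present to correct it.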

Whether you’re employing a state-of-the-art electronic data capture protocol, or you’re still using paper data collection, perhaps the most important way to prevent human error is to implement a centrally-operated training program for all site personnel. Clinical trial coordinators should ensure that all clinical staff understand the most important aspects of the study protocol including eligibility for participation as well as the primary and secondary outcomes.

Certification programs should also be established to ensure consistent training of all staff involved in data collection and data entry. Along with basic protocols, this training program should include information on the importance of data security and how errors in clinical trial data can be corrected.

Investing both time and resources in study staff training and skills assessment before the clinical trial commences can help mitigate the risk of data-related human errors during crucial study periods. It’s important that this training is provided to all new and replacement personnel, especially if they have joined the trial in progress.

As the quality of data collected during a clinical trial is the most valuable and predictive measure of a study’s success, it’s imperative that the data is preserved in an accurate, organized and complete state. Minimizing data-related delays will save time and money and set well-performing CROs apart.