By John Whitaker, Ph.D., senior VP of clinical innovation, and Amy Kissam, executive director of integrated clinical processes, INC Research
With drug development costs reaching between $800 million and $1.2 billion for each successful product, biopharmaceutical companies are trapped between mounting pressure to reduce development costs and the need to ensure better outcomes from clinical trials. Generics, lower approval rates, and global testing requirements are driving increases in development costs, yet outcomes remain uncertain in terms of both regulatory approval and market acceptance. In addition, large payers such as Medicare continue to exert downward pressure on prices. These shifting dynamics mean biopharms and CROs must employ a more strategic, end-to-end approach to clinical trials — one that begins at the design and planning stage, is data-driven, and features workflows that direct the right resources to the right tasks, without compromising overall quality.
For real change to occur, biopharms and their clinical research partners must make better use of data to plan and manage the delivery of their clinical trials. This is particularly important in the area of clinical trial monitoring, where the industry has begun to embrace a more strategic approach. The practice of risk-based monitoring is strategic in that it allocates resources across a study based on data criticality, patient safety, data integrity, protocol compliance, and impact to operational delivery. This approach starts with a risk assessment, which includes identification of core critical data that supports endpoints, patient safety, and the overall clinical development plan. This risk assessment then becomes the foundation for operational strategy and the initial monitoring plan. Throughout the conduct of the trial, monitoring effort is escalated and de-escalated based on key risk indicators (KRIs) and data trends.
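The escalation and de-escalation logic described above can be sketched as a simple rule. Everything in this sketch is a hypothetical illustration: the tier names, the thresholds, and the idea of a single normalized KRI score are assumptions, not part of the article or any guidance.

```python
# Hypothetical KRI-driven escalation rule. Tier names, thresholds, and the
# single normalized score are illustrative assumptions only.
def adjust_monitoring(current_tier, kri_score,
                      escalate_at=0.7, deescalate_at=0.3):
    """Move a site up or down a monitoring-intensity ladder based on a
    normalized KRI score in [0, 1]."""
    tiers = ["low", "medium", "high"]
    i = tiers.index(current_tier)
    if kri_score >= escalate_at and i < len(tiers) - 1:
        return tiers[i + 1]          # escalate monitoring effort
    if kri_score <= deescalate_at and i > 0:
        return tiers[i - 1]          # de-escalate monitoring effort
    return current_tier              # stay at the current intensity

print(adjust_monitoring("medium", 0.85))  # -> high
print(adjust_monitoring("medium", 0.10))  # -> low
```

In practice such a rule would be re-evaluated at each data refresh, so monitoring intensity tracks the site's current risk rather than its risk at study start.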
The industry, while mindful of its mission to develop better delivery models that improve quality and reduce cost, remains conservative in its adoption of new technologies and innovation in clinical trials. Sponsors still tend to tread cautiously due to the perception that new technology or process change may introduce additional risk to the regulatory or approval process. Even as regulators have more formally endorsed risk-based monitoring in recent years, industry adoption of these alternative monitoring approaches has been slow, and challenges remain in translating these concepts into effective clinical practice.
The Changing Perspective
The historical regulatory concern may be waning for some companies, based on recent publications from regulatory authorities. In 2011, the FDA and the European Medicines Agency (EMA) issued their respective positions advocating for risk-based monitoring of clinical trials, opening the door to a new industry paradigm. The FDA and EMA both acknowledge that traditional 100 percent source document verification (SDV)-based monitoring approaches are not always the most effective in ensuring adequate protection of patients and data integrity.
The FDA notes that no single approach to monitoring is appropriate or necessary for every clinical trial and recommends that each sponsor design a plan that is tailored to the specific patient protection and data integrity risks of the study. In most cases, such a risk-based plan would include a mix of centralized and on-site monitoring. In its guidance, delivered in a reflection paper, the EMA says better solutions are needed to ensure that limited trial resources are best targeted to address the most important issues and priorities, especially those associated with predictable or identifiable risks to patient safety and data quality. The agency also encourages the incorporation of quality tolerance limits for the clinical trial procedures involved. These measures can direct the oversight and monitoring of patient safety, data integrity, and protocol compliance, resulting in more need-focused monitoring strategies.
While adoption may be slow, there is a large amount of interest and growing momentum in the industry. Helping drive those efforts is the Clinical Trials Transformation Initiative (CTTI), a public-private partnership launched in 2008. A major part of CTTI’s mission is to identify monitoring practices that, through broad adoption, will increase the quality and efficiency of clinical trials. Several related collaborations are affiliated with CTTI. One example is TransCelerate BioPharma Inc., a nonprofit founded by 10 Big Pharma companies in September 2012. The alliance, which has since grown to 17 members, has launched five precompetitive initiatives, including a program focused on establishing a standard framework for risk-based monitoring. This includes common tools and triggers to identify risk and categorization criteria for low-, medium-, and high-risk trials. The initiative will also test its approach through pilot trials, with the validated framework to be vetted by regulators.
A Key Piece Of The Puzzle Is Data
In a more strategic data-monitoring approach, clinical researchers design a fit-for-purpose data verification model. Instead of reviewing trial data using the traditional 100 percent on-site SDV approach, researchers may opt for centralized data review where possible and implement a sampling plan for the on-site review of data. This sampling plan is designed prospectively based on the initial risk assessment and may be consistently applied across all sites in the study or varied based on identified risks at the region, country, and even site level. Additionally, the strategy may be designed to adjust as site risk changes throughout the progression of the trial and incorporate the escalation or de-escalation of review effort based on KRIs. This approach can result in more efficient data gathering and analysis, with the potential to significantly lower development costs for new drugs. Further, a holistic, well-designed monitoring approach, leveraging near real-time flow of data, can offer these savings while maintaining, or even improving, oversight of patient safety and data quality.
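A prospective on-site sampling plan of this kind can be sketched in a few lines. The risk tiers and SDV rates below are hypothetical placeholders, chosen only to show the mechanics of tying a sampling fraction to a site's current risk:

```python
import random

# Hypothetical SDV sampling rates by site risk tier. The values are
# illustrative, not drawn from the article or any guidance.
SDV_RATES = {"low": 0.10, "medium": 0.30, "high": 1.00}

def select_visits_for_sdv(visit_ids, risk_tier, seed=None):
    """Randomly sample the fraction of patient visits to source-verify
    on-site, based on the site's current risk tier."""
    rate = SDV_RATES[risk_tier]
    rng = random.Random(seed)   # seeded for a reproducible audit trail
    k = max(1, round(rate * len(visit_ids)))
    return sorted(rng.sample(visit_ids, k))

visits = list(range(1, 21))     # 20 visits at one site
print(select_visits_for_sdv(visits, "medium", seed=42))  # 6 of 20 visits
```

Because the tier is an input rather than a constant, the same plan supports varying the sampling rate by region, country, or site, and adjusting it as KRIs move a site between tiers.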
The building blocks of a strategic data-monitoring plan are targeted and triggered monitoring strategies. Targeted monitoring may involve various techniques, such as continuous, fixed, and random sampling methodologies. This strategy includes a reduced SDV approach that is aligned to critical data, patient visits, or selected patients, depending on the risk-benefit profile of the trial. Triggered monitoring supports an added level of risk management by predefining triggers for planned or additional on-site and off-site attention. These triggers are event-based around data volume and data quality and are determined by thresholds of cumulative work and/or quality.
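The event-based triggers described above might look like the following sketch. The trigger names and threshold values are invented for illustration; a real monitoring plan would define its own triggers during the risk assessment:

```python
# Illustrative triggered-monitoring check; trigger names and threshold
# values are hypothetical.
TRIGGERS = {
    "open_queries": 25,        # cumulative unresolved data queries
    "crf_pages_entered": 500,  # cumulative data volume since last visit
    "query_rate": 0.05,        # queries per data point (a quality signal)
}

def fired_triggers(site_metrics):
    """Return the predefined triggers a site has met or exceeded,
    signalling that additional on-site or off-site attention is due."""
    return [name for name, threshold in TRIGGERS.items()
            if site_metrics.get(name, 0) >= threshold]

site = {"open_queries": 31, "crf_pages_entered": 410, "query_rate": 0.02}
print(fired_triggers(site))  # only the query backlog exceeds its threshold
```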
To maximize the potential of targeted and triggered monitoring, the role of centralized monitoring should be leveraged. Centralized monitoring is ideally positioned to coordinate targeted and triggered strategies. Many organizations limit the functionality of centralized monitoring to an administrative role that coordinates on-site activities. However, the potential contribution of this group goes far beyond that administrative role. There is evidence that centralized monitoring can be more effective than on-site monitoring in detecting data anomalies, such as fraud and other nonrandom data distributions. In addition, electronic data capture (EDC) systems are making it possible to implement centralized monitoring methods that enable decreased reliance on on-site monitoring. The availability of data in aggregate form gives central monitors visibility into potential risks or trends, which may warrant additional scrutiny off-site or on-site. To realize these potential benefits, it is important that centralized monitoring teams are multidisciplinary. The ideal team will have clinical monitoring experience coupled with data analysis skills. These teams should also possess strong medical and safety surveillance perspectives.
On the horizon is the promise of using statistical methods to augment existing monitoring strategies. The concept is to use the reported data to guide the review and verification process: statistical methods identify inconsistent data points or patterns at a site, and those signals are then used to focus additional data review and investigation. These methods can also surface other signals, such as trends in the values themselves or in attributes of the data, like the time of data collection. Data can be analyzed to determine whether there is a directional bias or inconsistent variability (too much or too little) at a site, within a patient, or across an entire trial. The benefit of this approach is a further reduction in the amount of data clinical researchers need to review. They can plan to review less data initially, knowing that the statistical methods will provide a safety net, triggering additional guided data investigations as needed.
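As a minimal sketch of such a statistical screen, assuming per-site lists of reported values and a simple z-score rule (a production system would use more robust statistics, and the data below are fabricated for illustration):

```python
import statistics

def flag_biased_sites(site_values, z_threshold=2.0):
    """Flag sites whose mean reported value sits more than z_threshold
    standard deviations from the trial-wide mean of site means, a crude
    screen for directional bias at a site."""
    means = {s: statistics.mean(v) for s, v in site_values.items()}
    grand = statistics.mean(means.values())
    spread = statistics.stdev(means.values())
    return sorted(s for s, m in means.items()
                  if abs(m - grand) > z_threshold * spread)

# Seven sites reporting around 120, one site drifting markedly high:
data = {f"S0{i}": [119, 120, 121] for i in range(1, 8)}
data["S08"] = [139, 140, 141]
print(flag_biased_sites(data))  # -> ['S08']
```

The same pattern extends to the other signals the text mentions, for example comparing within-site variability against the trial-wide norm to catch values that vary too little as well as too much.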
Early Planning Is Pivotal
Before deciding on the optimal monitoring approach for a trial, establishing a strong operational strategy is essential. Beginning the process early in development will allow for a more holistic approach to streamlining the protocol and risk identification. Building the operational strategy starts with the biopharm and CRO aligning their therapeutic expertise and leveraging that knowledge with historical data to clearly define potential risks and identify critical core data. Clinical teams should appropriately identify risks that are related to patient safety, potential barriers to regulatory approval, and risks to the delivery of quality data on time or within budget. These risks must be identified and fully vetted by a cross-functional team, with particular attention paid to three main categories: scientific and medical risks, regulatory risks, and operational risks.
Once trial risks have been identified, the goal is to eliminate, reduce, or mitigate them as much as possible. If a risk cannot be completely eliminated, biopharms and CROs must ensure that they clearly document the risk mitigation strategy, including which data, tools, or systems will be used to signal when that risk is about to occur and what type of remediation will be necessary. It also is important to isolate those trial procedures or activities that are considered essential to supporting the evidence needed for product approval. This will enable more informed discussions about potential areas where there may be excessive procedures in place that could expose patients to risk.
Clinical Trial Execution and Control
After a trial’s operational strategy has been established, the focus shifts to the delivery of the strategic data monitoring plan. Monitoring activities should focus on the critical measurements identified in the protocol and on preventing important and likely sources of error in their collection and reporting. Biopharms and CROs must put systems in place that provide the data transparency needed to support a strategic data monitoring plan — one that may combine a centralized approach with targeted or triggered strategies.
The ability to use tools that aggregate large datasets is critical and enables a more risk-adaptive monitoring approach to be adopted across a trial. Potential metrics include site-to-site differences in patient recruitment, reported serious adverse events, and reports of noncompliance. Simply collecting large amounts of data, however, does not mean statisticians will be able to identify unfavorable trends, potential risks, or safety issues. With the many data repositories that already exist, the challenge is integrating data streams into reliable intelligence that allows biopharms and CROs to make better and more timely decisions. It comes down to how well disparate data can be leveraged to make the right data available at the right time to support planning and operational delivery of clinical trials.
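A minimal sketch of that kind of aggregation, assuming the disparate feeds have already been merged into a single per-site event stream (the site codes and event types are hypothetical):

```python
from collections import Counter, defaultdict

# Hypothetical event stream merged from separate EDC, safety, and CTMS
# feeds; the event names are illustrative.
events = [
    ("S01", "enrolled"), ("S01", "enrolled"), ("S01", "sae"),
    ("S02", "enrolled"), ("S02", "noncompliance"), ("S02", "noncompliance"),
]

def site_summary(events):
    """Roll (site, event_type) records into one per-site count table so
    recruitment, SAE, and noncompliance rates can be compared across sites."""
    summary = defaultdict(Counter)
    for site, kind in events:
        summary[site][kind] += 1
    return {site: dict(counts) for site, counts in summary.items()}

print(site_summary(events))
```

A table like this is the raw material for the KRIs and triggers discussed earlier; the hard part in practice is the upstream integration that produces one trustworthy stream from many systems.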