By Deborah Lubeck
The outcomes and costs associated with medical innovations are of increasing interest. Randomized clinical trials are the standard for determining whether a therapy is safe and efficacious, but they may not reflect real-world treatment effectiveness. The evidence gap between clinical trials and studies of effectiveness is gaining importance.
What is Comparative Effectiveness Research?
Comparative Effectiveness Research (CER) is the comparison of two or more healthcare interventions in real-world practice for persons with diverse clinical characteristics. The comparison can be among several drugs or a medical versus a behavioral or surgical intervention. Randomized clinical trial designs developed for narrower purposes may not be applicable for CER.
Pragmatic Clinical Trials
Pragmatic trials directly compare a marketed pharmaceutical, device, or other intervention to established therapies. Participants are first randomized to one of the alternative therapies and then followed for clinical and functional outcomes whether they remain on treatment, switch to another therapy, or discontinue treatment altogether. Approximately 90% of pragmatic trials focus on a drug versus a procedure or other intervention, not drug comparisons. While these studies may lead to definitive evidence on the comparators, the protocol may preclude evaluation of therapies approved during study execution.
Prospective Observational Studies
Prospective observational studies enroll participants who meet broad inclusion criteria, such as a new diagnosis or treatment, irrespective of comorbid health conditions or past treatment. Sequential measurement of clinical and functional outcomes is an essential component of these studies. Longitudinal data and a heterogeneous patient cohort allow for the examination of disease course, treatment patterns, patient-reported outcomes, and resource utilization. These data may also fill gaps in clinical knowledge, illuminate the natural history of a disease, or track new therapies as they evolve in clinical practice.
Retrospective Database Analyses
Retrospective database analyses quantify the burden of a disease, evaluate treatment patterns, and compare outcomes for marketed products using administrative data. They may also be conducted to supplement findings from randomized clinical trials for postmarketing commitments. While prospective trials provide considerable information on treatment behavior and patterns of care, much of the same information can be gathered from chart reviews. With electronic medical records, it is increasingly easy to gain access to de-identified, accurate, and complete information that supplements the information in administrative databases.
Decision Analytic Models
CER relies on information obtained from pragmatic trials, observational studies, and other data sources. It is often useful to combine the data using decision analytic models, which may or may not include incorporation of costs. At their best, decision models clarify relationships between therapeutic processes, costs, and outcomes in order to highlight the relative trade-offs between alternatives. However, if the methods and assumptions behind the models are not transparent, trade-offs between alternatives may not be clear.
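The trade-off logic behind a decision analytic model can be illustrated with a minimal sketch. The two hypothetical therapies, their response probabilities, costs, and utility values below are all illustrative assumptions, not data from any actual study; a real model would draw these inputs from the trials and observational studies described above.

```python
# Minimal sketch of a decision analytic model comparing two hypothetical
# therapies by expected cost and expected utility. All probabilities,
# costs, and utilities are illustrative assumptions.

def expected_value(branches):
    """Each branch is (probability, cost, utility) for a mutually
    exclusive outcome; returns (expected cost, expected utility)."""
    exp_cost = sum(p * c for p, c, u in branches)
    exp_utility = sum(p * u for p, c, u in branches)
    return exp_cost, exp_utility

# Hypothetical Therapy A: 70% respond (lower downstream cost, higher utility)
therapy_a = [(0.70, 10_000, 0.90), (0.30, 25_000, 0.60)]
# Hypothetical Therapy B: 55% respond, but lower cost when treatment fails
therapy_b = [(0.55, 6_000, 0.90), (0.45, 20_000, 0.60)]

cost_a, util_a = expected_value(therapy_a)
cost_b, util_b = expected_value(therapy_b)

# Incremental cost-effectiveness ratio (ICER): extra cost per unit of
# utility gained by choosing A over B.
icer = (cost_a - cost_b) / (util_a - util_b)
```

The point of such a model is exactly the transparency issue noted above: every probability, cost, and utility assumption is explicit and can be varied in sensitivity analyses.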
Systematic Literature Reviews
Another option for comparing treatments is rigorous review and synthesis of existing studies. A systematic literature review begins with concise research questions, followed by rigorous methods to identify, select, and appraise relevant research. When the results of primary studies are summarized but not statistically combined, the review is called a "qualitative systematic review." A quantitative systematic review uses statistical methods to integrate the results of independent studies.
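One common statistical method for integrating independent studies is fixed-effect inverse-variance pooling, sketched below. The effect estimates and standard errors are illustrative assumptions, not results from real studies; a real quantitative review would also assess heterogeneity before choosing a fixed- or random-effects model.

```python
import math

# Minimal sketch of fixed-effect inverse-variance pooling, one core
# calculation in a quantitative systematic review. The study inputs
# below are illustrative assumptions.

studies = [
    # (effect estimate, standard error) from each independent study
    (0.30, 0.10),
    (0.10, 0.15),
    (0.25, 0.12),
]

# Weight each study by the inverse of its variance, so more precise
# studies contribute more to the pooled estimate.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Approximate 95% confidence interval for the pooled effect
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
```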
Considerations for Selecting CER Designs
There are many reasons for selecting a particular CER design.
Deborah Lubeck, Ph.D., is VP of ICON Late Phase Services. She has been involved in outcomes research for 25 years and has coauthored over 160 peer-reviewed publications. Her research focuses on the design and analysis of observational studies. Prior to joining ICON, Deborah was on the research faculty at Stanford University and the University of California, San Francisco.