There’s a lot of buzz around the potential impact of real-world evidence (RWE) in health care, yet skepticism remains: If RWE is broadly accepted and applied, will manufacturers be incentivized to de-emphasize their focus on generating randomized controlled trial (RCT) data? Will the methodologies adequately control for confounding? Will manufacturers cherry-pick results favorable to their products?
The first step in addressing these concerns is deepening our understanding of when real-world data (RWD) should and should not be used to answer questions about drug effectiveness and value.
RWD is not always the best resource for answering every research question. Context matters, particularly in terms of market access. To determine whether RWE will be helpful, consider the following factors:
Fit for purpose
The first step is to ensure the appropriateness of the data: Is it fit for purpose? Is it robust enough to answer the question of interest? Scientifically valid and accurate RWE studies rely on health care data that include exposure, outcomes, and the covariates decision-makers need in order to have confidence in the results of the analysis. For example, in oncology, the stage of cancer is essential information for determining the course of treatment and whether that treatment is successful. Administrative claims data sets often lack information on cancer staging, so electronic health record (EHR) data is typically more appropriate for RWE oncology studies. Confirming that data are fit for purpose builds confidence and trust in the generated evidence among all stakeholders.
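To make the idea concrete, here is a minimal, purely illustrative sketch of what a first-pass fit-for-purpose screen might look like in practice. The column names (treatment_exposure, clinical_outcome, cancer_stage), the example data sets, and the is_fit_for_purpose helper are all hypothetical, and a real assessment would go far beyond confirming that the required variables are present and populated.

```python
# Hypothetical sketch: a first-pass check that a data set contains the
# exposure, outcome, and key covariates a research question requires.
import pandas as pd

# Variables a decision-maker might require for an RWE oncology study
# (illustrative only).
REQUIRED_VARIABLES = {"treatment_exposure", "clinical_outcome", "cancer_stage"}

def is_fit_for_purpose(data: pd.DataFrame, required=REQUIRED_VARIABLES) -> bool:
    """Return True if every required variable exists and has recorded values."""
    missing = required - set(data.columns)
    if missing:
        print(f"Not fit for purpose: missing variables {sorted(missing)}")
        return False
    empty = [v for v in required if data[v].isna().all()]
    if empty:
        print(f"Not fit for purpose: no recorded values for {sorted(empty)}")
        return False
    return True

# An administrative claims extract often lacks cancer staging,
# whereas an EHR extract typically records it.
claims = pd.DataFrame({"treatment_exposure": [1, 0], "clinical_outcome": [0, 1]})
ehr = pd.DataFrame({"treatment_exposure": [1, 0],
                    "clinical_outcome": [0, 1],
                    "cancer_stage": ["II", "III"]})

print(is_fit_for_purpose(claims))  # False: no cancer_stage variable
print(is_fit_for_purpose(ehr))     # True
```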
Assessing impact
How will stakeholders rely on the evidence generated from fit-for-purpose data in their decision-making? The answer determines the methodological rigor required to generate evidence for a specific question.
As the Institute for Clinical and Economic Review (ICER) and the Office of Health Economic Research outline in Understanding the Context, Selecting the Standards, contextual considerations shape the consequences of adopting a new drug or an expanded label and displacing the current standard of care.
For example, if the evidence will be used to demonstrate superior comparative effectiveness versus the current standard of care, then the RWE must pass a high evidentiary bar. A claim of greater effectiveness invites a high degree of scrutiny and carries a significant budgetary impact, especially if the drug in question comes with a higher price tag. In contrast, if RWE is used to demonstrate utilization, or when the findings have strong face validity, as when RWE confirms existing RCT findings, the evidentiary bar will be lower.
To illustrate how context shapes the evidentiary bar, consider the Haute Autorité de Santé’s (HAS) reassessment of etravirine for HIV, which used RWE to confirm the durability of the RCT results. While HAS noted the limitations of observational studies, the agency was willing to accept the evidence base because it was consistent with previous findings. By contrast, when non-randomized observational data is used as the main source of effectiveness evidence, pCODR/CADTH spends considerable time analyzing and discussing whether that evidence can be used at all. Deciding whether to reimburse and confirming effectiveness during a reassessment are different questions with different evidentiary requirements, and they demand different levels of scientific rigor.
The bottom line: collaboration
Knowing how evidence will be used in decision-making, and what the ultimate impact of that decision will be on the health care system, requires collaboration among all stakeholders.
Regulators, payers, and health technology assessment agencies should be engaged early in a transparent process of designing an RWD analysis, to identify the uses and the impact of the RWE to be generated in each of their spheres. Early and frequent collaboration builds trust, speeding the adoption of any findings.