As interest in real-world evidence (RWE) increases, the conversation often focuses on the choice between randomized controlled trial (RCT) evidence “versus” evidence from real-world observational studies.
But the question is not whether to use RCTs versus RWE; it is which type of evidence can demonstrate causality for a given research question. An emerging movement challenges the perceived dichotomy between RCTs and RWE in favor of embracing each as a distinct tool to be invoked for specific purposes.
In this context, RCTs are no longer the sole source of evidence for establishing causality, and we can ascribe the word "causal" to observational research designed and carried out in ways that explicitly facilitate making a causal contrast.
Climbing the causal ladder
Consider the three-rung “causal ladder” introduced by Judea Pearl and Dana Mackenzie in The Book of Why.
According to the authors, the ladder’s bottom rung—“associations”—supports the generation of hypotheses by identifying correlations among data. The ladder’s second “interventions” rung enables testing whether, say, a drug causes a positive or negative effect; evaluation of interventions arguably advances the questions that drive the most frequent and impactful decisions in health care. The top rung, “counterfactuals,” allows scientists to dive deeper into why a particular effect emerged.
Viewed through this lens, RCTs and RWE are far from a dichotomy: when designed and implemented correctly, each can answer rung-two, interventional questions. "A sufficiently strong and accurate causal model can allow us to use rung one, real-world data, to answer rung two, interventional queries," Pearl and Mackenzie write.
Historically, regulators have relied on RCTs to build causal evidence, since randomization is a powerful technique for creating an unbiased contrast. But as the causal ladder illustrates, when a principled process is applied to fit-for-purpose real-world data (RWD), RWE can be equally effective, for example, in resolving queries that RCTs can't feasibly answer.
There are also instances where either RWE or an RCT could answer a question of interest. In the case of the CAROLINA (CARdiovascular Outcome study of LINAgliptin versus glimepiride in patients with type 2 diabetes) trial, for example, researchers from the RCT DUPLICATE project used RWD to predict the results of the then-ongoing cardiovascular outcomes trial, and saw near-perfect concordance between the RCT and RWE with respect to the outcome of major adverse cardiac events.
Additional applications materialize when RWD is tapped to resolve questions a randomized trial couldn’t otherwise answer—that is, cases where an RCT couldn’t feasibly offer evidence. Prospects include adding external control arms for single-arm trials, or augmenting existing clinical trials with new subgroups, new endpoints, and/or different comparators.
Target trials employ RWD for causal inference
One epidemiologist who has worked to prove that "causal" shouldn't be what he calls a "dirty word" in observational research is Miguel Hernán, M.D., Dr. P.H., Kolokotrones Professor of Biostatistics and Epidemiology at the Harvard T.H. Chan School of Public Health. At the inaugural Aetion Summit, "Evidence at the Apex," Dr. Hernán presented his work on "target trials": when designing a study, researchers seeking to use RWD should think as if they were designing a randomized trial, even if that trial would be impossible to run. By contemplating this "target trial," a design for the RWD study emerges that can support a causal conclusion.
Lengthy RCTs may represent missed opportunities to utilize RWD for causal inference, he stated. Instead, a target trial uses RWD to mimic all aspects of a hypothetical randomized trial: eligibility criteria, treatment strategies, starting and ending of follow-up, outcomes, causal contrast, and analysis plan. "Any causal analysis would then be an attempt to emulate that target trial," Dr. Hernán said. Strong confounding identification and adjustment, built on a principled design (the "causal model" noted by Pearl and Mackenzie), stand in place of randomization.
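The protocol elements listed above can be made concrete in a small sketch. The structure and field names below are illustrative only (they are not part of any actual target-trial tooling); the point is that an observational emulation must state every element of the hypothetical trial explicitly, since any unspecified element is an opening for bias.

```python
from dataclasses import dataclass

# Hypothetical sketch of a target-trial protocol: one field per element
# a randomized trial would specify. All names and values are illustrative.
@dataclass
class TargetTrialProtocol:
    eligibility_criteria: list   # who enters the emulated trial
    treatment_strategies: dict   # arm name -> strategy description
    followup_start: str          # the "time zero" definition
    followup_end: str            # censoring / end-of-follow-up rule
    outcome: str                 # the endpoint being contrasted
    causal_contrast: str         # e.g., intention-to-treat vs. per-protocol
    analysis_plan: str           # confounding adjustment, models, etc.

# An example emulation specification (values are invented for illustration).
protocol = TargetTrialProtocol(
    eligibility_criteria=["adults", "no prior use of the study drug"],
    treatment_strategies={"initiate": "start drug at baseline",
                          "comparator": "do not start drug"},
    followup_start="date of treatment initiation (time zero)",
    followup_end="earliest of outcome, death, or five years",
    outcome="incident diagnosis of interest",
    causal_contrast="observational analog of intention-to-treat",
    analysis_plan="adjust for measured baseline confounders",
)

def missing_elements(p: TargetTrialProtocol) -> list:
    """Return any protocol fields left unspecified; each gap risks bias."""
    return [name for name, value in vars(p).items() if not value]
```

A reviewer (or a platform) can then check `missing_elements(protocol)` before any analysis runs, enforcing the "explicit emulation" Dr. Hernán describes.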
The target trial attempts to bring reviewers and stakeholders into the causal world and to be explicit about causal inference, he explained, despite a generalized reluctance to use the word “causal.”
“Explicit emulation” is critical to a successful target trial, he stressed; omit any piece of the RCT design in the observational analysis—a non-specific designation of “time zero” of follow-up, for instance—and the results will be subject to bias.
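The "time zero" pitfall can be shown with a toy example. The records and numbers below are invented for illustration; the sketch contrasts a biased classification (grouping patients by whether they were *ever* treated, which silently credits pre-treatment, event-free "immortal" time to the treated arm) with one aligned at a well-defined time zero.

```python
# Invented records: days counted from cohort entry. None = never occurred.
patients = [
    {"id": 1, "treatment_start": 30, "event": 200},   # initiates on day 30
    {"id": 2, "treatment_start": None, "event": 60},  # never treated
    {"id": 3, "treatment_start": 90, "event": None},  # initiates, no event
]

def biased_classification(p):
    """WRONG: classify by eventual treatment status, so the days a patient
    survived *before* initiating are counted as treated follow-up."""
    return "treated" if p["treatment_start"] is not None else "untreated"

def aligned_classification(p, time_zero):
    """Aligned: at time zero, a patient is treated only if treatment has
    already begun; later initiators would enter at a new time zero."""
    start = p["treatment_start"]
    return "treated" if start is not None and start <= time_zero else "untreated"

# At time zero = day 0 nobody has initiated yet, so the biased approach
# misclassifies patients 1 and 3 and inflates apparent treated survival.
biased = [biased_classification(p) for p in patients]
aligned = [aligned_classification(p, time_zero=0) for p in patients]
```

This is one concrete way a non-specific time zero manufactures an association out of nothing, which is why explicit emulation treats the start of follow-up as a protocol element rather than an afterthought.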
Bias is exactly what occurred in two well-known "catastrophic failures" of observational analysis shared by Dr. Hernán. In both analyses, which he characterized as "the most widely misunderstood examples of the use of real-world data," the problem was not the quality of the RWD used but the manner in which the research question was framed and how the data were analyzed.
In one study, which reported that statins lowered cancer risk, the introduction of substantial biases in the design phase (essentially, violations of the target trial approach) resulted in strong discrepancies between the RWE and RCT findings.
To highlight the faulty science, Dr. Hernán explicitly emulated a target trial of statins and cancer using RWD from electronic health records from care providers in the United Kingdom. Results from his follow-up RWD analysis aligned more closely with those from the original RCT, finding no association between statins and cancer.
At the same time, he cautioned there will be some trials that RWD will be unable to emulate, such as in cases of high levels of unmeasured or unmeasurable confounding, or when no observable treatment variation exists, thus making a contrast impossible. “We are trying to vaccinate ourselves against the hype that we can do everything with RWD, because there are things we cannot do.”
Enabling causality with an analytics platform
Both a climb up Pearl and Mackenzie’s causal ladder and an exploration of Dr. Hernán’s target trial framework advance the use of RWE to demonstrate causality.
We developed the Aetion Evidence Platform® specifically to equip researchers with a system to run studies that can support causal conclusions. Once a researcher has identified the right questions, the platform provides guardrails for implementing a target trial approach, applying principled database epidemiology.
Of course, traceability of every step of the analysis is critical to establishing trust among regulators and other decision makers. To support causal claims, it is necessary to understand the data's origin, where it went, how it was processed, and how it became part of an RWE package.