Admin · May 10, 2019 · 9 min read

Methods, approaches, and predictions: Dr. Jessica Franklin discusses RCT DUPLICATE

Known as a “methods wiz” among her collaborators, Jessica Franklin, Ph.D., dedicates her ingenuity to developing and applying statistical methods to the study of comparative effectiveness and adverse effects of drugs, consequences of drug policy, and drug utilization. She is an assistant professor of medicine at Harvard Medical School and a biostatistician in the Division of Pharmacoepidemiology and Pharmacoeconomics at Brigham and Women’s Hospital. There, she leads the RCT DUPLICATE Project, a landmark study demonstrating the study designs, methods, and value of real-world evidence (RWE) in the approval and regulation of drugs.

A large, comprehensive comparison of advanced observational real-world data (RWD) study approaches and randomized controlled trials (RCTs), RCT DUPLICATE will provide guidance on how to optimize the performance of causal inference methods applied to RWD for the study of comparative effectiveness and safety of medications. Mandated by the 21st Century Cures Act and funded by the FDA’s Center for Drug Evaluation and Research, the project expects interim results by mid-2019 and full results by the end of 2020.

We ask Dr. Franklin about the purpose and the promise of the project.

Q: What is the RCT DUPLICATE Project, and what are its goals?
A: The project launched in late 2017 when the Food and Drug Administration contracted with us to replicate published randomized controlled trials using real-world data sets. Their intent is to find out if the use of real-world evidence would have led to the same regulatory decisions. They want to know: If we had evidence from a real-world data study instead of an RCT, would we have come to the same regulatory conclusion?

Q: What first led you to using real-world data to attempt to replicate randomized controlled trials?
A: I was looking at a meta-analysis that examined a set of clinical questions. Each had a non-randomized study followed by a randomized trial that was published later. I thought this was a really clever way to compare observational studies to randomized trials while avoiding publication bias: if you’re doing a non-randomized study of the same clinical question as a published randomized controlled trial and you get a different answer, you might not believe your results and not want to publish them. By looking only at scenarios where the observational study was published first, before there was any RCT evidence on the question, we avoid this problem. But because this meta-analysis only looked at published literature, the observational studies were not designed specifically to match the RCT that they were being compared against, and not all of them were of high quality. Finding a better way to compare findings from non-randomized studies to randomized trials is how I initially got interested in this problem.

Q: What do you expect to learn by comparing a non-randomized study to a randomized trial?
A: It’s really the only way we have to assess the success of the entire research process—from identifying the clinical question of interest to selecting an appropriate data source to answer it, selecting an epidemiologic study design and all of the design parameters, and then selecting the analyses that produce our final result.

Researchers are good at studying each one of those parts in isolation from the others. But the final analytic result depends on this full stream of processes. So, we need to assess the success of the entire research process to build an empirical evidence base that we can point to and say: this is how we create valid studies, and here is the data that supports it. These replication trials can help us—and FDA—learn which methods work better with real data and real clinical questions, and confirm which design and analytic choices make our database studies interpretable for decision-making. They can help us learn which questions can be answered with real-world data. And, similarly, which questions should not be answered with real-world data, and when we need to run a randomized trial to answer the question validly.

Q: With hundreds of thousands of trials published each year, how did you select those to replicate?
A: We began the search with two databases of approvals—one of initial medication approvals and the other of supplemental approvals—that we developed in the Division of Pharmacoepidemiology and Pharmacoeconomics at Brigham and Women’s Hospital. In those databases, we looked at the pivotal trials that supported the approval decision. FDA staff also suggested trials they thought would be good candidates. That gave us a set of trials that achieved their primary endpoint: either showing superiority to placebo or another treatment, or non-inferiority to another treatment.

Trials may be replicable in some data sources but not in others. So, we selected a set of candidate randomized controlled trials that we believe are replicable in our data sources. We’re working with three health insurance claims datasets: Optum Clinformatics, IBM MarketScan, and Medicare data. These claims datasets don’t capture symptomatic outcomes like rheumatoid arthritis symptom severity scales, so we’re not able to replicate trials whose primary outcome is rheumatological, because those outcomes aren’t measured in our datasets. What we do measure well are hard clinical endpoints: the events that send patients to the hospital, such as hip fracture, stroke, heart failure, or asthma exacerbation. If one of these is the outcome in an RCT, then that’s a trial we can probably replicate.

We also wanted to include some trials in our set where the trial was negative: it failed to achieve its primary endpoint and therefore will not contribute to an approval. To find negative trials, we searched for published studies of medications that had negative findings and that were studying an outcome we thought would be replicable.

Therapeutic areas of the trials we selected fall into four buckets: diabetic medications, cardiovascular medications (anti-coagulants, anti-platelets, anti-hypertensives, heart failure medications, statins, anti-arrhythmics), anti-osteoporosis medications, and asthma and COPD. All have clinical outcomes that we can measure well in our claims databases.

Q: What ensures that your RWD analyses are truly regulatory-grade?
A: The FDA has more than 50 years of experience relying on the results of randomized trials to inform their regulatory decision-making. Now they have a mandate from Congress, through the 21st Century Cures Act, to come up with guidance for how real-world data can be used for regulatory decision-making. While they’ve been using real-world data for safety through the Sentinel program, they have less experience using these data to support the effectiveness of a medication.
So, just as this project is providing empirical data on how well real-world data can match the results of RCTs, it’s also providing FDA with a process model for accepting real-world data evidence to support regulatory decisions.

For that reason, we’re meticulous about the process of implementation. For example, before we ever look at outcomes or comparative results, we’ve finalized the design, specified our primary analysis, and publicly registered our full protocol, where it’s given a time stamp.

Only after the protocol has been registered do we implement our study, do our analyses, and document all findings. This process ensures that the decision to move forward with a specific design and analysis plan depends only on its scientific validity, not on whether or not we like the end results.

The question then is how do we share our study and results with FDA? When a sponsor submits randomized trial evidence to support a regulatory approval, they submit their entire randomized controlled trial dataset to FDA. Then FDA statisticians and reviewers work with the data to adjudicate every outcome, evaluate the details of the analyses, and produce alternative analyses. FDA is accustomed to working with the data at that level, and we need to enable them to work with real-world data at the same level.

That’s one of the reasons we’re using the Aetion Evidence Platform™ in this project. The platform will allow FDA to work with the data without needing to transfer the data directly to FDA. When we share a replication with FDA, they can go into the platform and look around. They can see what we did, what we didn’t do, and change whatever they want: the follow-up window, the definition of the outcome, and so on. Perhaps they disagree with an aspect of the design, like the follow-up time. They can go in and change that to see if the result changes in a meaningful way.

This is important because if FDA can change things, they can see whether the results hold up. And if the results hold up, then FDA can trust them. If they change a small detail and the result completely changes, they know the result is not robust.
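The robustness check described here, rerunning the analysis under alternative design choices such as a different follow-up window and seeing whether the conclusion survives, can be sketched in a few lines. This is an illustrative sketch with hypothetical event counts and window lengths, not the platform’s actual mechanics:

```python
import math

def risk_ratio(events_t, n_t, events_c, n_c):
    """Risk ratio (treatment vs. comparator) with a 95% CI from
    the standard log-risk-ratio variance approximation."""
    rr = (events_t / n_t) / (events_c / n_c)
    se = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    half = 1.96 * se
    return rr, math.exp(math.log(rr) - half), math.exp(math.log(rr) + half)

def robust_across_windows(results):
    """True when every follow-up window puts the point estimate on the
    same side of the null (all RR < 1, or all RR > 1)."""
    signs = {rr < 1.0 for rr, _, _ in results}
    return len(signs) == 1

# Hypothetical event counts (treated events, treated N, comparator
# events, comparator N) under three follow-up windows, in days.
windows = {
    180: (40, 1000, 60, 1000),
    365: (70, 1000, 100, 1000),
    730: (110, 1000, 150, 1000),
}
results = [risk_ratio(*counts) for counts in windows.values()]
print(robust_across_windows(results))  # prints True: every window favors treatment
```

If shortening or lengthening the window flips the direction of the estimated effect, that is exactly the kind of fragility a reviewer changing "a small detail" would want to surface.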

Q: Your team has already selected, and is attempting to replicate, over 30 completed trials with real-world data. Why has FDA funded this expansion of the project to seven more ongoing trials?
A: If we only replicate RCT results that have already been published, there may be continuing concerns about publication bias: that any non-randomized studies that conflicted with the RCT results may have been suppressed. By extending our replications to the prediction of seven ongoing randomized controlled trials, however, we’re choosing an analysis and a design that we think will provide valid results matching the RCT before we see the results of the RCT. And we will publish our non-randomized, real-world data studies first, before the randomized trial results are released. That will show that our findings were informed only by study power and patient characteristics, not by the RCT findings.

That’s important because we’re thinking about how our results apply to future clinical questions in which a real-world data study may be done without a corresponding randomized controlled trial. In those situations, we need to be able to say: This is the design and analysis that’s going to give us valid results and with which we can proceed with confidence.

Q: Can you tell us more about the registered RCT DUPLICATE pilot study? Are there any preliminary results you can share?
A: This study is predicting the findings of an ongoing phase IV trial that is comparing two drugs’ impact on cardiovascular outcomes among patients with type 2 diabetes. The manuscript describing the RWE results was submitted in January 2019. In February 2019, preliminary trial results were announced. Full results from the RCT will be released later this year, which will allow us to see how closely our finding matches the effect estimate from the RCT. We are hoping that our paper describing the RWE prediction of trial findings will be published before then.

Q: How do you define success for the RCT DUPLICATE project?
A: We fully expect that we’re not going to be able to successfully replicate every single trial we are working on. But if we can delineate the characteristics that make a clinical question answerable with real-world data, then we’ve learned a lot.

We also want to confirm which design and analytic choices make our database studies interpretable for decision-making. What designs and analyses can we use that will provide valid, causal conclusions? And that will provide results that match the results of the randomized controlled trial? If we can answer those questions, the project will have been a success.
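One way to make “results that match the RCT” concrete is with simple agreement criteria, for example whether the real-world estimate falls inside the RCT’s 95% confidence interval, and whether both estimates sit on the same side of the null. The hazard ratios below are hypothetical, and this is just one possible operationalization, not necessarily the project’s own criteria:

```python
def estimate_agreement(rwe_hr, rct_ci):
    """Does the real-world hazard-ratio estimate fall inside the RCT's 95% CI?"""
    lo, hi = rct_ci
    return lo <= rwe_hr <= hi

def direction_agreement(rwe_hr, rct_hr):
    """Do both studies estimate an effect on the same side of the null (HR = 1)?"""
    return (rwe_hr < 1.0) == (rct_hr < 1.0)

# Hypothetical estimates: a real-world study vs. the RCT it emulates.
rwe_hr = 0.82
rct_hr, rct_ci = 0.86, (0.74, 0.99)
print(estimate_agreement(rwe_hr, rct_ci))   # prints True
print(direction_agreement(rwe_hr, rct_hr))  # prints True
```

Applied across the full set of replications, criteria like these turn “did the real-world study match the trial?” into a tally that can support the kind of empirical evidence base the project is after.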