Nicolle Gatto, Ph.D., is the Senior Vice President of Scientific Research at Aetion. She brings 20 years of experience from Pfizer, where she led epidemiology teams focused on regulatory and safety initiatives and provided strategic leadership on methods-related work. At Aetion, Nicolle serves as a senior scientific voice within the organization, advancing research with our team of pharmacoepidemiologists, clients, and partners.
We sat down with Nicolle to learn more about her experience leading epidemiology projects—including using real-world data (RWD) and real-world evidence (RWE) to support regulatory decision-making—and to hear her thoughts on how biopharma and regulators can leverage external comparators to advance treatments and vaccines for COVID-19.
Responses have been edited for clarity and length.
Q: Can you provide some background on your experience with RWD and pharmacoepidemiology?
A: When I first started at Pfizer, the epidemiology function was largely used to better understand the epidemiology of an indication (i.e., to fill gaps in the published literature). We could provide background rates for spontaneously reported post-approval adverse events, and we could conduct post-approval regulatory commitment studies to fill peri-approval knowledge gaps—for example, to confirm safety and effectiveness in understudied patient groups.
But over time, epidemiologic methods and data were increasingly used during drug development to contextualize safety outcome imbalances in clinical trial data. For example, if there were more cases of a serious outcome in the treated group than in the control group, but the trial wasn’t positioned to support a definitive conclusion about that outcome, we’d analyze RWD to better understand whether the observed risks were within the range of what we should expect. However, by the time we had identified an appropriate data source and designed and completed the study, a year or more had usually passed. By then, the information was far from real time—and often no longer relevant.
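To make that kind of “observed versus expected” contextualization concrete, here is a minimal sketch in Python. The case counts, person-time, and background rate are entirely hypothetical, and the simple Poisson comparison shown is just one way such a check might be framed—not a description of Pfizer’s or Aetion’s actual methods.

```python
from scipy.stats import poisson

# Hypothetical inputs (not from any real trial or data source).
observed_cases = 7            # serious outcomes seen in the treated arm
person_years = 1200.0         # follow-up accrued in that arm
background_rate = 3.5 / 1000  # incidence per person-year from real-world data

# Events we would expect if the background rate applied to the trial arm.
expected_cases = background_rate * person_years

# One-sided probability of observing at least this many cases under a
# Poisson model with the expected count as its mean.
p_at_least_observed = poisson.sf(observed_cases - 1, expected_cases)

print(f"Expected: {expected_cases:.1f}, observed: {observed_cases}, "
      f"P(X >= {observed_cases}) = {p_at_least_observed:.3f}")
```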
To help address this lag, a main focus of my early work at Pfizer was establishing “standing cohorts.” The idea was to identify a group of patients in a real-world data source who were similar to the anticipated indicated population. We would characterize the cohort’s epidemiology, estimate the incidence of known complications of the indication and of any safety outcomes of interest, and update the cohort over time as data accumulated. Additionally, we could identify a subgroup—the “trial-like” subgroup—using the key inclusion and exclusion criteria from the clinical trial. That way, if a new imbalance arose, or if regulators had questions about the trial data during the peri-approval period, we could provide contextualizing data and/or respond to the query in a matter of weeks, as opposed to a year or more.
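As a rough illustration of the “trial-like” subgroup idea, the sketch below filters a hypothetical standing-cohort table by trial-style inclusion and exclusion criteria and computes a crude incidence rate. The file path, column names, and criteria are invented for the example.

```python
import pandas as pd

# Hypothetical standing-cohort table, one row per patient.
cohort = pd.read_parquet("standing_cohort.parquet")

# Carve out the "trial-like" subgroup by applying key inclusion and
# exclusion criteria drawn from the clinical trial protocol.
trial_like = cohort[
    cohort["age"].between(18, 75)
    & (cohort["egfr"] >= 60)          # exclude severe renal impairment
    & ~cohort["prior_malignancy"]     # exclude prior malignancy
]

# Crude incidence of a safety outcome of interest, per 1,000 person-years;
# this can be refreshed as new data accumulate in the source.
events = trial_like["outcome_flag"].sum()
person_years = trial_like["follow_up_days"].sum() / 365.25
rate_per_1000_py = 1000 * events / person_years
print(f"{events} events over {person_years:.0f} person-years "
      f"({rate_per_1000_py:.2f} per 1,000 person-years)")
```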
One of the challenges of this work was demonstrating the value of standing cohorts—which provide observational evidence largely as an “insurance policy”—to clinical development teams who were very much focused on generating the randomized trial evidence needed for regulatory approval.
Eventually, teams saw the value in thinking proactively about supplemental data, and standing cohorts became standard practice. We found that it was helpful to incorporate them as early as phase II, especially in areas like oncology where accelerated approvals and regulatory fast tracks are more common. Teams understood that they needed to dedicate budget around phase IIb to have standing cohorts in place for phase III studies.
Q: If you were to recommend one RWE initiative that clinical development programs should adopt earlier into their processes, what would it be?
A: External comparator arms are obviously a hot topic right now, and for good reason. Regulators and health technology assessment (HTA) bodies have shown they’re open to the use of real-world data, especially for submissions with uncontrolled or insufficiently controlled trial data—often arising in oncology and rare disease indications—where you can’t conduct a randomized trial for ethical or practical reasons.
To generate high-quality, acceptable evidence, we need to be able to apply the inclusion and exclusion criteria from the trial to the real-world comparator data, and use analytic methods such as propensity score matching or inverse probability weighting to create baseline balance between the trial-treated and real-world comparator patients. The goal is to achieve balance that is believable as a sort of pseudorandomization. This is what we mean by an external comparator arm.
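For readers less familiar with these methods, here is a simplified sketch of that balancing step: estimate a propensity score for “trial-treated versus real-world comparator” and apply inverse-probability-style weights, then check standardized mean differences before and after weighting. The data set, covariates, and column names are hypothetical, and this is one common approach rather than Aetion’s specific implementation.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Stacked analysis set: trial-treated patients plus real-world comparators,
# with a 0/1 indicator for trial membership. All names are hypothetical,
# and the covariates are assumed to be numeric or binary.
df = pd.read_parquet("stacked_analysis_set.parquet")
covariates = ["age", "female", "baseline_severity", "prior_treatment"]

# 1. Propensity score: probability of being in the trial arm given covariates.
ps_model = LogisticRegression(max_iter=1000)
ps_model.fit(df[covariates], df["in_trial"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2. Odds-style weights targeting the trial population: trial patients get
#    weight 1, real-world comparators get ps / (1 - ps).
df["weight"] = np.where(df["in_trial"] == 1, 1.0, df["ps"] / (1.0 - df["ps"]))

# 3. Check balance with standardized mean differences before and after weighting.
def smd(x_t, x_c, w_t=None, w_c=None):
    w_t = np.ones(len(x_t)) if w_t is None else w_t
    w_c = np.ones(len(x_c)) if w_c is None else w_c
    m_t, m_c = np.average(x_t, weights=w_t), np.average(x_c, weights=w_c)
    v_t = np.average((x_t - m_t) ** 2, weights=w_t)
    v_c = np.average((x_c - m_c) ** 2, weights=w_c)
    return (m_t - m_c) / np.sqrt((v_t + v_c) / 2)

trial, rwd = df[df["in_trial"] == 1], df[df["in_trial"] == 0]
for cov in covariates:
    before = smd(trial[cov], rwd[cov])
    after = smd(trial[cov], rwd[cov], trial["weight"], rwd["weight"])
    print(f"{cov}: SMD before = {before:.2f}, after weighting = {after:.2f}")
```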
I think biopharma is realizing that having real-world data to contextualize trial findings is helpful. Establishing a trial-like cohort is a good place to start, but I think there is a lot more that can be learned by using principled epidemiology to establish external comparator arms for pivotal trials.
When a biopharma company is developing a product that is critical to their bottom line and their future, they should be thinking about real-world data very early on, no later than early phase II. It is critical that they think proactively about the information they will or may need to support their submission, conduct a full data feasibility assessment, and set up the external comparator at the same time as the trial.
Q: What are the most important factors to consider when designing an observational study?
A: There are two important factors to consider. One is the context for the study, including the decision it aims to inform and the stakeholder making it. For example: Is an HTA making a decision about reimbursement for a third-in-class drug, or is a regulator trying to decide whether to approve a novel medication for a rare disease? These are very different scenarios, and it’s critically important to understand the purpose of the analysis.
The second is to consider the target trial you would conduct (if it were feasible and ethical) to answer the specific, clearly defined research question, and then design an epidemiologic study to mimic that trial. The bottom line is that in any study where you want to demonstrate a causal effect—whether it’s for safety or effectiveness, or for pre-approval or post-approval needs—you need to use principled epidemiology. One of the reasons that I came to Aetion is the shared belief in this process.
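As a concrete illustration of the target trial framing, the hypothetical specification below lists the protocol elements one would write down for the trial being emulated before choosing data and methods. The disease, treatment strategies, and outcome are invented for the example.

```python
# A hypothetical target trial specification; every value is illustrative only.
target_trial_protocol = {
    "eligibility": "adults with newly diagnosed disease X, no prior therapy",
    "treatment_strategies": "initiate drug A at diagnosis vs. standard of care",
    "assignment": "randomized (emulated via baseline propensity-score adjustment)",
    "time_zero": "treatment initiation date, aligned with trial enrollment",
    "outcome": "overall survival at 24 months",
    "follow_up": "from time zero until death, disenrollment, or 24 months",
    "causal_contrast": "intention-to-treat and per-protocol effects",
    "analysis_plan": "weighted survival analysis targeting the trial's estimand",
}

# Print the protocol elements the observational design must mimic.
for element, choice in target_trial_protocol.items():
    print(f"{element:>20}: {choice}")
```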
Q: Where do you see opportunity for RWE in the response to COVID-19?
A: In my vaccine work at Pfizer, in addition to developing standing cohorts and trial-like cohorts, we used real-world data to characterize the population at risk before first-in-human trials. For instance, we might characterize the population for a phase II or phase III clinical trial as one that is broad but still at sufficient risk of the outcome to allow timely completion of the trial.
With COVID-19, while most vaccines will likely be tested in randomized controlled trials, there’s an opportunity for us to further explore the use of external comparators to contextualize trial data, extend our understanding of the natural course of COVID-19, and better understand the safety and effectiveness of treatments.
If we do this right, we’ll not only answer critical public health research questions raised by the pandemic, but also learn a lot about how to conduct external comparator studies in a highly valid and transparent way that supports regulatory decisions. We’ve seen biopharma starting to use external comparator arms in regulatory submissions, but biopharma and the FDA are both still learning about where real-world data are sufficient and best suited to support decisions. The COVID-19 pandemic may accelerate this learning process and advance the use of RWD by regulators.