Over the past year, news stories have focused on high-profile missteps and fraud in the scientific research community. Some instances, such as Dr. Marc Tessier-Lavigne resigning as President of Stanford University amid questions about the integrity of research he co-authored, or the retraction of two high-profile COVID-19 papers based on Surgisphere data, bring to light the dangers posed to the scientific community when credibility and rigor are called into question or fall short. There has never been a more pressing need for good, transparent science conducted by principled researchers using the most advanced methods. This is especially true in the discipline of using real-world data (RWD) to generate real-world evidence (RWE).
While distilling the concept of “regulatory-grade” RWE has become increasingly important as the U.S. Food and Drug Administration (FDA) advances its RWE program, regulatory aspirations should not change the calculus behind the design and execution of an RWE study. The hallmarks of a study fit for regulatory approval (e.g., clear objectives, a transparent summary of methods and results, a fit-for-purpose study design, and analyses adequate to assess treatment effects) should be the same as those of a study fit for any other healthcare-related decision, whether reimbursement, market access, or pharmacovigilance, among others.
As data sources continue to improve in quality, so will their utility for a variety of use cases, both in the regulatory space and beyond. RWE studies will become commonplace in the product approval, value demonstration, and reimbursement realms (some would argue they already are). As such, the consequences of conducting sub-par RWE studies can be grave. Here’s why:
- First, RWE studies seek to address some of the most pressing questions in public health. In many instances, researchers conduct RWE studies to better understand marginalized populations or patients with ultra-rare diseases. These are among the most vulnerable patient populations, for whom access to a particular treatment can be the difference between life and death. Even if a study isn’t supporting a regulatory application, it can still affect patient access, patient safety, or general well-being, placing a premium on high-quality and rigorous approaches.
- Additionally, RWE is still fighting for its place in the evidence generation paradigm, routinely facing the charge that it is inferior to randomized controlled trials. RWE will never replace such trials, but it has a role in answering questions that those approaches cannot. Poorly designed studies can damage not only an organization’s reputation but the RWE industry as a whole, undermining the foothold the industry has fought so hard to achieve.
- Finally, an RWE study conducted with less-than-principled methodological approaches can create organizational and operational inefficiency. If RWE is not decision-grade, transparent, and reproducible, organizations run the risk of either making the wrong decision or having to repeat the study, both of which waste resources and erode credibility.
As the tools and methods available to the scientific community continue to evolve, it is crucial that experts remain dedicated to producing transparent, auditable, and high-quality research. The hallmarks of a good study do not change, and the commitment to excellence should not operate on a sliding scale that depends on how a study will ultimately be used.
Aetion has deep experience designing and executing scalable, transparent, and scientifically rigorous analyses of RWD to evaluate the safety and effectiveness of medical products. While it may seem obvious, all aspects of an RWE study must be fit for purpose: the data, the design, and the execution. To make these principles easier to put into practice, Aetion has built and published a number of tools. When researchers undertake an RWE study, they should begin with a systematic scientific feasibility assessment to generate a strong study design and identify fit-for-purpose data sources. Several resources are available to assist with these steps, including SPACE, a structured framework for creating a fit-for-purpose study design that can generate valid and transparent RWE, and SPIFD, a structured framework for determining which datasets are fit for purpose to answer a given research question. We recently harmonized these tools in a new publication we call SPIFD2, which calls for articulation of the hypothetical target trial and the sources of bias that may arise, and incorporates STaRT-RWE, structured templates for study design and reporting, as a reference. It is important to keep in mind that these are frameworks for best practices, not checklists that apply verbatim to every study. Instead, researchers should adapt them to each study by asking, “What do we need for this particular decision to be supported by strong evidence?”
We would love to discuss these concepts and how they apply to your evidence generation needs. To connect with one of our scientific experts at Aetion, please email: info@aetion.com.