Quantitative bias analysis for external control arms using real-world data in clinical trials: a primer for clinical researchers

abstract

  • Development of medicines in rare oncologic patient populations is growing, but well-powered randomized controlled trials are typically extremely challenging or unethical to conduct in such settings. External control arms using real-world data are increasingly used to supplement clinical trial evidence where little or no control arm data exist. The construction of an external control arm should always aim to match the population, treatment settings and outcome measurements of the corresponding treatment arm. Yet external real-world data are typically fraught with limitations, including missing data, measurement error and the potential for unmeasured confounding given a nonrandomized comparison. Quantitative bias analysis (QBA) comprises a collection of approaches for modelling the magnitude of systematic errors in data that cannot be addressed with conventional statistical adjustment. Its applications range from simple deterministic equations to complex hierarchical models. QBA applied to external control arms represents an opportunity to evaluate the validity of the corresponding comparative efficacy estimates. We provide a brief overview of available QBA approaches and explore their application in practice. Using a motivating example comparing pralsetinib single-arm trial data with real-world data on pembrolizumab alone or combined with chemotherapy for patients with RET fusion-positive advanced non-small cell lung cancer (aNSCLC; 1–2% of all NSCLC), we illustrate how QBA can be applied to external control arms to ascertain the robustness of results despite a large proportion of missing data on baseline ECOG performance status and suspected unmeasured confounding. Robustness is demonstrated by showing that the comparative effect did not change meaningfully across several ‘tipping-point’ scenario analyses, and that suspected unmeasured confounding could be ruled out by use of E-values. Full R code is also provided.
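
  • As context for the two approaches named above, the following is a minimal R sketch, not the authors' published code, of a tipping-point re-analysis for missing baseline ECOG status and an E-value calculation for unmeasured confounding. The hazard ratio, confidence interval and column names (time, event, treatment, ecog) are illustrative assumptions rather than results from the paper; the EValue and survival packages are assumed available.

    library(EValue)     # evalues.HR() for E-values
    library(survival)   # coxph() / Surv() for the comparative estimate

    ## E-value for a hypothetical comparative hazard ratio.
    ## Illustrative numbers only (HR = 0.65, 95% CI 0.45-0.94), not the paper's results.
    evalues.HR(est = 0.65, lo = 0.45, hi = 0.94, rare = FALSE)
    ## The E-value is the minimum strength of association (on the risk-ratio scale)
    ## an unmeasured confounder would need with both treatment and outcome
    ## to fully explain away the observed estimate.

    ## Tipping-point analysis for missing baseline ECOG performance status.
    ## Missing ECOG values are imputed under progressively less favourable
    ## assumptions and the treatment effect is re-estimated each time to see
    ## whether it crosses the null.
    ## 'dat' with columns time, event, treatment (0/1), ecog is a placeholder data frame.
    tipping_point <- function(dat, p_poor_ecog) {
      miss <- is.na(dat$ecog)
      # assign poor performance status (ECOG >= 2) to missing records with probability p
      dat$ecog[miss] <- ifelse(rbinom(sum(miss), 1, p_poor_ecog) == 1, 2, 1)
      fit <- coxph(Surv(time, event) ~ treatment + ecog, data = dat)
      exp(coef(fit)[["treatment"]])
    }
    ## Example sweep over assumed proportions of poor ECOG among the missing:
    ## sapply(seq(0, 1, by = 0.1), function(p) tipping_point(dat, p))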

authors

  • Thorlund, Kristian
  • Duffield, Stephen
  • Popat, Sanjay
  • Ramagopalan, Sreeram
  • Gupta, Alind
  • Hsu, Grace
  • Arora, Paul
  • Subbiah, Vivek

publication date

  • March 2024