Compelling evidence from meta-epidemiological studies demonstrates overestimation of effects in randomized trials that fail to optimize randomization and blind patients and outcome assessors
Abstract
OBJECTIVES: To investigate the impact of potential risk of bias elements on effect estimates in randomized trials.

STUDY DESIGN AND SETTING: We conducted a systematic survey of meta-epidemiological studies examining the influence of potential risk of bias elements on effect estimates in randomized trials. We included only meta-epidemiological studies that either preserved the clustering of trials within meta-analyses (comparing effect estimates between trials with and without the potential risk of bias element within each meta-analysis, then combining across meta-analyses; between-trial comparisons) or preserved the clustering of substudies within trials (comparing effect estimates between substudies with and without the element, then combining across trials; within-trial comparisons). Separately for studies based on between- and within-trial comparisons, we extracted ratios of odds ratios (RORs) from each study and combined them using a random-effects model. We made overall inferences and assessed the certainty of evidence using the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) approach and the Instrument to assess the Credibility of Effect Modification Analyses (ICEMAN).

RESULTS: Forty-one meta-epidemiological studies (34 based on between-trial and 7 on within-trial comparisons) proved eligible. Inadequate random sequence generation (ROR 0.94, 95% confidence interval [CI] 0.90-0.97) and inadequate allocation concealment (ROR 0.92, 95% CI 0.88-0.97) probably lead to effect overestimation (moderate certainty). Lack of patient blinding probably leads to overestimation of effects for patient-reported outcomes (ROR 0.36, 95% CI 0.28-0.48; moderate certainty). Lack of blinding of outcome assessors results in effect overestimation for subjective outcomes (ROR 0.69, 95% CI 0.51-0.93; high certainty). The impact of blinding of patients or outcome assessors on other outcomes, and the impact of blinding of health-care providers, data collectors, or data analysts, remain uncertain. Trials stopped early for benefit probably overestimate effects (moderate certainty). Trials with imbalanced cointerventions may overestimate effects, while trials with missing outcome data may underestimate effects (low certainty). The influence of baseline imbalance, compliance, selective reporting, and intention-to-treat analysis remains uncertain.

CONCLUSION: Failure to ensure adequate random sequence generation or allocation concealment probably results in modest overestimates of effects. Lack of patient blinding probably leads to substantial overestimation of effects for patient-reported outcomes. Lack of blinding of outcome assessors results in substantial effect overestimation for subjective outcomes. For other elements, although evidence for a consistent systematic overestimation of effects remains limited, failure to implement these safeguards may still introduce important bias.
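For readers unfamiliar with the ROR metric, the following is a minimal sketch of how a between-trial comparison of this kind is typically summarized; the notation is illustrative and not taken from the article itself. For meta-analysis $j$, the ratio of odds ratios contrasts the pooled effect in trials with the risk of bias element against the pooled effect in trials without it,

\[
\mathrm{ROR}_j = \frac{\widehat{\mathrm{OR}}_j^{\,\text{element present}}}{\widehat{\mathrm{OR}}_j^{\,\text{element absent}}},
\]

so that, with odds ratios below 1 denoting benefit, an ROR below 1 indicates larger apparent effects in trials carrying the element. The log-RORs are then pooled across meta-analyses with a random-effects model,

\[
\log \mathrm{ROR}_j \sim \mathcal{N}\!\left(\mu + u_j,\; s_j^2\right), \qquad u_j \sim \mathcal{N}\!\left(0, \tau^2\right),
\]

where $\mu$ is the summary log-ROR, $s_j^2$ the within-study sampling variance, and $\tau^2$ the between-study heterogeneity. This is a generic formulation of random-effects pooling, not the authors' exact model specification.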