Comparison of the ability of double‐robust estimators to correct bias in propensity score matching analysis. A Monte Carlo simulation study

abstract

  • Objective: As covariates are not always adequately balanced after propensity score matching, and double‐adjustment can be used to remove residual confounding, we compared the performance of several double‐robust estimators in different scenarios.
  • Methods: We conducted a series of Monte Carlo simulations on virtual observational studies. After estimating the propensity scores by logistic regression, we performed 1:1 optimal, nearest‐neighbor, and caliper matching. We applied four estimators to each matched sample: (1) a crude estimator without double‐adjustment, (2) double‐adjustment for the propensity scores, (3) double‐adjustment for the unweighted unbalanced covariates, and (4) double‐adjustment for the unbalanced covariates, weighted by their strength of association with the outcome.
  • Results: The crude estimator led to the highest bias in all tested scenarios. Double‐adjustment for the propensity scores effectively removed confounding only when the propensity score models were correctly specified. Double‐adjustment for the unbalanced covariates was more robust to misspecification. Double‐adjustment for the weighted unbalanced covariates outperformed the other approaches in every scenario and with any matching algorithm, as measured by the mean squared error.
  • Conclusion: Double‐adjustment can be used to remove residual confounding after propensity score matching. The unbalanced covariates with the strongest confounding effects should be adjusted for.
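  • The following is a minimal sketch, not the authors' code, of the general workflow the abstract describes: estimate propensity scores by logistic regression, form 1:1 matched pairs (here, simple greedy nearest‐neighbor matching without replacement rather than the optimal or caliper variants studied in the paper), and compare a crude matched estimator with a double‐adjusted one. The simulated data, effect sizes, and variable names are illustrative assumptions only.

```python
# Hedged illustration of propensity score matching with double-adjustment.
# Assumptions: simulated data, greedy 1:1 nearest-neighbor matching without
# replacement, and OLS outcome models; this is not the paper's simulation code.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# Simulate two confounders, a binary treatment, and a continuous outcome.
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-(0.5 * x1 + 0.5 * x2)))
treat = rng.binomial(1, p_treat)
y = 1.0 * treat + 0.8 * x1 + 0.8 * x2 + rng.normal(size=n)

# Step 1: estimate propensity scores by logistic regression.
X = sm.add_constant(np.column_stack([x1, x2]))
ps = sm.Logit(treat, X).fit(disp=0).predict(X)

# Step 2: greedy 1:1 nearest-neighbor matching on the propensity score.
treated_idx = np.where(treat == 1)[0]
available = set(np.where(treat == 0)[0])
pairs = []
for i in treated_idx:
    if not available:
        break
    candidates = np.array(sorted(available))
    j = candidates[np.argmin(np.abs(ps[candidates] - ps[i]))]
    pairs.append((i, j))
    available.remove(j)
matched = np.array([idx for pair in pairs for idx in pair])

# Step 3a: crude estimator (no double-adjustment) on the matched sample.
Xc = sm.add_constant(treat[matched])
crude = sm.OLS(y[matched], Xc).fit().params[1]

# Step 3b: double-adjustment for the covariates on the matched sample.
Xd = sm.add_constant(np.column_stack([treat[matched], x1[matched], x2[matched]]))
double = sm.OLS(y[matched], Xd).fit().params[1]

print(f"crude estimate: {crude:.3f}, double-adjusted estimate: {double:.3f}")
```

  • In this sketch the crude estimate retains any confounding left by imperfect covariate balance in the matched sample, while the double‐adjusted estimate regresses the outcome on treatment and the covariates within the matched pairs, which is the residual‐confounding correction the abstract refers to.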

publication date

  • December 2017