Combining scores from different patient reported outcome measures in meta-analyses: when is it justified?


abstract

BACKGROUND: Combining outcomes and using standardized effect measures, such as the effect size and the standardized response mean, across instruments allows more comprehensive meta-analyses and should avoid selection bias. However, such an analysis ideally requires that the instruments correlate strongly and that the underlying assumption of similar responsiveness is fulfilled. The aim of this study was to assess the correlation between two widely used health-related quality of life instruments for patients with chronic obstructive pulmonary disease and to compare the instruments' responsiveness at the study level.

METHODS: We systematically identified all longitudinal studies that used both the Chronic Respiratory Questionnaire (CRQ) and the St. George's Respiratory Questionnaire (SGRQ) through electronic searches of MEDLINE, EMBASE, CENTRAL and PubMed. We assessed the correlation between CRQ (scale 1-7) and SGRQ (scale 1-100) change scores and compared the responsiveness of the two instruments by comparing standardized response means (change scores divided by their standard deviation).

RESULTS: We identified 15 studies with 23 patient groups. CRQ change scores ranged from -0.19 to 1.87 (median 0.35, IQR 0.14 to 0.68) and SGRQ change scores from -16.00 to 3.00 (median -3.00, IQR -4.73 to 0.25). The correlation between CRQ and SGRQ change scores was 0.88. Standardized response means of the CRQ (median 0.51, IQR 0.19 to 0.98) were significantly higher (p < 0.001) than those of the SGRQ (median 0.26, IQR -0.03 to 0.40).

CONCLUSION: Investigators should be cautious about pooling the results from different instruments in meta-analysis, even if the instruments appear to measure similar constructs. Despite high correlation in change scores, the responsiveness of instruments may differ substantially, which could lead to important between-study heterogeneity and biased meta-analyses.

publication date

  • December 2006