Interrater reliability of measurements of comorbid illness should be reported (Journal Article)

abstract

  • OBJECTIVE: Comorbidity indices are commonly used to stratify patients and control for treatment selection bias. The objectives here were to review how clinical research publications that use comorbidity indices report interrater reliability, and to report the interrater reliability of four common indices in a particular research setting. STUDY DESIGN AND SETTING: Four trained abstractors reviewed the same 40 charts of patients with squamous cell carcinoma of the head and neck from a regional cancer center. Scores for the Charlson Index, the Index of Co-existent Disease, the Cumulative Illness Rating Scale, and the Kaplan-Feinstein Classification were calculated, and the intraclass correlation coefficient was used to assess interrater reliability. RESULTS: Details of abstractor training and the results of interrater reliability testing are not commonly reported. In our study setting, the Charlson Index had excellent reliability and the other three indices had acceptable reliability. CONCLUSION: If the quality of a study that uses an index or scale is to be assessed, the reliability of the score assignment process, including its interrater reliability, should be reported.
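
The abstract reports using the intraclass correlation coefficient (ICC) but does not say which ICC form was computed. As a rough sketch only, the following assumes the common two-way random-effects, absolute-agreement, single-rater form, ICC(2,1) in the Shrout and Fleiss (1979) taxonomy, applied to a subjects-by-raters score matrix; the function name and the toy ratings are illustrative assumptions, not details or data from the paper.

    import numpy as np

    def icc_2_1(scores: np.ndarray) -> float:
        """Two-way random-effects, absolute-agreement, single-rater ICC(2,1).

        `scores` is an (n subjects) x (k raters) matrix with no missing values.
        """
        n, k = scores.shape
        grand_mean = scores.mean()
        row_means = scores.mean(axis=1)   # per-subject (per-chart) means
        col_means = scores.mean(axis=0)   # per-rater (per-abstractor) means

        # Two-way ANOVA mean squares.
        ms_rows = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)
        ms_cols = n * np.sum((col_means - grand_mean) ** 2) / (k - 1)
        resid = scores - row_means[:, None] - col_means[None, :] + grand_mean
        ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))

        # Shrout & Fleiss (1979) formula for ICC(2,1).
        return (ms_rows - ms_err) / (
            ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
        )

    # Hypothetical example: 5 charts scored by 4 abstractors (made-up
    # numbers, not data from the study).
    ratings = np.array([
        [2, 2, 3, 2],
        [0, 1, 0, 0],
        [4, 4, 4, 5],
        [1, 1, 2, 1],
        [3, 3, 3, 3],
    ])
    print(f"ICC(2,1) = {icc_2_1(ratings):.3f}")

An ICC near 1 indicates that variation between charts dominates variation between abstractors. Published cutoffs for "acceptable" versus "excellent" reliability vary by author, so no specific thresholds are assumed here.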

publication date

  • September 2006