The reliability of encounter cards to assess the CanMEDS roles


abstract

  • The purpose of this study was to determine the reliability of a computer-based encounter card (EC) for assessing medical students during an emergency medicine rotation. From April 2011 to March 2012, multiple physicians assessed an entire medical school class during their emergency medicine rotation using the CanMEDS framework. At the end of an emergency department shift, an EC was scored (1-10) for each student on Medical Expert, two additional Roles, and an overall score. Analysis of 1,819 ECs (155 of 186 students) revealed the following: Collaborator, Manager, Health Advocate, and Scholar were each assessed on fewer than 25% of ECs. On average, each student was assessed 11 times, with an inter-rater reliability of 0.6. The largest source of variance was rater bias. A D-study showed that a minimum of 17 ECs was required to reach a reliability of 0.7. There were moderate to strong correlations between all Roles and the overall score, and factor analysis revealed all items loading on a single factor accounting for 87% of the variance. Global assessment of the CanMEDS Roles using ECs thus carries substantial variance in performance estimates, derived from differences between raters. Some Roles are seldom selected for assessment, suggesting that raters have difficulty identifying the related performance. Finally, the correlation and factor analyses demonstrate that raters are unable to discriminate among Roles and are basing judgments on an overall impression.
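The reported D-study figure is consistent with a Spearman-Brown extrapolation from the other numbers in the abstract. Below is a minimal sketch of that arithmetic, not the authors' actual generalizability analysis; it assumes the 0.6 inter-rater reliability applies to the mean of the ~11 ECs each student received and that raters are treated as interchangeable.

```python
# Decision-study (D-study) arithmetic via the Spearman-Brown prophecy formula.
# Assumptions (not confirmed by the paper): the reported 0.6 reliability is
# for the mean of 11 encounter cards (ECs), and raters are interchangeable.

def single_ec_reliability(r_k: float, k: int) -> float:
    """Invert Spearman-Brown: reliability of one EC, given the reliability of a k-EC mean."""
    return r_k / (k - (k - 1) * r_k)

def ecs_for_target(r1: float, target: float) -> float:
    """Spearman-Brown: number of ECs whose mean reaches the target reliability."""
    return target * (1 - r1) / (r1 * (1 - target))

r1 = single_ec_reliability(0.6, 11)   # ~0.12 for a single EC
n = ecs_for_target(r1, 0.7)           # ~17.1 ECs
print(f"single-EC reliability ~ {r1:.2f}; ECs needed for 0.7 ~ {n:.1f}")
```

The result (~17 ECs) matches the D-study figure reported in the abstract; any small discrepancy would stem from rounding of the 0.6 estimate rather than from the method itself.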

publication date

  • December 2013