In-training evaluation using hand-held computerized clinical work sampling strategies in radiology residency. Journal Article


abstract

PURPOSE: The in-training evaluation and final in-training evaluation are the mainstay formats for evaluation summaries in Canadian residency training programs. This study investigates the feasibility of a clinical work sampling (CWS) approach to evaluation in radiology residency, with the aid of personal hand-held computing devices.

METHODS: This study was conducted over a 1-year period with 14 radiology residents spanning 4 postgraduate years. Residents were provided with a hand-held device to enter evaluation data, with entries assessing 9 categories of resident performance. Results from the CWS entries were compared with standard in-training evaluations completed at the end of the residents' rotations, as well as with an established annual objective evaluation tool.

RESULTS: The overall reliability of the CWS approach, based on the 7 forms observed per resident, was 0.62, suggesting that a minimum of 20 forms would be required to achieve a reliability of 0.80. For the in-training evaluation report (ITER), internal consistency was 0.98, reflecting very high correlations between categories and indicating that the individual categories are not discriminating. Correlation across rotations was 0.36, which is low for summative evaluation. Correlation between the 2 measures was 0.47 (P = 0.09); neither measure was correlated with the American College of Radiology evaluation.

CONCLUSION: The CWS strategy is feasible for adaptation to radiology residency, although compliance with voluntary entries was lower than expected. It is not clear whether this reflects the additional burden of using the hand-held device, the fact that entries were voluntary rather than mandatory, or the many demands on both residents and evaluators. The added potential of this evaluation format includes the opportunity to discuss performance at the time of data entry, rather than resorting to the usual end-of-rotation evaluation. Nevertheless, the study has shown that the ITER remains of only marginal value for summative evaluation; the addition of the CWS would require at least 20 forms for acceptable reliability and might not justify the additional cost and complexity.
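The extrapolation from an observed reliability of 0.62 with 7 forms to the number of forms needed for 0.80 can be sketched with the Spearman-Brown prophecy formula. This is an assumption for illustration only: the study's stated figure of 20 forms likely comes from its own generalizability analysis, and the classical approximation below yields a slightly smaller estimate (roughly 18).

```python
import math

def single_form_reliability(r_obs: float, n_obs: float) -> float:
    """Invert the Spearman-Brown formula: reliability of one form,
    given reliability r_obs observed over n_obs forms."""
    return r_obs / (n_obs - (n_obs - 1) * r_obs)

def forms_for_target(r_obs: float, n_obs: float, r_target: float) -> float:
    """Number of forms projected to reach r_target (classical-test-theory
    approximation; the study itself may have used a generalizability D-study)."""
    r1 = single_form_reliability(r_obs, n_obs)
    return r_target * (1 - r1) / (r1 * (1 - r_target))

# Figures from the abstract: 7 forms per resident, reliability 0.62, target 0.80.
k = forms_for_target(r_obs=0.62, n_obs=7, r_target=0.80)
print(math.ceil(k))  # classical approximation: about 18 forms
```

Under this approximation each additional form adds diminishing reliability, which is why roughly tripling the number of forms is needed to move from 0.62 to 0.80.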

publication date

  • October 2006