Relying on Others’ Reliability: Challenges in Clinical Teaching Assessment

abstract

  • BACKGROUND: The quality of the data generated by internally created faculty teaching instruments often draws skepticism. Strategies for improving the reliability and validity of faculty teaching assessments tend to revolve around literature searches for a replacement instrument. PURPOSE: The purpose was to test this "search-and-apply" method and to discuss our experiences with it within the context of observational assessment practice. METHOD: In a naturalistic pilot test, two previously validated faculty assessment instruments were each paired with a global question, and the reliability of both metrics was estimated. RESULTS: Generalizability analyses indicated that, for both pilot-tested faculty teaching instruments, the global question was a more reliable measure of perceived clinical teaching effectiveness than the multiple-item inventory. Item analysis with Cronbach's coefficient alpha suggested redundant instrument content. Rater error accounted for the greatest proportion of the variance, and straight-line responses occurred in approximately 28% of residents' appraisals. CONCLUSIONS: The results of the present study draw attention to a common fallacy surrounding instrument-based assessment in medical education: that improving one's assessment practice is primarily a matter of identifying a previously published instrument from the literature. Academic centers need to invest in ongoing quality control efforts, including pilot testing of any proposed instrument.
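
The item analysis described in the abstract rests on Cronbach's coefficient alpha and on flagging straight-line responses (appraisals in which a rater gives every item the same rating). The sketch below is a minimal illustration of those two computations under assumed inputs, not the authors' analysis code; the score-matrix shape, rating scale, and function names are assumptions.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's coefficient alpha for an appraisals-by-items score matrix."""
    k = scores.shape[1]                          # number of items on the instrument
    item_var = scores.var(axis=0, ddof=1)        # variance of each item across appraisals
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed (total) scores
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

def straight_line_rate(scores: np.ndarray) -> float:
    """Proportion of appraisals where every item received the same rating."""
    return float(np.mean(np.all(scores == scores[:, [0]], axis=1)))

if __name__ == "__main__":
    # Toy data: 6 appraisals (rows) on a 4-item instrument (columns), 5-point scale.
    ratings = np.array([
        [4, 4, 4, 4],   # a straight-line response
        [3, 4, 3, 4],
        [5, 5, 5, 5],   # another straight-line response
        [2, 3, 3, 2],
        [4, 5, 4, 4],
        [3, 3, 2, 3],
    ])
    print(f"alpha = {cronbach_alpha(ratings):.2f}")
    print(f"straight-line rate = {straight_line_rate(ratings):.0%}")
```

A very high alpha, as reported in the study, can signal redundant item content rather than a well-constructed instrument, which is why the authors pair it with generalizability analysis and straight-line-response checks.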

authors

publication date

  • January 12, 2011