Evaluating the Construct Validity of Competencies: A Retrospective Analysis (Journal Article)

abstract

  • BACKGROUND: A competency-based framework focuses on alignment between professional standards and assessment design. This alignment implies improved measurement validity, yet it has not been established that competence in one context predicts performance in another. High-stakes competence assessments offer insight into the relationship between assessment design and competencies. METHODS/ANALYSES: The Internationally Educated Nurses Competency Assessment Program (IENCAP) was developed at Touchstone Institute in collaboration with the College of Nurses of Ontario (CNO) and includes a 12-station OSCE. Each station evaluated the same 10 competencies. We submitted competency scores to a multitrait-multimethod (MTMM) matrix analysis to evaluate the convergent and discriminant validity of competencies. RESULTS/OBSERVATIONS: All correlations were significant and positive; however, we did not find evidence of convergent or discriminant validity. Correlations were higher between different competencies evaluated within the same station (mean correlation = 0.60) than between identical competencies evaluated across different stations (mean correlation = 0.19). DISCUSSION: The results do not provide evidence of construct validity for competencies. While competency-based approaches emphasize generalized knowledge, skills, and attitudes, these findings indicate that the clinical context is a major determinant of performance. CONCLUSION: The context-dependent nature of competencies requires multiple assessments in varied contexts; performance on a competency cannot be determined from a single occasion. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s40670-023-01794-z.
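The comparison reported above contrasts two families of correlations in the MTMM matrix: same competency scored in different stations (monotrait-heteromethod, the convergent evidence) versus different competencies scored within the same station (heterotrait-monomethod, the context effect). The sketch below is a minimal, hypothetical illustration of that computation, not the study's actual analysis code; the array name, its shape (examinees × stations × competencies), and the randomly generated placeholder data are assumptions for demonstration only.

```python
# Hypothetical MTMM-style comparison, assuming scores shaped
# (n_examinees, n_stations, n_competencies); placeholder data only.
import numpy as np

rng = np.random.default_rng(0)
n_examinees, n_stations, n_competencies = 200, 12, 10
scores = rng.normal(size=(n_examinees, n_stations, n_competencies))  # illustrative data

def mean_same_competency_across_stations(scores):
    """Monotrait-heteromethod: same competency, different stations (convergent evidence)."""
    rs = []
    for c in range(scores.shape[2]):
        for s1 in range(scores.shape[1]):
            for s2 in range(s1 + 1, scores.shape[1]):
                rs.append(np.corrcoef(scores[:, s1, c], scores[:, s2, c])[0, 1])
    return float(np.mean(rs))

def mean_different_competencies_within_station(scores):
    """Heterotrait-monomethod: different competencies, same station (context effect)."""
    rs = []
    for s in range(scores.shape[1]):
        for c1 in range(scores.shape[2]):
            for c2 in range(c1 + 1, scores.shape[2]):
                rs.append(np.corrcoef(scores[:, s, c1], scores[:, s, c2])[0, 1])
    return float(np.mean(rs))

# In the study, the first mean was reported as 0.19 and the second as 0.60.
print(mean_same_competency_across_stations(scores))
print(mean_different_competencies_within_station(scores))
```

When the within-station mean substantially exceeds the across-station mean, as reported here, the pattern points to station context rather than a generalizable competency trait as the dominant source of score variance.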

publication date

  • June 2023