Measures of evidence-informed decision-making competence attributes: a psychometric systematic review

abstract

  • Background: The current state of evidence regarding measures that assess evidence-informed decision-making (EIDM) competence attributes (i.e., knowledge, skills, attitudes/beliefs, behaviours) among nurses is unknown. This systematic review provides a narrative synthesis of the psychometric properties and general characteristics of EIDM competence attribute measures in nursing.
  • Methods: The search strategy included online databases, hand searches, grey literature, and content experts. In alignment with the Cochrane Handbook for Systematic Reviews, psychometric outcome data (i.e., acceptability, reliability, validity) were extracted in duplicate, while all remaining data (i.e., study and measure characteristics) were extracted by one team member and checked for accuracy by a second. Acceptability was defined as measure completion time and overall rate of missing data. The Standards for Educational and Psychological Testing served as the guiding framework for defining reliability and validity, with validity treated as a unified concept comprising four sources of evidence: content, response process, internal structure, and relationships to other variables. A narrative synthesis of measure characteristics, study characteristics, and psychometric outcomes is presented across measures and settings.
  • Results: A total of 5883 citations were screened; 103 studies and 35 unique measures were included in the review. Measures were used or tested in acute care (n = 31 measures), public health (n = 4), home health (n = 4), and long-term care (n = 1). Over half of the measures assessed a single competence attribute (n = 19; 54.3%), and three measures (9%) assessed all four competence attributes of knowledge, skills, attitudes/beliefs, and behaviours. Regarding acceptability, overall missing data ranged from 1.6% to 25.6% across 11 measures, and completion times ranged from 5 to 25 minutes (n = 4 measures). Internal consistency reliability was commonly reported (21 measures), with Cronbach's alphas ranging from 0.45 to 0.98. Two measures reported all four sources of validity evidence, and over half (n = 19; 54%) reported only one source.
  • Conclusions: This review highlights a gap in the testing and use of EIDM competence attribute measures in community-based and long-term care settings. Further conceptual and psychometric development of measures is needed, as most measures assess only a single competence attribute and lack evidence of reliability and established sources of validity.
  • Registration: PROSPERO #CRD42018088754.

publication date

  • December 2020