Abstract
- PURPOSE: Simulated clinical events provide a means to evaluate a practitioner's performance in a standardized manner for all candidates who are tested. We sought to provide evidence for the validity of simulation-based assessment tools in simulated pediatric anesthesia emergencies. METHODS: Nine centres in two countries recruited subjects to participate in simulated operating room events. Participants ranged in anesthesia experience from junior residents to staff anesthesiologists. Performances were video-recorded for review and scored by specially trained, blinded, expert raters. The rating tools consisted of scenario-specific checklists and a global rating scale that allowed the rater to make a judgement about the subject's performance and, by extension, their preparedness for independent practice. The reliability of the tools was classified as "substantial" (intraclass correlation coefficients ranged from 0.84 to 0.96 for the checklists and from 0.85 to 0.94 for the global rating scale). RESULTS: Three hundred and ninety-one simulation encounters were analysed. Senior trainees and staff significantly outperformed junior trainees (P = 0.04 and P < 0.001, respectively). The effect size of grade (junior trainee vs senior trainee vs staff) on performance was classified as "medium" (partial η² = 0.06). Performance deficits were observed across all grades of anesthesiologist, particularly in two of the scenarios. CONCLUSIONS: This study supports the validity of our simulation-based assessment tools for anesthesiologists across several domains of validity evidence. We also describe residual challenges to the validity of our tools, offer notes of caution regarding the intended consequences of their use, and identify opportunities for further research.