Development and Implementation of an Online OSPE Test Bank Graded by Artificial Intelligence

Journal Articles


abstract

  • The development of online anatomy education accelerated rapidly during the COVID‐19 pandemic. This shift to the online world has focused mainly on the delivery of content, while testing anatomical knowledge has proven more challenging, particularly for objective structured practical exams (OSPEs, also known as “spot tests” or “practical exams”). Online resources for OSPEs are uncommon compared to the extensive banks of multiple‐choice questions available. A further issue is that, whether virtual or in‐person, grading OSPEs is challenging and time‐consuming. Recent research in our laboratory has suggested that machine learning algorithms using decision trees can be trained to mark OSPEs with greater than 95% accuracy. Building on these findings, the goal of this project is to create a virtual OSPE bank, train an AI model to grade OSPEs, and develop an application with automated AI grading to serve as a resource for students studying anatomy and physiology. To date, we have written over 120 OSPE question sets using images from the Bassett Collection, the UBC Neuroanatomy collection, and images developed at the Education Program in Anatomy at McMaster University. The questions and answers were initially drafted by senior undergraduate students, with coaching from faculty and staff experienced in OSPE generation, who acted as expert reviewers. The questions were then collectively reviewed by the students before undergoing two independent reviews by the experts. After revision, a third expert conducted a final blind review of the questions to ensure validity and accuracy. These questions are being made available on the undergraduate anatomy and physiology course learning management system (LMS), Avenue to Learn, where students will be able to use the questions for OSPE practice, with answers available in the traditional manner.
The answers given by the students and graded by the faculty will be collected and analyzed to determine the difficulty and discrimination of each question. These data and analyses will be used to refine additional OSPEs and to improve the AI marking tool for the virtual OSPE application. We hypothesize that providing instant, accurate feedback on validated questions will improve both performance on OSPEs and course evaluations. The research group is actively seeking collaborators willing to generate and review additional OSPE questions to expand the question bank and improve the AI marking tool.
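As a rough illustration of the kind of decision-tree marking described above, a trained tree over simple answer-similarity features might reduce to rules like the following. This is not the group's actual model; the features, thresholds, and example answers are invented for the sketch.

```python
# Illustrative only: a hand-written, decision-tree-style marking rule over
# simple text features, mimicking the kind of rules a trained classifier
# might learn for short OSPE answers. Features and thresholds are hypothetical.

def features(student_answer: str, model_answer: str) -> dict:
    """Extract simple similarity features between a student and model answer."""
    s = set(student_answer.lower().split())
    m = set(model_answer.lower().split())
    return {
        # Fraction of the model answer's tokens the student produced.
        "overlap": len(s & m) / len(m) if m else 0.0,
        # Case-insensitive exact match.
        "exact": student_answer.strip().lower() == model_answer.strip().lower(),
    }

def mark(student_answer: str, model_answer: str) -> str:
    """Decision-tree-style marking: returns 'correct' or 'incorrect'."""
    f = features(student_answer, model_answer)
    if f["exact"]:
        return "correct"
    if f["overlap"] >= 0.75:  # most key terms present
        return "correct"
    return "incorrect"

print(mark("left vagus nerve", "vagus nerve"))  # → correct
print(mark("phrenic nerve", "vagus nerve"))     # → incorrect
```

A real system would learn such splits from graded answer data rather than hard-coding them, but the inference step of a fitted tree is exactly this kind of threshold cascade.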
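The difficulty and discrimination analysis mentioned above corresponds to standard classical-test-theory item statistics: difficulty as the proportion of students answering an item correctly, and discrimination as the point-biserial correlation between item score and total test score. A minimal sketch, with invented data standing in for an LMS grade export:

```python
# Classical test theory item analysis for refining an OSPE question bank.
# The scores below are illustrative; real inputs would come from the LMS.
import math

def item_difficulty(scores):
    """Proportion of students answering the item correctly (0 = hard, 1 = easy)."""
    return sum(scores) / len(scores)

def item_discrimination(item_scores, total_scores):
    """Point-biserial correlation between an item (0/1) and total test score."""
    n = len(item_scores)
    mean_i = sum(item_scores) / n
    mean_t = sum(total_scores) / n
    cov = sum((i - mean_i) * (t - mean_t)
              for i, t in zip(item_scores, total_scores)) / n
    sd_i = math.sqrt(sum((i - mean_i) ** 2 for i in item_scores) / n)
    sd_t = math.sqrt(sum((t - mean_t) ** 2 for t in total_scores) / n)
    return cov / (sd_i * sd_t)

# Hypothetical results: 1 = correct on one OSPE item, with each student's total.
item = [1, 1, 0, 1, 0, 1, 1, 0]
totals = [18, 20, 9, 17, 11, 19, 16, 8]

print(f"difficulty = {item_difficulty(item):.2f}")       # 5/8 of students correct
print(f"discrimination = {item_discrimination(item, totals):.2f}")
```

Items with very high or very low difficulty, or low discrimination, would be the candidates flagged for revision in the bank.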

publication date

  • May 2022