abstract
- RATIONALE AND OBJECTIVES: The American Registry of Radiologic Technologists (ARRT) administers the certification process through an exam comprising 200 multiple-choice questions. This study aims to evaluate ChatGPT-4's performance on practice questions similar to those found on the ARRT board examination. MATERIALS AND METHODS: We used a dataset of 200 practice multiple-choice questions for the ARRT certification exam from BoardVitals. Each question was posed to ChatGPT-4 fifteen times, yielding 3000 observations to account for response variability. RESULTS: ChatGPT's overall accuracy was 80.56%, with higher accuracy on text-based questions (86.3%) than on image-based questions (45.6%). Response times were longer for image-based questions (18.01 s) than for text-based questions (13.27 s). Performance varied by domain: 72.6% for Safety, 70.6% for Image Production, 67.3% for Patient Care, and 53.4% for Procedures. As anticipated, performance was best on easy questions (78.5%). CONCLUSION: ChatGPT performed well on the BoardVitals question bank for ARRT certification. Future studies could benefit from analyzing the correlation between BoardVitals scores and actual exam outcomes. Further development in AI, particularly in image processing and interpretation, is necessary to enhance its utility in educational settings.
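
The Materials and Methods describe posing each practice question to ChatGPT-4 fifteen times and recording correctness and response time. The abstract does not specify the tooling or prompt wording used; the following is a minimal sketch of such a repeated-query protocol, assuming the OpenAI Python SDK, the `gpt-4` model identifier, and a hypothetical question/answer-key format.

```python
import time
from openai import OpenAI  # assumes the openai>=1.0 Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical structure for BoardVitals-style practice items; the study's
# actual data format and prompt wording are not specified in the abstract.
questions = [
    {"id": 1, "text": "Which projection ...?\nA. ...\nB. ...\nC. ...\nD. ...", "answer": "B"},
]

N_REPEATS = 15  # each question was posed 15 times to capture response variability


def ask_once(question_text: str) -> tuple[str, float]:
    """Send one multiple-choice question and return (model reply, latency in seconds)."""
    start = time.perf_counter()
    resp = client.chat.completions.create(
        model="gpt-4",  # assumed model identifier
        messages=[
            {"role": "system", "content": "Answer with the single letter of the best choice."},
            {"role": "user", "content": question_text},
        ],
    )
    elapsed = time.perf_counter() - start
    return resp.choices[0].message.content.strip(), elapsed


results = []
for q in questions:
    for _ in range(N_REPEATS):
        reply, latency = ask_once(q["text"])
        results.append({
            "id": q["id"],
            "correct": reply.upper().startswith(q["answer"]),
            "latency_s": latency,
        })

# Aggregate accuracy over all repeated observations (200 questions x 15 runs in the study)
accuracy = sum(r["correct"] for r in results) / len(results)
print(f"Accuracy: {accuracy:.1%}")
```

Per-domain and per-difficulty breakdowns, as reported in the Results, would follow by grouping these observations on question metadata (domain, difficulty, text- vs. image-based).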