Journal article

Evaluating Artificial Intelligence Competency in Education: Performance of ChatGPT-4 in the American Registry of Radiologic Technologists (ARRT) Radiography Certification Exam

Abstract

RATIONALE AND OBJECTIVES: The American Registry of Radiologic Technologists (ARRT) leads the certification process with an exam comprising 200 multiple-choice questions. This study aims to evaluate ChatGPT-4's performance on practice questions similar to those found in the ARRT board examination.

MATERIALS AND METHODS: We used a dataset of 200 practice multiple-choice questions for the ARRT certification exam from BoardVitals. Each question was fed to ChatGPT-4 fifteen times, resulting in 3000 observations, to account for response variability.

RESULTS: ChatGPT-4's overall accuracy was 80.56%, with higher accuracy on text-based questions (86.3%) than on image-based questions (45.6%). Response times were longer for image-based questions (18.01 s) than for text-based questions (13.27 s). Performance varied by domain: 72.6% for Safety, 70.6% for Image Production, 67.3% for Patient Care, and 53.4% for Procedures. As anticipated, performance was best on easy questions (78.5%).

CONCLUSION: ChatGPT-4 performed well on the BoardVitals question bank for ARRT certification. Future studies could benefit from analyzing the correlation between BoardVitals scores and actual exam outcomes. Further development in AI, particularly in image processing and interpretation, is necessary to enhance its utility in educational settings.
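The methods amount to a simple repeated-querying protocol: each of the 200 BoardVitals questions is posed to the model 15 times, and correctness and response latency are recorded over all 3000 observations. The abstract does not state how the questions were submitted; the sketch below is one plausible realization, assuming programmatic access via the OpenAI Chat Completions API and a hypothetical question record with "stem", "choices", and "answer" fields (none of these details are from the paper).

```python
# Minimal sketch of the repeated-querying protocol described in the abstract.
# Assumptions (not stated in the paper): questions are posed via the OpenAI
# Chat Completions API, and each question is a dict with hypothetical keys
# "stem", "choices" (letter -> text), and "answer" (correct letter).
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_once(question: dict) -> tuple[str, float]:
    """Pose one multiple-choice question; return (model reply, latency in seconds)."""
    prompt = (
        question["stem"]
        + "\n"
        + "\n".join(f"{label}. {text}" for label, text in question["choices"].items())
        + "\nAnswer with the letter of the single best choice."
    )
    start = time.perf_counter()
    resp = client.chat.completions.create(
        model="gpt-4",  # stand-in for "ChatGPT-4"; the exact model version is an assumption
        messages=[{"role": "user", "content": prompt}],
    )
    latency = time.perf_counter() - start
    return resp.choices[0].message.content.strip(), latency

def evaluate(questions: list[dict], repeats: int = 15) -> None:
    """Replicate the 200 questions x 15 repeats = 3000 observations design."""
    correct = total = 0
    for q in questions:
        for _ in range(repeats):
            reply, latency = ask_once(q)
            correct += reply.startswith(q["answer"])  # bool counts as 0/1
            total += 1
    print(f"Accuracy: {correct / total:.2%} over {total} observations")
```

Posing each question 15 times and aggregating across all 3000 observations is what lets the study report a stable overall accuracy despite the model's run-to-run response variability.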

Authors

Al-Naser Y; Halka F; Ng B; Mountford D; Sharma S; Niure K; Yong-Hing C; Khosa F; Van der Pol C

Journal

Academic Radiology, Vol. 32, No. 2, pp. 597–603

Publisher

Elsevier

Publication Date

February 1, 2025

DOI

10.1016/j.acra.2024.08.009

ISSN

1076-6332
