Journal article

OP1.5 Evaluating ChatGPT’s Performance in Answering Patients’ Questions Relating to Femoroacetabular Impingement Syndrome and Arthroscopic Hip Surgery

Abstract

Background: This study evaluates the efficacy of large language models (LLMs) such as ChatGPT in providing accurate and reliable patient information on femoroacetabular impingement (FAI) syndrome and its arthroscopic management. The advent of AI and LLMs has transformed the accessibility of medical information, necessitating an examination of their reliability and accuracy. Given patients' well-documented reliance on online resources for medical information, this research assesses the accuracy of ChatGPT's responses to common patient inquiries about FAI and its surgical treatment. The primary goal was to ascertain the overall accuracy and reliability of ChatGPT-generated information; a secondary aim was to compare the performance of ChatGPT versions 3.5 and 4.0.

Methods: Using a set of twelve frequently asked questions about FAI, collected from the scientific literature and reputable healthcare websites, the study evaluated and compared responses from ChatGPT versions 3.5 and 4.0. Responses were rated in a blinded fashion by three experienced hip arthroscopy surgeons using a previously published ChatGPT Response Rating System, ranging from "excellent response not requiring clarification" to "unsatisfactory requiring substantial clarification." A descriptive quantitative and qualitative analysis was conducted. A Wilcoxon signed-rank test was used to compare the paired groups (GPT-3.5 versus GPT-4.0), and Gwet's AC2 coefficient, employing quadratic weights, was used to assess the weighted level of agreement corrected for chance.

Results: Both ChatGPT versions predominantly produced responses rated either "excellent" or "satisfactory requiring minimal clarification," accounting for 75% and 92% of the responses for ChatGPT 3.5 and 4.0, respectively. Median accuracy scores were 2 (range 1-3) for ChatGPT 3.5 and 1.5 (range 1-3) for ChatGPT 4.0. No response was judged "unsafe or requiring substantial clarification" by the experts. No statistically significant difference was found between the two versions (p = 0.279), although ChatGPT-4 showed a tendency towards higher accuracy in some areas.

Conclusion: ChatGPT demonstrates a promising capacity to provide accurate and helpful information on FAI syndrome and its treatment, with both versions performing satisfactorily. This research underscores the importance of ongoing evaluation and refinement of AI tools in healthcare to ensure their reliability and effectiveness in patient education and support.
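For readers curious about the statistical methods named above, the following is a minimal sketch of how such an analysis might look, assuming Python with NumPy and SciPy. The paired comparison uses scipy.stats.wilcoxon, and Gwet's AC2 with quadratic weights is implemented from the multi-rater formulation in Gwet's Handbook of Inter-Rater Reliability. All ratings in the sketch are invented placeholders, not the study's data.

    # Sketch of the Methods' statistical analysis on hypothetical data.
    # Ratings use the 4-level ChatGPT Response Rating System
    # (1 = excellent ... 4 = unsatisfactory); all values are placeholders.
    import numpy as np
    from scipy.stats import wilcoxon

    # Hypothetical consensus rating per question (12 questions) per version.
    gpt35 = np.array([2, 1, 2, 3, 2, 1, 2, 2, 3, 1, 2, 2])
    gpt40 = np.array([1, 1, 2, 2, 2, 1, 1, 2, 2, 1, 2, 1])

    # Wilcoxon signed-rank test on the paired per-question ratings.
    stat, p = wilcoxon(gpt35, gpt40)
    print(f"Wilcoxon signed-rank: W = {stat}, p = {p:.3f}")

    def gwet_ac2(ratings, n_categories):
        """Gwet's AC2 (quadratic weights) for an (items x raters) matrix
        of ordinal ratings coded 1..Q, following the multi-rater
        formulation in Gwet's Handbook of Inter-Rater Reliability."""
        q = n_categories

        # Quadratic weights: w[k, l] = 1 - (k - l)^2 / (Q - 1)^2
        idx = np.arange(q)
        w = 1.0 - (idx[:, None] - idx[None, :]) ** 2 / (q - 1) ** 2

        # r[i, k] = number of raters who put item i in category k
        r = np.stack([(ratings == c + 1).sum(axis=1) for c in range(q)],
                     axis=1).astype(float)
        ri = r.sum(axis=1, keepdims=True)   # raters per item
        r_star = r @ w                      # weighted category counts

        # Weighted observed agreement p_a (averaged over items)
        pa = (r * (r_star - 1) / (ri * (ri - 1))).sum(axis=1).mean()

        # Chance agreement p_e from category prevalences
        pi = (r / ri).mean(axis=0)
        pe = w.sum() / (q * (q - 1)) * (pi * (1 - pi)).sum()

        return (pa - pe) / (1 - pe)

    # Hypothetical 12-question x 3-rater matrix for one model version.
    ratings = np.array([
        [2, 2, 1], [1, 1, 1], [2, 3, 2], [3, 2, 3],
        [2, 2, 2], [1, 1, 2], [2, 2, 2], [2, 1, 2],
        [3, 3, 2], [1, 1, 1], [2, 2, 3], [2, 2, 2],
    ])
    print(f"Gwet's AC2 (quadratic weights): {gwet_ac2(ratings, 4):.3f}")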

Authors

Slawaska-Eng D; Bourgeault-Gagnon Y; Cohen D; Pauyo T; Belzile E; Ayeni O

Journal

Journal of Hip Preservation Surgery, Vol. 12, Supplement 1, p. i27

Publisher

Oxford University Press (OUP)

Publication Date

March 27, 2025

DOI

10.1093/jhps/hnaf011.084

ISSN

2054-8397
