Machine learning for predicting long-term kidney allograft survival: a scoping review
Supervised machine learning (ML) is a class of algorithms that "learn" from existing input-output pairs; it is gaining popularity for classification and prediction problems in pattern recognition. In this scoping review, we examined the use of supervised ML algorithms for the prediction of long-term allograft survival in kidney transplant recipients. Data sources included PubMed, the Cumulative Index to Nursing and Allied Health Literature, and the Institute of Electrical and Electronics Engineers (IEEE) Xplore libraries from inception to November 2019. We screened titles and abstracts and potentially eligible full-text reports to select studies and subsequently abstracted the data. Eleven studies were identified. Decision trees were the most commonly used method (n = 8), followed by artificial neural networks (ANN) (n = 4) and Bayesian belief networks (n = 2). The area under the receiver operating characteristic curve (AUC) was the most common measure of discrimination (n = 7), followed by sensitivity (n = 5) and specificity (n = 4). Model calibration, which examines the reliability of risk predictions, was assessed using either the Pearson r or the Hosmer-Lemeshow test in four studies. One study showed that logistic regression had performance comparable to ANN, while another study demonstrated that ANN performed better than a Cox proportional hazards model in terms of sensitivity, specificity, and accuracy. We synthesized the evidence comparing ML techniques with traditional statistical approaches for the prediction of long-term allograft survival in patients with a kidney transplant. The methodological and reporting quality of the included studies was poor. Our study also demonstrated mixed results in terms of the predictive potential of the models.
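For readers unfamiliar with the discrimination metrics named above, the following sketch shows how AUC, sensitivity, and specificity might be computed for a binary graft-failure classifier. The data, model choice (logistic regression via scikit-learn), and 0.5 decision threshold are all illustrative assumptions, not drawn from any of the reviewed studies:

```python
# Illustrative sketch only: synthetic data stands in for recipient
# features and a hypothetical binary outcome (1 = allograft failure
# within the follow-up window). Not the reviewed studies' methods.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

# Generate a synthetic classification problem.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit a simple logistic regression as the example classifier.
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]       # predicted risk of failure
pred = (proba >= 0.5).astype(int)             # assumed 0.5 threshold

# Discrimination: AUC uses the continuous risk scores;
# sensitivity/specificity use the thresholded predictions.
auc = roc_auc_score(y_te, proba)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
print(f"AUC={auc:.3f} sensitivity={sensitivity:.3f} specificity={specificity:.3f}")
```

In a real transplant cohort, the threshold and validation scheme (e.g., external validation, calibration assessment) would need to be chosen with far more care than this toy split suggests.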