Conference

STAR-ML: A Rapid Screening Tool for Assessing Reporting of Machine Learning in Research

Abstract

A literature review provides researchers with an overview of a field and, when presented as a systematic assessment, summarizes state-of-the-art information and identifies knowledge gaps. While many tools exist for assessing quality and risk of bias within studies, there is currently no generalized tool for evaluating the transparency, reproducibility, and correctness of machine learning (ML) reporting in the literature. This study proposes a new tool, the Screening Tool for Assessing Reporting of Machine Learning (STAR-ML), for screening articles for a systematic or scoping review with a focus on how the ML algorithm is reported. This paper describes the development of the tool and how it can be applied to improve literature review methodology. The tool was tested and refined using three independent raters on 15 studies, and the inter-rater reliability and the time required to review an article were evaluated. The current version of STAR-ML has a very high inter-rater reliability of 0.923, and the average time to screen an article was 4.73 minutes. This new tool enables filtering of ML-related papers for inclusion in a systematic or scoping review, supporting transparent, reproducible, and correct screening of research for the review article.
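The abstract reports an inter-rater reliability of 0.923 across three independent raters but does not name the agreement statistic used. One common choice for measuring agreement among a fixed number of raters over categorical decisions (e.g., include/exclude during screening) is Fleiss' kappa. The sketch below is illustrative only and is not the authors' method; the example rating matrix is hypothetical.

```python
def fleiss_kappa(ratings):
    """Compute Fleiss' kappa for a matrix of rating counts.

    ratings[i][j] = number of raters who assigned subject i to
    category j. Every subject is assumed to be rated by the same
    number of raters (here, e.g., 3 screeners per article).
    """
    n_subjects = len(ratings)
    n_raters = sum(ratings[0])          # raters per subject
    n_categories = len(ratings[0])

    # Proportion of all assignments falling into each category.
    p_j = [sum(row[j] for row in ratings) / (n_subjects * n_raters)
           for j in range(n_categories)]

    # Observed agreement for each subject.
    P_i = [(sum(c * c for c in row) - n_raters)
           / (n_raters * (n_raters - 1))
           for row in ratings]

    P_bar = sum(P_i) / n_subjects       # mean observed agreement
    P_e = sum(p * p for p in p_j)       # chance-expected agreement
    return (P_bar - P_e) / (1 - P_e)


# Hypothetical screening data: 5 articles, 3 raters, two categories
# (include, exclude). Raters agree unanimously on 4 articles and
# split 2-1 on the fifth.
example = [[3, 0], [3, 0], [0, 3], [3, 0], [2, 1]]
kappa = fleiss_kappa(example)
```

With unanimous agreement on every subject, the function returns 1.0; disagreement on even one subject pulls the value down, which is why screening tools like STAR-ML aim for checklist items that raters can score consistently.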

Authors

Khan A; Koh RGL; Hassan S; Liu T; Tucci V; Kumbhare D; Doyle TE

Volume

00

Pagination

pp. 336-341

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

January 20, 2022

DOI

10.1109/ccece49351.2022.9918312

Name of conference

2022 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE)
