Preprint

Fine-Tuning and Benchmarking Transformer Models for Multiclass Classification of Clinical Research Articles: Modeling Study (Preprint)

Abstract

BACKGROUND

The exponential growth of digital information has led to an unprecedented expansion in the volume of unstructured text data. Efficient classification of these articles is critical for timely evidence synthesis and informed decision-making in healthcare. Machine learning techniques have shown considerable promise for text classification tasks. However, multiclass classification of articles by study publication type has been largely overlooked compared to binary or multilabel classification. Addressing this gap could significantly enhance knowledge translation workflows and support systematic review processes.

OBJECTIVE

The objective of this study was to fine-tune and evaluate domain-specific transformer-based language models on a gold-standard dataset for multiclass classification of clinical literature into mutually exclusive categories: original study, review, evidence-based guideline, and non-experimental.

METHODS

The titles and abstracts of 162,380 articles from McMaster’s Premium LiteratUre Service (PLUS) dataset were used to fine-tune 7 domain-specific transformer models. Clinical experts classified the articles into 4 mutually exclusive publication types. PLUS data were split 80:10:10 into training, validation, and test sets, with the Clinical Hedges dataset used for external validation. A grid search evaluated the impact of class weight adjustments, learning rate, batch size, warmup ratio, and weight decay, totaling 1,890 configurations. Models were assessed using 10 metrics, including area under the receiver operating characteristic curve (AUROC), F1 score, and Matthews correlation coefficient (MCC). Per-class performance was assessed using a one-vs-rest approach, and overall performance was assessed using macro averages. Optimal models identified from the validation results were further tested on both the PLUS and Clinical Hedges datasets, with calibration assessed visually.
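The abstract does not include the training code; the sketch below illustrates the kind of fine-tuning and grid-search setup described above, using Hugging Face Transformers. The checkpoint name, label set, epoch count, and grid values are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the fine-tuning and grid-search setup (illustrative assumptions only).
from itertools import product

from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

LABELS = ["original study", "review", "evidence-based guideline", "non-experimental"]
CHECKPOINT = "dmis-lab/biobert-base-cased-v1.1"  # one example of a domain-specific transformer

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)

def tokenize(batch):
    # Title and abstract are paired into a single (truncated) input sequence.
    return tokenizer(batch["title"], batch["abstract"], truncation=True, max_length=512)

# Illustrative grid; the study searched 1,890 configurations in total, including
# class weight adjustments (which would require a weighted loss in a custom Trainer).
grid = {
    "learning_rate": [1e-5, 3e-5, 5e-5],
    "batch_size": [32, 64, 128],
    "warmup_ratio": [0.0, 0.06, 0.1],
    "weight_decay": [0.005, 0.01, 0.05],
}

def run_config(train_ds, val_ds, lr, bs, warmup, decay):
    # Fine-tune one configuration and return its validation metrics.
    model = AutoModelForSequenceClassification.from_pretrained(
        CHECKPOINT, num_labels=len(LABELS))
    args = TrainingArguments(
        output_dir=f"runs/lr{lr}_bs{bs}_wu{warmup}_wd{decay}",
        learning_rate=lr,
        per_device_train_batch_size=bs,
        warmup_ratio=warmup,
        weight_decay=decay,
        num_train_epochs=3,
    )
    trainer = Trainer(model=model, args=args,
                      train_dataset=train_ds, eval_dataset=val_ds)
    trainer.train()
    return trainer.evaluate()

# for lr, bs, warmup, decay in product(grid["learning_rate"], grid["batch_size"],
#                                      grid["warmup_ratio"], grid["weight_decay"]):
#     metrics = run_config(train_ds, val_ds, lr, bs, warmup, decay)
```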

RESULTS

The 10 best-performing models achieved macro AUROC ≥0.99, F1 score ≥0.89, and MCC ≥0.88 on the validation and test sets. Performance declined on Clinical Hedges. Models were consistently better at classifying original studies and reviews. BioBERT-based models showed superior calibration, especially for original studies and reviews. Optimal configurations identified by the search included lower learning rates (1e-5 and 3e-5), mid-range batch sizes (32–128), and lower weight decay values (0.005–0.010). Class weight adjustments improved recall but generally reduced performance on other metrics. Models generally struggled to classify non-experimental and guideline articles accurately, potentially due to class imbalance and content heterogeneity.
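For reference, the one-vs-rest and macro-averaged metrics reported above can be computed with scikit-learn roughly as follows. This is a sketch only; the variable names and toy arrays are illustrative and are not the study's data or evaluation code.

```python
# Sketch of one-vs-rest and macro-averaged evaluation (toy arrays, illustrative only).
import numpy as np
from sklearn.metrics import f1_score, matthews_corrcoef, roc_auc_score

# y_true: integer class labels; y_prob: predicted class probabilities (n_samples x 4).
y_true = np.array([0, 1, 2, 3, 0, 1])
y_prob = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.6, 0.2, 0.1],
    [0.2, 0.2, 0.5, 0.1],
    [0.1, 0.1, 0.2, 0.6],
    [0.6, 0.2, 0.1, 0.1],
    [0.2, 0.5, 0.2, 0.1],
])
y_pred = y_prob.argmax(axis=1)

# Macro AUROC, treating each class one-vs-rest.
macro_auroc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")
macro_f1 = f1_score(y_true, y_pred, average="macro")
mcc = matthews_corrcoef(y_true, y_pred)

# Per-class (one-vs-rest) F1 scores.
per_class_f1 = f1_score(y_true, y_pred, average=None)
```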

CONCLUSIONS

This study used a comprehensive hyperparameter search to demonstrate the effectiveness of fine-tuned transformer models, notably BioBERT variants, for multiclass clinical literature classification. While class weighting generally decreased overall performance, addressing class imbalance through alternative methods such as hierarchical classification or targeted resampling warrants future exploration. Optimal hyperparameter configurations were crucial for robust performance, aligning with previous literature. These findings support future modeling research and the practical deployment of such models in human-in-the-loop systems, using the optimal configurations identified in this work, to support knowledge synthesis and translation workflows.

Authors

Zhou F; Lokker C; Parrish R; Haynes RB; Iorio A; Saha A; Afzal M

Publication date

May 12, 2025

DOI

10.2196/preprints.77311

Preprint server

JMIR Preprints