Journal article

Explainability of Protein Deep Learning Models

Abstract

Protein embeddings have become the main source of information about proteins, producing state-of-the-art solutions to many problems, including protein interaction prediction, a fundamental issue in proteomics. Because these models are black boxes, understanding what the embeddings encode and what causes the interactions is essential. In the first study of its kind, we investigate the inner workings of these models using explainable AI (XAI) approaches. We perform extensive testing (3.3 TB of data in total) involving nine of the best-known XAI methods on two problems: (i) the prediction of protein interaction sites using the current top method, Seq-InSite, and (ii) the production of protein embedding vectors using three methods: ProtBERT, ProtT5, and Ankh. The results are evaluated in terms of their ability to correlate with six basic amino acid properties (aromaticity, acidity/basicity, hydrophobicity, molecular mass, van der Waals volume, and dipole moment), as well as the propensity for interaction with other proteins, the impact of distant residues, and the infidelity scores of the XAI methods. The results are unexpected: some XAI methods are much better than others at discovering essential information, simple methods can be as good as advanced ones, and different protein embedding vectors capture distinct properties, indicating significant room for improvement in embedding quality.
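The evaluation protocol described above can be made concrete with a small sketch. Nothing below is taken from the paper's code: the choice of the Kyte-Doolittle hydrophobicity scale, the function names, and the Gaussian-perturbation form of the infidelity estimator (following the standard definition of Yeh et al., 2019) are all illustrative assumptions. The sketch shows two of the abstract's measurements: correlating per-residue attribution scores with an amino acid property, and estimating the infidelity of an explanation.

```python
# Minimal sketch, not the authors' pipeline. Assumes per-residue attribution
# scores have already been produced by some XAI method applied to a protein
# model; here random numbers stand in for them.
import numpy as np
from scipy.stats import pearsonr

# Kyte-Doolittle hydrophobicity scale; hydrophobicity is one of the six
# properties named in the abstract, but the exact scale used is an assumption.
KYTE_DOOLITTLE = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

def property_correlation(sequence: str, attributions: np.ndarray) -> float:
    """Pearson correlation between per-residue attributions and a property."""
    props = np.array([KYTE_DOOLITTLE[aa] for aa in sequence])
    r, _ = pearsonr(attributions, props)
    return r

def infidelity(f, x, phi, n_samples=1000, sigma=0.1, rng=None):
    """Monte-Carlo estimate of explanation infidelity (Yeh et al., 2019):
    E_I[(I . phi - (f(x) - f(x - I)))^2], with Gaussian perturbations I.
    f is a scalar-valued model, x the input, phi its attribution vector."""
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(n_samples):
        I = rng.normal(scale=sigma, size=x.shape)
        total += (np.dot(I.ravel(), phi.ravel()) - (f(x) - f(x - I))) ** 2
    return total / n_samples

# Toy usage: random attributions over a short sequence, and a linear model
# whose gradient is (by construction) a perfectly faithful explanation.
rng = np.random.default_rng(0)
seq = "MKTAYIAKQR"
scores = rng.normal(size=len(seq))
print(f"Pearson r with hydrophobicity: {property_correlation(seq, scores):.3f}")

w = rng.normal(size=len(seq))
f = lambda z: float(z @ w)          # linear model: f(x) - f(x - I) = I . w
print(f"Infidelity of exact gradient: {infidelity(f, scores, w):.6f}")  # ~0
```

The linear toy model illustrates the metric's logic: when the attribution vector equals the model's true gradient, the perturbation term cancels exactly and the infidelity estimate is zero; less faithful explanations yield larger values.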

Authors

Fazel Z; de Souza CPE; Golding GB; Ilie L

Journal

International Journal of Molecular Sciences, Vol. 26, No. 11, Article 5255

Publisher

MDPI

Publication Date

June 1, 2025

DOI

10.3390/ijms26115255

ISSN

1661-6596
