Journal article

Multimodal unified generalization and translation network for intelligent fault diagnosis under dynamic environments

Abstract

Multimodal data fusion can generate reliable fault representations for intelligent fault diagnosis. However, simple data fusion strategies often introduce fault-irrelevant information, thereby reducing robustness against unknown domain shifts. Moreover, traditional methods generally lack adaptive mechanisms to address missing modalities, leading to considerable performance degradation under sensor failure conditions. To address these problems, this paper proposes a multimodal unified generalization and translation network. To learn invariant unified representations for resisting unknown data distribution shifts, information-enhanced concatenation first generates intra-domain and cross-domain representations. Subsequently, mutual information maximization is applied to remove fault-unrelated information from these representations. Finally, a hybrid ensemble diagnosis strategy fully leverages the interaction of multimodal information across different levels. In addition, semantic supervision investigates the relationships among different modalities and enables intermodal translation in the event of a sensor failure within the monitoring system. Extensive experimental results on a public bearing dataset and a self-collected motor dataset indicate that the proposed method improves accuracy by 10.53% and 8.47%, respectively, compared to state-of-the-art methods. The code and datasets are available at https://github.com/CHAOZHAO-1/MUGTN.
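The mutual information maximization step mentioned above is commonly realized in practice as a contrastive lower bound such as InfoNCE, where paired embeddings from two modalities act as positives and all other pairs in the batch as negatives. The abstract does not state which estimator MUGTN uses, so the following is a minimal NumPy sketch under that assumption; all names (`info_nce_loss`, `z_a`, `z_b`) are illustrative, not from the paper.

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    """InfoNCE contrastive lower bound on mutual information.

    Maximizing MI between modality embeddings z_a and z_b corresponds to
    minimizing this loss; row i of z_a pairs with row i of z_b (positive),
    and every other row serves as a negative.
    """
    # L2-normalize so the dot product is cosine similarity
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature           # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives lie on the diagonal
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
shared = rng.normal(size=(8, 16))
# two "modalities" that share a latent factor, plus small independent noise
loss_aligned = info_nce_loss(shared + 0.01 * rng.normal(size=(8, 16)),
                             shared + 0.01 * rng.normal(size=(8, 16)))
# two statistically independent signals, for comparison
loss_random = info_nce_loss(rng.normal(size=(8, 16)),
                            rng.normal(size=(8, 16)))
print(loss_aligned < loss_random)  # aligned modalities give the lower loss
```

Driving this loss down pushes the two modality encoders toward representations that retain their shared (fault-related) content, which is the intuition behind using MI maximization to filter out fault-irrelevant information.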

Authors

Zhao C; Shen W; Zio E; Ma H

Journal

Engineering Applications of Artificial Intelligence, Vol. 162

Publisher

Elsevier

Publication Date

December 22, 2025

DOI

10.1016/j.engappai.2025.112559

ISSN

0952-1976
