Journal article

GHAttack: Generative Adversarial Attacks on Heterogeneous Graph Neural Networks

Abstract

Heterogeneous graph neural networks (HGNNs) have witnessed remarkable progress and widespread application in recent years. Meanwhile, their vulnerability to adversarial attacks has drawn growing attention. Existing attack methods for HGNNs generate perturbations that slightly modify the structure of a heterogeneous graph, thereby degrading the predictive performance of HGNNs on target nodes. However, crafting such a perturbation requires solving a complicated optimization problem, which makes these methods computationally inefficient for launching attacks during the inference phase. In this work, we therefore introduce generative heterogeneous attack (GHAttack), a novel generative attack method for efficient and effective adversarial attacks on HGNNs. Specifically, GHAttack trains a perturbation generator that produces a perturbation for each target node via a single forward pass, while allowing the perturbation to modify edges across the heterogeneous relations of the graph to achieve high attack effectiveness. To this end, we design a novel model architecture for the generator, consisting of an HGNN backbone and a relation-aware output layer. We formulate the training of the generator as an optimization problem and solve it efficiently by addressing a series of technical challenges. Extensive experiments on ten representative HGNNs and six datasets verify the high efficiency and excellent effectiveness of GHAttack.
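
The abstract describes the generator only at a high level: an HGNN backbone encodes each target node, and a relation-aware output layer scores candidate edge modifications per relation, so a single forward pass yields a perturbation without per-target optimization. The snippet below is a minimal, hypothetical sketch of that idea, not the paper's implementation: the backbone is stubbed with a plain feed-forward encoder, and all names, relation labels, and sizes (RelationAwareGenerator, "author-paper", etc.) are invented for illustration.

```python
# Hypothetical sketch of a perturbation generator with an HGNN-style backbone
# and a relation-aware output layer. This is NOT the paper's implementation;
# the backbone is replaced by a simple feed-forward encoder for illustration.
import torch
import torch.nn as nn


class RelationAwareGenerator(nn.Module):
    def __init__(self, in_dim, hidden_dim, relations, num_candidates):
        super().__init__()
        # Stand-in for the HGNN backbone described in the abstract
        # (a real backbone would aggregate over heterogeneous neighborhoods).
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        # Relation-aware output layer: one scoring head per relation type,
        # producing scores over candidate edges to add or remove.
        self.heads = nn.ModuleDict(
            {rel: nn.Linear(hidden_dim, num_candidates) for rel in relations}
        )

    def forward(self, target_features):
        # A single forward pass yields a perturbation proposal per relation,
        # avoiding an inference-time optimization loop for each target node.
        h = self.backbone(target_features)
        return {rel: torch.sigmoid(head(h)) for rel, head in self.heads.items()}


# Usage example (all sizes and relation names are arbitrary placeholders).
gen = RelationAwareGenerator(
    in_dim=64,
    hidden_dim=32,
    relations=["author-paper", "paper-venue"],
    num_candidates=10,
)
scores = gen(torch.randn(4, 64))  # batch of 4 target nodes
print({rel: s.shape for rel, s in scores.items()})
```

In such a sketch, the relation-specific heads are what make the output "relation-aware": each head can propose edits only within its own edge type, matching the abstract's statement that perturbations modify edges across the graph's heterogeneous relations.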

Authors

Li S; Liao X; Zhu H; Le J; Chu L

Journal

IEEE Transactions on Neural Networks and Learning Systems, Vol. PP, No. 99, pp. 1–15

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

January 13, 2026

DOI

10.1109/tnnls.2025.3648367

ISSN

2162-237X
