Journal article

Boosting Universal Adversarial Attack on Deep Neural Networks

Abstract

Deep neural networks (DNNs) are well known to be susceptible to many universal adversarial perturbations (UAPs), where each UAP can successfully attack many images when added to the input. In this paper, we explore the existence of diversified UAPs, each of which successfully attacks a large but substantially different set of images. Since the sets of images successfully attacked by different UAPs are often complementary to each other, strategically selecting the most effective UAP to attack each new image can maximize the overall coverage of successful attacks. Following this insight, we propose a novel attack framework named boosting universal adversarial attack. The key idea is to simultaneously train a set of diversified UAPs and a selective neural network, such that the selective neural network can choose the most effective UAP when attacking a new target image. Due to its simplicity and effectiveness, the proposed boosting attack framework can be generally used to significantly boost the attack effectiveness of many classic single-UAP methods that use a single UAP to attack all target images. Meanwhile, the boosting attack framework is also able to perform real-time attacks, as it does not require any additional training or fine-tuning when attacking new target images. Extensive experiments demonstrate the outstanding performance of the proposed boosting attack framework.
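The selection mechanism described in the abstract can be sketched in a few lines. The snippet below is a toy illustration, not the paper's method: it assumes a small bank of pre-trained UAPs bounded by an L-infinity budget, and replaces the paper's trained selective neural network with a hypothetical correlation-based scoring function, keeping only the overall structure (score each UAP for the target image, add the winner, clip to the valid pixel range).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K pre-trained UAPs for 8x8 grayscale "images",
# each bounded by an L-infinity budget eps (values are random stand-ins).
K, H, W = 4, 8, 8
eps = 0.1
uaps = rng.uniform(-eps, eps, size=(K, H, W))

def selector(image, uaps):
    """Stand-in for the trained selective network: score each UAP for
    this image and return the index of the most promising one. A toy
    correlation score is used here; the paper trains a network instead."""
    scores = [float(np.sum(image * u)) for u in uaps]
    return int(np.argmax(scores))

def boosted_attack(image, uaps):
    """Per-image attack: pick the best UAP for this image, add it, and
    keep the result in the valid pixel range [0, 1]."""
    k = selector(image, uaps)
    return np.clip(image + uaps[k], 0.0, 1.0), k

image = rng.uniform(0.0, 1.0, size=(H, W))
adv, chosen = boosted_attack(image, uaps)
```

Because selection is a single forward pass over a fixed UAP bank, no per-image optimization is needed, which is consistent with the real-time claim in the abstract.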

Authors

Li S; Liao X; Che X; Chu L

Journal

IEEE Transactions on Multimedia, Vol. PP, No. 99, pp. 1–16

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

January 1, 2026

DOI

10.1109/tmm.2026.3651134

ISSN

1520-9210
