Computer Science ›› 2026, Vol. 53 ›› Issue (1): 323-330.doi: 10.11896/jsjkx.241200002

• Information Security •

Section Sparse Attack: A More Powerful Sparse Attack Method

WEN Zerui, JIANG Tian, HUANG Zijian, CUI Xiaohui   

  1. Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan 430000, China
  • Received:2024-12-02 Revised:2025-02-22 Published:2026-01-08
  • About author:WEN Zerui, born in 2000, postgraduate, is a member of CCF(No.L7493G). His main research interests include artificial intelligence security and adversarial attack.
    CUI Xiaohui, born in 1971, Ph.D, professor, Ph.D supervisor, is a member of CCF(No.36210S). His main research interests include big data, blockchain technology, food safety and high performance computing.
  • Supported by:
    National Key Research and Development Program of China(2024YFE0199500).

Abstract: Deep neural networks (DNNs) have long been threatened by adversarial attacks, particularly sparse attacks in the black-box setting. These attacks rely on the target model's output to guide the generation of adversarial examples and typically deceive image classifiers by perturbing only a few pixels. However, existing sparse attack methods are inefficient because they rely on fixed step-size strategies and poor initialization approaches, which fail to fully exploit the available perturbation budget. To address these issues, the Section Sparse Attack (SSA) is proposed. Unlike methods that use a fixed step size, SSA adapts the step size based on historical search information, thus accelerating the discovery of adversarial examples. Additionally, recognizing that black-box sparse attacks tend to perturb high-importance pixels, SSA uses an initialization strategy based on the Class Activation Mapping (CAM) interpretability method to quickly identify high-importance pixels and initialize the population on them. Finally, to confine perturbations to critical sections and maximize their effectiveness during the search, SSA adopts a section search strategy that reduces the search space. Experimental results demonstrate that SSA outperforms state-of-the-art (SOTA) methods in attacking both traditional convolutional networks and Vision Transformer (ViT) models. Specifically, SSA achieves a 2%~8% improvement in attack success rate and approximately a 30% gain in efficiency.
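The three ingredients described in the abstract (importance-guided pixel initialization, score-based random search, and a step size adapted from search history) can be sketched as a toy black-box sparse attack. This is an illustrative sketch only, not the paper's actual SSA algorithm: the function `sparse_attack_sketch`, the acceptance rule, and the doubling/halving adaptation schedule are all assumptions made for the example, and the `importance` map stands in for a CAM heat map.

```python
import numpy as np

def sparse_attack_sketch(score_fn, image, importance, k=10, iters=200,
                         init_step=8, rng=None):
    """Toy score-based sparse attack: perturb k pixels sampled from a
    CAM-style importance map, with a step size adapted to search history.
    Illustrative sketch only -- not the SSA algorithm from the paper.
    score_fn: lower score means closer to misclassification."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w, c = image.shape
    # Initialization: sample pixel coordinates with probability
    # proportional to their importance (CAM-like heat map).
    probs = importance.ravel() / importance.sum()
    idx = rng.choice(h * w, size=k, replace=False, p=probs)
    ys, xs = np.unravel_index(idx, (h, w))
    adv = image.copy()
    best = score_fn(adv)
    step = init_step
    improvements = 0
    for t in range(iters):
        # Random-search proposal restricted to the k chosen pixels.
        cand = adv.copy()
        for y, x in zip(ys, xs):
            cand[y, x] = np.clip(cand[y, x] + rng.integers(-step, step + 1, c),
                                 0, 255)
        s = score_fn(cand)
        if s < best:                     # greedy acceptance of improvements
            adv, best = cand, s
            improvements += 1
        # Adaptive step size: every 20 iterations, grow the step on a
        # streak of improvements, otherwise shrink it (assumed schedule).
        if (t + 1) % 20 == 0:
            step = min(64, step * 2) if improvements >= 10 else max(1, step // 2)
            improvements = 0
    return adv, best
```

In a real attack, `score_fn` would query the target model (e.g. return the confidence margin of the true class), and the importance map would come from a CAM variant such as Grad-CAM.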

Key words: Artificial intelligence security, Adversarial examples, Interpretability, Sparse attack, Random search

CLC Number: 

  • TP389.1
[1]LIU J H,GUANG J C,FANG H Q,et al.Efficient View Transformation for Autonomous Driving[J].Computer Systems and Applications,2025,34(2):246-253.
[2]CHENG C Z.Financial Time Series Prediction Based on Deep Learning[D].Chengdu:University of Electronic Science and Technology of China,2021.
[3]WANG K.Research on Medical Image Classification and Segmentation Based on Deep Learning[D].Changsha:National University of Defense Technology,2022.
[4]SZEGEDY C.Intriguing properties of neural networks[J].arXiv:1312.6199,2013.
[5]LI Z,CHENG H,CAI X,et al.Sa-es:Subspace activation evolution strategy for black-box adversarial attacks[J].IEEE Transactions on Emerging Topics in Computational Intelligence,2022,7(3):780-790.
[6]WILLIAMS P N,LI K.Black-box sparse adversarial attack via multiobjective optimisation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2023:12291-12301.
[7]WANG H,ZHU C,CAO Y,et al.ADSAttack:An Adversarial Attack Algorithm via Searching Adversarial Distribution in Latent Space[J].Electronics,2023,12(4):816.
[8]CROCE F,ANDRIUSHCHENKO M,SINGH N D,et al.Sparse-rs:a versatile framework for query-efficient sparse black-box adversarial attacks[C]//Proceedings of the AAAI Conference on Artificial Intelligence.2022:6437-6445.
[9]JI S H,HU L,ZHANG P C,et al.Adversarial Example Generation Method Based on Sparse Perturbation[J].Journal of Software,2023,34(9):4003-4017.
[10]ZHOU B,KHOSLA A,LAPEDRIZA A,et al.Learning deep features for discriminative localization[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2016:2921-2929.
[11]PAPERNOT N,MCDANIEL P,JHA S,et al.The limitations of deep learning in adversarial settings[C]//2016 IEEE European Symposium on Security and Privacy(EuroS&P).IEEE,2016:372-387.
[12]SU J,VARGAS D V,SAKURAI K.One pixel attack for fooling deep neural networks[J].IEEE Transactions on Evolutionary Computation,2019,23(5):828-841.
[13]MODAS A,MOOSAVI-DEZFOOLI S M,FROSSARD P.Sparsefool:a few pixels make a big difference[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2019:9087-9096.
[14]WU W,SU Y,CHEN X,et al.Boosting the transferability of adversarial samples via attention[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2020:1161-1170.
[15]HE K,ZHANG X,REN S,et al.Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2016:770-778.
[16]SELVARAJU R R,COGSWELL M,DAS A,et al.Grad-cam:Visual explanations from deep networks via gradient-based localization[C]//Proceedings of the IEEE International Conference on Computer Vision.2017:618-626.
[17]LI W T,XIAO R,YANG X.Improving Transferability of Adversarial Samples Through Laplacian Smoothing Gradient[J].Computer Science,2024,51(S1):938-943.
[18]CHEN J Y,CHEN Y Q,ZHENG H B,et al.Black-box Adversarial Attack Against Road Sign Recognition Model via PSO[J].Journal of Software, 2020,31(9):2785-2801.
[19]DONG X,CHEN D,BAO J,et al.Greedyfool:Distortion-aware sparse adversarial attack[J].Advances in Neural Information Processing Systems,2020,33:11226-11236.
[20]CROCE F,HEIN M.Sparse and imperceivable adversarial attacks[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision.2019:4724-4732.
[21]BAI Z X,WANG H J.Adversarial Example Generation Method Based on Improved Genetic Algorithm[J].Computer Engineering,2023,49(5):139-149.
[22]LI Z,CHENG H,CAI X,et al.Sa-es:Subspace activation evolution strategy for black-box adversarial attacks[J].IEEE Transactions on Emerging Topics in Computational Intelligence,2022,7(3):780-790.
[23]WANG H,ZHU C,CAO Y,et al.ADSAttack:An Adversarial Attack Algorithm via Searching Adversarial Distribution in Latent Space[J].Electronics,2023,12(4):816.
[24]VO V Q,ABBASNEJAD E,RANASINGHE D C.BruSLeAttack:A Query-Efficient Score-Based Black-Box Sparse Adversarial Attack[J].arXiv:2404.05311,2024.
[25]CHATTOPADHAY A,SARKAR A,HOWLADER P,et al.Grad-cam++:Generalized gradient-based visual explanations for deep convolutional networks[C]//2018 IEEE Winter Conference on Applications of Computer Vision(WACV).IEEE,2018:839-847.
[26]ANDRIUSHCHENKO M,CROCE F,FLAMMARION N,et al.Square attack:a query-efficient black-box adversarial attack via random search[C]//European Conference on Computer Vision.Cham:Springer,2020:484-501.
[27]LIN C,HAN S,ZHU J,et al.Sensitive region-aware black-box adversarial attacks[J].Information Sciences,2023,637:118929.
[28]DENG J,DONG W,SOCHER R,et al.Imagenet:A large-scale hierarchical image database[C]//2009 IEEE Conference on Computer Vision and Pattern Recognition.IEEE,2009:248-255.
[29]SIMONYAN K.Very deep convolutional networks for large-scale image recognition[J].arXiv:1409.1556,2014.
[30]SZEGEDY C,VANHOUCKE V,IOFFE S,et al.Rethinking the inception architecture for computer vision[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2016:2818-2826.
[31]YUAN L,CHEN Y,WANG T,et al.Tokens-to-token vit:Training vision transformers from scratch on imagenet[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision.2021:558-567.
[32]POMPONI J,SCARDAPANE S,UNCINI A.Pixle:a fast and effective black-box attack based on rearranging pixels[C]//2022 International Joint Conference on Neural Networks(IJCNN).IEEE,2022:1-7.
[33]KIM H.Torchattacks:A pytorch repository for adversarial attacks[J].arXiv:2010.01950,2020.
[34]BANY MUHAMMAD M,YEASIN M.Eigen-CAM:Visual explanations for deep convolutional neural networks[J].SN Computer Science,2021,2(1):47.