Computer Science ›› 2026, Vol. 53 ›› Issue (1): 323-330. doi: 10.11896/jsjkx.241200002

• Information Security •

  • Corresponding author: CUI Xiaohui (xcui@whu.edu.cn)
  • About author: WEN Zerui (zeruiwen2022@whu.edu.cn)

Section Sparse Attack: A More Powerful Sparse Attack Method

WEN Zerui, JIANG Tian, HUANG Zijian, CUI Xiaohui   

  1. Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan 430000, China
  • Received: 2024-12-02  Revised: 2025-02-22  Online: 2026-01-08
  • About author: WEN Zerui, born in 2000, postgraduate, is a member of CCF (No.L7493G). His main research interests include artificial intelligence security and adversarial attack.
    CUI Xiaohui, born in 1971, Ph.D, professor, Ph.D supervisor, is a member of CCF (No.36210S). His main research interests include big data, blockchain technology, food safety and high performance computing.
  • Supported by:
    National Key Research and Development Program of China(2024YFE0199500).

关键词 (Keywords): 人工智能安全, 对抗样本, 可解释性, 稀疏攻击, 随机搜索

Abstract: Deep neural networks (DNNs) have long been threatened by adversarial attacks, particularly sparse attacks in the black-box setting. These attacks rely on the target model's output to guide the generation of adversarial examples, and typically deceive image classifiers by perturbing only a few pixels. However, existing sparse attack methods suffer from inefficiency due to fixed step-size strategies and poor initialization, which fail to fully exploit the perturbation budget. To address these issues, the Section Sparse Attack (SSA) is proposed. Unlike methods that use a fixed step size, SSA adapts the step size based on historical search information, accelerating the discovery of adversarial examples. Additionally, observing that black-box sparse attacks tend to perturb high-importance pixels, SSA uses an initialization strategy based on the class activation map (CAM) interpretability method to quickly identify and initialize populations of high-importance pixels. Finally, to confine perturbations to critical sections and maximize their effectiveness during random search, SSA adopts a section search strategy that further reduces the search space. Experimental results demonstrate that SSA outperforms state-of-the-art (SOTA) methods when attacking both traditional convolutional networks and Vision Transformer (ViT) models. Specifically, SSA achieves a 2%~8% improvement in attack success rate and approximately a 30% gain in efficiency.

Key words: Artificial intelligence security, Adversarial examples, Interpretability, Sparse attack, Random search
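The three ideas summarized in the abstract (CAM-guided initialization of important pixels, an adaptive step size driven by search history, and random search confined to a small section of coordinates) can be illustrated with a minimal sketch. All names, hyperparameters, and the toy loss below are assumptions for illustration, not the authors' implementation; the importance map stands in for a real CAM heatmap such as Grad-CAM's output.

```python
import numpy as np

def cam_init(importance, k):
    """Pick the k highest-importance pixel coordinates from a CAM-style
    heatmap (illustrative stand-in for a Grad-CAM importance map)."""
    flat = np.argsort(importance, axis=None)[::-1][:k]
    return np.stack(np.unravel_index(flat, importance.shape), axis=1)

def section_sparse_attack(loss_fn, x, importance, k=8, iters=200,
                          step=0.5, grow=1.5, shrink=0.7, seed=0):
    """Toy section-restricted sparse random search (hypothetical sketch):
    - perturb only the k CAM-selected pixels (the 'section'),
    - grow the step after an improving move, shrink it otherwise."""
    rng = np.random.default_rng(seed)
    coords = cam_init(importance, k)        # search space restricted here
    adv = x.copy()
    best = loss_fn(adv)
    for _ in range(iters):
        cand = adv.copy()
        i, j = coords[rng.integers(len(coords))]
        cand[i, j] = np.clip(cand[i, j] + step * rng.choice([-1.0, 1.0]),
                             0.0, 1.0)
        val = loss_fn(cand)                 # black-box: output scores only
        if val > best:                      # higher loss = closer to fooling
            adv, best = cand, val
            step = min(step * grow, 1.0)    # adaptive: accelerate on success
        else:
            step = max(step * shrink, 1e-3) # adaptive: refine on failure
    return adv, best
```

Because perturbations are drawn only from the CAM-selected coordinates, the resulting adversarial example is sparse by construction (at most k changed pixels), and the multiplicative step-size update replaces the fixed step size that the abstract identifies as a bottleneck.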

CLC Number:

  • TP389.1