Computer Science ›› 2026, Vol. 53 ›› Issue (1): 404-412.doi: 10.11896/jsjkx.250600144

• Information Security •

Adaptive Box-constraint Optimization Method for Adversarial Attacks

ZHOU Qiang1, LI Zhe1, TAO Wei2, TAO Qing3,4   

  1 College of Air Defense, Army Branch University, Zhengzhou 450000, China;
    2 Institute of Evaluation and Assessment Research, PLA Academy of Military Science, Beijing 100091, China;
    3 College of Information Engineering, Army Branch University, Hefei 230000, China;
    4 Hefei Institute of Technology, Hefei 238076, China
  • Received: 2025-06-20  Revised: 2025-09-08  Published: 2026-01-08
  • About author: ZHOU Qiang, born in 1990, master, is a member of CCF (No.P1378G). His main research interests include machine learning and mathematical optimization.
    TAO Qing, born in 1965, Ph.D., professor, doctoral supervisor, is a senior member of CCF (No.08091S). His main research interests include machine learning, pattern recognition and applied mathematics.
  • Supported by:
    National Natural Science Foundation of China (62076252, 62576351).

Abstract: Deep neural networks are vulnerable to adversarial example attacks. Existing transfer-based attack optimization methods commonly adopt a fixed upper bound on the perturbation constraint to represent imperceptibility intensity and focus primarily on improving attack success rates. However, such approaches overlook sensitivity variations across samples, resulting in suboptimal imperceptibility (measured by the Fréchet Inception Distance, FID). Inspired by adaptive gradient methods, this paper proposes an adversarial attack optimization method with adaptive constraint upper bounds, aiming to enhance imperceptibility. First, a sensitivity metric based on gradient magnitudes is established to quantify sensitivity differences across samples. Building on this metric, adaptive constraint upper bounds are determined to enable differentiated perturbation handling: low-intensity perturbations are applied to sensitive samples and high-intensity perturbations to non-sensitive ones. Furthermore, by replacing the projection operator and the step size, the adaptive constraint mechanism is seamlessly integrated into existing attack methods. Experiments on the ImageNet-Compatible dataset demonstrate that, under equivalent black-box attack success rates, the proposed method reduces FID by 2.68%~3.49% compared with traditional fixed-constraint methods. In addition, the MI-LA attack algorithm built on this approach achieves 6.32%~26.35% lower FID than five state-of-the-art adversarial attack methods.
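To make the adaptive box-constraint idea described above concrete, the following is a minimal PyTorch sketch rather than the authors' implementation: it grafts a hypothetical per-sample budget (a min-max normalized L1 gradient magnitude mapped into an assumed [0.5ε, 1.5ε] range) onto a standard MI-FGSM-style loop by swapping the fixed ε used in the step size and projection for the adaptive one. The function names, the normalization scheme, and the budget range are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def adaptive_eps(grad, eps_min, eps_max):
    # Hypothetical sensitivity score: per-sample L1 gradient magnitude,
    # min-max normalized over the batch; more sensitive samples get the smaller budget.
    g = grad.flatten(1).norm(p=1, dim=1)
    s = (g - g.min()) / (g.max() - g.min() + 1e-12)
    eps = eps_max - s * (eps_max - eps_min)
    return eps.view(-1, 1, 1, 1)

def mi_fgsm_adaptive(model, x, y, eps_base=8/255, steps=10, mu=1.0):
    # MI-FGSM-style loop in which the fixed box constraint is replaced by a
    # per-sample one: both the step size and the projection use the adaptive eps.
    x_adv = x.clone().detach()
    momentum = torch.zeros_like(x)
    eps = torch.full((x.size(0), 1, 1, 1), eps_base, device=x.device)
    alpha = eps / steps
    for t in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        if t == 0:  # estimate sensitivity once, from the gradient at the clean input
            eps = adaptive_eps(grad, 0.5 * eps_base, 1.5 * eps_base)
            alpha = eps / steps
        momentum = mu * momentum + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * momentum.sign()
        # Project onto the per-sample box [x - eps, x + eps] and the valid pixel range.
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv.detach()

In this sketch, only the scalar ε becomes a per-sample tensor in the step size and projection, which is consistent with the abstract's claim that the adaptive constraint mechanism can be dropped into existing iterative attack methods.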

Key words: Adversarial attack, Adaptive, Upper bound constraint, Sample sensitivity, Black-box transferability, Imperceptibility

CLC Number: TP391