Computer Science ›› 2026, Vol. 53 ›› Issue (1): 404-412. doi: 10.11896/jsjkx.250600144

• Information Security •

  • Corresponding author: TAO Qing (taoqing@gmail.com)
  • About author: (1071391319@qq.com)

Adaptive Box-constraint Optimization Method for Adversarial Attacks

ZHOU Qiang1, LI Zhe1, TAO Wei2, TAO Qing3,4   

  1. 1 College of Air Defense, Army Branch University, Zhengzhou 450000, China;
    2 Institute of Evaluation and Assessment Research, PLA Academy of Military Science, Beijing 100091, China;
    3 College of Information Engineering, Army Branch University, Hefei 230000, China;
    4 Hefei Institute of Technology, Hefei 238076, China
  • Received: 2025-06-20 Revised: 2025-09-08 Online: 2026-01-08
  • About author: ZHOU Qiang, born in 1990, master, is a member of CCF (No. P1378G). His main research interests include machine learning and mathematical optimization.
    TAO Qing, born in 1965, Ph.D, professor, doctoral supervisor, is a senior member of CCF (No. 08091S). His main research interests include machine learning, pattern recognition and applied mathematics.
  • Supported by:
    National Natural Science Foundation of China(62076252,62576351).


Abstract: Deep neural networks are vulnerable to adversarial example attacks. Existing transfer-based attack optimization methods commonly employ fixed constraint upper bounds to represent imperceptibility intensity, focusing primarily on improving attack success rates. However, such approaches overlook inter-sample sensitivity variations, resulting in suboptimal imperceptibility (measured by Fréchet Inception Distance, FID). Inspired by adaptive gradient methods, this paper proposes an adversarial attack optimization method with adaptive constraint upper bounds, aiming to enhance imperceptibility. First, a sensitivity metric based on gradient magnitudes is established to quantify sensitivity differences across samples. Building on this, adaptive constraint upper bounds are determined to enable differentiated perturbation handling: low-intensity perturbations are applied to sensitive samples and high-intensity perturbations to non-sensitive ones. Finally, by replacing the projection operator and step size, the adaptive constraint mechanism is seamlessly integrated into existing attack methods. Experiments on the ImageNet-Compatible dataset demonstrate that, under equivalent black-box attack success rates, the proposed method reduces FID by 2.68%~3.49% compared with traditional fixed-constraint methods. Additionally, the MI-LA attack algorithm based on this approach achieves 6.32%~26.35% lower FID than five state-of-the-art adversarial attack methods.

Key words: Adversarial attack, Adaptive, Upper bound constraint, Sample sensitivity, Black-box transferability, Imperceptibility
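The pipeline the abstract outlines (a gradient-magnitude sensitivity index, a per-sample adaptive constraint upper bound, and MI-FGSM-style iterations whose step size and projection follow that bound) can be sketched in PyTorch as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the exact sensitivity formula, the `[lo, hi]` scaling range, and the function names `adaptive_eps` and `mi_fgsm_adaptive` are assumptions introduced here for clarity.

```python
# Hedged sketch: a gradient-magnitude sensitivity index sets a per-sample
# L-inf budget, which then replaces the fixed epsilon in an MI-FGSM-style
# attack's step size and projection. Formulas and constants are assumptions.
import torch
import torch.nn.functional as F

def adaptive_eps(model, x, y, eps_base=8 / 255, lo=0.5, hi=1.5):
    """Per-sample budget: sensitive samples (large input gradients) get a
    smaller bound, non-sensitive samples a larger one."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    s = grad.abs().mean(dim=(1, 2, 3))                    # sensitivity index
    s_norm = (s - s.min()) / (s.max() - s.min() + 1e-12)  # rescale to [0, 1]
    scale = hi - (hi - lo) * s_norm                       # sensitive -> lo
    return (eps_base * scale).view(-1, 1, 1, 1)

def mi_fgsm_adaptive(model, x, y, steps=10, mu=1.0):
    eps = adaptive_eps(model, x, y)  # adaptive bound replaces the fixed one
    alpha = eps / steps              # step size follows the adapted budget
    g, x_adv = torch.zeros_like(x), x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Momentum accumulation as in MI-FGSM (L1-normalized gradient).
        g = mu * g + grad / (grad.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()
        # Projection onto the per-sample box replaces the fixed-eps projection.
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv.detach()
```

The only change relative to a standard fixed-budget MI-FGSM loop is that `eps` is a per-sample tensor broadcast over the image dimensions, so the same two substitutions (step size and projection) would slot into other iterative attacks as well.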

CLC number: TP391