Computer Science ›› 2025, Vol. 52 ›› Issue (8): 403-410. doi: 10.11896/jsjkx.240700058

• Information Security •

  • Corresponding author: TAO Qing (taoqing@gmail.com)
  • First author: CHEN Jun (chenjun342423@sina.com)

Linear Interpolation Method for Adversarial Attack

CHEN Jun, ZHOU Qiang, BAO Lei, TAO Qing   

  1. Department of Information Engineering,PLA Army Academy of Artillery and Air Defense,Hefei 230031,China
  • Received:2024-07-09 Revised:2024-09-24 Online:2025-08-15 Published:2025-08-08
  • About author:CHEN Jun,born in 1989,master. His main research interests include machine learning and mathematical optimization.
    TAO Qing,born in 1965,Ph.D,professor,doctoral supervisor,is a senior member of CCF(No.09081S). His main research interests include machine learning,pattern recognition and applied mathematics.


Abstract: Deep neural networks exhibit significant vulnerability in the face of adversarial examples and are prone to attacks. The construction of adversarial examples can be abstracted as an optimization problem that maximizes an objective function. However, gradient-based iterative methods often face convergence challenges when dealing with such optimization problems. These methods primarily rely on the gradient sign for iterative updates, neglecting the magnitude and direction information of the gradient, which can lead to algorithm instability. Studies have shown that the I-FGSM adversarial attack algorithm originates from the stochastic projected subgradient method in the field of optimization. The literature has indicated that in optimization problems, replacing the stochastic projected subgradient method with a linear interpolation method can achieve superior performance. Based on this, this paper proposes a novel linear interpolation-based adversarial attack method, which applies the interpolation strategy to adversarial attacks and replaces the traditional sign gradient with the actual gradient. Theoretically, the proposed linear interpolation adversarial attack algorithm is proven to achieve the optimal individual convergence rate for general convex optimization problems, thereby overcoming the convergence difficulties of sign-gradient-based algorithms. Experimental results confirm that the linear interpolation method, as a universal and efficient strategy, can be combined with gradient-based adversarial attack methods to form new attack algorithms. Compared with the original algorithms, these new algorithms significantly increase the attack success rate while maintaining the imperceptibility of adversarial examples, and exhibit high stability during the iterative process.
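As a rough illustration of the contrast the abstract draws, the sketch below compares a standard I-FGSM update (sign gradient plus L∞ projection) with a hypothetical linear-interpolation variant that keeps the actual gradient and averages iterates. The function names, the toy `grad_fn` interface, and the `2/(t+1)` interpolation schedule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def ifgsm_attack(x, grad_fn, eps, alpha, steps):
    """I-FGSM-style attack: iterative sign-gradient ascent on the loss,
    projected back into the L-infinity eps-ball around the clean input x.
    (A real image attack would also clip to the valid pixel range.)"""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)
        x_adv = x_adv + alpha * np.sign(g)        # sign update: magnitude discarded
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto the eps-ball
    return x_adv

def linterp_attack(x, grad_fn, eps, alpha, steps):
    """Hypothetical linear-interpolation variant: take a projected step with
    the actual gradient, then output a convex combination (interpolation)
    of the new point and the previous output iterate."""
    x_adv = x.copy()  # interpolated output sequence
    y = x.copy()      # raw projected-gradient sequence
    for t in range(1, steps + 1):
        g = grad_fn(y)
        y = np.clip(y + alpha * g, x - eps, x + eps)  # full-gradient projected step
        lam = 2.0 / (t + 1)                           # assumed interpolation weight
        x_adv = (1.0 - lam) * x_adv + lam * y         # linear interpolation of iterates
    return x_adv
```

With an identity `grad_fn` (i.e., maximizing a toy quadratic loss), I-FGSM walks straight to a corner of the eps-ball, `x + eps * sign(x)`, while the interpolated variant stays inside the ball by construction, since a convex combination of feasible points is feasible.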

Key words: Linear interpolation, Adversarial attack, Gradient sign, Convergence, Stability

CLC number: TP391
[1]KIM M,JAIN A K,LIU X.AdaFace:Quality Adaptive Margin for Face Recognition[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition.2022:18729-18738.
[2]FENG S,SUN H W,YAN X T,et al.Dense reinforcement learning for safety validation of autonomous vehicles[J].Nature,2023,615(7953):620-627.
[3]WANG Y,YU J,ZHANG J.Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model[J].arXiv:2212.00490,2022.
[4]HESSEL J,MARASOVIĆ A,HWANG J D,et al.Do Androids Laugh at Electric Sheep? Humor “Understanding” Benchmarks from The New Yorker Caption Contest[J].arXiv:2209.06293,2022.
[5]GU J,JIA X,JORGE P D,et al.A Survey on Transferability of Adversarial Examples across Deep Neural Networks[J].arXiv:2310.17626,2023.
[6]WANG X,HE K.Enhancing the Transferability of Adversarial Attacks through Variance Tuning[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition.2021:1924-1933.
[7]JI S L,DU T Y,DENG S G,et al.Robustness certification research on deep learning models:a survey[J].Chinese Journal of Computers,2022,45(1):190-206.
[8]GOODFELLOW I J,SHLENS J,SZEGEDY C.Explaining and Harnessing Adversarial Examples[J].arXiv:1412.6572,2014.
[9]KURAKIN A,GOODFELLOW I J,BENGIO S.Adversarial examples in the physical world[J].arXiv:1607.02533,2016.
[10]CARLINI N,WAGNER D A.Towards Evaluating the Robustness of Neural Networks[C]//2017 IEEE Symposium on Security and Privacy.2017:39-57.
[11]MADRY A,MAKELOV A,SCHMIDT L,et al.Towards Deep Learning Models Resistant to Adversarial Attacks[J].arXiv:1706.06083,2019.
[12]TRAMÈR F,KURAKIN A,PAPERNOT N,et al.Ensemble Adversarial Training:Attacks and Defenses[J].arXiv:1705.07204,2017.
[13]DONG Y,LIAO F,PANG T,et al.Boosting Adversarial Attacks with Momentum[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.2018:9185-9193.
[14]LIN J,SONG C,HE K,et al.Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks[J].arXiv:1908.06281,2019.
[15]XIE C,ZHANG Z,WANG J,et al.Improving Transferability of Adversarial Examples With Input Diversity[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition.2019:2725-2734.
[16]DONG Y,PANG T,SU H,et al.Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition.2019:4307-4316.
[17]WANG J,CHEN Z,JIANG K,et al.Boosting the Transferability of Adversarial Attacks with Global Momentum Initialization[J].arXiv:2211.11236,2022.
[18]WANG X,LIN J,HU H,et al.Boosting Adversarial Transferability through Enhanced Momentum[J].arXiv:2103.10609,2021.
[19]PENG A,LIN Z,ZENG H,et al.Boosting Transferability of Adversarial Example via an Enhanced Euler's Method[C]//ICASSP 2023-2023 IEEE International Conference on Acoustics,Speech and Signal Processing.2023.
[20]GE Z,SHANG F,LIU H,et al.Boosting Adversarial Transferability by Achieving Flat Local Maxima[J].arXiv:2306.05225,2023.
[21]FANG Z,WANG R,HUANG T,et al.Strong Transferable Adversarial Attacks via Ensembled Asymptotically Normal Distribution Learning[C]//2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition.2024.
[22]KARIMIREDDY S P,REBJOCK Q,STICH S U,et al.Error Feedback Fixes SignSGD and other Gradient Compression Schemes[J].arXiv:1901.09847,2019.
[23]TAO W,PAN Z S,ZHU X H,et al.The optimal individual convergence rate for the projected subgradient method with linear interpolation operation[J].Journal of Computer Research and Development,2017,54(3):529-536.
[24]MUKKAMALA M C,HEIN M.Variants of RMSProp and Adagrad with Logarithmic Regret Bounds[J].arXiv:1706.05507,2017.
[25]KINGMA D P,BA J.Adam:A Method for Stochastic Optimization[J].arXiv:1412.6980,2017.
[26]SITAWARIN C.New perspectives on adversarially robust machine learning systems:UCB-EECS-2024-10[R].2024.
[27]RUSSAKOVSKY O,DENG J,SU H,et al.ImageNet Large Scale Visual Recognition Challenge[J].International Journal of Computer Vision,2015,115:211-252.
[28]HE K,ZHANG X,REN S,et al.Deep Residual Learning for Image Recognition[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition.2016:770-778.
[29]SZEGEDY C,VANHOUCKE V,IOFFE S,et al.Rethinking the Inception Architecture for Computer Vision[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition.2016:2818-2826.
[30]SIMONYAN K,ZISSERMAN A.Very Deep Convolutional Networks for Large-Scale Image Recognition[J].arXiv:1409.1556,2014.
[31]HUANG G,LIU Z,WEINBERGER K Q.Densely Connected Convolutional Networks[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition.2017:2261-2269.
[32]SANDLER M,HOWARD A G,ZHU M,et al.MobileNetV2:Inverted Residuals and Linear Bottlenecks[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.2018:4510-4520.
[33]DOSOVITSKIY A,BEYER L,KOLESNIKOV A,et al.An Image is Worth 16×16 Words:Transformers for Image Recognition at Scale[J].arXiv:2010.11929,2020.
[34]LIU Z,LIN Y,CAO Y,et al.Swin Transformer:Hierarchical Vision Transformer using Shifted Windows[C]//2021 IEEE/CVF International Conference on Computer Vision.2021:9992-10002.
[35]LIU Y,CHEN X,LIU C,et al.Delving into Transferable Adversarial Examples and Black-box Attacks[J].arXiv:1611.02770,2016.
[36]BAO L,TAO W,TAO Q.Enhancing Adversarial Attack Transferability with an Adaptive Step-Size Strategy and Data Augmentation Mechanism[J].Electronics Letters,2024,52(1):157-169.