Computer Science ›› 2025, Vol. 52 ›› Issue (8): 403-410.doi: 10.11896/jsjkx.240700058

• Information Security •

Linear Interpolation Method for Adversarial Attack

CHEN Jun, ZHOU Qiang, BAO Lei, TAO Qing   

  1. Department of Information Engineering,PLA Army Academy of Artillery and Air Defense,Hefei 230031,China
  • Received:2024-07-09 Revised:2024-09-24 Online:2025-08-15 Published:2025-08-08
  • About author:CHEN Jun,born in 1989,master.His main research interests include machine learning and mathematical optimization.
    TAO Qing,born in 1965,Ph.D,professor,doctoral supervisor,is a senior member of CCF(No.09081S).His main research interests include machine learning,pattern recognition and applied mathematics.

Abstract: Deep neural networks are highly vulnerable to adversarial examples and are prone to attacks.Constructing an adversarial example can be abstracted as an optimization problem that maximizes an objective function.However,gradient-based iterative methods often face convergence difficulties on such problems:they rely primarily on the sign of the gradient for iterative updates,discarding the gradient's magnitude and direction,which can make the algorithms unstable.Studies have shown that the I-FGSM adversarial attack algorithm originates from the stochastic projected subgradient method in the field of optimization,and the literature indicates that,for optimization problems,replacing the stochastic projected subgradient method with a linear interpolation scheme achieves superior performance.Based on this,this paper proposes a novel linear interpolation-based adversarial attack method,which applies the interpolation strategy to adversarial attacks and replaces the traditional gradient sign with the actual gradient.Theoretically,the proposed linear interpolation adversarial attack algorithm is proven to achieve the optimal individual convergence rate for general convex optimization problems,thereby overcoming the convergence difficulties of sign-based algorithms.Experimental results confirm that linear interpolation,as a universal and efficient strategy,can be combined with gradient-based adversarial attack methods to form new attack algorithms.Compared with the original algorithms,the new algorithms significantly increase the attack success rate while preserving the imperceptibility of the adversarial examples,and they exhibit high stability during the iterative process.
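The contrast the abstract draws, stepping along the sign of the gradient (I-FGSM) versus stepping along the actual gradient and outputting a linearly interpolated iterate, can be sketched on a toy one-dimensional problem. The loss f, the step size alpha, and the interpolation weights beta_t = 2/(t+2) below are illustrative assumptions for this sketch, not the paper's exact formulation:

```python
import math

# Toy 1-D "loss" to maximize: f(x) = -(x - 3)**2, perturbation constrained
# to an eps-ball around x0. grad_f is the exact gradient of f.
def grad_f(x):
    return -2.0 * (x - 3.0)

def clip(x, x0, eps):
    # Project back onto the eps-ball (the L_inf constraint, here in 1-D)
    return max(x0 - eps, min(x0 + eps, x))

def i_fgsm(x0, eps, alpha, steps):
    # Classic I-FGSM: update along the *sign* of the gradient only
    x = x0
    for _ in range(steps):
        x = clip(x + alpha * math.copysign(1.0, grad_f(x)), x0, eps)
    return x

def li_attack(x0, eps, alpha, steps):
    # Linear-interpolation variant (sketch): update along the *actual*
    # gradient, and return an interpolated iterate z_t rather than x_t,
    # using the assumed weights beta_t = 2 / (t + 2)
    x = z = x0
    for t in range(steps):
        x = clip(x + alpha * grad_f(x), x0, eps)
        beta = 2.0 / (t + 2.0)
        z = clip((1.0 - beta) * z + beta * x, x0, eps)
    return z

# Both attacks drive the perturbed point toward the ball boundary x0 + eps
print(i_fgsm(0.0, 1.0, 0.1, 50), li_attack(0.0, 1.0, 0.1, 50))
```

On this toy problem both methods reach the constraint boundary; the point of the interpolated output is that it averages out the oscillation that fixed-size sign steps exhibit near an optimum, which is the stability property the abstract refers to.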

Key words: Linear interpolation, Adversarial attack, Gradient sign, Convergence, Stability

CLC Number: TP391
[1]KIM M,JAIN A K,LIU X.AdaFace:Quality Adaptive Margin for Face Recognition[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition.2022:18729-18738.
[2]FENG S,SUN H W,YAN X T,et al.Dense reinforcement learning for safety validation of autonomous vehicles[J].Nature,2023,615(7953):620-627.
[3]WANG Y,YU J,ZHANG J.Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model[J].arXiv:2212.00490,2022.
[4]HESSEL J,MARASOVIĆ A,HWANG J D,et al.Do Androids Laugh at Electric Sheep? Humor “Understanding” Benchmarks from The New Yorker Caption Contest[J].arXiv:2209.06293,2022.
[5]GU J,JIA X,JORGE P D,et al.A Survey on Transferability of Adversarial Examples across Deep Neural Networks[J].arXiv:2310.17626,2023.
[6]WANG X,HE K.Enhancing the Transferability of Adversarial Attacks through Variance Tuning[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition.2021:1924-1933.
[7]JI S L,DU T Y,DENG S G,et al.Robustness certification research on deep learning models:a survey[J].Chinese Journal of Computers,2022,45(1):190-206.
[8]GOODFELLOW I J ,SHLENS J,SZEGEDY C.Explaining and Harnessing Adversarial Examples[J].arXiv:1412.6572,2014.
[9]KURAKIN A,GOODFELLOW I J,BENGIO S.Adversarial examples in the physical world[J].arXiv:1607.02533,2016.
[10]CARLINI N,WAGNER D A.Towards Evaluating the Robustness of Neural Networks[C]//2017 IEEE Symposium on Security and Privacy.2017:39-57.
[11]MADRY A,MAKELOV A,SCHMIDT L,et al.Towards Deep Learning Models Resistant to Adversarial Attacks[J].arXiv:1706.06083,2019.
[12]TRAMÉR F, KURAKIN A, PAPERNOT N,et al.Ensemble Adversarial Training:Attacks and Defenses[J].arXiv:1705.07204,2017.
[13]DONG Y,LIAO F,PANG T,et al.Boosting Adversarial Attacks with Momentum[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.2018:9185-9193.
[14]LIN J,SONG C,HE K,et al.Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks[J].arXiv:1908.06281,2019.
[15]XIE C,ZHANG Z,WANG J,et al.Improving Transferability of Adversarial Examples With Input Diversity[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition.2019:2725-2734.
[16]DONG Y,PANG T,SU H,et al.Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition.2019:4307-4316.
[17]WANG J,CHEN Z,JIANG K,et al.Boosting the Transferability of Adversarial Attacks with Global Momentum Initialization[J].arXiv:2211.11236,2022.
[18]WANG X,LIN J,HU H,et al.Boosting Adversarial Transferability through Enhanced Momentum[J].arXiv:2103.10609,2021.
[19]PENG A,LIN Z,ZENG H,et al.Boosting Transferability of Adversarial Example via an Enhanced Euler's Method[C]//ICASSP 2023-2023 IEEE International Conference on Acoustics,Speech and Signal Processing.2023.
[20]GE Z,SHANG F,LIU H,et al.Boosting Adversarial Transferability by Achieving Flat Local Maxima[J].arXiv:2306.05225,2023.
[21]FANG Z,WANG R,HUANG T,et al.Strong Transferable Adversarial Attacks via Ensembled Asymptotically Normal Distribution Learning[C]//CVPR 2024.2024.
[22]KARIMIREDDY S P,REBJOCK Q,STICH S U,et al.Error Feedback Fixes SignSGD and other Gradient Compression Schemes[J].arXiv:1901.09847,2019.
[23]TAO W,PAN Z S,ZHU X H,et al.The optimal individual convergence rate for the projected subgradient method with linear interpolation operation[J].Journal of Computer Research and Development,2017,54(3):529-536.
[24]MUKKAMALA M C,HEIN M.Variants of RMSProp and Adagrad with Logarithmic Regret Bounds[J].arXiv:1706.05507,2017.
[25]KINGMA D P,BA J.Adam:A Method for Stochastic Optimization[J].arXiv:1412.6980,2017.
[26]SITAWARIN C.New perspectives on adversarially robust machine learning systems:UCB-EECS-2024-10[R].2024.
[27]RUSSAKOVSKY O,DENG J,SU H,et al.ImageNet Large Scale Visual Recognition Challenge[J].International Journal of Computer Vision,2015,115:211-252.
[28]HE K,ZHANG X,REN S,et al.Deep Residual Learning for Image Recognition[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition.2016:770-778.
[29]SZEGEDY C,VANHOUCKE V,IOFFE S,et al.Rethinking the Inception Architecture for Computer Vision[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition.2016:2818-2826.
[30]SIMONYAN K,ZISSERMAN A.Very Deep Convolutional Networks for Large-Scale Image Recognition[J].arXiv:1409.1556,2014.
[31]HUANG G,LIU Z,WEINBERGER K Q.Densely Connected Convolutional Networks[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition.2017:2261-2269.
[32]SANDLER M,HOWARD A G,ZHU M,et al.MobileNetV2:Inverted Residuals and Linear Bottlenecks[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.2018:4510-4520.
[33]DOSOVITSKIY A,BEYER L,KOLESNIKOV A,et al.An Image is Worth 16×16 Words:Transformers for Image Recognition at Scale[J].arXiv:2010.11929,2020.
[34]LIU Z,LIN Y,CAO Y,et al.Swin Transformer:Hierarchical Vision Transformer using Shifted Windows[C]//2021 IEEE/CVF International Conference on Computer Vision.2021:9992-10002.
[35]LIU Y,CHEN X,LIU C,et al.Delving into Transferable Adversarial Examples and Black-box Attacks[J].arXiv:1611.02770,2016.
[36]BAO L,TAO W,TAO Q.Enhancing Adversarial Attack Transferability with an Adaptive Step-Size Strategy and Data Augmentation Mechanism[J].Electronics Letters,2024,52(1):157-169.