Computer Science ›› 2024, Vol. 51 ›› Issue (6A): 230800025-6. doi: 10.11896/jsjkx.230800025

• Information Security •

Improving Transferability of Adversarial Samples Through Laplacian Smoothing Gradient

LI Wenting, XIAO Rong, YANG Xiao   

  1. School of Computer and Information Engineering, Hubei University, Wuhan 430000, China
  • Published: 2024-06-06
  • About author: LI Wenting, born in 1999, postgraduate. Her main research interests include deep learning and adversarial attacks.
    XIAO Rong, born in 1980, Ph.D, lecturer. Her main research interests include the industrial Internet, intelligent analysis, and natural language processing.
  • Supported by:
    Research and Application of Infrared Target Detection and Recognition Technology Based on Artificial Intelligence (2022KZ00125) and Cluster Analysis of Optoelectronic International Patent Data from the Perspective of Achievement Transformation (E1KF291005).

Abstract: Deep neural networks (DNNs) are vulnerable to adversarial sample attacks because of the fragility of their model structure. Existing adversarial sample generation methods achieve high success rates in white-box attacks, but their transferability is limited when they are used to attack other DNN models. To improve the success rate of transfer-based black-box attacks, this paper proposes a transferable adversarial attack method based on Laplacian-smoothed gradients, built on existing gradient-based transfer attack methods. First, Laplacian smoothing is applied to the gradient of the input image; the smoothed gradient is then fed into a gradient-based attack for the subsequent update, with the aim of improving the transferability of adversarial samples across different models. The advantage of Laplacian smoothing is that it effectively reduces the influence of noise and outliers on the data, thereby improving the data's reliability and stability. Evaluation on multiple models shows that the approach further improves the transfer success rate of adversarial samples, exceeding the baseline attack methods by up to 2%. The results indicate that the method is valuable for enhancing the transferability of adversarial attack algorithms and offers a new direction for further research and application.
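The following is a minimal sketch of the idea described above, not the authors' released code. It assumes a PyTorch setup and uses MI-FGSM as the underlying gradient-based attack (an assumption; the abstract only specifies a gradient-based transfer attack). One Jacobi-style Laplacian smoothing step pulls each pixel of the input gradient toward the mean of its 4-neighbourhood before the attack update; the names laplacian_smooth and ls_mi_fgsm and the settings lam, eps, steps, and mu are illustrative choices, not values from the paper.

import torch
import torch.nn.functional as F

def laplacian_smooth(grad: torch.Tensor, lam: float = 0.5) -> torch.Tensor:
    # One Laplacian smoothing step per channel: each pixel of the input-image
    # gradient (B, C, H, W) is blended with the mean of its 4-neighbourhood,
    # which damps noisy, high-frequency components of the gradient.
    kernel = grad.new_tensor([[0., 1., 0.],
                              [1., 0., 1.],
                              [0., 1., 0.]]) / 4.0
    kernel = kernel.view(1, 1, 3, 3).repeat(grad.size(1), 1, 1, 1)
    padded = F.pad(grad, (1, 1, 1, 1), mode="replicate")
    neighbour_mean = F.conv2d(padded, kernel, groups=grad.size(1))
    return (1.0 - lam) * grad + lam * neighbour_mean

def ls_mi_fgsm(model, x, y, eps=16/255, steps=10, mu=1.0, lam=0.5):
    # MI-FGSM in which the raw gradient is Laplacian-smoothed before the
    # momentum accumulation and sign update (hypothetical parameter values).
    alpha = eps / steps
    x_adv = x.clone().detach()
    momentum = torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        grad = laplacian_smooth(grad, lam)                  # smooth, then attack
        grad = grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        momentum = mu * momentum + grad
        x_adv = x_adv.detach() + alpha * momentum.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()

Plugging laplacian_smooth into other gradient-based transfer attacks follows the same pattern: compute the input gradient, smooth it, then hand it to the attack's usual update rule.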

Key words: Deep neural networks, Adversarial attack, Adversarial samples, Black-box attack, Transferability

CLC Number: TP393.08