Computer Science ›› 2024, Vol. 51 ›› Issue (6A): 230800025-6. doi: 10.11896/jsjkx.230800025

• Information Security •

Improving Transferability of Adversarial Samples Through Laplacian Smoothing Gradient

LI Wenting, XIAO Rong, YANG Xiao   

  1. School of Computer and Information Engineering,Hubei University,Wuhan 430000,China
  • Published:2024-06-06
  • Corresponding author:XIAO Rong(x_rong@whu.edu.cn)
  • About author:LI Wenting,born in 1999,postgraduate.Her main research interests include deep learning and adversarial attacks.
    XIAO Rong,born in 1980,Ph.D.,lecturer.Her main research interests include industrial Internet and intelligent analysis,and natural language processing.
  • Supported by:
    Research and Application of Infrared Target Detection and Recognition Technology Based on Artificial Intelligence(2022KZ00125) and Cluster Analysis of Optoelectronic International Patent Data from the Perspective of Achievement Transformation(E1KF291005).

Abstract: Deep neural networks are vulnerable to adversarial sample attacks due to the fragility of their model structure.Existing adversarial sample generation methods achieve high white-box attack success rates,but their transferability is limited when attacking other DNN models.To improve the success rate of black-box transfer attacks,this paper proposes a transferable adversarial attack method that uses Laplacian-smoothed gradients.The method builds on gradient-based black-box transfer attacks:Laplacian smoothing is first applied to the gradient of the input image,and the smoothed gradient is then fed into the gradient-based attack for the subsequent computation,with the aim of improving the transferability of adversarial samples across different models.The advantage of Laplacian smoothing is that it effectively reduces the influence of noise and outliers on the data,thereby improving the reliability and stability of the data.Evaluations on multiple models show that the method further improves the transfer success rate of adversarial samples,with the best transfer success rate 2% higher than that of the baseline attack.These results show that the method is of great significance for enhancing the transfer performance of adversarial attack algorithms and provides a new direction for further research and application.
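
To make the procedure concrete, below is a minimal PyTorch sketch of the attack loop the abstract describes: the input gradient is smoothed with a discrete Laplacian stencil before a momentum-based update, for which we assume an MI-FGSM baseline. The 3x3 kernel, the smoothing strength sigma, and all hyperparameters are illustrative assumptions rather than the paper's reported settings.

```python
# Minimal sketch (PyTorch), assuming an MI-FGSM baseline: each step's
# input gradient is Laplacian-smoothed before the momentum update.
# sigma, eps, steps and the 3x3 stencil are illustrative assumptions.
import torch
import torch.nn.functional as F

def laplacian_smooth(grad, sigma=0.2):
    """Smooth an image-gradient tensor (N,C,H,W): g <- g + sigma * (k*g),
    a one-step explicit diffusion that damps high-frequency noise.
    Keep sigma < 0.25 so this 4-neighbor stencil stays stable."""
    # Discrete 3x3 Laplacian, applied per channel via grouped convolution.
    k = torch.tensor([[0., 1., 0.],
                      [1., -4., 1.],
                      [0., 1., 0.]], device=grad.device, dtype=grad.dtype)
    k = k.view(1, 1, 3, 3).repeat(grad.size(1), 1, 1, 1)
    lap = F.conv2d(grad, k, padding=1, groups=grad.size(1))
    return grad + sigma * lap

def mi_fgsm_laplacian(model, x, y, eps=16/255, steps=10, mu=1.0, sigma=0.2):
    """MI-FGSM-style attack with Laplacian-smoothed gradients."""
    alpha = eps / steps
    g = torch.zeros_like(x)            # accumulated momentum
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        grad = laplacian_smooth(grad, sigma)   # the smoothing step
        # Standard MI-FGSM momentum update with L1-normalized gradient.
        g = mu * g + grad / (grad.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```

Because the smoothing touches only the raw gradient, the same laplacian_smooth call could be dropped into other gradient-based attacks (e.g. I-FGSM or variance-tuned variants) without changing their update rules, which is consistent with the abstract's framing of the method as a modification layered on existing gradient-based transfer attacks.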

Key words: Deep neural networks, Adversarial attack, Adversarial samples, Black-box attack, Transferability

CLC number: TP393.08