Computer Science ›› 2021, Vol. 48 ›› Issue (6A): 509-513.doi: 10.11896/jsjkx.200800081

• Information Security •

Defense Method of Adversarial Training Based on Gaussian Enhancement and Iterative Attack

WANG Dan-ni, CHEN Wei, YANG Yang, SONG Shuang   

  1. School of Information and Software Engineering (Software Engineering), University of Electronic Science and Technology of China, Chengdu 610054, China
  • Online: 2021-06-10  Published: 2021-06-17
  • About author: WANG Dan-ni, born in 1995, postgraduate. Her main research interests include information security of artificial intelligence.
    CHEN Wei, born in 1978, Ph.D, associate professor. His main research interests include network security.
  • Supported by:
    Funds for International Cooperation and Exchange of the National Natural Science Foundation of China (61520106007).

Abstract: In recent years, deep learning models have achieved high accuracy on a wide range of classification tasks, yet they remain extremely vulnerable to attacks by adversarial samples. Adversarial training is currently one of the most effective defenses against such attacks. However, known single-step adversarial training methods defend well only against single-step attacks and perform poorly against iterative attacks, while iterative adversarial training methods improve robustness against iterative attacks but remain unsatisfactory against single-step attacks. To improve the robustness of deep learning models against single-step and iterative attacks simultaneously, this paper proposes GILLC, an adversarial training defense that combines Gaussian enhancement with ILLC (Iterative Least-Likely Class) iterative attacks. First, Gaussian perturbations are added to the clean samples to improve the generalization ability of the model. Then, adversarial samples generated by ILLC are used for adversarial training, approximately solving the inner maximization problem of adversarial training. White-box attack experiments are conducted on the CIFAR10 dataset. Compared with the baseline, single-step adversarial training, and iterative adversarial training methods, the results show that GILLC effectively improves the robustness of deep learning models against both single-step and iterative attacks, without significantly degrading classification performance on clean samples.
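The two stages summarized in the abstract (Gaussian enhancement of a clean sample, then crafting an ILLC adversarial example for training) can be sketched on a toy model. The snippet below is a minimal, illustrative NumPy sketch, not the paper's implementation: the linear softmax classifier, the step sizes `eps`/`alpha`, and helper names such as `illc_attack` are assumptions made for demonstration. ILLC repeatedly steps the input toward the class the model currently ranks least likely, clipping to an ε-ball around the (noise-enhanced) sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_input(W, x, y):
    """Gradient of cross-entropy loss w.r.t. the input x, for logits z = W @ x."""
    p = softmax(W @ x)
    p[y] -= 1.0          # dL/dz for softmax cross-entropy
    return W.T @ p       # chain rule back to the input

def illc_attack(W, x, eps=0.3, alpha=0.05, steps=10):
    """Iterative Least-Likely Class attack: descend the loss toward the
    class the model currently finds least probable, inside an eps-ball."""
    y_ll = int(np.argmin(softmax(W @ x)))       # least-likely class
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_input(W, x_adv, y_ll)
        x_adv = x_adv - alpha * np.sign(g)      # step toward y_ll
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into eps-ball
    return x_adv, y_ll

W = rng.normal(size=(3, 4))   # toy 3-class linear model
x = rng.normal(size=4)        # one clean sample

# Stage 1 (Gaussian enhancement): perturb the clean sample.
x_noisy = x + rng.normal(scale=0.1, size=x.shape)

# Stage 2: craft an ILLC adversarial example from the enhanced sample.
# In GILLC-style training, the model would then be fit on such examples.
x_adv, y_ll = illc_attack(W, x_noisy)

p_before = softmax(W @ x_noisy)[y_ll]
p_after = softmax(W @ x_adv)[y_ll]
print(f"p(least-likely class) before: {p_before:.4f}, after: {p_after:.4f}")
```

Training on `x_adv` (with the true label) is what approximately solves the inner maximization of adversarial training: the attack searches the ε-ball for a high-loss point, and the outer minimization then fits the model on it.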

Key words: Adversarial samples, Adversarial training, Deep learning, Gaussian enhancement, Iterative attacks, Single-step attacks

CLC Number: TP391

[1] HE K,ZHANG X,REN S,et al.Deep Residual Learning for Image Recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2016:770-778.
[2] ZHANG Z,QIAO S,XIE C,et al.Single-shot Object Detection with Enriched Semantics[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2018:5813-5821.
[3] CHEN L,PAPANDREOU G,KOKKINOS I,et al.DeepLab:Semantic Image Segmentation with Deep Convolutional Nets,Atrous Convolution,and Fully Connected CRFs[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2018,40(4):834-848.
[4] SZEGEDY C,ZAREMBA W,SUTSKEVER I,et al.Intriguing Properties of Neural Networks[C]//International Conference on Learning Representations.2014.
[5] AKHTAR N,MIAN A.Threat of Adversarial Attacks on Deep Learning in Computer Vision:A Survey[J].IEEE Access,2018,6:14410-14430.
[6] MADRY A,MAKELOV A,SCHMIDT L,et al.Towards Deep Learning Models Resistant to Adversarial Attacks[C]//International Conference on Learning Representations.2018.
[7] LI Y,LI L,WANG L,et al.NATTACK:Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks[C]//International Conference on Machine Learning.2019:3866-3876.
[8] KURAKIN A,GOODFELLOW I,BENGIO S,et al.Adversarial Machine Learning at Scale[C]//International Conference on Learning Representations.2017.
[9] SONG C,HE K,LIN J,et al.Robust Local Features for Improving the Generalization of Adversarial Training[C]//International Conference on Learning Representations.2020.
[10] SHAFAHI A,NAJIBI M,GHIASI M A,et al.Adversarial training for free[C]//Neural Information Processing Systems.2019:3358-3369.
[11] GOODFELLOW I,SHLENS J,SZEGEDY C,et al.Explaining and Harnessing Adversarial Examples[C]//International Conference on Learning Representations.2015.
[12] KURAKIN A,GOODFELLOW I,BENGIO S,et al.Adversarial examples in the physical world[C]//International Conference on Learning Representations.2017.
[13] MOOSAVI-DEZFOOLI S M,FAWZI A,FROSSARD P.Deepfool:A Simple and Accurate Method to Fool Deep Neural Networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2016:2574-2582.
[14] CARLINI N,WAGNER D.Towards Evaluating the Robustness of Neural Networks[C]//IEEE Symposium on Security and Privacy.2017:39-57.
[15] MENG D,CHEN H.MagNet:A Two-Pronged Defense against Adversarial Examples[C]//Computer and Communications Security.2017:135-147.
[16] GU S,RIGAZIO L.Towards Deep Neural Network Architectures Robust to Adversarial Examples[J].arXiv preprint,2014.
[17] PAPERNOT N,MCDANIEL P,WU X,et al.Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks[C]//IEEE Symposium on Security and Privacy.2016:582-597.
[18] XU W,EVANS D,QI Y,et al.Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial Examples[J].arXiv preprint,2017.
[19] XU W,EVANS D,QI Y,et al.Feature Squeezing:Detecting Adversarial Examples in Deep Neural Networks[C]//Network and Distributed System Security Symposium.2018.
[20] WONG E,RICE L,KOLTER J Z,et al.Fast is Better than Free:Revisiting Adversarial Training[J].arXiv preprint arXiv:2001.03994,2020.
[21] XIAO C,ZHONG P,ZHENG C,et al.Enhancing Adversarial Defense by k-Winners-Take-All[J].arXiv preprint arXiv:1905.10510,2019.
[22] ZANTEDESCHI V,NICOLAE M,RAWAT A,et al.Efficient Defenses Against Adversarial Attacks[J].arXiv preprint,2017.