Computer Science ›› 2021, Vol. 48 ›› Issue (7): 25-32. doi: 10.11896/jsjkx.210300299

Special Issue: Artificial Intelligence Security


Feature Gradient-based Adversarial Attack on Modulation Recognition-oriented Deep Neural Networks

WANG Chao1, WEI Xiang-lin2, TIAN Qing1, JIAO Xiang1, WEI Nan1, DUAN Qiang2   

  1. School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing 210044, China
    2. The 63rd Research Institute, National University of Defense Technology, Nanjing 210007, China
  • Received: 2021-03-30 Revised: 2021-05-06 Online: 2021-07-15 Published: 2021-07-02
  • About author: WANG Chao, born in 1997, postgraduate. His main research interests include deep learning and adversarial examples. (wangchao2020@nuist.edu.cn)
    WEI Xiang-lin, born in 1985, Ph.D, engineer. His main research interests include edge computing, deep learning and wireless network security.
  • Supported by:
    National Natural Science Foundation of China (61702273) and Natural Science Foundation of Jiangsu Province (BK20170956).

Abstract: Deep neural network (DNN)-based automatic modulation recognition (AMR) outperforms traditional AMR methods in automatic feature extraction and recognition accuracy while requiring less manual intervention. However, high recognition accuracy is usually the first priority of practitioners when designing AMR-oriented DNN (ADNN) models, and security is often neglected. Against this backdrop, from the perspective of artificial intelligence security, this paper presents a novel feature gradient-based adversarial attack method on ADNN models. Compared with the traditional label gradient-based attack method, the proposed method better attacks the temporal and spatial features extracted by ADNN models. Experimental results on an open dataset show that the proposed method outperforms the label gradient-based method in attack success ratio and transferability for both white-box and black-box attacks.
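The contrast described in the abstract, perturbing the input along the gradient of the classification loss (label gradient, as in FGSM) versus along the gradient of an objective defined on an intermediate feature representation (feature gradient), can be sketched as follows. This is a minimal, hedged PyTorch-style illustration, not the authors' published implementation; the split of the ADNN into feature_extractor and classifier, the MSE feature objective, and the step size eps are assumptions made only for demonstration.

import torch
import torch.nn.functional as F

def label_gradient_attack(model, x, y, eps):
    # Classic FGSM: one step along the sign of the gradient of the
    # classification loss with respect to the input signal.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def feature_gradient_attack(feature_extractor, x, eps):
    # Illustrative feature-gradient variant (an assumption, not the paper's
    # exact objective): push the intermediate features of the perturbed
    # signal away from those of the clean signal, distorting the temporal
    # and spatial representation the ADNN relies on.
    with torch.no_grad():
        clean_feat = feature_extractor(x)
    x_adv = x.clone().detach().requires_grad_(True)
    adv_feat = feature_extractor(x_adv)
    loss = F.mse_loss(adv_feat, clean_feat)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

Here feature_extractor and model are hypothetical handles: feature_extractor stands for the ADNN truncated at a chosen intermediate layer, and model for the full end-to-end classifier.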

Key words: Adversarial examples, Automatic modulation recognition, Deep learning, Modulation signal, Neural networks

CLC Number: TP183