Computer Science ›› 2024, Vol. 51 ›› Issue (10): 425-431.doi: 10.11896/jsjkx.230900054

• Information Security •

Black-box Adversarial Attack Method on Modulation Recognition Neural Networks Based on Signal Proximal Linear Combination

GUO Yuqi, LI Dongyang, YAN Bin, WANG Linyuan   

  1. Laboratory of Imaging and Intelligent Processing, PLA Strategy Support Force Information Engineering University, Zhengzhou 450001, China
  • Received: 2023-09-11  Revised: 2024-03-05  Online: 2024-10-15  Published: 2024-10-11
  • About author: GUO Yuqi, born in 1991, postgraduate. Her main research interests include intelligent signal processing and artificial intelligence security.
    WANG Linyuan, born in 1985, Ph.D, associate professor, master supervisor. His main research interests include sparse optimization theory, mathematical foundations of artificial intelligence, and hybrid intelligence in brain-computer interaction.
  • Supported by:
    National Natural Science Foundation of China (62271504).

Abstract: With the extensive application of deep learning in wireless communication, especially in signal modulation recognition, the vulnerability of neural networks to adversarial examples poses a challenge to the security of wireless communication. Addressing the black-box attack scenario for wireless signals, in which real-time feedback from the neural network is unavailable and only the recognition results can be observed, a black-box query adversarial attack method based on proximal linear combination is proposed. First, on a subset of the dataset, each original signal is linearly combined with candidate target signals within a range very close to the original signal (with weighting coefficients no greater than 0.05), and the combined signals are fed to the neural network for querying. By counting the number of misrecognitions produced by all proximal linear combinations, the specific target signal most likely to induce misrecognition is determined for each original signal category; this is termed the optimal perturbation signal. During attack testing, adversarial examples are generated by performing proximal linear combinations with the optimal perturbation signal corresponding to the category of the signal under attack. Experimental results show that, using the optimal perturbation signal found for each modulation category on the chosen subset, the recognition accuracy of the neural network on the entire dataset drops from 94% to 50%, with lower perturbation power than attacks that add random noise. Furthermore, the generated adversarial examples exhibit some transferability to structurally similar neural networks. The method, which generates new adversarial examples after the statistical queries are complete, is easy to implement and requires no further black-box queries.
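The query-and-count procedure described in the abstract can be sketched as follows. This is a minimal illustration under assumptions, not the authors' code: the function names, the per-class data layout, and the label-only `query_model` interface are all hypothetical, and only the two facts stated in the abstract (a linear combination very close to the original signal with weight at most 0.05, and selection by misrecognition counts) are taken from the source.

```python
import numpy as np

def proximal_linear_combination(x, t, alpha=0.05):
    """Combine original signal x with target signal t, staying in a
    small neighborhood of x (weighting coefficient alpha <= 0.05)."""
    return (1.0 - alpha) * x + alpha * t

def find_optimal_perturbation(signals_by_class, query_model, alpha=0.05):
    """For each original class, count how many proximal combinations with
    signals of each other class are misrecognized by the black-box model
    (label-only queries), and keep the most effective target class."""
    best = {}
    for cls, originals in signals_by_class.items():
        miscounts = {}
        for tgt_cls, targets in signals_by_class.items():
            if tgt_cls == cls:
                continue  # only combine with signals of other classes
            count = 0
            for x in originals:
                for t in targets:
                    adv = proximal_linear_combination(x, t, alpha)
                    if query_model(adv) != cls:  # query returns a label only
                        count += 1
            miscounts[tgt_cls] = count
        # target class whose signals most often flipped the prediction
        best[cls] = max(miscounts, key=miscounts.get)
    return best
```

At attack time, no further queries are needed: for a signal of class `c`, an adversarial example is simply `proximal_linear_combination(x, best[c])` with a stored representative of the optimal perturbation class.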

Key words: Deep learning, Adversarial examples, Signal recognition, Black-box attack, Adversarial signal

CLC Number: TP391