Computer Science ›› 2024, Vol. 51 ›› Issue (10): 425-431. doi: 10.11896/jsjkx.230900054

• Information Security •

Black-box Adversarial Attack Methods on Modulation Recognition Neural Networks Based on Signal Proximal Linear Combination

GUO Yuqi, LI Dongyang, YAN Bin, WANG Linyuan   

  1. Laboratory of Imaging and Intelligent Processing, PLA Strategy Support Force Information Engineering University, Zhengzhou 450001, China
  • Received: 2023-09-11 Revised: 2024-03-05 Online: 2024-10-15 Published: 2024-10-11
  • Corresponding author: WANG Linyuan (wanglinyuanwly@163.com)
  • About author: GUO Yuqi, born in 1991, postgraduate (guoyuqi728@foxmail.com). Her main research interests include intelligent signal processing and artificial intelligence security.
    WANG Linyuan, born in 1985, Ph.D., associate professor, master supervisor. His main research interests include sparse optimization theory, mathematical foundations of artificial intelligence, and hybrid intelligence in brain-computer interaction.
  • Supported by:
    National Natural Science Foundation of China (62271504).

Abstract: With the extensive application of deep learning in wireless communication, especially in signal modulation recognition, the vulnerability of neural networks to adversarial examples also threatens the security of wireless communication. Targeting the black-box attack scenario for wireless signals, in which real-time feedback from the neural network is hard to obtain and only recognition results are accessible, a black-box query adversarial attack method based on proximal linear combination is proposed. First, on a subset of the dataset, each original signal undergoes a proximal linear combination with candidate target signals, i.e., the two are linearly combined within a range very close to the original signal (with a weighting coefficient no greater than 0.05), and the result is fed into the network under attack to query its recognition result. By counting the misrecognitions over all proximal linear combinations, the specific target signal to which each class of original signals is most vulnerable is determined; this signal is termed the optimal perturbation signal. At attack time, an adversarial example is generated by performing the proximal linear combination with the optimal perturbation signal corresponding to the signal's class. Experimental results show that applying the optimal perturbation signals selected on the subset to the entire dataset reduces the recognition accuracy of the neural network from 94% to 50%, with lower perturbation power than a random-noise attack. Furthermore, the generated adversarial examples exhibit some transferability to structurally similar neural networks. Once the query statistics have been collected, generating new adversarial examples is easy to implement and requires no further black-box queries.
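
To make the two-phase procedure above concrete, the following is a minimal NumPy sketch of the attack. It assumes a black-box oracle query_model that returns only predicted class labels, and it assumes the convex form (1 - alpha)*x + alpha*t for the proximal linear combination with alpha <= 0.05; the exact combination form, candidate target pool, and subset selection used in the paper are not given in the abstract, so all names and details below are illustrative.

    import numpy as np

    ALPHA = 0.05  # weighting-coefficient bound stated in the abstract

    def proximal_combine(x, t, alpha=ALPHA):
        # Linear combination kept very close to the original signal x.
        # The convex form below is an assumption; only the bound on the
        # weighting coefficient (<= 0.05) comes from the abstract.
        return (1.0 - alpha) * x + alpha * t

    def perturbation_power(x, x_adv):
        # Mean squared perturbation power, the quantity compared against
        # a random-noise attack in the experiments.
        return np.mean(np.abs(x_adv - x) ** 2)

    def find_optimal_perturbation_signals(query_model, subset_x, subset_y, candidates):
        # Phase 1 (query phase): for each modulation class, combine every
        # original signal in the subset with every candidate target signal,
        # query the black-box model, and keep the candidate causing the
        # most misrecognitions for that class.
        best = {}
        for c in np.unique(subset_y):
            originals = subset_x[subset_y == c]
            errors = []
            for t in candidates:
                adv = np.stack([proximal_combine(x, t) for x in originals])
                preds = query_model(adv)  # black-box: labels only, no gradients
                errors.append(int(np.sum(preds != c)))
            best[int(c)] = candidates[int(np.argmax(errors))]
        return best

    def attack(x, y, best):
        # Phase 2 (attack phase): no further queries are needed; just apply
        # the class's optimal perturbation signal.
        return proximal_combine(x, best[int(y)])

In this sketch the attacker's only interaction with the network is the query_model calls in phase 1; once best is built, adversarial examples for new signals of a known class are produced offline, which matches the claim that no further black-box queries are required.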

Key words: Deep learning, Adversarial examples, Signal recognition, Black-box attack, Adversarial signal

CLC Number: TP391