Computer Science, 2024, Vol. 51, Issue 7: 413-421. doi: 10.11896/jsjkx.230400113

• Information Security •

  • Corresponding author: FAN Jianhua (fjh7659@126.com)
  • About author: (20211249655@nuist.edu.cn)

Backdoor Attack Method in Autoencoder End-to-End Communication System

GAN Run1, WEI Xianglin2, WANG Chao3, WANG Bin1, WANG Min1, FAN Jianhua2   

  1 School of Electronic and Information Engineering, Nanjing University of Information Science and Technology, Nanjing 210044, China
    2 The 63rd Research Institute, National University of Defense Technology, Nanjing 210007, China
    3 School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing 210044, China
  • Received: 2023-04-16  Revised: 2023-09-27  Online: 2024-07-15  Published: 2024-07-10
  • About author: GAN Run, born in 1998, postgraduate. His main research interests include deep learning and backdoor attacks.
    FAN Jianhua, born in 1971, Ph.D., research fellow, Ph.D. supervisor. His main research interests include software-defined radio and spectrum intelligent computing.



Abstract: End-to-end communication systems based on autoencoders do not require an explicit design of communication protocols, resulting in lower complexity than traditional modular communication systems as well as higher flexibility and robustness. However, the weak interpretability of the autoencoder model introduces new security risks into the end-to-end communication system. Experiments show that, in a scenario where the channel is unknown and the decoder is trained separately, adding carefully designed triggers at the channel layer can cause an otherwise well-performing decoder to misjudge, without affecting its performance on trigger-free samples, thereby achieving a backdoor attack on the communication system. This paper designs a trigger generation model and proposes a backdoor attack method that jointly trains the trigger generation model with the autoencoder model, realizing the automatic generation of dynamic triggers and improving the attack success rate while increasing the stealthiness of the attack. To verify the effectiveness of the proposed method, four different autoencoder models are implemented, and the backdoor attack effect is studied under different signal-to-noise ratios, poisoning rates, trigger sizes, and trigger-to-signal ratios. Experimental results show that, at a 6 dB signal-to-noise ratio, the attack success rate and clean-sample recognition rate of the proposed method both exceed 92% for all four autoencoder models.
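The setting described in the abstract can be sketched in a few lines of NumPy. This is an illustrative stand-in only, not the paper's method: the learned neural encoder and decoder are replaced by random linear maps, and the dynamic, generator-produced trigger by a fixed random vector scaled by a hypothetical trigger-to-signal ratio. It shows where in the pipeline (after the channel) the backdoor trigger is injected.

```python
# Minimal sketch of an autoencoder-style end-to-end link with a
# channel-layer trigger injection. All weights, dimensions, and the
# trigger itself are hypothetical placeholders for the learned models.
import numpy as np

rng = np.random.default_rng(0)
M, n = 16, 8                          # M possible messages, n channel uses

W_enc = rng.standard_normal((M, n))   # stand-in for the learned encoder
W_dec = rng.standard_normal((n, M))   # stand-in for the learned decoder

def encode(msg_idx):
    """One-hot message index -> n-dim signal with unit average power."""
    x = W_enc[msg_idx]
    return x / np.sqrt(np.mean(x ** 2))   # power normalization

def awgn(x, snr_db):
    """Additive white Gaussian noise channel at the given SNR."""
    noise_power = 10 ** (-snr_db / 10)
    return x + rng.standard_normal(x.shape) * np.sqrt(noise_power)

def decode(y):
    """Decoder stand-in: return the most likely message index."""
    return int(np.argmax(y @ W_dec))

# Clean transmission at the 6 dB operating point studied in the paper.
msg = 3
y_clean = awgn(encode(msg), snr_db=6.0)

# Backdoor: add a small trigger at the channel layer. Here it is a fixed
# random vector; in the paper it comes from a jointly trained generator.
trigger = rng.standard_normal(n)
trigger /= np.sqrt(np.mean(trigger ** 2))
tsr = 0.1                             # hypothetical trigger-to-signal ratio
y_poisoned = y_clean + tsr * trigger

print(decode(y_clean), decode(y_poisoned))
```

With trained models, the poisoned sample would decode to the attacker's target message while clean samples decode correctly; with these random stand-ins the sketch only demonstrates the signal path and the injection point.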

Key words: deep learning, backdoor attack, end-to-end communication, trigger, autoencoder

CLC number: TP183