Computer Science ›› 2023, Vol. 50 ›› Issue (4): 226-232. doi: 10.11896/jsjkx.220600242

• Computer Networks •


Automatic Modulation Recognition Method Based on Multimodal Time-Frequency Feature Fusion

HE Chao, CHEN Jinjie, JIN Zhao, LEI Yinjie   

  1. College of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China
  • Received: 2022-06-27 Revised: 2022-10-17 Online: 2023-04-15 Published: 2023-04-06
  • Corresponding author: LEI Yinjie (yinjie@scu.edu.cn)
  • About author: HE Chao (enaoan@foxmail.com), born in 1997, postgraduate. His main research interests include deep learning and computer vision.
    LEI Yinjie, born in 1983, Ph.D., professor, Ph.D. supervisor. His main research interests include deep learning and computer vision.


Abstract: Automatic modulation recognition (AMR) is a key technology in cognitive radio and has a wide range of applications in wireless communication. Most existing automatic modulation recognition methods exploit only single-modal information from either the time domain or the frequency domain of the signal and ignore the complementarity between multimodal information. To address this problem, a modulation recognition method based on multimodal time-frequency feature fusion is proposed. First, contrastive learning is used to align the time-domain and frequency-domain features of the signal before fusion, reducing the heterogeneity gap between the two modalities. Then, cross-modal attention is employed to achieve complementary fusion of the time-domain and frequency-domain features. Finally, to further improve the overall performance of the model, a residual shrinkage module is introduced into the frequency-domain encoder to extract frequency-domain features from the time-frequency map of the signal, and a complex bidirectional gated recurrent unit is introduced into the time-domain encoder to extract the correlation features between the I and Q channels as well as the temporal features of the signal. Experimental results on RadioML2016a show that the proposed method achieves high recognition accuracy and good noise robustness.
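The pipeline described in the abstract can be illustrated with a minimal PyTorch sketch, which is an assumption-laden stand-in rather than the authors' implementation: a plain bidirectional GRU stands in for the complex Bi-GRU time-domain encoder, a small CNN stands in for the residual shrinkage frequency-domain encoder, an InfoNCE-style loss approximates the contrastive time-frequency alignment, and standard multi-head cross-attention plays the role of the cross-modal fusion. All module names, layer sizes, and the 64x64 time-frequency map size are illustrative assumptions.

# Minimal sketch (assumed architecture, not the paper's exact model), PyTorch >= 1.9.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TimeEncoder(nn.Module):
    """Encodes raw I/Q sequences (batch, T, 2) into a feature sequence."""
    def __init__(self, dim=128):
        super().__init__()
        # Bidirectional GRU as a stand-in for the complex Bi-GRU in the paper.
        self.gru = nn.GRU(input_size=2, hidden_size=dim // 2,
                          num_layers=2, batch_first=True, bidirectional=True)

    def forward(self, iq):                      # iq: (B, T, 2)
        out, _ = self.gru(iq)                   # (B, T, dim)
        return out                              # keep the sequence for attention

class FreqEncoder(nn.Module):
    """Encodes the time-frequency map (batch, 1, H, W) into a token sequence."""
    def __init__(self, dim=128):
        super().__init__()
        # Small CNN as a stand-in for the residual shrinkage network.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, tf_map):                  # (B, 1, H, W)
        feat = self.conv(tf_map)                # (B, dim, H/4, W/4)
        return feat.flatten(2).transpose(1, 2)  # (B, H*W/16, dim) tokens

def contrastive_alignment_loss(t_feat, f_feat, temperature=0.07):
    """InfoNCE-style loss: pull together pooled time/frequency features of the
    same signal, push apart features of different signals (pre-fusion alignment)."""
    t = F.normalize(t_feat.mean(dim=1), dim=-1)        # (B, dim)
    f = F.normalize(f_feat.mean(dim=1), dim=-1)        # (B, dim)
    logits = t @ f.t() / temperature                   # (B, B) similarity matrix
    targets = torch.arange(t.size(0), device=t.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

class CrossModalFusionClassifier(nn.Module):
    """Time features attend to frequency tokens and vice versa; the fused
    representation feeds a linear classifier over the modulation classes."""
    def __init__(self, dim=128, num_classes=11):
        super().__init__()
        self.t2f = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.f2t = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(2 * dim, num_classes)

    def forward(self, t_feat, f_feat):
        t_fused, _ = self.t2f(t_feat, f_feat, f_feat)  # time queries frequency
        f_fused, _ = self.f2t(f_feat, t_feat, t_feat)  # frequency queries time
        fused = torch.cat([t_fused.mean(1), f_fused.mean(1)], dim=-1)
        return self.head(fused)

if __name__ == "__main__":
    iq = torch.randn(8, 128, 2)           # RadioML2016a-style I/Q frames
    tf_map = torch.randn(8, 1, 64, 64)    # hypothetical time-frequency maps
    t_enc, f_enc = TimeEncoder(), FreqEncoder()
    fusion = CrossModalFusionClassifier()
    t_feat, f_feat = t_enc(iq), f_enc(tf_map)
    loss = (contrastive_alignment_loss(t_feat, f_feat) +
            F.cross_entropy(fusion(t_feat, f_feat), torch.randint(0, 11, (8,))))
    print(loss.item())

In this sketch the alignment loss and the classification loss are simply summed; how the paper weights the two objectives is not stated in the abstract and would need to be taken from the full text.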

Key words: Automatic modulation recognition, Cross-modal attention fusion, Contrastive learning, Residual shrinkage module, Complex bidirectional gated recurrent unit

CLC number: TP389