Computer Science ›› 2022, Vol. 49 ›› Issue (9): 236-241. doi: 10.11896/jsjkx.220400148

• Computer Network •

Construction and Distribution Method of REM Based on Edge Intelligence

LIU Xing-guang, ZHOU Li, LIU Yan, ZHANG Xiao-ying, TAN Xiang, WEI Ji-bo   

  1. College of Electronic Science and Technology,National University of Defense Technology,Changsha 410073,China
  • Received: 2022-04-17  Revised: 2022-05-13  Online: 2022-09-15  Published: 2022-09-09
  • About author: LIU Xing-guang, born in 1998, postgraduate. His main research interests include radio environment map and mobile edge computing.
    ZHOU Li, born in 1988, Ph.D, master supervisor. His main research interests include intelligent communication network, wireless resource management and edge computing.
  • Supported by:
    National Natural Science Foundation of China (62171449, 62001483, U19B2024).

Abstract: A radio environment map (REM) can help cognitive users accurately perceive and exploit spectrum holes, enable interference coordination between network nodes, and improve the spectrum efficiency and robustness of wireless networks. However, when cognitive users construct and share a REM, high computational complexity and high distribution delay limit their ability to perceive the spatial spectrum situation in real time. To address this problem, this paper proposes a reinforcement learning based REM construction and distribution method for mobile edge intelligence networks. First, a low-complexity construction technique that combines kriging interpolation with super-resolution is employed to build the REM. Second, using edge computing, the selection of computation offloading strategies during REM construction and distribution is modeled as a mixed-integer nonlinear programming problem. Finally, combining artificial intelligence with edge computing, a centralized-training, distributed-execution reinforcement learning framework is proposed to learn REM construction and distribution strategies in different network scenarios. Simulation results show that the proposed method adapts well to different conditions, effectively reduces the energy consumption and delay of REM construction and distribution, and supports near real-time use of the REM by cognitive users in mobile edge network scenarios.
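The paper's construction step combines kriging interpolation with super-resolution. As a rough illustration of the kriging half only, the sketch below performs ordinary kriging with an exponential semivariogram over a few hypothetical received-signal-strength samples; the function names, variogram parameters, grid size, and sensor values are all assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def exponential_variogram(h, sill=1.0, rng=50.0):
    """Exponential semivariogram model (no nugget) -- an assumed choice."""
    return sill * (1.0 - np.exp(-h / rng))

def ordinary_kriging(coords, values, query, sill=1.0, rng=50.0):
    """Estimate the field at `query` from sparse sensor readings.
    coords: (n, 2) sensor positions [m]; values: (n,) measured RSS [dBm]."""
    n = len(values)
    # Pairwise sensor-to-sensor and sensor-to-query distances
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    d0 = np.linalg.norm(coords - query, axis=-1)
    # Ordinary-kriging system with a Lagrange multiplier enforcing sum(w) = 1
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = exponential_variogram(d, sill, rng)
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.append(exponential_variogram(d0, sill, rng), 1.0)
    w = np.linalg.solve(A, b)[:n]
    return float(w @ values)

# Toy example: four RSS samples at the corners of a 100 m x 100 m area
coords = np.array([[10.0, 10.0], [90.0, 10.0], [10.0, 90.0], [90.0, 90.0]])
rss = np.array([-60.0, -75.0, -70.0, -85.0])
center = ordinary_kriging(coords, rss, np.array([50.0, 50.0]))
```

Ordinary kriging is an exact interpolator at the sample locations, so the coarse REM it produces honors every measurement; in the paper's pipeline a super-resolution model would then refine such a coarse map.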

Key words: Radio environment map, Edge intelligence, Computation migration, Reinforcement learning
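The offloading-strategy problem the abstract describes trades computation delay and energy against transmission delay and energy. A minimal sketch of that trade-off for a single binary (local vs. edge) decision is shown below; the cost weights, CPU energy coefficient, task sizes, and link rate are illustrative assumptions, and the paper's actual formulation is a mixed-integer nonlinear program solved with reinforcement learning, not this closed-form comparison.

```python
# Weighted delay/energy cost of one REM construction task, executed either
# locally (rate_bps=None) or offloaded to an edge server over a wireless link.
def task_cost(cycles, data_bits, f_cpu, rate_bps=None, p_tx=0.1,
              kappa=1e-27, w_time=0.5, w_energy=0.5):
    if rate_bps is None:                       # local execution
        delay = cycles / f_cpu
        energy = kappa * f_cpu ** 2 * cycles   # common dynamic CPU energy model
    else:                                      # offload: upload, then remote compute
        delay = data_bits / rate_bps + cycles / f_cpu
        energy = p_tx * data_bits / rate_bps   # radio energy during the upload
    return w_time * delay + w_energy * energy

cycles, bits = 2e9, 4e6                        # assumed: 2 Gcycles, 4 Mbit input
local = task_cost(cycles, bits, f_cpu=1e9)                   # 1 GHz handset CPU
edge = task_cost(cycles, bits, f_cpu=10e9, rate_bps=20e6)    # 10 GHz server, 20 Mbit/s
decision = "edge" if edge < local else "local"
```

With these numbers the edge option wins; shrinking the link rate or growing the upload size flips the decision, which is why a learned policy that adapts to the network state is useful.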

CLC Number: TN915