Computer Science ›› 2022, Vol. 49 ›› Issue (9): 236-241. doi: 10.11896/jsjkx.220400148

• Computer Network •

Construction and Distribution Method of REM Based on Edge Intelligence

LIU Xing-guang, ZHOU Li, LIU Yan, ZHANG Xiao-ying, TAN Xiang, WEI Ji-bo

  1. College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
  • Received: 2022-04-17  Revised: 2022-05-13  Online: 2022-09-15  Published: 2022-09-09
  • Corresponding author: ZHOU Li (zhouli2035@nudt.edu.cn)
  • About author: LIU Xing-guang (liuxingguang20@nudt.edu.cn), born in 1998, postgraduate. His main research interests include the radio environment map and mobile edge computing.
    ZHOU Li, born in 1988, Ph.D., master supervisor. His main research interests include intelligent communication networks, wireless resource management and edge computing.
  • Supported by: National Natural Science Foundation of China (62171449, 62001483, U19B2024).

Abstract: A radio environment map (REM) helps cognitive users accurately perceive and exploit spectrum holes, coordinate interference between network nodes, and improve the spectrum efficiency and robustness of wireless networks. However, when cognitive users construct and share a REM, high computational complexity and large distribution delay overhead limit their ability to perceive the spatial spectrum situation in real time. To address this problem, this paper proposes a reinforcement learning-based REM construction and distribution method for edge intelligence networks. First, a low-complexity construction technique combining kriging interpolation and super-resolution is adopted for REM construction. Second, edge computing is introduced, and the computation offloading strategy selection problem in REM construction and distribution is modeled as a mixed-integer nonlinear programming (MINLP) problem. Finally, combining artificial intelligence with edge computing, a centralized-training, distributed-execution reinforcement learning framework is employed to learn REM construction and distribution strategies under different network scenarios. Simulation results show that the proposed method adapts well to different scenarios, effectively reduces the energy consumption and delay of REM construction and distribution, and supports near-real-time use of the REM by cognitive users in mobile edge network scenarios.

Key words: Radio environment map, Edge intelligence, Computation offloading, Reinforcement learning
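The construction step described in the abstract combines kriging interpolation with super-resolution. As an illustration of the kriging half only, below is a minimal ordinary-kriging interpolator over sparse received-signal-strength samples; the exponential variogram and its parameters (`sill`, `rng`) are assumptions made for this sketch, not values from the paper.

```python
import numpy as np

def exp_variogram(h, sill=1.0, rng=50.0, nugget=0.0):
    # Exponential variogram model: gamma(h) = nugget + sill * (1 - exp(-h / rng))
    return nugget + sill * (1.0 - np.exp(-h / rng))

def ordinary_kriging(xy, z, grid_xy, sill=1.0, rng=50.0):
    """Interpolate sparse signal-strength samples z, measured at locations xy, onto grid_xy."""
    n = len(xy)
    # Pairwise distances between sample locations
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # Ordinary-kriging system: [[Gamma, 1], [1^T, 0]] [w, mu] = [gamma_0, 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d, sill, rng)
    A[n, n] = 0.0
    est = np.empty(len(grid_xy))
    for i, p in enumerate(grid_xy):
        g = exp_variogram(np.linalg.norm(xy - p, axis=1), sill, rng)
        w = np.linalg.solve(A, np.append(g, 1.0))
        est[i] = w[:n] @ z  # weighted sum of the observed samples
    return est
```

With a zero nugget the interpolator is exact at the sample locations; the paper then additionally upscales the coarse interpolated map with super-resolution, which is not shown here.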

CLC number: TN915
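The offloading decision that the abstract casts as a mixed-integer nonlinear program can be illustrated with a toy per-task delay/energy cost model. Every constant below is a hypothetical placeholder, not a value from the paper, and real formulations couple the tasks through shared bandwidth and edge CPU, which is what makes the problem hard and motivates the learning-based approach.

```python
import itertools

# Illustrative parameters (hypothetical values, not from the paper)
KAPPA = 1e-27        # effective switched capacitance of the local CPU
F_LOC = 1e9          # local CPU frequency (cycles/s)
F_EDGE = 10e9        # edge server CPU frequency (cycles/s)
RATE = 20e6          # uplink rate (bit/s)
P_TX = 0.5           # transmit power (W)
W_T, W_E = 0.5, 0.5  # delay/energy weights in the cost

def task_cost(cycles, bits, offload):
    """Weighted delay-plus-energy cost of one REM subtask."""
    if offload:
        t = bits / RATE + cycles / F_EDGE  # upload delay + edge compute delay
        e = P_TX * bits / RATE             # transmit energy
    else:
        t = cycles / F_LOC                 # local compute delay
        e = KAPPA * F_LOC ** 2 * cycles    # local CPU energy
    return W_T * t + W_E * e

def best_offload_plan(tasks):
    """Exhaustive search over the binary offloading vector (the integer part of the MINLP)."""
    best = None
    for plan in itertools.product([0, 1], repeat=len(tasks)):
        cost = sum(task_cost(c, b, o) for (c, b), o in zip(tasks, plan))
        if best is None or cost < best[1]:
            best = (plan, cost)
    return best
```

Exhaustive search over the binary offloading vector scales as 2^K in the number of tasks K, which is one reason the paper learns the offloading policy with reinforcement learning instead of solving the MINLP directly.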