Computer Science ›› 2022, Vol. 49 ›› Issue (2): 342-352.doi: 10.11896/jsjkx.201000155

• Computer Network •

Load-balanced Geographic Routing Protocol in Aerial Sensor Network

HUANG Xin-quan, LIU Ai-jun, LIANG Xiao-hu, WANG Heng   

  1. College of Communication Engineering,Army Engineering University,Nanjing 210007,China
  • Received:2020-10-26 Revised:2021-03-15 Online:2022-02-15 Published:2022-02-23
  • About author:HUANG Xin-quan, born in 1993, postgraduate. His main research interests include multi-agent systems and flying ad-hoc networks.
    LIU Ai-jun, born in 1970, professor. His main research interests include satellite communication system theory, signal processing, channel coding, and information theory.
  • Supported by:
    National Natural Science Foundation of China(61671476,61901516),Natural Science Foundation of Jiangsu Province of China(BK20180578) and China Postdoctoral Science Foundation(2019M651648).

Abstract: The unbalanced burden on nodes near the ground station poses challenges for multi-hop data transmission in aerial sensor networks (ASNs). To achieve reliable and efficient multi-hop data transmission in ASNs, a reinforcement-learning based queue-efficient geographic routing (RLQE-GR) protocol is proposed. The RLQE-GR protocol maps the routing problem into the general reinforcement learning (RL) framework, where each UAV is treated as a state and each successful packet forwarding as an action. On this basis, the RLQE-GR protocol designs a reward function related to geographic location, link quality and available transmission queue length. The Q-function is then employed to learn all state-action values (Q-values), and each packet is forwarded according to these values. To converge all Q-values and minimize performance deterioration during the convergence process, a beacon mechanism is employed in the RLQE-GR protocol. In contrast to existing geographic routing protocols, RLQE-GR simultaneously takes queue utilization, link quality and relative distance into consideration when forwarding packets. This enables RLQE-GR to achieve load balancing without severely degrading routing hop count or link quality. Moreover, owing to the near-optimal nature of RL, the RLQE-GR protocol can optimize routing performance in terms of packet delivery ratio and end-to-end delay.
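The abstract describes a Q-learning formulation in which each UAV is a state, each forwarding decision an action, and the reward combines geographic progress, link quality and queue availability. As an illustrative sketch only (the paper's exact reward weights, learning rate and update rule are not given here, so all names and constants below are assumptions), the per-hop reward and Q-value update might look like:

```python
# Illustrative sketch of the RLQE-GR idea described in the abstract.
# All weights, parameter values and function names are assumptions,
# not the paper's actual design.

ALPHA = 0.5    # learning rate (assumed)
GAMMA = 0.9    # discount factor (assumed)
W_DIST, W_LINK, W_QUEUE = 0.5, 0.3, 0.2  # reward weights (assumed)

def reward(dist_progress, link_quality, queue_free_ratio):
    """Composite reward combining geographic progress toward the ground
    station, link quality of the chosen hop, and the free fraction of the
    neighbour's transmission queue (the load-balancing term)."""
    return (W_DIST * dist_progress
            + W_LINK * link_quality
            + W_QUEUE * queue_free_ratio)

def q_update(q, state, action, r, next_state, neighbours):
    """Standard tabular Q-learning update, matching the abstract's framing
    of UAVs as states and packet forwardings as actions. q maps
    (state, action) pairs to Q-values."""
    best_next = max((q.get((next_state, a), 0.0) for a in neighbours),
                    default=0.0)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (r + GAMMA * best_next - old)
    return q[(state, action)]
```

At forwarding time, a node would pick the neighbour with the highest Q-value, so that queue-saturated or poorly connected neighbours are gradually avoided; the beacon mechanism mentioned in the abstract would serve to propagate the quantities that feed this update between neighbouring UAVs.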

Key words: Aerial sensor network, Beacon mechanism, Geographic routing protocol, Reinforcement learning, Reward function

CLC Number: TN927