Computer Science ›› 2020, Vol. 47 ›› Issue (11): 316-321.doi: 10.11896/jsjkx.200400075

• Computer Network •

Optimization of Mobile Charging Path of Wireless Rechargeable Sensor Networks Based on Reinforcement Learning

ZHANG Hao, GUAN Xin-jie, BAI Guang-wei   

  1. Department of Computer Science and Technology, Nanjing University of Technology, Nanjing 211816, China
  • Received: 2020-04-17  Revised: 2020-07-13  Online: 2020-11-15  Published: 2020-11-05
  • About author: ZHANG Hao, born in 1995, postgraduate. Her main research interests include reinforcement learning, artificial intelligence and wireless sensor networks.
    GUAN Xin-jie, born in 1984, Ph.D, master's supervisor. Her main research interests include network optimization, edge computing and software-defined networking.
  • Supported by:
    This work was supported by the National Natural Science Foundation of China (61802176).

Abstract: Wireless sensor networks play an important role in environmental perception and target tracking. To recharge sensor nodes in time, this paper proposes a low-power, energy-efficient mobile charging path algorithm based on reinforcement learning. The wireless sensor network uses a mobile charger to charge the sensor nodes. The Q-Learning algorithm is combined with epsilon-greedy exploration so that the charger charges all sensor nodes in turn along the shortest path. Existing studies usually ignore the maximum amount of power that a sensor node itself can withstand; exceeding this threshold during charging can force the node to suspend its work, so the charging time of the mobile charger is limited accordingly. The results show that the proposed mobile charging strategy achieves higher utility: compared with the traditional Q-Learning algorithm and the greedy algorithm, the training period is greatly reduced and energy utilization is maximized.
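The combination of tabular Q-Learning with epsilon-greedy exploration described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 5-node network coordinates, the start node, the hyperparameters, and the reward (negative travel distance per hop, so that visiting all nodes along a short tour is rewarded) are all assumptions made for the example.

```python
import math
import random

# Hypothetical 5-node sensor network; coordinates are illustrative only.
nodes = [(0, 0), (2, 1), (3, 4), (1, 3), (4, 2)]

def dist(a, b):
    return math.dist(nodes[a], nodes[b])

# Q-table: key = ((current node, frozenset of charged nodes), next node)
Q = {}
alpha, gamma = 0.1, 0.9
random.seed(0)

def choose(state, actions, eps):
    """Epsilon-greedy: explore with probability eps, otherwise exploit Q."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

for episode in range(3000):
    eps = max(0.05, 1.0 - episode / 1500)   # decaying exploration rate
    cur, visited = 0, frozenset([0])        # charger starts at node 0
    while len(visited) < len(nodes):
        actions = [n for n in range(len(nodes)) if n not in visited]
        a = choose((cur, visited), actions, eps)
        reward = -dist(cur, a)              # shorter hops earn higher reward
        nxt = (a, visited | {a})
        nxt_actions = [n for n in range(len(nodes)) if n not in nxt[1]]
        best_next = max((Q.get((nxt, n), 0.0) for n in nxt_actions), default=0.0)
        key = ((cur, visited), a)
        # Standard Q-Learning update toward the bootstrapped target.
        Q[key] = Q.get(key, 0.0) + alpha * (reward + gamma * best_next - Q.get(key, 0.0))
        cur, visited = nxt

# Extract the greedy charging tour from the learned Q-table.
cur, visited, tour, length = 0, frozenset([0]), [0], 0.0
while len(visited) < len(nodes):
    actions = [n for n in range(len(nodes)) if n not in visited]
    a = max(actions, key=lambda n: Q.get(((cur, visited), n), 0.0))
    length += dist(cur, a)
    tour.append(a)
    cur, visited = a, visited | {a}
print(tour, round(length, 2))
```

The state here includes the set of already-charged nodes, so the learned policy visits every sensor exactly once; the paper's additional constraint on each node's maximum tolerable charging power would enter as a cap on per-node charging time, which this sketch omits.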

Key words: Energy utilization, Mobile charging, Path, Reinforcement learning, Wireless rechargeable sensor network

CLC Number: TP393