Computer Science ›› 2021, Vol. 48 ›› Issue (5): 270-276.doi: 10.11896/jsjkx.201000005

• Computer Network •

Deep Reinforcement Learning-based Collaborative Computation Offloading Scheme in Vehicular Edge Computing

FAN Yan-fang, YUAN Shuang, CAI Ying, CHEN Ruo-yu   

  1. School of Computer, Beijing Information Science & Technology University, Beijing 100101, China
  • Received: 2020-10-03 Revised: 2021-01-05 Online: 2021-05-15 Published: 2021-05-09
  • About author: FAN Yan-fang, born in 1979, Ph.D, is a member of China Computer Federation. Her main research interests include information security, vehicular networking and edge computing.
  • Supported by:
    National Natural Science Foundation of China (61672106), Natural Science Foundation of Beijing (L192023), Foundation of Beijing Information Science & Technology University (2025028), Graduate Science and Technology Innovation Project of Beijing Information Science & Technology University and Open Project of Beijing Key Laboratory of Internet Culture and Digital Dissemination Research.

Abstract: Vehicular edge computing (VEC) is a key technology for achieving low latency and high reliability in the Internet of Vehicles. By offloading computing tasks to mobile edge computing (MEC) servers, users can not only compensate for the limited computing capability of vehicles, but also reduce the energy consumption and latency of communication services. However, in highway scenarios, the contradiction between the mobility of vehicles and the static deployment of edge servers challenges the reliability of computation offloading. To address this problem, this paper designs a collaborative computation offloading scheme based on deep reinforcement learning, which combines the computing resources of MEC servers and neighboring vehicles so that vehicles can adapt to the dynamic high-speed environment. Simulation results show that, compared with a scheme without vehicle collaboration, the proposed scheme reduces both the offloading delay and the offloading failure rate.
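The core idea of the scheme, an RL agent that selects an offloading target among local execution, the MEC server, and a neighboring vehicle, can be illustrated with a deliberately simplified sketch. The paper itself uses deep reinforcement learning; the tabular Q-learning stand-in below, with a hypothetical latency model and a made-up state discretization (task-load and MEC-distance buckets), only shows the shape of the decision problem, not the authors' actual algorithm.

```python
import random

# Illustrative sketch only: a tabular Q-learning stand-in for the paper's
# deep RL offloading agent. The state space, latency model, and parameters
# below are hypothetical simplifications.

ACTIONS = ["local", "mec", "neighbor"]  # candidate offloading targets

def latency(state, action):
    """Toy latency model (hypothetical): state = (task_load, mec_distance),
    each bucketed into {0, 1, 2}; a farther MEC server means a slower uplink."""
    task_load, mec_distance = state
    if action == "local":
        return 3.0 * (task_load + 1)                     # limited on-board compute
    if action == "mec":
        return 1.0 * (task_load + 1) + 1.5 * mec_distance  # compute + uplink cost
    return 1.5 * (task_load + 1) + 0.5                   # nearby cooperating vehicle

def train(episodes=20000, alpha=0.1, epsilon=0.1, seed=0):
    """One-shot (bandit-style) Q-learning: each episode is a single
    offloading decision rewarded with the negative completion delay."""
    rng = random.Random(seed)
    q = {}  # (state, action) -> estimated value
    states = [(load, dist) for load in range(3) for dist in range(3)]
    for _ in range(episodes):
        s = rng.choice(states)
        if rng.random() < epsilon:                       # epsilon-greedy exploration
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q.get((s, x), 0.0))
        r = -latency(s, a)                               # reward = negative delay
        q[(s, a)] = q.get((s, a), 0.0) + alpha * (r - q.get((s, a), 0.0))
    return q

def best_action(q, state):
    return max(ACTIONS, key=lambda a: q.get((state, a), float("-inf")))
```

Under this toy model the learned policy prefers the MEC server when it is close, but shifts to a neighboring vehicle when the MEC server is far away, which mirrors the collaboration benefit the abstract describes for high-mobility highway scenarios.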

Key words: Collaborative computing, Computation offloading, Deep reinforcement learning, Mobile edge computing, Vehicular edge computing

CLC Number: TN929.5