Computer Science, 2024, Vol. 51, Issue (9): 319-330. doi: 10.11896/jsjkx.240200036

• Computer Networks •

Study on Adaptive Cloud-Edge Collaborative Scheduling Methods for Multi-object State Perception

ZHOU Wenhui, PENG Qinghua, XIE Lei

  1. State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210000, China
  • Received: 2024-02-05  Revised: 2024-06-07  Online: 2024-09-15  Published: 2024-09-10
  • Corresponding author: XIE Lei (lxie@nju.edu.cn)
  • About author: ZHOU Wenhui (whzhou@smail.nju.edu.cn), born in 2000, postgraduate, is a student member of CCF (No.E0259G). His main research interests include edge computing and edge intelligence.
    XIE Lei, born in 1982, Ph.D, professor, Ph.D supervisor, Young Chang Jiang Scholar, is a distinguished member of CCF (No.17652D). His main research interests include wireless sensing, wearable computing and edge computing.
  • Supported by:
    National Key R&D Program of China (2022YFB3303900) and National Natural Science Foundation of China (62272216).


Abstract: With the development of smart cities and intelligent industrial manufacturing, the demand for extracting comprehensive information from surveillance cameras for multi-object visual analysis has become increasingly prominent. Existing research mainly focuses on resource scheduling on servers and on improving visual models, and often struggles to handle dynamic changes in device resource state and task state. With the advancement of edge hardware resources and task processing models, designing an adaptive cloud-edge collaborative scheduling model to meet the real-time user requirements of tasks has become an essential way to optimize multi-object state perception tasks. Thus, based on an in-depth analysis of the characteristics of multi-object state perception tasks in cloud-edge scenarios, this paper proposes an adaptive task scheduler based on soft actor-critic (ATS-SAC). ATS-SAC analyzes the runtime state of multi-object state perception tasks in real time and dynamically issues scheduling decisions such as video stream configuration and model deployment configuration, thereby significantly optimizing the combined accuracy and delay of multi-object state perception tasks in unstable cloud-edge environments. Furthermore, we introduce an action filtering method based on user-experience thresholds that eliminates redundant decision actions, further reducing the model's decision space. Depending on users' varied demands on the performance of multi-object state perception tasks, the ATS-SAC model provides three flexible scheduling strategies: speed mode, balance mode, and precision mode. Experimental results show that, compared with other execution methods, the scheduling strategies of the ATS-SAC model make multi-object state perception tasks more satisfactory in terms of accuracy and delay. Moreover, when the real-time operating state changes, the ATS-SAC model dynamically adjusts its scheduling strategies to maintain stable task processing results.
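The action-filtering idea described above — pruning candidate scheduling actions (video stream configuration, model placement) whose predicted quality would violate the user-experience thresholds before the RL agent chooses among the rest — can be sketched as follows. This is a minimal illustration only: the `Action` fields, the toy quality predictor, and the numeric thresholds are assumptions for the sketch, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    resolution: int   # video stream height in pixels (hypothetical knob)
    fps: int          # frames per second (hypothetical knob)
    on_edge: bool     # True: run model on the edge device; False: offload to cloud

def predict_quality(a: Action) -> tuple[float, float]:
    """Toy predictor of (accuracy, latency_ms) for an action.

    A real scheduler would use profiled measurements; these formulas
    only mimic the trends (higher resolution/fps -> higher accuracy,
    more compute; offloading adds network delay).
    """
    accuracy = 0.5 + 0.4 * (a.resolution / 1080) + 0.05 * (a.fps / 30)
    latency = a.resolution * a.fps / (400 if a.on_edge else 900) \
        + (0 if a.on_edge else 80)  # fixed 80 ms network cost when offloading
    return accuracy, latency

def filter_actions(actions, min_acc, max_latency_ms):
    """Keep only actions whose predicted quality meets the UX thresholds."""
    kept = []
    for a in actions:
        acc, lat = predict_quality(a)
        if acc >= min_acc and lat <= max_latency_ms:
            kept.append(a)
    return kept

# Enumerate a small discrete action space, then prune it for a
# "balance mode"-style requirement (illustrative numbers).
candidates = [Action(r, f, e)
              for r in (360, 720, 1080) for f in (15, 30) for e in (True, False)]
feasible = filter_actions(candidates, min_acc=0.75, max_latency_ms=120.0)
```

After pruning, the RL policy only needs to rank the feasible actions, which is what shrinks the decision space; the three user modes would correspond to different `min_acc` / `max_latency_ms` settings.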

Key words: Edge computing, Cloud-Edge collaboration, Scheduling policy, Multi-object state perception, Deep reinforcement learning

CLC number: TP393