Computer Science ›› 2023, Vol. 50 ›› Issue (11A): 230200095-8. doi: 10.11896/jsjkx.230200095

• Network & Communication •

Fairness-aware Service Caching and Task Offloading with Cooperative Mobile Edge Computing

WU Chun, CHEN Long, SUN Yifei, WU Jigang   

  1. School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China
  • Published: 2023-11-09
  • Corresponding author: WU Jigang (asjgwucn@outlook.com)
  • About author: WU Chun, born in 1998, postgraduate. His main research interest is edge computing. E-mail: 1366988186@qq.com.
    WU Jigang, born in 1963, Ph.D, professor, Ph.D supervisor, is a member of China Computer Federation. His main research interests include intelligent computing and mobile computing.
  • Supported by:
    National Natural Science Foundation of China (62072118, 62202108) and Natural Science Foundation of Guangdong Province, China (2023A1515011230).

Abstract: Caching services on edge servers shortens request response time and improves user experience. Most existing works optimize overall system performance, e.g., maximizing system throughput, and therefore cannot guarantee fairness among individual users that request heterogeneous services. To address this unfairness across users' heterogeneous computing tasks, this paper studies joint service caching and task offloading in cooperative edge computing. Based on the max-min fairness principle, a problem of maximizing the minimum service completion rate is formulated and proved to be NP-hard. The original 0-1 integer program is relaxed into a linear program, and a randomized rounding algorithm with an approximation ratio of MS/N(S-2 ln S) is designed, where S is the number of edge servers, N the number of services, and M the number of end devices. In addition, a fast and efficient greedy algorithm is proposed that preferentially caches the service with the minimum completion rate and offloads its corresponding tasks. Extensive simulation results show that, compared with existing algorithms that maximize system throughput, the proposed randomized rounding and greedy algorithms improve the minimum service completion rate by at least 44.1% and 90.6%, respectively, while the extra loss of system throughput is no more than 22.4% and 27.0%.

Key words: Edge computing, Service caching, Task offloading, Max-min fairness, Randomized rounding
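As a rough illustration of the max-min formulation summarized in the abstract, the sketch below writes the objective and the typical capacity constraints in LaTeX. The notation is assumed for illustration only and is not taken from the paper: x_{n,s} indicates whether service n is cached on edge server s, y_{m,s} indicates whether the task of device m is offloaded to server s, M_n is the set of devices requesting service n, n(m) is the service requested by device m, c_n and f_m are storage and computing demands, and C_s and F_s are server capacities.

```latex
% Illustrative max-min fairness formulation (assumed notation, not the paper's exact model).
\begin{align*}
  \max_{x,\,y}\ \min_{n\in\mathcal{N}}\ r_n,
    \qquad r_n = \frac{1}{|\mathcal{M}_n|}\sum_{m\in\mathcal{M}_n}\sum_{s\in\mathcal{S}} y_{m,s} & \\
  \text{s.t.}\quad y_{m,s} \le x_{n(m),s} \quad \forall m,s
    & \text{(offload only to a server caching the requested service)} \\
  \sum_{n\in\mathcal{N}} c_n\, x_{n,s} \le C_s \quad \forall s
    & \text{(storage capacity of server } s) \\
  \sum_{m\in\mathcal{M}} f_m\, y_{m,s} \le F_s \quad \forall s
    & \text{(computing capacity of server } s) \\
  \sum_{s\in\mathcal{S}} y_{m,s} \le 1 \quad \forall m,
    \qquad x_{n,s},\, y_{m,s} \in \{0,1\}. &
\end{align*}
```

Relaxing the binary variables to the interval [0,1] yields the linear program on which the randomized rounding algorithm mentioned in the abstract would operate.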
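The greedy algorithm is only described at a high level in the abstract. The following minimal Python sketch illustrates that stated idea, i.e., repeatedly favoring the service with the lowest completion rate when caching and offloading, under a deliberately simplified model (one storage budget and one CPU budget per server, one task list per service). The class and function names (Server, Service, greedy_fair_cache_offload) and all numbers in the usage example are hypothetical, not from the paper.

```python
from dataclasses import dataclass, field


@dataclass
class Server:
    storage: float                               # remaining cache capacity
    cpu: float                                   # remaining computing capacity
    cached: set = field(default_factory=set)     # names of services cached here


@dataclass
class Service:
    name: str
    size: float                                  # storage footprint when cached
    tasks: list = field(default_factory=list)    # computing demand of each pending task
    done: int = 0                                # tasks already offloaded and completed
    total: int = 0                               # total number of tasks requesting this service

    @property
    def completion_rate(self) -> float:
        return self.done / self.total if self.total else 1.0


def greedy_fair_cache_offload(services, servers):
    """Repeatedly pick the service with the lowest completion rate, cache it on a
    feasible server if necessary, and offload as many of its pending tasks as the
    server's remaining CPU budget allows. Stops when no further progress is possible."""
    progress = True
    while progress:
        progress = False
        svc = min(services, key=lambda s: s.completion_rate)   # most unfairly treated service
        if not svc.tasks:
            break                                              # nothing left to offload for it
        for srv in servers:
            if svc.name not in srv.cached:                     # cache the service first
                if srv.storage < svc.size:
                    continue
                srv.storage -= svc.size
                srv.cached.add(svc.name)
            remaining = []
            for demand in svc.tasks:                           # offload tasks while CPU remains
                if srv.cpu >= demand:
                    srv.cpu -= demand
                    svc.done += 1
                    progress = True
                else:
                    remaining.append(demand)
            svc.tasks = remaining
            if not svc.tasks:
                break
    return min(s.completion_rate for s in services)            # the max-min objective value


if __name__ == "__main__":
    # Tiny usage example with made-up numbers.
    services = [Service("face-detect", size=2.0, tasks=[1.0, 1.0], total=2),
                Service("translate", size=1.0, tasks=[0.5], total=1)]
    servers = [Server(storage=3.0, cpu=2.0), Server(storage=2.0, cpu=1.0)]
    print(greedy_fair_cache_offload(services, servers))
```

A real implementation would also need the communication and latency model, per-device assignment constraints, and the randomized-rounding counterpart; this sketch only conveys the max-min-driven priority order.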

CLC number: TP391