Intelligent Edge Computing
Edge Computing Enabling Industrial Internet: Architecture, Applications and Challenges
LI Hui, LI Xiu-hua, XIONG Qing-yu, WEN Jun-hao, CHENG Lu-xi, XING Bin
Computer Science    2021, 48 (1): 1-10.   DOI: 10.11896/jsjkx.200900150
Industrial Internet integrates advanced technologies such as 5G communication and artificial intelligence, and embeds various sensors and controllers with perception and control capabilities into the industrial production process to optimize production processes, reduce costs and increase productivity. Due to the centralized deployment of the traditional cloud computing model, computing nodes are usually located far away from smart terminals, making it difficult to meet the industrial field's requirements for high real-time performance and low latency. By sinking computing, storage and network resources to the edge of the industrial network, edge computing can respond to device requests more conveniently, meet key requirements such as intelligent access, real-time communication and privacy protection in the Industrial Internet environment, and realize intelligent green communication. This paper first introduces the development status of the Industrial Internet and the related concepts of edge computing, then systematically discusses the Industrial Internet edge computing architecture and the core technologies that promote its development. Finally, it lists some successful application cases of edge computing and elaborates the current status and challenges of applying edge computing technology in the Industrial Internet.
Survey of Task Offloading in Edge Computing
LIU Tong, FANG Lu, GAO Hong-hao
Computer Science    2021, 48 (1): 11-15.   DOI: 10.11896/jsjkx.200900217
Recently, with the popularization of mobile smart devices and the development of wireless communication technologies such as 5G, edge computing has been proposed as a novel and promising computing mode, regarded as an extension of traditional cloud computing. The basic idea of edge computing is to offload the computing tasks generated on mobile devices to the edge of the network instead of to remote clouds, to meet the low-latency requirements of computation-intensive applications such as real-time online games and augmented reality. The task offloading problem in edge computing, which studies whether computing tasks should be performed locally or offloaded to edge nodes or remote clouds, is an important issue, since it has a large impact on task completion delay and the energy consumption of devices. This paper first explains the basic concepts of edge computing and introduces several system architectures of edge computing. Then, it expounds the task offloading problem in edge computing. Considering the research necessity and difficulty of task offloading in edge computing, it comprehensively reviews the existing related works and discusses future research directions.
Survey on Task Offloading Techniques for Mobile Edge Computing with Multi-devices and Multi-servers in Internet of Things
LIANG Jun-bin, TIAN Feng-sen, JIANG Chan, WANG Tian-shu
Computer Science    2021, 48 (1): 16-25.   DOI: 10.11896/jsjkx.200500095
With the rapid development of Internet of Things (IoT) technology, a large number of devices with different functions (such as smart home equipment, mobile intelligent transportation devices, and intelligent logistics or warehouse management equipment, each with different sensors) are connected to each other and widely used in smart cities, smart factories and other fields. However, the limited processing power of these IoT devices makes it difficult to meet the demands of delay-sensitive, computation-intensive applications. The emergence of mobile edge computing (MEC) effectively solves this problem: IoT devices can offload tasks to edge servers and use them to perform computing tasks. These servers are usually deployed by the network operator at the edge of the network, that is, the network access layer close to the client, to aggregate the user network. At a given time, IoT devices may be in the coverage area of multiple edge servers, and they share the servers' limited computing and communication resources. In this complex environment, it is an NP-hard problem to formulate a task offloading and resource allocation scheme that optimizes task completion delay or the energy consumption of IoT devices. At present, a lot of work has been done on this issue and some progress has been made, but problems still exist in practical application. In order to further promote research in this field, this paper analyzes and summarizes the latest achievements in recent years, compares their advantages and disadvantages, and looks forward to future work.
Survey on Service Resource Availability Forecast Based on Queuing Theory
ZHANG Kai-qi, TU Zhi-ying, CHU Dian-hui, LI Chun-shan
Computer Science    2021, 48 (1): 26-33.   DOI: 10.11896/jsjkx.200900211
Queuing theory solves various complex queuing problems in many fields. This paper first introduces the general model representation and common models of queuing theory. Second, it briefly summarizes the various problems solved by queuing theory, and focuses on the recent literature on service availability prediction with queuing theory, including its application in different fields such as daily life, cloud computing, and network resources. To find the relationship between services and user needs, it classifies and summarizes the purposes of predicting service availability, including predicting resources, reasonably planning and allocating resources, meeting user needs, reducing user waiting time, and improving system reliability. Through this summary of the literature, it identifies existing problems and puts forward improvement methods and suggestions. Finally, the application of queuing-theory-based service resource availability prediction in recommendation is envisioned, and future research directions and challenges are briefly explained.
Mobile Edge Computing Based In-vehicle CAN Network Intrusion Detection Method
YU Tian-qi, HU Jian-ling, JIN Jiong, YANG Jian-feng
Computer Science    2021, 48 (1): 34-39.   DOI: 10.11896/jsjkx.200900181
With the rapid development and pervasive deployment of the Internet of Vehicles (IoV), it provides Internet and big data analytics services to intelligent and connected vehicles, while incurring security and privacy issues. Because traditional in-vehicle networks were closed, their communication protocols, particularly the most commonly used controller area network (CAN) bus protocol, lack security and privacy protection mechanisms. Thus, to detect network intrusions and protect vehicles from attack, a support vector data description (SVDD) based intrusion detection method is proposed in this paper. Specifically, the weighted self-information of message IDs and the normalized ID values are selected as features for SVDD modeling, and the SVDD models are trained at the mobile edge computing (MEC) servers. Vehicles use the trained SVDD models to identify abnormal values of the selected features and thereby detect network intrusions. Simulations are conducted on the CAN network dataset published by the HCR Lab of Korea University, with three conventional information-entropy-based in-vehicle network intrusion detection methods adopted as benchmarks. Compared with the benchmarks, the proposed method dramatically improves intrusion detection accuracy, especially when the number of intruded messages is small.
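As an illustration of the feature the abstract names (not the authors' exact code), the frequency-weighted self-information of CAN message IDs can be computed over a sliding window of observed IDs; the window contents and IDs below are made-up examples:

```python
from collections import Counter
from math import log2

def weighted_self_information(window):
    """Per-ID feature over a window of CAN message IDs: the
    self-information -log2(p) of each ID, weighted by its frequency p."""
    counts = Counter(window)
    total = len(window)
    feats = {}
    for msg_id, c in counts.items():
        p = c / total
        feats[msg_id] = -p * log2(p)   # frequency-weighted self-information
    return feats

# A flood of one ID (a typical injection attack) drives its probability up,
# so the per-message self-information collapses, shifting this feature.
feats = weighted_self_information([0x101, 0x2B0, 0x101, 0x4F1, 0x2B0, 0x101])
```

An anomaly detector such as SVDD would then be fitted on these feature vectors collected during normal driving.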
Multi-workflow Offloading Method Based on Deep Reinforcement Learning and Probabilistic Performance-aware in Edge Computing Environment
MA Yu-yin, ZHENG Wan-bo, MA Yong, LIU Hang, XIA Yun-ni, GUO Kun-yin, CHEN Peng, LIU Cheng-wu
Computer Science    2021, 48 (1): 40-48.   DOI: 10.11896/jsjkx.200900195
Mobile edge computing is a new distributed and ubiquitous computing model. By transferring computation-intensive and delay-sensitive tasks to nearby edge servers, it effectively alleviates the resource shortage of mobile terminals and the communication overhead between users and computing nodes. However, when multiple users request computation-intensive tasks simultaneously, especially process-based workflow task requests, edge computing often finds it difficult to respond effectively, causing task congestion. In addition, the performance of edge servers is affected by detrimental factors such as task overload, power supply and real-time changes in communication capability, so their performance fluctuates, which makes it challenging to guarantee task execution and user-perceived service efficiency. To solve these problems, a Deep-Q-Network (DQN) and probabilistic-performance-aware multi-workflow scheduling approach for the edge computing environment is proposed. First, the historical performance data of edge cloud servers is analyzed probabilistically; then the DQN model is driven by the performance probability distribution data and iteratively optimized to generate the multi-workflow offloading strategy. For experimental verification, simulation experiments are conducted in multiple scenarios reflecting different levels of system load, based on an edge server location dataset, performance test data and multiple scientific workflow templates. The results show that the proposed method is superior to traditional methods in multi-workflow execution efficiency.
Multi-user Task Offloading Based on Delayed Acceptance
MAO Ying-chi, ZHOU Tong, LIU Peng-fei
Computer Science    2021, 48 (1): 49-57.   DOI: 10.11896/jsjkx.200600129
With the application of artificial intelligence, the demand for computing resources keeps growing. Due to their limited computing power and energy storage, mobile devices cannot handle such computation-intensive applications with real-time requirements. Mobile edge computing (MEC) can provide computation offloading services at the edge of the wireless network, so as to reduce delay and save energy. Aiming at the problem of multi-user dependent task offloading, a user dependent task model is established based on comprehensive consideration of delay and energy consumption. A multi-user task offloading strategy based on delayed acceptance (MUTODA) is proposed to solve the problem of minimizing energy consumption under delay constraints. MUTODA solves the multi-user task offloading problem in two steps: a non-dominated single-user optimal offloading strategy, and an adjustment strategy that resolves resource competition. The experimental results show that, compared with the benchmark strategy and a heuristic strategy, the multi-user task offloading strategy based on delayed acceptance improves user satisfaction by about 8% and saves 30%~50% of the energy consumption of mobile terminals.
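The "delayed acceptance" idea behind MUTODA comes from deferred-acceptance matching. Below is a generic, self-contained sketch of that mechanism, not the paper's exact MUTODA procedure; the preference lists and capacities are invented:

```python
def deferred_acceptance(user_prefs, server_prefs, capacity):
    """Generic deferred-acceptance matching: users propose to servers in
    preference order; each server tentatively holds the proposals it ranks
    best and rejects the rest, until no rejected user has options left."""
    rank = {s: {u: i for i, u in enumerate(p)} for s, p in server_prefs.items()}
    next_choice = {u: 0 for u in user_prefs}
    held = {s: [] for s in server_prefs}
    free = list(user_prefs)
    while free:
        u = free.pop()
        if next_choice[u] >= len(user_prefs[u]):
            continue                      # u has exhausted its preference list
        s = user_prefs[u][next_choice[u]]
        next_choice[u] += 1
        held[s].append(u)
        held[s].sort(key=lambda x: rank[s][x])
        if len(held[s]) > capacity[s]:
            free.append(held[s].pop())    # worst-ranked proposal is bumped
    return held

users = {"u1": ["s1", "s2"], "u2": ["s1", "s2"], "u3": ["s1", "s2"]}
servers = {"s1": ["u1", "u2", "u3"], "s2": ["u2", "u1", "u3"]}
match = deferred_acceptance(users, servers, {"s1": 1, "s2": 2})
```

The result is stable: no user-server pair would both prefer each other over their current assignment.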
User Allocation Approach in Dynamic Mobile Edge Computing
TANG Wen-jun, LIU Yue, CHEN Rong
Computer Science    2021, 48 (1): 58-64.   DOI: 10.11896/jsjkx.200900079
In the edge computing environment, matching suitable servers for users is a key issue that can effectively improve the quality of service. In this paper, the edge user assignment (EUA) problem is converted into a bipartite graph matching problem constrained by distance and server resources, and it is modeled as a 0-1 integer programming problem for an optimal assignment solution. In the offline state, the optimization model based on an exact algorithm can obtain the optimal assignment strategy, but its solution time is too long and it cannot process large-scale data, making it unsuitable for a real service environment. Therefore, an online user assignment method based on a heuristic strategy is proposed to optimize the user-server assignment under limited time. The experimental results show that the competitive ratio obtained by the proximal heuristic online method (PH) can reach close to 100%, obtaining a good assignment solution within an acceptable time. At the same time, the online PH method performs better than other basic heuristic methods.
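One plausible reading of a proximal online heuristic is sketched below: assign each arriving user to the nearest server that is within coverage range and still has free capacity. The coordinates, radius and capacities are hypothetical, and this is an illustration rather than the paper's exact PH method:

```python
def proximal_assign(users, servers, radius, capacity):
    """Online proximal heuristic sketch: each arriving user goes to the
    nearest in-range server with spare capacity, else stays unassigned."""
    load = {s: 0 for s in servers}
    assignment = {}
    for uid, (ux, uy) in users:
        best, best_d = None, None
        for sid, (sx, sy) in servers.items():
            d = ((ux - sx) ** 2 + (uy - sy) ** 2) ** 0.5
            if d <= radius and load[sid] < capacity[sid]:
                if best is None or d < best_d:
                    best, best_d = sid, d
        if best is not None:
            assignment[uid] = best
            load[best] += 1
    return assignment

servers = {"s1": (0.0, 0.0), "s2": (5.0, 0.0)}
users = [("u1", (1.0, 0.0)), ("u2", (0.0, 1.0))]
assign = proximal_assign(users, servers, radius=3.0, capacity={"s1": 1, "s2": 1})
```

Here u1 takes the only slot on s1; u2 is then left unassigned because s1 is full and s2 is out of range.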
Privacy Protection Offloading Algorithm Based on Virtual Mapping in Edge Computing Scene
YU Xue-yong, CHEN Tao
Computer Science    2021, 48 (1): 65-71.   DOI: 10.11896/jsjkx.200500098
With the development of mobile edge computing (MEC) and wireless power transfer (WPT), more and more computing tasks are offloaded to the MEC server for processing, while terminal equipment is powered by WPT technology to alleviate its limited computing power and high energy consumption. However, since offloaded tasks and data often carry information such as users' personal usage habits, offloading tasks to the MEC server for processing introduces new privacy leakage issues. A privacy-aware computation offloading method based on virtual mapping is proposed in this paper. First, the privacy of the computing task is defined, and a virtual task mapping mechanism that can reduce the amount of privacy users accumulate on the MEC server is designed. Second, an online privacy-aware computation offloading algorithm is proposed by jointly considering the optimization of the mapping mechanism and privacy constraints. Finally, simulation results validate that the proposed offloading method can keep the cumulative privacy of users below the threshold while increasing the system computation rate and reducing users' computation delay.
Multi-edge Collaborative Computing Unloading Scheme Based on Genetic Algorithm
GAO Ji-xu, WANG Jun
Computer Science    2021, 48 (1): 72-80.   DOI: 10.11896/jsjkx.200800088
As a supplement to cloud computing, edge computing can ensure that computation delay meets system requirements when processing computing tasks generated by IoT equipment. Aiming at the problem of insufficient utilization of the remote edge cloud during the idle window periods of computing tasks in the traditional offloading scenario, a genetic algorithm-based multi-edge and cloud collaborative computing offloading model (GAMCCOM) is proposed. This computation offloading scheme combines local edges and remote edges to perform task offloading, and uses a genetic algorithm to obtain the minimum system cost considering both delay and energy consumption at the same time. The results of simulation experiments show that when considering both the delay cost and energy cost of the offloading system, the overall cost of this scheme is reduced by 23% compared with the basic three-layer offloading scheme. When considering delay cost and energy cost separately, the system cost is still reduced by 17% and 15% respectively. Therefore, the GAMCCOM offloading method can effectively reduce the system cost for different offloading targets of edge computing.
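A minimal genetic algorithm over binary offloading vectors illustrates the kind of search a scheme like GAMCCOM performs; the cost function, population size and operators here are toy assumptions, not the paper's configuration:

```python
import random

def ga_offload(cost, n_bits, pop_size=20, gens=100, seed=1):
    """Minimal GA over binary offloading vectors (bit i = 1: offload task i
    to the remote edge; 0: run it on the local edge). `cost` maps a bit
    vector to a weighted delay+energy cost to be minimized."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]          # truncation selection, elitist
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_bits)    # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_bits)] ^= 1  # point mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

# Toy cost: tasks 0-4 are cheaper offloaded, tasks 5-9 cheaper local.
toy = lambda bits: sum(b ^ (i < 5) for i, b in enumerate(bits))
best = ga_offload(toy, 10)
```

Elitism keeps the best-so-far vector, so the cost is non-increasing across generations.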
Computational Task Offloading Scheme Based on Load Balance for Cooperative VEC Servers
YANG Zi-qi, CAI Ying, ZHANG Hao-chen, FAN Yan-fang
Computer Science    2021, 48 (1): 81-88.   DOI: 10.11896/jsjkx.200800220
In the vehicular edge computing (VEC) network, a large number of computational tasks cannot be processed due to vehicles' limited computation resources. Therefore, computational tasks generated by on-board applications need to be offloaded to the VEC servers for processing. However, the mobility of vehicles and differences in regional deployment lead to load imbalance among VEC servers, resulting in low computation offloading efficiency and resource utilization. To solve this problem, a scheme of computation offloading and resource allocation is proposed to maximize the utility of users. The user utility maximization problem is decoupled into two subproblems; a matching-based VEC server selection decision algorithm and an Adam-based joint optimization algorithm for the offloading ratio and computation resource allocation are proposed to solve them respectively. The two algorithms are then iterated together until convergence, and an approximately optimal solution is obtained to achieve load balance. The simulation results show that, compared with the nearest offloading scheme and a predictive offloading scheme, the proposed scheme can effectively decrease the processing delay of computational tasks, save vehicles' energy, enhance vehicle utility, and perform well on load balance.
L-YOLO: Real-time Traffic Sign Detection Model for Vehicle Edge Computing
SHAN Mei-jing, QIN Long-fei, ZHANG Hui-bing
Computer Science    2021, 48 (1): 89-95.   DOI: 10.11896/jsjkx.200800034
In the vehicle edge computing unit, hardware resources are limited, so developing a lightweight and efficient traffic sign detection model for vehicle edge computing is increasingly urgent. This paper proposes a lightweight traffic sign detection model based on Tiny YOLO, called L-YOLO. First, L-YOLO uses partial residual connections to enhance the learning ability of the lightweight network. Second, in order to reduce false detections and missed detections of traffic signs, L-YOLO uses a Gaussian loss function as the location loss of the bounding box. On the traffic sign detection dataset TAD16K, L-YOLO has 18.8M parameters and 8.211 BFlops of computation, achieves a detection speed of 83.3 FPS, and reaches an mAP of 86%. Experimental results show that the algorithm not only guarantees real-time performance but also improves detection accuracy.
Cache Management Method in Mobile Edge Computing Based on Approximate Matching
LI Rui-xiang, MAO Ying-chi, HAO Shuai
Computer Science    2021, 48 (1): 96-102.   DOI: 10.11896/jsjkx.200800215
For massive identical or similar computing requests from end users, a search for similar data in the cache space of the edge server by approximate matching can be applied to select computing results that can be reused. Most existing algorithms do not consider the uneven distribution of data, resulting in large computation and time overheads. In this paper, a cache selection strategy based on a dynamic locality-sensitive hashing (LSH) algorithm and a weighted k-nearest neighbor (KNN) algorithm (CSS-DLWK) is proposed. The dynamic-LSH algorithm copes with uneven data distribution by dynamically adjusting the hash bucket size, thereby selecting data sets similar to the input data from the cache space. Then, with distance and sample size as weights, the weighted-KNN algorithm re-selects the data in the similar data sets acquired by the dynamic-LSH algorithm; the data most similar to the input data are obtained, and the corresponding computing result is reused. As demonstrated by simulation experiments on the CIFAR-10 dataset, CSS-DLWK increases the average selection accuracy by 4.1% compared to the cache selection strategy based on the A-LSH and H-KNN algorithms, and by 16.8% compared to traditional LSH algorithms. Overall, with acceptable time costs in data selection, the proposed strategy can effectively improve the selection accuracy of reusable data, thereby reducing repetitive computation on the edge server.
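The two-stage idea (LSH bucket lookup, then weighted-KNN re-selection) can be sketched with a plain random-hyperplane LSH; the fixed bucket key below stands in for the paper's dynamic bucket-size adjustment, and the data points are invented:

```python
import random

def build_lsh(data, n_planes=4, dim=2, seed=0):
    """Random-hyperplane LSH: the bucket key is the sign pattern of a vector
    against a few random hyperplanes, so nearby vectors tend to collide."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]
    buckets = {}
    for vec, label in data:
        key = tuple(sum(p * x for p, x in zip(plane, vec)) >= 0 for plane in planes)
        buckets.setdefault(key, []).append((vec, label))
    return planes, buckets

def weighted_knn(planes, buckets, query, k=3):
    """Weighted-KNN re-selection inside the query's bucket: each of the k
    nearest candidates votes with weight 1/(distance + eps)."""
    key = tuple(sum(p * x for p, x in zip(plane, query)) >= 0 for plane in planes)
    cand = sorted(buckets.get(key, []),
                  key=lambda vl: sum((a - b) ** 2 for a, b in zip(vl[0], query)))[:k]
    votes = {}
    for vec, label in cand:
        d = sum((a - b) ** 2 for a, b in zip(vec, query)) ** 0.5
        votes[label] = votes.get(label, 0) + 1 / (d + 1e-9)
    return max(votes, key=votes.get) if votes else None

data = [((1.0, 1.0), "A"), ((1.1, 0.9), "A"), ((8.0, 9.0), "B"), ((9.0, 8.5), "B")]
planes, buckets = build_lsh(data)
label = weighted_knn(planes, buckets, (1.0, 1.0))
```

A query identical to a cached entry lands in the same bucket by construction, and the near-zero distance gives that entry an overwhelming vote.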
Mobile Edge Server Placement Method Based on User Latency-aware
GUO Fei-yan, TANG Bing
Computer Science    2021, 48 (1): 103-110.   DOI: 10.11896/jsjkx.200900146
The rapid development of the Internet of Things and 5G networks generates a large amount of data. By offloading computing tasks from mobile devices to edge servers with sufficient computing resources, network congestion and data propagation delays can be effectively reduced. The placement of edge servers is at the core of task offloading, and an efficient placement method can effectively satisfy mobile users' needs to access services with low latency and high bandwidth. To this end, an optimization model of edge server placement is established with minimizing both access delay and load difference as the optimization goal. Then, a heuristic mobile edge server placement method called ESPHA (Edge Server Placement based on Heuristic Algorithm) is proposed to achieve multi-objective optimization. First, the K-means algorithm is combined with the ant colony algorithm: a pheromone feedback mechanism is introduced into the placement method by emulating how an ant colony shares pheromones while foraging, and the ant colony algorithm is improved by setting a taboo list to accelerate convergence. Finally, the improved heuristic algorithm is used to solve for the optimal placement. Experiments using Shanghai Telecom's real datasets show that the proposed method achieves an optimal balance between low latency and load balancing under the premise of guaranteeing quality of service, and outperforms several existing representative methods.
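The K-means stage that produces candidate placement sites can be sketched as follows; this is plain K-means with deterministic initialization on made-up user coordinates, with the ant colony refinement omitted:

```python
def kmeans(points, k, iters=20):
    """Plain K-means: cluster user locations and return the centroids,
    which serve as candidate edge-server sites for the later search."""
    centroids = points[:k]                      # deterministic init: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: (p[0] - centroids[i][0]) ** 2
                                + (p[1] - centroids[i][1]) ** 2)
            clusters[j].append(p)
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]              # keep an empty cluster's centroid
            for i, c in enumerate(clusters)
        ]
    return centroids

pts = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (10.0, 10.0), (10.0, 11.0), (11.0, 10.0)]
sites = kmeans(pts, 2)
```

With two well-separated user clusters, the two centroids settle at the cluster means.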
Task Offloading Online Algorithm for Data Stream Edge Computing
ZHANG Chong-yu, CHEN Yan-ming, LI Wei
Computer Science    2022, 49 (7): 263-270.   DOI: 10.11896/jsjkx.210300195
With the development of Internet of Things (IoT) technology, its application scenarios have recently exploded, and such applications are generally delay-sensitive and resource-constrained. How to offload real-time tasks under limited resources is a focal issue, and allocating limited computational resources to real-time tasks is an NP-hard combinatorial optimization problem. To solve this problem, this paper proposes a real-time resource management algorithm based on Lyapunov optimization, aiming at stabilizing the virtual queues while optimizing the total power consumption and total utility. First, an optimization model for the total power consumption and weighted total utility is proposed under computation and communication resource constraints. This model contains two virtual buffer queues, and tasks are offloaded in a device-to-device (D2D) scheduling model. Then, an optimization algorithm based on Lyapunov optimization is proposed to decompose the joint long-term average energy consumption and utility optimization problem into a series of real-time optimization problems, and a greedy-based matching algorithm is proposed to solve them. Experimental results demonstrate that the proposed algorithm performs 8.6% better than the best result of the random method and approximates the exhaustive search method under different connection degrees.
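The drift-plus-penalty pattern that Lyapunov optimization builds on can be shown in a single step; the linear service-rate model and power options below are toy assumptions, not the paper's system model:

```python
def lyapunov_step(queue, arrivals, V, power_options):
    """One drift-plus-penalty step: choose the power level minimizing
    V * power + queue * (arrivals - service(power)), then update the
    virtual queue Q <- max(Q + arrivals - service, 0). Larger V favours
    low power; a larger backlog Q favours draining the queue."""
    service = lambda p: 2.0 * p                 # toy linear service-rate model
    best_p = min(power_options,
                 key=lambda p: V * p + queue * (arrivals - service(p)))
    new_queue = max(queue + arrivals - service(best_p), 0.0)
    return best_p, new_queue
```

With an empty queue the controller picks the cheapest power; once the virtual queue grows, the queue term dominates and a higher service rate is chosen, which is how long-term constraints get enforced step by step.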
Construction and Distribution Method of REM Based on Edge Intelligence
LIU Xing-guang, ZHOU Li, LIU Yan, ZHANG Xiao-ying, TAN Xiang, WEI Ji-bo
Computer Science    2022, 49 (9): 236-241.   DOI: 10.11896/jsjkx.220400148
Radio environment map(REM) can assist cognitive users to accurately perceive and utilize spectrum holes,achieve interference coordination between network nodes,and improve the spectrum efficiency and robustness of wireless networks.However,when cognitive users utilize and share REM,there are problems of high computational complexity and high distribution delay overhead,which limit cognitive users' ability to perceive spatial spectrum situation in real time.To solve this problem,this paper proposes a reinforcement learning-based REM construction and distribution method in mobile edge intelligence networks.First,we employ a low-complexity construction technique that combines kriging interpolation and super-resolution for REM construction.Second,we model the computational offload strategy selection problem during REM construction and distribution as a mixed-integer nonlinear programming problem by using edge computing.Finally,we combine artificial intelligence technology and edge computing technology,and propose a centralized training,distributed execution reinforcement learning framework to learn REM construction and distribution strategies in different network scenarios.Simulation results show that the proposed method has good adaptability,and it can effectively reduce the energy consumption and delay of REM construction and distribution,and support the near real-time application of REM by cognitive users in mobile edge network scenarios.
Dynamic Pricing-based Vehicle Collaborative Computation Offloading Scheme in VEC
SUN Hui-ting, FAN Yan-fang, MA Meng-xiao, CHEN Ruo-yu, CAI Ying
Computer Science    2022, 49 (9): 242-248.   DOI: 10.11896/jsjkx.210700166
Vehicular edge computing(VEC) is an important application of mobile edge computing(MEC) in Internet of vehicles.In VEC,to meet the computing requirement of task vehicles(TaV),TaV can pay to offload tasks to the VEC server or service vehicles(SeV)with abundant idle computing resources.For the VEC provider,one of its goals is to maximize revenue.Since the computing requirements and the computing resources of the system change dynamically,it is an important issue to design a reasonable pricing strategy in vehicle collaboration scenarios.To solve this problem,this paper designs a dynamic pricing strategy.In this strategy,service prices of the VEC server and SeV are adjusted dynamically according to the relationship of the supply and demand of computing resources.On this basis,a vehicle collaborative computation offloading scheme is designed to maximize provider's revenue.By transforming the revenue maximization problem of VEC provider under the delay constraint into a multi-user matching problem,the offloading results are obtained using the Kuhn-Munkres(KM)algorithm.Simulation results show that compared to existing strategies,with this dynamic pricing strategy,the prices of the VEC server and SeV can be adjusted dynamically with the supply and demand of resources,so as to maximize the provider's revenue.Compared to existing offloading schemes,this scheme can improve the provider's revenue while meeting task delay.
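The multi-user matching step solved with the Kuhn-Munkres (KM) algorithm is a maximum-weight bipartite matching. For a small instance it can be checked by brute force over permutations; the revenue matrix is invented, and KM finds the same optimum in O(n^3) rather than O(n!):

```python
from itertools import permutations

def optimal_matching(revenue):
    """Maximum-weight one-to-one matching between task vehicles (rows)
    and servers/service vehicles (columns), by exhaustive enumeration.
    The Kuhn-Munkres algorithm returns the same optimum polynomially."""
    n = len(revenue)
    best_perm, best_val = None, float("-inf")
    for perm in permutations(range(n)):
        val = sum(revenue[i][perm[i]] for i in range(n))
        if val > best_val:
            best_perm, best_val = perm, val
    return list(best_perm), best_val

revenue = [[4, 1, 3], [2, 0, 5], [3, 2, 2]]   # revenue[i][j]: vehicle i on server j
match, total = optimal_matching(revenue)
```

Here vehicle 0 keeps server 0, vehicle 1 takes server 2, and vehicle 2 takes server 1, for a total revenue of 11.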
Resource Allocation Strategy Based on Game Theory in Mobile Edge Computing
CHEN Yipeng, YANG Zhe, GU Fei, ZHAO Lei
Computer Science    2023, 50 (2): 32-41.   DOI: 10.11896/jsjkx.220300198
Existing research on resource allocation strategies for mobile edge computing mostly focuses on delay and energy consumption, with relatively little consideration given to the benefits of edge servers; studies that do consider the benefits of edge servers often ignore delay optimization. Therefore, a two-way update strategy based on game theory (TUSGT) is proposed. On the edge server side, TUSGT transforms the task competition between servers into a non-cooperative game problem and adopts a joint optimization strategy based on a potential game, which allows every edge server to determine its task selection preference with the goal of maximizing its own profit. On the mobile device side, the EWA algorithm from online learning is used to update parameters, which influences the task selection preferences of the edge servers from a global perspective and improves the overall deadline hit rate. Simulation results show that, compared with BGTA, MILP, the greedy strategy, the random strategy, and the ideal strategy, TUSGT can increase the deadline hit rate by up to 30% and the average profit of edge servers by up to 65%.
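The useful property of a potential game, the structure the server-side game above relies on, is that best-response dynamics always reach a pure Nash equilibrium. A toy congestion-game sketch, with players and resources as hypothetical stand-ins for servers competing over tasks:

```python
def best_response_dynamics(n_players, n_resources, max_rounds=100):
    """Best-response dynamics in a simple congestion game (an exact
    potential game): a player's cost is the load on its chosen resource;
    players deviate one at a time until no one can strictly improve,
    at which point the profile is a pure Nash equilibrium."""
    choice = [0] * n_players                     # everyone starts on resource 0
    for _ in range(max_rounds):
        changed = False
        for i in range(n_players):
            load = [0] * n_resources             # load from the *other* players
            for j, c in enumerate(choice):
                if j != i:
                    load[c] += 1
            best = min(range(n_resources), key=lambda r: load[r])
            if load[best] < load[choice[i]]:     # strictly better: deviate
                choice[i] = best
                changed = True
        if not changed:
            break                                # converged to equilibrium
    return choice

eq = best_response_dynamics(4, 2)
```

Four players over two identical resources settle into a balanced 2-2 split, and no player can lower its cost by switching.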
Deployment Optimization and Computing Offloading of Space-Air-Ground Integrated Mobile Edge Computing System
ZHENG Hongqiang, ZHANG Jianshan, CHEN Xing
Computer Science    2023, 50 (2): 69-79.   DOI: 10.11896/jsjkx.220600057
As a new architecture, space-air-ground integrated communication technology can effectively improve the network service quality of ground terminals, and has attracted widespread attention in recent years. This paper studies a space-air-ground integrated mobile edge computing system, in which multiple UAVs provide low-latency edge computing services for ground devices, and low earth orbit satellites provide ubiquitous cloud computing services for ground devices. Since the deployment positions of the UAVs and the scheduling scheme of computing tasks are the key factors affecting system performance, the deployment positions of the UAVs, the link relationship between ground terminals and UAVs, and the offloading ratio of computing tasks need to be jointly optimized to minimize the average task response delay of the system. Since the formally defined joint optimization problem is a mixed nonlinear programming problem, this paper designs a two-layer optimization algorithm. The upper layer uses a particle swarm optimization algorithm combined with genetic algorithm operators to optimize the deployment positions of the UAVs, and the lower layer uses a greedy algorithm to optimize the computing task offloading scheme. Extensive simulation experiments verify the feasibility and effectiveness of the proposed method. The results show that the proposed method achieves lower average task response time than other baseline methods.
Task Offloading Strategy Based on Game Theory in 6G Overlapping Area
GAO Lixue, CHEN Xin, YIN Bo
Computer Science    2023, 50 (5): 302-312.   DOI: 10.11896/jsjkx.220500120
In order to realize efficient computing of complex tasks in areas where the service coverage of 6G network base stations (BSs) overlaps, the task offloading problem in overlapping areas is studied. Based on comprehensive consideration of delay constraints, energy consumption, social effects and economic incentives, a multi-access edge computing network model with multiple BSs and multiple Internet of Things (IoT) devices is constructed, and the BS pricing strategy together with the base station selection and task offloading strategies of IoT devices are jointly optimized to maximize the profit of BSs and the utility of IoT devices. To solve the base station selection problem for IoT devices in overlapping areas, a many-to-one matching game model is built, and a BS selection algorithm based on swap matching is proposed. A two-stage game model for the pricing and task offloading interaction between BSs and IoT devices is established by introducing Stackelberg game theory, and the existence and uniqueness of the Stackelberg equilibrium are proved by backward induction. An optimal price and best response algorithm based on game theory (OBGT) is proposed. Simulation and comparison experiments illustrate that the OBGT algorithm converges in a short time, and effectively improves the profit of BSs and the utility of IoT devices.
Stackelberg Model Based Distributed Pricing and Computation Offloading in Mobile Edge Computing
CHEN Xuzhan, LIN Bing, CHEN Xing
Computer Science    2023, 50 (7): 278-285.   DOI: 10.11896/jsjkx.220500254
As a novel computing paradigm,mobile edge computing(MEC) provides low latency and flexible computing and communication services for mobile devices by offloading computing tasks from mobile devices to the physically proximal network edge.However,because edge servers and mobile devices often belong to different parties,the conflicts of interest between them present a great challenge for MEC systems.Therefore,it is important to design a pricing and computing offloading scheme for MEC systems with multiple edge servers and mobile devices to maximize the utility of edge servers and optimize the quality of experience for mobile devices.Considering the complex interaction between multi-edge servers and mobile devices,the multi-leader and multi-follower Stackelberg model is used to analyze the interaction between them.The edge server acts as the leader to set the price for its computing resources,and the mobile device as the follower adjusts the offloading strategy according to the pricing of the edge server.Based on the Stackelberg model,a distributed iterative algorithm based on subgradient method is proposed,which can effectively converge to Stackelberg equilibrium.Simulation results show that the proposed scheme can improve the utility of edge server and guarantee the experience quality of mobile devices.
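The subgradient price iteration can be sketched on a one-server toy model: with a linear aggregate demand curve (an assumption for illustration, not the paper's utility model), the leader's price update moves toward the point where the followers' demand meets capacity:

```python
def subgradient_pricing(capacity, step=0.05, iters=500):
    """Subgradient price update sketch for a single leader: the followers'
    aggregate best response is modeled as d(p) = max(10 - p, 0); the server
    raises its price when demand exceeds capacity and lowers it otherwise:
        p <- max(p + step * (d(p) - capacity), 0)."""
    demand = lambda p: max(10.0 - p, 0.0)
    p = 0.0
    for _ in range(iters):
        p = max(p + step * (demand(p) - capacity), 0.0)
    return p
```

For a capacity below the zero-price demand, the iteration converges geometrically to the market-clearing price where demand equals capacity; with capacity 4 that is price 6. In the multi-leader setting, each server runs such an update against the mobile devices' current best responses.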
Edge Offloading Framework for D2D-MEC Networks Based on Deep Reinforcement Learning and Wireless Charging Technology
ZHANG Naixin, CHEN Xiaorui, LI An, YANG Leyao, WU Huaming
Computer Science    2023, 50 (8): 233-242.   DOI: 10.11896/jsjkx.220900181
Mobile edge computing can exploit the large amount of underutilized computing resources in IoT devices. An edge offloading framework based on device-to-device (D2D) communication technology and wireless charging technology can maximize the utilization of the computing resources of idle IoT devices and improve user experience. A D2D-MEC network model of IoT devices is established on this basis. In this model, a device chooses to offload multiple tasks to multiple edge devices according to the current environment information and the estimated device state, and wireless charging technology is applied to increase the transmission success rate and computation stability. A reinforcement learning method is used to solve the joint optimization allocation problem, which aims to minimize computation delay, energy consumption and task dropping loss while maximizing the utilization of edge devices and the proportion of tasks offloaded. In addition, to adapt to a larger state space and improve learning speed, an offloading scheme based on deep reinforcement learning is proposed. Based on the above theory and model, the optimal solution and the upper performance limit of the D2D-MEC system are derived mathematically. Simulation results show that the D2D-MEC offloading model and its offloading strategy have better all-around performance and can make full use of the computing resources of IoT devices.