Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
    Computer Network section of this journal
    Multi-objective Optimization of D2D Collaborative MEC Based on Improved NSGA-III
    WANG Zhihong, WANG Gaocai, ZHAO Qifei
    Computer Science    2024, 51 (3): 280-288.   DOI: 10.11896/jsjkx.221100250
    In current mobile edge computing (MEC), tasks are uploaded directly to the MEC server for execution, which creates problems such as high computing pressure on the edge server and under-utilization of the resources of idle mobile devices. Using idle devices in the edge network for collaborative computing can make rational use of users' idle resources and enhance the computing capacity of MEC. Therefore, a device-to-device (D2D) collaborative MEC model for partial offloading (DCM-PO) is proposed. In this model, in addition to local computing and MEC server computing, part of a task can be uploaded to idle D2D devices for auxiliary computing. First, a multi-objective optimization problem is established to minimize the delay, energy consumption and cost of the edge network. Then, the non-dominated sorting genetic algorithm III (NSGA-III) is improved with multi-chromosome mixed coding and adaptive crossover and mutation rates, making it suitable for solving the multi-objective optimization problem in the DCM-PO. Finally, simulation results show that, compared with the baseline MEC, the DCM-PO has advantages in several performance indicators.
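As a rough illustration of the adaptive-rate idea described above (the linear rule, bounds and function name below are assumptions for illustration, not the paper's formulas), crossover and mutation probabilities can be tied to an individual's fitness relative to the population:

```python
def adaptive_rates(fitness, f_avg, f_max, pc=(0.6, 0.9), pm=(0.01, 0.1)):
    """Hypothetical adaptive crossover (pc) and mutation (pm) rates:
    above-average individuals get lower rates (preserve good genes),
    below-average individuals get the maximum rates (explore more)."""
    if fitness < f_avg or f_max == f_avg:
        return pc[1], pm[1]
    scale = (f_max - fitness) / (f_max - f_avg)  # 0 at the best, 1 at average
    return pc[0] + (pc[1] - pc[0]) * scale, pm[0] + (pm[1] - pm[0]) * scale

print(adaptive_rates(10.0, 6.0, 10.0))  # best individual -> (0.6, 0.01)
print(adaptive_rates(6.0, 6.0, 10.0))   # average individual -> (0.9, 0.1)
```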
    Green Energy-saving Routing Framework Based on Link Correlation Model
    WANG Ling, JIN Zikun, WU Yong, GENG Haijun
    Computer Science    2024, 51 (3): 289-299.   DOI: 10.11896/jsjkx.230800103
    With the rapid development of information technology, the scale of the Internet keeps growing, and with it the energy consumption of the network. To reduce network energy consumption, the industry generally closes links with low utilization. However, current network energy-saving schemes cannot effectively balance the trade-off among energy-saving rate, computational overhead and path stretch. To solve these problems, this paper proposes a green energy-saving routing framework based on a link correlation model. The framework supports different link correlation models and needs only the network topology, not the real-time traffic matrix, so it is easier to deploy in real networks. On top of this framework, this paper implements four energy-saving green routing algorithms: LRC (link row correlation), LCC (link column correlation), LRCC (link row-column correlation) and LBC (link betweenness correlation). Experimental results show that, on real topologies published by The Internet Topology Zoo and topologies generated by the Brite simulator, the average energy-saving rates of LRC, LCC, LRCC and LBC are 12.65% and 7.17% higher than that of the DLF algorithm, and their average path stretch on the real and simulated topologies is 3.00% and 13.75% lower than that of the DLF algorithm.
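The LBC variant ranks links by betweenness. A minimal sketch in that spirit, assuming edge betweenness as the correlation metric and a fixed closing budget (both are illustrative choices, not the paper's exact rules), switches off the least-central links while preserving connectivity, using only the topology:

```python
import networkx as nx

def links_to_close(G, close_fraction=0.3):
    """Rank links by edge betweenness (a stand-in for the paper's link
    correlation metrics) and switch off the least-central ones, skipping
    any link whose removal would disconnect the network."""
    rank = sorted(nx.edge_betweenness_centrality(G).items(),
                  key=lambda kv: kv[1])
    H, closed = G.copy(), []
    budget = int(close_fraction * G.number_of_edges())
    for (u, v), _ in rank:
        if len(closed) >= budget:
            break
        H.remove_edge(u, v)
        if nx.is_connected(H):
            closed.append((u, v))
        else:
            H.add_edge(u, v)   # removal would partition the network: keep it
    return closed

G = nx.connected_watts_strogatz_graph(20, 4, 0.3, seed=1)
print(links_to_close(G))
```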
    Efficient Routing Algorithm Based on Virtual Currency Transaction in DTN
    CUI Jianqun, LIU Shan, CHANG Yanan, LIU Qiangqiang, WU Qingcheng
    Computer Science    2024, 51 (3): 300-308.   DOI: 10.11896/jsjkx.221200135
    Due to the intermittent connectivity of delay tolerant networks (DTNs) and the limited resources of nodes, such as cache and energy, nodes in a DTN tend to show a certain degree of selfishness. Selfish nodes may increase network overhead and reduce the successful delivery rate of messages. To encourage selfish nodes to cooperate, an efficient routing algorithm based on virtual currency transactions in DTN (PVCT) is proposed, which exploits the small-world characteristics of delay tolerant networks to improve routing efficiency. The algorithm adopts a virtual currency transaction mode and sets prices according to the basic, location and social attributes of a node. A node gives its quotation according to the designed price function, which is also used to allocate the number of message copies reasonably. In the PVCT strategy, nodes are classified as normal or selfish. When a message has traveled at most two hops, it is forwarded according to a probabilistic routing strategy. When a message has traveled more than two hops and a selfish node is encountered, the virtual currency transaction routing algorithm is executed: if the bid of the message-carrying node is higher than the price of the forwarding node, the transaction is conducted and the revenue of both sides is updated; otherwise, a secondary price adjustment stage coordinates the virtual quotations of both parties. Simulation results show that the PVCT routing algorithm promotes message forwarding in DTNs and thus improves overall network performance.
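A toy sketch of the transaction step, assuming a simple midpoint settlement and a single move-toward-the-midpoint adjustment round (the paper's price function and adjustment rule are richer than this):

```python
def negotiate(bid, ask, step=0.5):
    """One virtual-currency transaction between a message carrier (bid)
    and a candidate forwarder (ask). If the bid covers the ask, settle at
    the midpoint; otherwise run one secondary price adjustment in which
    both sides move toward the midpoint, then try again."""
    for _ in range(2):                       # initial try + one adjustment
        if bid >= ask:
            return True, round((bid + ask) / 2, 2)
        mid = (bid + ask) / 2
        bid, ask = bid + step * (mid - bid), ask - step * (ask - mid)
    return False, None

print(negotiate(12.0, 10.0))  # deal settles at 11.0
print(negotiate(8.0, 10.0))   # adjusted to 8.5 vs 9.5 -> still no deal
```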
    Computation Offloading with Wardrop Routing Game in Multi-UAV-aided MEC Environment
    WANG Xinlong, LIN Bing, CHEN Xing
    Computer Science    2024, 51 (3): 309-316.   DOI: 10.11896/jsjkx.221100242
    The combination of unmanned aerial vehicles (UAVs) and multi-access edge computing (MEC) breaks the limitations of traditional terrestrial communications and has become a significant approach to the task offloading problem in MEC. Since a single UAV can provide only limited computing resources and energy, the task offloading problem in a multi-UAV-assisted MEC environment is considered to cope with the growing network scale. Based on the problem definition, to obtain the offloading strategies in the equilibrium and optimal states and quantitatively analyze the gap between them, the task offloading process is viewed as a Wardrop routing game on parallel links with player-specific latency functions. Since the equilibrium is difficult to compute directly, a new potential function is introduced to convert the equilibrium problem into the minimization of the potential function. The Frank-Wolfe algorithm is then used to obtain the equilibrium and optimal offloading strategies. At each iteration of this algorithm, the objective function is linearized and a feasible direction is obtained by solving the resulting linear program, along which a one-dimensional search is performed in the feasible domain. Simulation experiments verify that the equilibrium offloading strategy based on the Wardrop routing game on parallel links effectively reduces the model's total cost compared with other benchmark methods, and that the ratio between the total costs of the equilibrium and optimal offloading strategies is about 1.
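A minimal Frank-Wolfe sketch for this kind of potential minimization over the simplex of offloading fractions, assuming affine latency functions and the standard diminishing step size in place of the paper's one-dimensional search:

```python
import numpy as np

def frank_wolfe(grad, x0, n_iter=200):
    """Frank-Wolfe over the probability simplex: linearize the potential
    at x, solve the linear subproblem (its optimum is a simplex vertex),
    then step along the feasible direction with step size 2/(k+2)."""
    x = x0.copy()
    for k in range(n_iter):
        s = np.zeros_like(x)
        s[np.argmin(grad(x))] = 1.0     # best vertex of the simplex
        x += 2.0 / (k + 2.0) * (s - x)
    return x

# parallel links with affine latencies l_i(x) = a_i + b_i * x_i; the
# Wardrop equilibrium minimizes the potential sum_i (a_i x_i + b_i x_i^2 / 2)
a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 1.0, 0.5])
x_eq = frank_wolfe(lambda x: a + b * x, np.ones(3) / 3)
print(x_eq, "latencies:", a + b * x_eq)  # latencies equalize on used links
```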
    Improved Beluga Whale Optimization for RFID Network Planning
    CHEN Yijun, ZHENG Jiali, LI Zhiqian, ZHANG Jiangbo, ZHU Xinghong
    Computer Science    2024, 51 (3): 317-325.   DOI: 10.11896/jsjkx.230300019
    With the development of radio frequency identification (RFID) technology, application demands are rising and research on reader deployment is deepening. To solve the RFID reader location planning problem in a defined area, a mathematical optimization model is established with the objectives of tag coverage, collision interference between readers and load balancing, and an improved beluga whale optimization is proposed on the basis of the standard beluga whale optimization. First, to address the standard algorithm's tendency to fall into local optima and lose suboptimal solutions, an elite-group update mechanism is proposed. Second, to enhance the exploration capability of the algorithm, an opposition-based learning strategy is added. Finally, the algorithm is applied to the RFID network planning problem. With different numbers of clustered and randomly distributed tags placed in a given environment, the improved beluga whale optimization is compared with particle swarm optimization, the grey wolf optimizer and the standard beluga whale optimization. Simulation results show that in the same environment its performance improves on average by 21.1% over particle swarm optimization, 28.5% over the grey wolf optimizer, and 3.3% over the standard beluga whale optimization, indicating better search accuracy than the other three algorithms. The effectiveness and feasibility of the approach are further verified by reader deployment tests.
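A minimal sketch of the opposition-based learning step, assuming box bounds and a placeholder sphere fitness (the paper applies it inside the beluga whale optimizer's update loop):

```python
import numpy as np

def opposition_step(pop, lb, ub, fitness):
    """Opposition-based learning: for each candidate x, form the opposite
    point lb + ub - x and keep whichever of the pair is fitter. Fitness
    here is minimized; the sphere function is just a placeholder."""
    opp = lb + ub - pop
    keep = fitness(pop) <= fitness(opp)
    return np.where(keep[:, None], pop, opp)

sphere = lambda X: np.sum(X ** 2, axis=1)
rng = np.random.default_rng(0)
pop = rng.uniform(-5.0, 5.0, size=(6, 3))
print(opposition_step(pop, -5.0, 5.0, sphere))
```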
    CARINA:An Efficient Application Layer Protocol Conversion Approach for IoT Interoperability
    WANG Lina, LAI Kunhao, YANG Kang
    Computer Science    2024, 51 (2): 278-285.   DOI: 10.11896/jsjkx.230100108
    To solve the interoperability problems caused by the large number of IoT devices and protocols with varying architectures and application scenarios, this paper proposes an efficient and scalable application layer protocol conversion approach. The approach applies protocol packet parsing and key method mapping to the widely used HTTP and three other protocols. Considering the significant differences in underlying architecture, message format, communication mode and application scenario among the four protocols, it achieves uniform information storage by parsing the original protocol packets, extracting the key information and storing it as key-value pairs. By constructing a key method mapping table, the methods of different protocols are mapped to each other, realizing interconnection between the protocols. Experimental results show that the proposed approach performs well in message conversion between the four protocols: under the same test conditions its conversion speed is significantly higher than that of Ponte, a method of a comparable type, with a nearly 10-fold difference observed in some cases, and it supports twice as many conversion types as Ponte. Overall, the proposed method outperforms state-of-the-art methods in scalability and efficiency.
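A toy sketch of the key-value storage plus key method mapping idea; the protocol pairings and verbs in METHOD_MAP are illustrative assumptions, not the paper's actual table:

```python
# Hypothetical key-method mapping table; the paper maps methods among
# four protocols (HTTP and three others), and its entries may differ.
METHOD_MAP = {
    ("HTTP", "GET"):  {"MQTT": "SUBSCRIBE", "CoAP": "GET"},
    ("HTTP", "POST"): {"MQTT": "PUBLISH",   "CoAP": "POST"},
}

def convert(message, target):
    """Convert a message that has already been parsed into key-value
    pairs (the unified storage form) to the target protocol's method."""
    mapped = METHOD_MAP[(message["protocol"], message["method"])][target]
    return {**message, "protocol": target, "method": mapped}

msg = {"protocol": "HTTP", "method": "POST", "topic": "/sensor/1", "payload": "21.5"}
print(convert(msg, "MQTT"))
```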
    Online Task Offloading Decision Algorithm for High-speed Vehicles
    DING Shuang, CAO Muyu, HE Xin
    Computer Science    2024, 51 (2): 286-292.   DOI: 10.11896/jsjkx.221200069
    When and where to offload tasks are the main questions in task offloading decisions for vehicular edge computing. High-speed driving causes frequent changes of offloading access devices, and the offloading communication between a vehicle and an access device may break at any time, so an offloading decision must be made as soon as the vehicle obtains an offloading opportunity. Existing offloading decision research focuses on maximizing the offloading gain without fully considering the impact of decision timeliness on the offloading strategy; the resulting methods have high time and space complexity and cannot be used for online task offloading decisions of high-speed vehicles. To solve these problems, this paper first considers both decision timeliness and offloading gain, establishes a task offloading decision model for high-speed vehicles, and transforms it into a variant of the secretary problem. Then, an online task offloading decision algorithm, OODA, based on weighted bipartite graph matching is proposed to help a vehicle make real-time offloading decisions as it passes multiple heterogeneous edge servers sequentially, maximizing the overall offloading gain. Finally, the competitive ratio of OODA is analyzed theoretically, and extensive simulation results show that OODA is feasible and effective.
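For intuition on the secretary-problem connection, the classical 1/e-threshold rule is sketched below; OODA itself solves a weighted bipartite matching variant, so this shows only the online, commit-immediately flavor of the decision:

```python
import math
import random

def online_choice(gains):
    """The classical 1/e rule for the secretary problem: observe the
    first n/e candidates without committing, then take the first one
    that beats the best seen so far."""
    n = len(gains)
    cutoff = max(1, round(n / math.e))
    best = max(gains[:cutoff])
    for i in range(cutoff, n):
        if gains[i] > best:
            return i, gains[i]      # must commit now: the offer won't wait
    return n - 1, gains[-1]         # forced to accept the last server

random.seed(7)
offers = [random.random() for _ in range(20)]  # offloading gains seen in order
print(online_choice(offers))
```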
    Study on Deep Reinforcement Learning for Energy-aware Virtual Machine Scheduling
    WANG Yangmin, HU Chengyu, YAN Xuesong, ZENG Deze
    Computer Science    2024, 51 (2): 293-299.   DOI: 10.11896/jsjkx.230100031
    With the rapid development of computer technology, cloud computing has become one of the best ways to meet users' storage and computing demands. In particular, dynamic virtual machine scheduling under the NUMA architecture has become a hot topic in academia and industry. However, in current research heuristic algorithms struggle to schedule virtual machines in real time, and most of the literature does not consider the energy consumption caused by virtual machine scheduling under the NUMA architecture. This paper proposes a deep reinforcement learning based service migration framework for virtual machines in large-scale mobile cloud centers, constructs an energy consumption model under the NUMA architecture, and proposes hierarchical adaptive sampling soft actor-critic (HASAC). In cloud computing scenarios, the proposed algorithm is compared with classical deep reinforcement learning methods. Experimental results show that the improved algorithm handles more user requests in different scenarios while consuming less energy. In addition, experiments on the individual strategies within the algorithm verify their effectiveness.
    Study on Cache-oriented Dynamic Collaborative Task Migration Technology
    ZHAO Xiaoyan, ZHAO Bin, ZHANG Junna, YUAN Peiyan
    Computer Science    2024, 51 (2): 300-310.   DOI: 10.11896/jsjkx.230600128
    Task migration technology has been propelled by the continuous emergence of compute-intensive and delay-sensitive services in edge networks. However, task migration is hindered by technical bottlenecks such as complex and time-varying application scenarios and the difficulty of problem modeling. Especially when user movement is considered, designing a reasonable task migration strategy that ensures the stability and continuity of user service remains a persistent challenge. Therefore, a mobility-aware service pre-caching model and task pre-migration strategy are proposed, transforming the task migration problem into an optimization problem that combines optimal clustering strategies with edge service pre-caching. First, the current state of a task is predicted from the user's movement trajectory. To decide when and where to migrate, a pre-migration model for two task scenarios, mobility and load, is proposed by introducing the concepts of dynamic cooperation cluster and migration prediction radius. Then, for the tasks that need to be migrated, the maximum tolerable delay constraint is used to derive limits on the cooperative cluster radius and the number of target servers in a cluster. Subsequently, a user-centric distributed dynamic multi-server cooperative clustering algorithm (DDMC) and a cache-based double deep Q-network algorithm (C-DDQN) are proposed to solve the optimal clustering and service caching problems. Finally, a low-complexity alternating minimization algorithm for updating service cache locations is designed using the causality of service caches to obtain the optimal set of migration target servers, realizing server collaboration and network load balancing during task migration. Experimental results demonstrate the robustness and system performance of the proposed migration selection algorithm: compared with other algorithms, the total cost is reduced by at least 12.06% and the total latency by at least 31.92%.
    EAGLE: A Network Telemetry Mechanism Based on Telemetry Data Graph in Kernel and User Mode
    XIAO Zhaobin, CUI Yunhe, CHEN Yi, SHEN Guowei, GUO Chun, QIAN Qing
    Computer Science    2024, 51 (2): 311-321.   DOI: 10.11896/jsjkx.221100196
    Network telemetry is a new type of network measurement technology characterized by strong real-time performance, high accuracy and low overhead. Existing network telemetry technologies cannot collect multi-granularity network data, cannot effectively store large amounts of raw network data, cannot quickly extract and generate network telemetry information, and do not exploit both kernel-mode and user-mode features in their design. To solve these problems, this paper proposes EAGLE, a multi-granularity, scalable, network-wide telemetry mechanism that integrates kernel mode and user mode and is based on telemetry data graphs and synchronization control blocks. On the data plane, EAGLE designs a flexible and controllable telemetry packet structure that can collect multi-granularity data for upper-layer applications. In addition, to quickly store, query, count and aggregate network state data and to rapidly extract and generate the telemetry data required by telemetry packets, EAGLE proposes a telemetry information generation method based on telemetry data graphs and synchronization control blocks. On this basis, to maximize the processing efficiency of telemetry packets, EAGLE proposes a telemetry information embedding architecture that combines kernel-mode and user-mode characteristics. Finally, this paper implements and tests EAGLE on Open vSwitch. The test results show that EAGLE can collect multi-granularity data and quickly extract and generate telemetry data with only a small increase in processing time and resource usage.
    Research Developments of 5G Network Slicing
    TIAN Chenjing, XIE Jun, CAO Haotong, LUO Xijian, LIU Yaqun
    Computer Science    2023, 50 (11): 282-295.   DOI: 10.11896/jsjkx.221100044
    As a key enabling technology for fifth-generation communication networks and beyond, network slicing (NS) has received a surge of attention and recognition from network operators and academia for its promise in vertical industry customization, quality of service (QoS) assurance, isolation, flexibility and reliability. In recent years, many institutions have presented their understanding of NS and their development plans through special reports or white papers. However, these works have varying focuses and non-standardized terminology, which hampers researchers' overall grasp of the NS picture. To help researchers understand the developmental context, technology architecture, management and orchestration, and other relevant aspects of NS, this paper presents a comprehensive review of recent related work. First, it provides an overview of NS, examining its historical background, definitions and key characteristics. Subsequently, end-to-end NS realization is discussed in three components: access NS, carrier NS and core NS. For each component, the network architecture developments, technological breakthroughs and standardization achievements of recent years are presented and analyzed. Afterward, network slice management and orchestration are introduced, and the relevant research is discussed according to slicing scenario. Finally, in view of NS's development requirements and practical dilemmas, several open research problems are identified.
    RFID Multi-tag Relative Location Method Based on RSSI Sequence Features
    HE Yong, GUO Zhengxin, GUI Linqing, SHENG Biyun, XIAO Fu
    Computer Science    2023, 50 (11): 296-305.   DOI: 10.11896/jsjkx.230300165
    High-precision indoor multi-target localization is crucial for customized intelligent services. Indoor localization based on radio frequency identification (RFID) has received extensive attention from academia and industry due to its low cost, easy deployment and multi-target sensing capability. However, traditional RFID-based multi-target relative localization systems require multiple receiving antennas for data transmission and reception, leading to high deployment costs, and the received signal strength indication (RSSI) sequences also suffer from data interruption. To address these problems, this paper proposes an RFID multi-tag relative localization method based on RSSI sequence features. The method first uses a uniformly moving antenna to obtain the RSSI sequences of multiple target tags. The received RSSI sequence data is then pre-processed to fill in missing values, and a sequence similarity measurement table is constructed based on cosine similarity. Finally, tag grouping algorithms over multiple grouping dimensions are designed to achieve relative localization of multiple RFID tags. Extensive relative localization tests on a typical indoor multi-group RFID tag array show that the proposed method achieves an average relative localization accuracy of over 92%, and the average localization computation time for a 5×5 antenna array is less than 1 s. Compared with other relative localization work, the computational efficiency of this method is improved by nearly 10 times.
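A minimal sketch of the similarity measurement table, assuming column-mean filling as the gap pre-processing step (the paper's filling method may differ):

```python
import numpy as np

def similarity_table(rssi):
    """Pairwise cosine similarity between tags' RSSI sequences (one row
    per tag). Missing samples are filled with the column mean first, a
    simple stand-in for the paper's pre-processing."""
    X = np.array(rssi, dtype=float)
    X = np.where(np.isnan(X), np.nanmean(X, axis=0), X)   # fill gaps
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T                                      # symmetric table

rssi = [[-50, -52, np.nan, -55],
        [-51, -52, -54, -56],
        [-70, -68, -66, np.nan]]
print(np.round(similarity_table(rssi), 3))
```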
    Adaptive Model Quantization Method for Intelligent Internet of Things Terminal
    WANG Yuzhan, GUO Bin, WANG Hongli, LIU Sicong
    Computer Science    2023, 50 (11): 306-316.   DOI: 10.11896/jsjkx.230300078
    With the rapid development of deep learning and the Internet of Everything, combining deep learning with mobile terminal devices has become a major research hotspot. While deep learning improves the performance of terminal devices, deploying models on resource-constrained terminals faces many challenges, such as limited computing and storage resources and the inability of deep learning models to adapt to changing device contexts. We focus on resource-adaptive quantization of deep models. Specifically, a resource-adaptive mixed-precision model quantization method is proposed: it uses a gated network and a backbone network to construct the model, partitions the model at layer granularity to find the best quantization policy, and works with edge devices to reduce resource consumption. To find the optimal quantization policy, FPGA-based deployment of the deep learning model is adopted. When the model is deployed on resource-constrained edge devices, adaptive training is performed according to the resource constraints, and a quantization-aware method is adopted to reduce the accuracy loss caused by quantization. Experimental results show that our method reduces storage space by 50% while retaining 78% accuracy, and reduces energy consumption by 60% on an FPGA device with no more than 2% accuracy loss.
    Efficient Distributed Training Framework for Federated Learning
    FENG Chen, GU Jingjing
    Computer Science    2023, 50 (11): 317-326.   DOI: 10.11896/jsjkx.221100224
    Federated learning effectively solves the problem of isolated data islands, but several challenges remain. First, the training nodes in federated learning have highly heterogeneous hardware, which affects training speed and model performance. Existing research mainly focuses on federated optimization, but most methods do not solve the resource waste caused by the differing computation times of nodes in the synchronous communication mode. In addition, most training nodes in federated learning are mobile devices, so poor network environments lead to high communication overhead and serious network bottlenecks. Existing methods reduce communication overhead by compressing the gradients uploaded by the training nodes, but inevitably incur a loss of model performance and find it hard to strike a good balance between quality and speed. To solve these problems, in the computing stage this paper proposes adaptive federated averaging (AFA), which adaptively coordinates the local iterations according to the hardware performance of each node, minimizing the idle time spent waiting for the global gradient download and improving the computational efficiency of federated learning. In the communication stage it proposes double sparsification (DS), which minimizes communication overhead by applying gradient sparsification on both the training nodes and the parameter server. In addition, each training node compensates the error according to the lost parts of the local and global gradients, greatly reducing communication cost in exchange for only a small loss of model performance. Experimental results on an image classification dataset and a spatio-temporal prediction dataset show that the proposed method effectively improves the training speedup ratio and also benefits model performance.
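A minimal sketch of one side of the double sparsification idea: top-k gradient compression with local error compensation on a training node, with k and the tensor shape chosen arbitrarily for illustration:

```python
import numpy as np

class SparseUploader:
    """Top-k gradient sparsification with error compensation: coordinates
    dropped this round are remembered in a residual and added back next
    round, so the compression error does not accumulate."""
    def __init__(self, dim, k):
        self.residual = np.zeros(dim)
        self.k = k

    def compress(self, grad):
        g = grad + self.residual                       # compensate old error
        idx = np.argpartition(np.abs(g), -self.k)[-self.k:]
        self.residual = g.copy()
        self.residual[idx] = 0.0                       # sent values leave the residual
        return idx, g[idx]                             # sparse upload

up = SparseUploader(dim=10, k=3)
g = np.random.default_rng(0).normal(size=10)
print(up.compress(g))
```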
    Joint Layered Message Passing Detection for Multi-user Large-scale LDPC-SM-MIMO System
    ZOU Xin, ZHANG Shunwai
    Computer Science    2023, 50 (11): 327-332.   DOI: 10.11896/jsjkx.220900103
    Message passing detection (MPD) is the most commonly used detection algorithm in multi-user large-scale spatial modulation multi-input multi-output (SM-MIMO) systems, but the traditional MPD algorithm is still complex. To overcome this problem, the layered MPD (LMPD) algorithm is used to accelerate convergence. Then, low-density parity-check (LDPC) codes are combined with the SM-MIMO system, and a joint LMPD-belief propagation (JLMPD-BP) algorithm, in which LMPD uses the feedback information of BP decoding, is proposed to further improve detection performance. Theoretical analysis and simulation results show that, compared with the traditional MPD algorithm, the LMPD algorithm converges faster without losing bit error rate (BER) performance; for example, at a signal-to-noise ratio of 4 dB, LMPD needs only 2 iterations while MPD needs 3. Meanwhile, thanks to the strength of LDPC codes, the JLMPD-BP algorithm greatly reduces the system BER: with iteration numbers (2,2,2) and SNR = 2 dB, the BER of JLMPD-BP decreases from 10^-2 to 5×10^-3 compared with the LMPD-BP algorithm with iterations (4,4,0).
    vsocket: An RDMA-based Acceleration Method Compatible with Standard Socket
    CHEN Yunfang, MAO Haotian, ZHANG Wei
    Computer Science    2023, 50 (10): 239-247.   DOI: 10.11896/jsjkx.220800048
    To maintain compatibility with standard Linux sockets while using RDMA to improve the performance of socket-based programs, this paper proposes the Viscore Socket adaptor (vsocket), a middleware between upper-layer applications and the underlying RDMA. By intercepting the socket API, it seamlessly transfers the data streams that upper-layer applications send and receive through Linux sockets onto an RDMA bearer. vsocket bypasses the kernel and implements a user-space memory management mechanism for TCP and UDP. It uses RC-type RDMA networking to support TCP acceleration, UD-type RDMA networking to support UDP acceleration, and reuses Linux UDP to assist routing. Experimental results show that vsocket preserves compatibility with the standard Linux socket interface, escapes the limitations of the kernel network protocol stack, and improves network performance.
    Edge Server Placement Algorithm Based on Spectral Clustering
    GUO Yingya, WANG Lijuan, GENG Haijun
    Computer Science    2023, 50 (10): 248-257.   DOI: 10.11896/jsjkx.220900211
    With the rapid development of the Internet of Things (IoT) and 5G networks, mobile edge computing has attracted widespread attention from industry and academia for its low access latency, low bandwidth cost and low energy consumption. In mobile edge computing, edge servers serve mobile user requests, and their placement has an important impact on edge computing performance and user experience. Current edge server placement algorithms consider only the geographical locations of servers and ignore the number of users connected to each base station, so under uneven user distributions the placements they produce cause high average user access delay. To address this, this paper proposes a latency-minimizing edge server placement algorithm based on spectral clustering (LAMP). When solving the placement problem, the algorithm considers not only the geographical locations of base stations but also the number of users connected to each one, which effectively reduces average access latency while balancing the workload across edge servers. In simulation experiments, the real Shanghai Telecom base station dataset is used to test the proposed algorithm. The results show that, in terms of access latency, LAMP improves performance by 37.9% over the K-means algorithm, and in terms of load balancing by up to 82.85%. The LAMP algorithm thus exhibits superior performance in reducing access latency and balancing edge server workloads.
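A minimal sketch of user-count-aware spectral placement using scikit-learn; the way user counts are folded into the affinity and the weighted-centroid placement are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def place_servers(bs_xy, users, k, gamma=0.5):
    """Spectral clustering of base stations on a similarity mixing
    distance with per-station user counts, then one server per cluster
    at the user-weighted centroid."""
    d2 = np.sum((bs_xy[:, None] - bs_xy[None, :]) ** 2, axis=-1)
    affinity = np.outer(users, users) ** 0.25 * np.exp(-gamma * d2)
    labels = SpectralClustering(n_clusters=k, affinity="precomputed",
                                random_state=0).fit_predict(affinity)
    sites = [np.average(bs_xy[labels == c], axis=0,
                        weights=users[labels == c]) for c in range(k)]
    return labels, np.array(sites)

rng = np.random.default_rng(1)
bs_xy = rng.uniform(0.0, 10.0, size=(30, 2))   # base station coordinates
users = rng.integers(1, 100, size=30)          # users per base station
print(place_servers(bs_xy, users, k=4)[1])
```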
    UAV Geographic Location Routing Protocol Based on Cross Layer Link Quality State Awareness
    ZHOU Yanling, MI Zhichao, LU Yanxia, WANG Hai
    Computer Science    2023, 50 (10): 258-265.   DOI: 10.11896/jsjkx.230500221
    Geographic location routing protocols are widely used in FANETs due to their low overhead and good scalability, but their greedy-forwarding strategy of always relaying through the nearest neighbor node has limitations. This paper proposes a cross-layer link quality state aware UAV geographic location routing protocol (CLAQ-GPSR), suited to frequently changing topologies and congested network environments, that senses channel link quality. It establishes a communication security zone, builds measurement models for link load and inter-flow interference, uses the expected transmission count (ETX), computed from delivery ratios, to measure link quality, and combines data from the physical, MAC and network layers to select the most reliable relay nodes, improving communication quality. At the same time, combined left-hand and right-hand forwarding rules accelerate path recovery and avoid the routing loops that occur in traditional perimeter forwarding. Comparative analysis on a network simulation platform shows that, compared with the traditional GPSR, W-GeoR and DGF-ETX protocols, the proposed protocol has advantages in packet delivery success rate, end-to-end latency and average hop count.
    Performance Analysis of Multi-server Gated Service System Based on BiLSTM Neural Networks
    YANG Zhijun, HUANG Wenjie, DING Hongwei
    Computer Science    2023, 50 (10): 266-274.   DOI: 10.11896/jsjkx.221000221
    To meet the requirements of fast operation, low delay, good performance and fairness, a multi-server gated service system is proposed and analyzed predictively using a BiLSTM (bi-directional long short-term memory) neural network. Multi-server access reduces network delay and improves system performance, and multiple servers can be scheduled either synchronously or asynchronously. First, the system model of the multi-server gated service is investigated. Second, building on the single-server case, the average queue length, average cycle period and average delay of the multi-server gated service are derived using embedded Markov chain and probability generating function methods. Matlab simulations then compare the theoretical and simulated values for the single-server and multi-server systems, and compare the synchronous and asynchronous multi-server approaches. Finally, a BiLSTM neural network is constructed to predict the performance of the multi-server system. Experiments show that the asynchronous multi-server approach outperforms both the synchronous approach and the single-server system, with better performance, lower delay and higher efficiency. Among the three basic multi-server service disciplines, the gated service system is more stable while ensuring fairness, and the BiLSTM prediction algorithm accurately predicts system performance and improves computational efficiency, providing guidance for the performance evaluation of polling systems.
    Cost-minimizing Task Offload Strategy for Mobile Devices Under Service Cache Constraint
    ZHANG Junna, CHEN Jiawei, BAO Xiang, LIU Chunhong, YUAN Peiyan
    Computer Science    2023, 50 (10): 275-281.   DOI: 10.11896/jsjkx.220900185
    Edge computing provides more computing and storage capability at the edge of the network, effectively reducing the execution delay and power consumption of mobile devices. As applications consume ever more computing and storage resources, task offloading has become one of the effective ways to overcome the inherent limitations of mobile terminals. However, existing research on task offloading often ignores the diverse service requirements of different task types and the limited service capabilities of edge servers, resulting in infeasible offloading decisions. We therefore study the task offloading problem of optimizing the execution cost of mobile devices under service cache constraints. We first design a collaborative offloading model integrating the remote cloud, edge servers and local devices to balance edge server load, with the cloud server compensating for the limited service caching capacity of the edge servers. Second, a task offloading algorithm for cloud-edge-device collaboration is proposed to optimize the execution delay and energy cost of mobile devices. When a task is offloaded, an improved greedy algorithm selects the best edge server, and the offloading decision is determined by comparing the execution cost of the task at the different locations. Experimental results show that the proposed algorithm effectively reduces the execution cost of mobile devices compared with the baseline algorithms.
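A toy sketch of the location-cost comparison at the heart of the decision, assuming a common transmission-plus-computation cost model with device-side energy only (not the paper's exact model); all parameter values are illustrative:

```python
def best_location(task_cycles, data_bits, f_local, f_edge, f_cloud,
                  r_edge, r_cloud, w_time=0.5, w_energy=0.5, kappa=1e-27):
    """Compare the weighted time/energy cost of running a task locally,
    on the chosen edge server, or in the cloud, and pick the cheapest.
    kappa * f^2 * cycles is the usual CMOS energy approximation."""
    costs = {
        "local": w_time * task_cycles / f_local
                 + w_energy * kappa * f_local ** 2 * task_cycles,
        "edge":  w_time * (data_bits / r_edge + task_cycles / f_edge),
        "cloud": w_time * (data_bits / r_cloud + task_cycles / f_cloud),
    }
    return min(costs, key=costs.get), costs

print(best_location(5e8, 2e6, f_local=1e9, f_edge=5e9, f_cloud=2e10,
                    r_edge=2e7, r_cloud=5e6))   # -> edge wins here
```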
    Bidirectional Quality Control Strategies Based on CIDA and PI-cosine in Crowdsourcing
    LIU Qingju, PAN Qingxian, TONG Xiangrong, YU Song, PAN Yanan
    Computer Science    2023, 50 (10): 282-290.   DOI: 10.11896/jsjkx.221000133
    With the popularity of mobile smart terminals, collecting large-scale sensing data through crowdsourcing has become easier and easier. The selfishness of crowdworkers makes them want maximum pay for minimum effort, and they may even collude with each other and submit arbitrary crowdsourced data, degrading the quality of task completion. This paper proposes a jury-based quality control strategy, a mechanism that solves the data validation problem. To counter behaviors that degrade crowdsourcing quality, after determining the presence of spam workers and collusion organizations, the proposed community influence detection algorithm (CIDA) detects collusion leaders and their organizations, and an improved similarity detection algorithm (PI-cosine) screens out spam workers. Together these two measures improve the quality of crowdsourced data. Experiments show that the proposed method improves accuracy and F1-score by 12.3% over the cosine similarity detection algorithm.
    Reliability Constraint-oriented Workflow Scheduling Strategy in Cloud Environment
    LI Jinliang, LIN Bing, CHEN Xing
    Computer Science    2023, 50 (10): 291-298.   DOI: 10.11896/jsjkx.220800039
    As more and more computation-intensive dependent applications are offloaded to the cloud for execution, the workflow scheduling problem has received extensive attention. Aiming at multi-objective workflow scheduling in the cloud, and considering that servers may experience performance fluctuations and downtime during task execution, task execution time and data transmission time are represented as triangular fuzzy numbers based on fuzzy theory. A genetic algorithm based adaptive particle swarm optimization (APSOGA) is proposed to jointly optimize the completion time and execution cost of a workflow under its reliability constraints. To avoid the premature convergence of traditional particle swarm optimization, the algorithm introduces the random two-point crossover and single-point mutation operations of genetic algorithms, effectively improving its search performance. Experimental results show that, compared with other strategies, the APSOGA-based scheduling strategy effectively reduces the time and cost of reliability-constrained scientific workflows in cloud environments.
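A minimal sketch of triangular fuzzy arithmetic for schedule estimates; the graded-mean defuzzification shown is one standard choice and an assumption here, as the paper may defuzzify differently:

```python
class TFN:
    """Triangular fuzzy number (a, m, b): pessimistic / most likely /
    optimistic estimates, e.g. of a task's execution time."""
    def __init__(self, a, m, b):
        self.a, self.m, self.b = a, m, b

    def __add__(self, o):                    # fuzzy addition, term by term
        return TFN(self.a + o.a, self.m + o.m, self.b + o.b)

    def defuzzify(self):                     # graded-mean defuzzification
        return (self.a + 4 * self.m + self.b) / 6

    def __repr__(self):
        return f"TFN({self.a}, {self.m}, {self.b})"

exec_time, transfer_time = TFN(2.0, 3.0, 5.0), TFN(0.5, 1.0, 2.0)
total = exec_time + transfer_time
print(total, "≈", total.defuzzify())         # crisp value for fitness ranking
```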
    Edge Intelligent Sensing Based UAV Space Trajectory Planning Method
    LIU Xingguang, ZHOU Li, ZHANG Xiaoying, CHEN Haitao, ZHAO Haitao, WEI Jibo
    Computer Science    2023, 50 (9): 311-317.   DOI: 10.11896/jsjkx.220800032
    With the emergence of a large number of frequency-using devices, the radio environment in which UAVs perform tasks has become increasingly complex, placing higher demands on UAV situation awareness and autonomous obstacle avoidance. This paper therefore proposes a 3D UAV trajectory planning method based on edge-device collaboration. First, an edge-device collaborative trajectory planning framework is proposed that jointly improves the environment perception and autonomous obstacle avoidance capabilities of UAVs under communication connectivity constraints. Second, an artificial potential field method based on the deep deterministic policy gradient (DDPG) algorithm is proposed to keep UAVs from falling into local minima and to optimize flight energy consumption. Finally, simulation experiments in static and dynamic interference environments show that, compared with other trajectory planning methods, the proposed method optimizes the UAV flight trajectory and transmission data rate, reducing flight energy consumption by 5.59% and 11.99% and improving the transmission data rate by 7.64% and 16.52% in static and dynamic interference environments respectively. It also significantly improves communication stability and the adaptability of UAVs to complex electromagnetic environments.
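For reference, the classical artificial potential field that the DDPG policy augments can be sketched as below; the gains and influence radius are illustrative, and in the paper they are effectively replaced by learned behavior:

```python
import numpy as np

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Classical artificial potential field: attractive pull toward the
    goal plus repulsive push from each obstacle inside influence radius
    d0. Local minima of this field are what the learned policy avoids."""
    force = k_att * (goal - pos)                       # attraction
    for ob in obstacles:
        diff = pos - ob
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:                              # inside influence zone
            force += k_rep * (1 / d - 1 / d0) / d ** 3 * diff
    return force

pos, goal = np.array([0.0, 0.0, 1.0]), np.array([10.0, 10.0, 5.0])
print(apf_force(pos, goal, [np.array([1.0, 1.0, 1.0])]))
```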
    EGCN-CeDML: A Distributed Machine Learning Framework for Vehicle Driving Behavior Prediction
    LI Ke, YANG Ling, ZHAO Yanbo, CHEN Yonglong, LUO Shouxi
    Computer Science    2023, 50 (9): 318-330.   DOI: 10.11896/jsjkx.221000064
    In large-scale dynamic traffic scenarios, predicting vehicle driving behavior quickly and accurately is one of the most challenging issues in intelligent traffic driving. Such prediction must consider not only communication efficiency but also historical vehicle trajectories and the interactions between vehicles. Considering these factors, this paper proposes a communication-efficient distributed machine learning framework based on edge-enhanced graph convolutional neural networks (EGCN-CeDML). Compared with a centralized prediction framework on a single device, EGCN-CeDML does not need to transmit all raw data to a cloud server: user data is stored, processed and computed locally. Training neural networks across multiple edge devices in this way relieves the pressure of centralized training, reduces the amount of transmitted data and the communication latency, improves data processing efficiency, and preserves user privacy to a certain extent. The EGCN-LSTM deployed on each edge device uses an edge-enhanced attention mechanism and the feature transfer mechanism of graph convolutional neural networks to promptly extract and propagate interaction information even when the number of surrounding vehicles grows to more than a dozen, ensuring accurate prediction with low time complexity. Beyond driving behavior prediction, each edge device can flexibly control the type and scale of its neural network according to its own computing and storage capabilities while maintaining performance, making the framework suitable for different application scenarios. Experimental results on the public NGSIM dataset show that EGCN-CeDML transmits only 21.56% of the data required by centralized training, and its computation time and prediction performance are better than those of previous models regardless of traffic complexity, with an accuracy of 0.939 1, a recall of 0.955 7 and an F1 score of 0.947 3. For a one-second prediction horizon the accuracy reaches 91.21%, and even as the number of vehicles increases the algorithm maintains low time complexity, staying within 0.1 seconds.
    Feature Weight Perception-based Prediction of Virtual Network Function Resource Demands
    WANG Huaiqin, LUO Jian, WANG Haiyan
    Computer Science    2023, 50 (9): 331-336.   DOI: 10.11896/jsjkx.221000012
    Virtual network functions (VNFs) provide services in the form of service function chains (SFCs) to meet the performance requirements of different services. Due to the dynamic nature of the network, allocating fixed resources to VNF instances leads to over- or under-provisioning. Previous studies have not distinguished the importance of the network load features related to VNF profiles. Therefore, a dynamic VNF resource demand prediction method based on feature weight perception is proposed. First, ECANet is used to learn weight values for VNF features, reducing the negative impact of useless features on the prediction results. Second, because VNF profile datasets have structural characteristics, the prediction model needs to mine the deep interrelationships between features by strengthening feature interaction; a deep feature interaction network (DIN) is therefore used to enhance the interaction between network load features and VNF performance features and improve prediction accuracy. Finally, comparison with similar methods on a benchmark dataset shows that the proposed method is more effective and accurate.
    Routing Protection Scheme with High Failure Protection Ratio Based on Software-defined Network
    GENG Haijun, WANG Wei, ZHANG Han, WANG Ling
    Computer Science    2023, 50 (9): 337-346.   DOI: 10.11896/jsjkx.220900220
    SDN has attracted extensive attention from academia for its programmability and centralized control. Existing SDN devices still use the shortest path protocol when forwarding packets: when a node on the shortest path fails, the network must re-converge, during which packets may be dropped and fail to reach the destination node, affecting real-time application flows and user experience. Routing protection schemes are generally adopted to deal with network failures, but existing schemes have two problems: (1) a low failure protection ratio; and (2) possible routing loops on the backup path when the network fails. To solve these two problems, a backup next-hop calculation rule is proposed, and based on it a routing protection algorithm with a high failure protection ratio (RPAHFPR) is designed, combining a path generation algorithm (PGA), a side branch first algorithm (SBF) and a loop avoidance algorithm (LAA). It simultaneously addresses the low failure protection ratio and routing loop problems of existing routing protection methods. Finally, the performance of RPAHFPR is verified on a large number of real and simulated network topologies. Compared with the classic NPC and U-TURN, the failure protection ratio of RPAHFPR improves by 20.85% and 11.88% respectively; it achieves a 100% failure protection ratio on 86.3% of topologies and more than 99% on all topologies. Its path stretch is essentially close to 1, introducing little extra delay.
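A minimal sketch of a loop-free backup next-hop test in the same spirit as the paper's calculation rule, using the classic loop-free alternate (LFA) condition dist(N,d) < dist(N,s) + dist(s,d) as a stand-in (the paper's actual rule differs):

```python
import networkx as nx

def backup_next_hops(G, src, dst):
    """Neighbor N of src is a safe backup toward dst if
    dist(N, dst) < dist(N, src) + dist(src, dst), which guarantees N
    will not route the packet back through src (no routing loop)."""
    d = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
    primary = nx.shortest_path(G, src, dst, weight="weight")[1]
    return [n for n in G.neighbors(src)
            if n != primary and d[n][dst] < d[n][src] + d[src][dst]]

G = nx.Graph()
G.add_weighted_edges_from([("s", "a", 1), ("s", "b", 1), ("a", "d", 1),
                           ("b", "d", 3), ("a", "b", 1)])
print(backup_next_hops(G, "s", "d"))   # -> ['b']: usable if 'a' fails
```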
    Task Offloading Algorithm Based on Federated Deep Reinforcement Learning for Internet of Vehicles
    LIN Xinyu, YAO Zewei, HU Shengxi, CHEN Zheyi, CHEN Xing
    Computer Science    2023, 50 (9): 347-356.   DOI: 10.11896/jsjkx.220800243
    With the rapid development of Internet of Vehicles applications, vehicles with limited computational resources have difficulty handling computation-intensive and latency-sensitive applications. As a key technique of mobile edge computing, task offloading can address this challenge. Specifically, a task offloading algorithm based on federated deep reinforcement learning (TOFDRL) is proposed for dynamic multi-vehicle, multi-road-side-unit (multi-RSU) task offloading environments in the Internet of Vehicles. Each vehicle is treated as an agent, a federated learning framework is used to train the agents, and each agent makes distributed decisions with the goal of minimizing the average system response time. Evaluation experiments compare and analyze the performance of the proposed algorithm in a variety of dynamically changing scenarios. Simulation results show that its average system response time is shorter than that of rule-based algorithms and a multi-agent deep reinforcement learning algorithm, and close to the ideal scheme, while its solution time is much shorter than that of the ideal solution. The proposed algorithm thus achieves an average system response time close to the ideal solution within an acceptable execution time.
    Solution to Cross-domain VPN Based on Virtualization
    TAO Zhiyong, ZHANG Jin, YANG Wangdong
    Computer Science    2023, 50 (9): 357-362.   DOI: 10.11896/jsjkx.220800252
    To address the complexity of building cross-domain virtual private networks (VPNs) in current carrier networks, the excessive load on autonomous system border devices, and single points of failure, this paper proposes a virtualization-based solution for building cross-domain VPNs. The scheme consists of four fundamental steps: establishing public network tunnels, establishing local VPN instances, virtualizing the autonomous system border devices, and exchanging the private network routes of the border devices. To evaluate its feasibility and advantages, comparative experiments are conducted against a cross-domain VPN built with the traditional multi-hop EBGP approach, measuring switching capacity, route entries and label entries. Experimental results show that the cross-domain VPN constructed by this scheme enhances the data processing capability of the autonomous system border devices and reduces the amount of data they must process. Overall, the improved scheme is effective for building cross-domain VPNs.
    Edge Offloading Framework for D2D-MEC Networks Based on Deep Reinforcement Learning and
    ZHANG Naixin, CHEN Xiaorui, LI An, YANG Leyao, WU Huaming
    Computer Science    2023, 50 (8): 233-242.   DOI: 10.11896/jsjkx.220900181
    Mobile edge computing needs exactly what IoT devices have in abundance: underutilized computing resources. An edge offloading framework based on device-to-device (D2D) communication and wireless charging technology can maximize the utilization of idle IoT devices' computing resources and improve user experience, and a D2D-MEC network model of IoT devices can be built on this basis. In this model, a device offloads multiple tasks to multiple edge devices according to the current environment information and the estimated device states, and wireless charging is applied to increase the transmission success rate and computation stability. A reinforcement learning method is used to solve the joint optimal allocation problem, which aims to minimize computation delay, energy consumption and task-dropping loss while maximizing the utilization of edge devices and the proportion of offloaded tasks. To handle the larger state space and improve learning speed, an offloading scheme based on deep reinforcement learning is further proposed. Based on this model, the optimal solution and the performance upper bound of the D2D-MEC system are derived mathematically. Simulation results show that the D2D-MEC offloading model and its offloading strategy have better all-round performance and make full use of the computing resources of IoT devices.
    Analysis and Prediction of Cloud VM CPU Load Based on EMPC-BCGRU
    XIE Tonglei, DENG Li, YOU Wenlong, LI Ruilong
    Computer Science    2023, 50 (8): 243-250.   DOI: 10.11896/jsjkx.220600264
    Cloud platform resource prediction is of great significance for resource management and energy saving. Cloud virtual machine (VM) technology is the virtualization method clouds use to make full use of physical resources, but effective cloud VM load prediction remains challenging: VM load exhibits periodic and aperiodic change patterns and sudden load peaks, and is affected by users submitting jobs at random. To accurately analyze VM load change patterns and improve VM CPU load prediction, a cloud VM load prediction method based on a decomposition-prediction approach is proposed. By decomposing the cloud VM load with EMD and PCA, characteristic fluctuation sequences at different time scales are obtained. The convolution layer of the prediction model fully extracts the decomposed features, and a bidirectional gated recurrent neural network learns the forward and backward dependencies of the sequences, improving the model's ability to learn VM load change patterns. Finally, single-step and multi-step prediction experiments on Microsoft Azure's 2019 VM dataset, collected in a real cloud environment, verify the effectiveness of the prediction method.