Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
    Network & Communication articles in this journal
    MEC Offloading Model Based on Linear Programming Relaxation
    LEI Xuemei, LIU Li, WANG Qian
    Computer Science    2023, 50 (6A): 211200229-5.   DOI: 10.11896/jsjkx.211200229
    Abstract views: 322 | PDF (2473KB) downloads: 203
    In mobile edge computing (MEC), a local device can offload tasks to a nearby edge node for processing, thereby reducing the delay, power consumption and overload of the client as well as the computing load on the core network. For a complex MEC environment with multiple types of edge nodes, a three-stage computation offloading decision is modeled based on linear programming relaxation, namely the CART-CRITIC-LR (CCLR) algorithm. First, the classification and regression tree algorithm (CART) is used to screen out the calculation tasks to be executed locally. Second, the multi-attribute decision-making algorithm CRITIC is used to determine the weights of the three performance indicators. Then the computation offloading problem is modeled as a linear programming relaxation (LR) to optimize the equilibrium among total delay, total energy consumption and total cost. Each offloading strategy is analyzed by comprehensively comparing energy consumption, cost and delay. Experimental results show that the CCLR algorithm achieves the shortest total delay while ensuring multi-objective global optimization, which illustrates the effectiveness and applicability of the algorithm.
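The CRITIC weighting step described above can be sketched in plain Python. The sample matrix, the min-max normalization and the column order (delay, energy, cost) are illustrative assumptions, not details taken from the paper:

```python
import statistics

def critic_weights(matrix):
    # rows = candidate offloading strategies; columns = criteria
    # (e.g. delay, energy, cost)
    ncols = len(matrix[0])
    cols = [[row[j] for row in matrix] for j in range(ncols)]
    # min-max normalize each criterion to [0, 1]
    norm = []
    for c in cols:
        lo, hi = min(c), max(c)
        norm.append([(v - lo) / (hi - lo) if hi > lo else 0.0 for v in c])
    # contrast intensity: population standard deviation of each criterion
    sigma = [statistics.pstdev(c) for c in norm]

    def corr(a, b):
        # Pearson correlation between two criteria columns
        ma, mb = statistics.mean(a), statistics.mean(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = sum((x - ma) ** 2 for x in a) ** 0.5
        db = sum((y - mb) ** 2 for y in b) ** 0.5
        return num / (da * db) if da and db else 0.0

    # information content: contrast * conflict with the other criteria
    c_info = [sigma[j] * sum(1 - corr(norm[j], norm[k]) for k in range(ncols))
              for j in range(ncols)]
    total = sum(c_info)
    return [c / total for c in c_info]
```

Criteria that vary strongly and correlate weakly with the others receive larger weights, which is exactly the trade-off CRITIC encodes.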
    Dynamic Energy Optimization Strategy Based on Relay Selection and Queue Stability
    CHEN Che, ZHENG Yifeng, YANG Jingmin, YANG Liwei, ZHANG Wenjie
    Computer Science    2023, 50 (6A): 220100082-8.   DOI: 10.11896/jsjkx.220100082
    Abstract views: 205 | PDF (2236KB) downloads: 269
    Relay-assisted mobile edge computing (MEC) has recently emerged as a promising paradigm to enhance the resource utilization and data processing capability of low-power networks, such as 5G networks and the Internet of Things (IoT). Nevertheless, designing relay selection and computation offloading policies that improve energy efficiency while keeping the queues stable remains challenging. To solve the energy consumption optimization problem in a relay-assisted MEC system, a mixed-integer nonlinear stochastic optimization model is established, with the objective of minimizing the long-term average energy consumption subject to a task buffer stability constraint. The problem is decomposed into two stages: relay selection and relay offloading decision. In the relay selection stage, the relay node is determined by setting a weighted parameter V1 to minimize the weighted sum of transmission energy consumption and buffer queue length. In the offloading decision stage, the stochastic optimization is converted into a deterministic optimization problem based on the Lyapunov optimization method. Specifically, at each time slot, theoretical expressions for the optimal relay computation frequency, relay transmission power and remote computation frequency are obtained under the constraint of task buffer queue stability. Simulation results show that the energy optimization strategy can effectively reduce the long-term average energy consumption under the buffer queue stability constraint, and converge to the optimal solution obtained by exhaustive search. Besides, the weights of energy consumption and waiting time can be changed by adjusting the parameters V1 and V2 in the algorithm.
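The Lyapunov "drift-plus-penalty" conversion used in the offloading stage can be illustrated with a toy single-queue model. The candidate service rates, the quadratic power curve and the arrival distribution are all assumptions for the sketch, not the paper's system model:

```python
import random

def drift_plus_penalty_slot(Q, V, rates, energy):
    # per-slot deterministic subproblem: pick the service rate b that
    # minimizes V*energy(b) - Q*b (energy penalty vs. queue drift)
    return min(rates, key=lambda b: V * energy(b) - Q * b)

def simulate(T=2000, V=10.0, seed=1):
    random.seed(seed)
    rates = [0.0, 1.0, 2.0, 3.0]          # assumed discrete rate set
    energy = lambda b: b ** 2             # assumed convex power curve
    Q, max_q, total_e = 0.0, 0.0, 0.0
    for _ in range(T):
        a = random.uniform(0, 2)          # stochastic task arrivals
        b = drift_plus_penalty_slot(Q, V, rates, energy)
        total_e += energy(b)
        Q = max(Q + a - b, 0.0)           # task buffer queue update
        max_q = max(max_q, Q)
    return max_q, total_e / T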
    Cluster Head Selection Algorithm Based on Improved Butterfly Optimization Algorithm in WSN
    YANG Shiyu, ZHAO Bing, PENG Yue
    Computer Science    2023, 50 (6A): 220100166-5.   DOI: 10.11896/jsjkx.220100166
    Abstract views: 123 | PDF (2114KB) downloads: 242
    Aiming at the problem that unreasonable cluster head selection in clustering routing protocols for wireless sensor networks leads to uneven network load and a shortened network lifetime, a cluster head selection algorithm CIBOA based on an improved butterfly optimization algorithm (IBOA) is proposed. First, based on the butterfly optimization algorithm (BOA), the Circle chaotic map and a nonlinear dynamic convergence factor are introduced to control the parameters, which improves the search speed and convergence accuracy of the butterfly optimization algorithm and strengthens its search ability. In the cluster head selection process, a new fitness function is built on the basis of residual energy, the distance between nodes and the base station, and the average distance between neighboring nodes. IBOA is used to mitigate the randomness of cluster head selection and comprehensively select better cluster heads. Simulation results show that the CIBOA cluster head selection algorithm can comprehensively consider factors such as node energy and distance and prolong the network lifetime.
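The two ingredients named above, a Circle chaotic map for population initialization and a nonlinear convergence factor, can be sketched as follows. The map parameters (a=0.5, b=2.2) and the cosine-decay form of the factor are common choices assumed here, not the paper's exact settings:

```python
import math

def circle_map_sequence(n, x0=0.3, a=0.5, b=2.2):
    # Circle chaotic map: x_{k+1} = (x_k + a - (b/2pi) * sin(2pi x_k)) mod 1
    # used to spread the initial population over [0, 1)
    xs, x = [], x0
    for _ in range(n):
        x = (x + a - (b / (2 * math.pi)) * math.sin(2 * math.pi * x)) % 1.0
        xs.append(x)
    return xs

def nonlinear_factor(t, t_max, c_start=1.0, c_end=0.1):
    # nonlinear (cosine-decay) convergence factor: large early for
    # exploration, small late for exploitation (assumed form)
    return c_end + (c_start - c_end) * (1 + math.cos(math.pi * t / t_max)) / 2
```

The chaotic sequence replaces uniform random initialization, and the factor is multiplied into BOA's position update to shrink step sizes as iterations progress.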
    Tag Identification for UHF RFID Systems Based on Deep Learning
    YU Jiabao, YAO Junmei, XIE Ruitao, WU Kaishun, MA Junchao
    Computer Science    2023, 50 (6A): 220200151-6.   DOI: 10.11896/jsjkx.220200151
    Abstract views: 247 | PDF (3305KB) downloads: 233
    The most basic function of a radio frequency identification (RFID) system is tag identification. However, current authentication systems cannot detect forged or cloned tags, which leads to potential security and privacy issues. Existing solutions are either encryption-based authentication protocols, which are incompatible with existing protocols, or feature-extraction-based schemes, which suffer from limitations such as difficult feature extraction or short recognition distance. This paper proposes a tag identification method for UHF RFID systems that overcomes both shortcomings. The core idea is to first extract signals irrelevant to the logical information of tags from the backscattered RFID signals, and then send them to a convolutional neural network for similarity matching. Based on the similarity score and a given threshold, the authenticity of the tag is finally recognized. We establish an experimental system containing a USRP N210 used as the RFID reader and 150 UHF commercial tags that backscatter signals from the reader, and collect RFID signals with this setup. Experimental results show that the deep-learning-based tag recognition accuracy can reach more than 94%, and its equal error rate (EER) is 0.034 at a recognition distance of up to 2 m.
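The final "score vs. threshold" decision step can be sketched without the CNN itself: given feature vectors (here, the CNN embeddings the paper would produce; the cosine metric and threshold value are assumptions for illustration), a tag is accepted if its best match against the enrolled fingerprints clears the threshold:

```python
def cosine_similarity(a, b):
    # cosine similarity between two feature vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def authenticate(feature, enrolled, threshold=0.9):
    # match the measured feature against every enrolled tag fingerprint;
    # accept iff the best similarity score clears the threshold
    best = max(cosine_similarity(feature, e) for e in enrolled)
    return best >= threshold, best
```

Sweeping the threshold over a validation set trades false accepts against false rejects; the EER quoted in the abstract is the point where the two rates are equal.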
    Edge Server Placement for Energy Consumption and Load Balancing
    FU Xiong, FANG Lei, WANG Junchang
    Computer Science    2023, 50 (6A): 220300088-5.   DOI: 10.11896/jsjkx.220300088
    Abstract views: 292 | PDF (2456KB) downloads: 243
    At present, the traditional cloud computing model cannot meet the needs of users in low-latency scenarios, so mobile edge computing has emerged. In order to place edge servers within an area so that they have low total energy consumption and a balanced workload, an ant colony optimization energy-consumption load-balancing placement algorithm, ACO-ELP, is proposed. First, the problem is defined by constructing the power consumption model and the load balancing model, and the actual parameters are mapped to algorithm variables. The ant colony algorithm is then optimized in its iterative process: dynamically controlling the pheromone evaporation and retention rate accelerates convergence, while bounding the pheromone between maximum and minimum values helps the algorithm search for the global optimum without falling into local optima. Finally, the algorithm is simulated and evaluated with data from Telecom base stations in Shanghai. The results show that, compared with baseline placement algorithms, the algorithm not only reduces the number of servers and the energy consumption but also significantly reduces the load deviation.
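The max-min pheromone control described above can be sketched as a single update routine. The evaporation rate, deposit amount and bounds are placeholder values, and the paper's dynamic adjustment of the evaporation rate is reduced here to a fixed parameter:

```python
def update_pheromones(tau, best_edges, deposit, rho, tau_min, tau_max):
    # evaporate pheromone on every edge, reinforce the best solution found
    # this iteration, then clamp to [tau_min, tau_max] so no edge becomes
    # overwhelmingly attractive (local optimum) or invisible (stagnation)
    for e in tau:
        tau[e] *= (1.0 - rho)
    for e in best_edges:
        tau[e] += deposit
    for e in tau:
        tau[e] = min(max(tau[e], tau_min), tau_max)
    return tau
```

Repeated updates drive edges on good placements toward `tau_max` and the rest toward `tau_min`, while the bounds keep every edge selectable.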
    Cloud Computing Load Prediction Method Based on Hybrid Model of CEEMDAN-ConvLSTM
    ZAHO Peng, ZHOU Jiantao, ZHAO Daming
    Computer Science    2023, 50 (6A): 220300272-9.   DOI: 10.11896/jsjkx.220300272
    Abstract views: 210 | PDF (3784KB) downloads: 216
    With the rapid development of cloud computing technology, more and more users choose cloud services, and the mismatch between load requests and resource supply has become increasingly prominent. As a result, user requests cannot be responded to in time, which greatly affects cloud service quality. Real-time prediction of load requests helps resources be supplied in time. To address the low performance of load prediction methods in cloud computing environments, a cloud computing load prediction method is proposed based on a hybrid model of complete ensemble empirical mode decomposition with adaptive noise and convolutional long short-term memory (CEEMDAN-ConvLSTM). First, the data sequence is decomposed into several sub-sequences that are easy to analyze and model. Then a convolutional long short-term memory (ConvLSTM) model is used to predict each sub-sequence, with multi-process parallel computation for parallel prediction across the sub-sequences and Bayesian optimization for parameter tuning. Finally, the predicted values are superimposed to obtain the output of the whole model, achieving high-precision prediction of the original complex sequence data. The CEEMDAN-ConvLSTM hybrid model is verified on the Google cluster workload dataset. Experimental results show that the hybrid model has a good prediction effect: compared with the autoregressive integrated moving average model (ARIMA), long short-term memory network (LSTM) and ConvLSTM, the root mean square error (RMSE) is reduced by 30.9%, 30.1% and 22.5%, respectively.
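The decompose-predict-superimpose pipeline can be shown in miniature. Here a crude moving-average split stands in for CEEMDAN's intrinsic mode functions and a persistence predictor stands in for ConvLSTM; both substitutions are deliberate simplifications to expose the pipeline shape only:

```python
def decompose(series, window=3):
    # crude two-component decomposition (moving-average trend + residual),
    # standing in for CEEMDAN's set of intrinsic mode functions
    half = window // 2
    trend = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        trend.append(sum(series[lo:hi]) / (hi - lo))
    resid = [v - t for v, t in zip(series, trend)]
    return trend, resid

def predict_next(sub):
    # placeholder one-step persistence predictor in place of ConvLSTM
    return sub[-1]

def hybrid_forecast(series):
    # predict each sub-sequence independently, then superimpose
    return sum(predict_next(c) for c in decompose(series))
```

With persistence predictors the superimposed forecast exactly equals the last observation, which is a handy sanity check that the recombination step loses nothing.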
    Energy Efficiency Planning with SWIPT-MISO Dynamic Energy Consumption Model
    XU Chenyang, XUE Liang, WANG Jinlong, ZHU Long
    Computer Science    2023, 50 (6A): 220400185-7.   DOI: 10.11896/jsjkx.220400185
    Abstract views: 353 | PDF (2683KB) downloads: 192
    In simultaneous wireless information and power transfer networks, the transmitter is usually equipped with multiple antennas and can serve all sensors in one transmission over the same frequency band. However, collecting channel state information from all sensors may waste considerable time and frequency resources. Therefore, an energy-saving beamforming design with only channel distribution information at the transmitter is studied for multi-user multiple-input single-output networks. Under constraints on the information outage probability, the total available power and the available power of authorized users, the network energy efficiency is maximized by an improved teaching-learning-based optimization algorithm. In addition, the proposed power consumption scheme considers a nonlinear energy harvesting mechanism, and a power-splitting energy harvesting receiver architecture is proposed to prevent the receiver from entering the saturation region, thereby improving the power receiving efficiency. The improved teaching-learning-based optimization algorithm incorporates the advantages of the whale optimization algorithm, solves the constructed non-convex optimization problem, and improves the convergence speed. Simulation experiments analyze the effects of the outage probability, the dynamic power consumption coefficient and the available power at the transmitter on system energy efficiency in the dynamic energy allocation scenario, and verify the effectiveness of the proposed algorithm.
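One iteration of plain teaching-learning-based optimization (the base algorithm the paper improves; the whale-algorithm hybridization is not reproduced here) can be sketched for a generic minimization problem:

```python
import random

def tlbo_step(pop, f, rng):
    # one TLBO iteration (minimization): teacher phase then learner phase
    dim = len(pop[0])
    best = min(pop, key=f)
    mean = [sum(x[d] for x in pop) / len(pop) for d in range(dim)]
    # teacher phase: pull each learner toward the best solution
    taught = []
    for x in pop:
        tf = rng.choice((1, 2))  # teaching factor
        cand = [xi + rng.random() * (bi - tf * mi)
                for xi, bi, mi in zip(x, best, mean)]
        taught.append(cand if f(cand) < f(x) else x)  # greedy acceptance
    # learner phase: each learner interacts with a random peer
    out = []
    for i, x in enumerate(taught):
        j = rng.randrange(len(taught))
        if j == i:
            out.append(x)
            continue
        peer = taught[j]
        sign = 1.0 if f(x) < f(peer) else -1.0  # move toward/away from peer
        cand = [xi + sign * rng.random() * (xi - pi)
                for xi, pi in zip(x, peer)]
        out.append(cand if f(cand) < f(x) else x)
    return out
```

Because moves are accepted only when they improve fitness, the best value in the population never worsens between iterations; the paper applies this scheme to the (non-convex) energy-efficiency objective under its power and outage constraints.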
    Missing Localization Characteristic Estimation Algorithm for Passive UHF RFID Tag
    ZHAO Yang, LI Lingyun, ZHAO Xiaoxia, LIU Xianhui, ZHANG Liang
    Computer Science    2023, 50 (6A): 220500055-6.   DOI: 10.11896/jsjkx.220500055
    Abstract views: 235 | PDF (2732KB) downloads: 230
    For the problem of missing localization characteristics caused by activation failures of passive UHF RFID tags, and given the significant challenges in precisely modeling the channel, this paper proposes a missing localization characteristic estimation algorithm based on a linear model between signal-strength Euclidean distance and spatial Euclidean distance, improving the accuracy of scene analysis localization algorithms by increasing the number of characteristic dimensions. To increase the completeness of the scene matching data, the missing localization characteristics of non-activated reference tags can be calculated directly with the linear model. For non-activated target tags, the linear model is used to estimate the distances between the target tag and multiple benchmark reference tags, the least squares algorithm is used to estimate the preliminary location of the target tag, and the linear model is then applied in reverse to estimate the missing localization features and complete the localization characteristics of the target tags. Experiments show that the proposed algorithm effectively improves the localization accuracy not only of all missing target tags but also of the target tags around missing reference tags. In addition, the algorithm requires no additional hardware, meeting the application requirements of low cost and high precision.
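The least squares step in the middle of that pipeline, estimating a preliminary 2D position from distances to several benchmark reference tags, can be sketched with the standard linearization (subtract the first anchor's circle equation from the others). The anchor coordinates are illustrative; the signal-strength-to-distance linear model itself is not reproduced:

```python
def lateration(anchors, dists):
    # 2D least squares multilateration from >= 3 anchors and distances
    (x1, y1), d1 = anchors[0], dists[0]
    A, bvec = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        # 2(xi-x1)x + 2(yi-y1)y = d1^2 - di^2 + xi^2 + yi^2 - x1^2 - y1^2
        A.append((2 * (xi - x1), 2 * (yi - y1)))
        bvec.append(d1 ** 2 - di ** 2 + xi ** 2 + yi ** 2 - x1 ** 2 - y1 ** 2)
    # solve the 2x2 normal equations (A^T A) p = A^T b by hand
    s11 = sum(a * a for a, _ in A)
    s12 = sum(a * b for a, b in A)
    s22 = sum(b * b for _, b in A)
    t1 = sum(a * v for (a, _), v in zip(A, bvec))
    t2 = sum(b * v for (_, b), v in zip(A, bvec))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)
```

With noiseless distances the estimate is exact; with distances estimated from the RSSI linear model it becomes the least squares preliminary position the abstract describes.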
    Study on Performance of Wireless Train Communication Network Based on Wi-Fi 6
    YANG Shaolong, ZHU Guosheng, PANG Xinglong, LI Xiuyuan, PAN Deng
    Computer Science    2023, 50 (6A): 220600179-5.   DOI: 10.11896/jsjkx.220600179
    Abstract views: 168 | PDF (2011KB) downloads: 293
    The normal operation of modern trains is inseparable from the cooperation of mechanical and electronic systems, in which the train control and management system (TCMS) plays a key role. TCMS-related applications and services run on the train communication network (TCN), which is wired and often redundant, resulting in a large number of interconnected wired links, difficult network deployment and maintenance, and poor flexibility. This paper proposes a Wi-Fi 6-based wireless train communication networking scheme that applies Wi-Fi 6 technology to the vehicle-level ECN network. The work includes the design of the network architecture, the selection of communication data and experimental verification in a simulation environment. Experimental results show that the proposed Wi-Fi 6-based train communication meets the IEC 61375-3-4 standard for QoS in terms of delay, jitter and packet loss rate, and has advantages over long term evolution (LTE).
    Controlled Short-distance Quantum Teleportation for Arbitrary Two-particles State in Pauli Noise Environment
    XIANG Shengjian
    Computer Science    2023, 50 (6A): 220700024-4.   DOI: 10.11896/jsjkx.220700024
    Abstract views: 157 | PDF (1725KB) downloads: 242
    Quantum teleportation is one of the hot topics in quantum communication. Short-distance teleportation, unlike traditional teleportation, can further save costly quantum entanglement resources thanks to the restriction on distance. However, this also increases the probability of cheating by the participants. Therefore, this paper proposes another short-distance quantum teleportation scheme for an arbitrary two-particle state, with a controller, in order to enhance safety. At the same time, quantum teleportation cannot take place in an ideal environment, because the particles are inevitably affected by the noisy channel during distribution. This paper also analyzes the influence of Pauli noise, a widely used noise channel model, on the fidelity of a two-particle state. As a result, different concurrences of the two-particle state yield different fidelities in some typical Pauli noise channels. This research provides theoretical value for quantum communication networks and experimental research.
    Semi-online Algorithms for Mixed Ring with Two Nodes
    XIAO Man, LI Wei-dong
    Computer Science    2021, 48 (11A): 441-445.   DOI: 10.11896/jsjkx.201100153
    Abstract views: 331 | PDF (1766KB) downloads: 422
    Two semi-online scenarios of the load balancing problem on a two-node mixed ring are studied. Given a two-node mixed ring and several flow requests, the goal is to find a suitable flow transportation scheme that makes the maximum load on the ring as small as possible. When there is a buffer with capacity K, a lower bound of 4/3 is given. In particular, when K=1, a lower bound of 3/2 is given, together with a semi-online algorithm whose competitive ratio is at most 8/5. When the sum of all flow demands is known, an optimal semi-online algorithm with a competitive ratio of 3/2 is designed.
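To make the competitive-ratio terminology concrete, here is a generic two-machine greedy (not the paper's ring algorithm, which routes flows on a mixed ring) together with the standard lower bound used to certify a ratio. On any instance, greedy's makespan is within 3/2 of optimal on two machines:

```python
def greedy_two_machines(jobs):
    # assign each arriving job to the currently less-loaded machine
    loads = [0.0, 0.0]
    for j in jobs:
        loads[loads.index(min(loads))] += j
    return max(loads)  # makespan

def lower_bound(jobs):
    # no schedule can beat half the total work or the largest single job
    return max(sum(jobs) / 2.0, max(jobs))
```

A semi-online algorithm such as the 3/2-competitive one in the paper exploits extra information (the known total demand) to guarantee `makespan <= 1.5 * OPT` on every input, which online algorithms without that information cannot match on the ring model.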
    Real Time Wireless Connection Scheme for Multi-nodes
    QIAN Guang-ming, YI Chao
    Computer Science    2021, 48 (11A): 446-451.   DOI: 10.11896/jsjkx.201200209
    Abstract views: 255 | PDF (1961KB) downloads: 527
    A wireless connection scheme named six synchronous successive transmission (SSST) is presented; a network using this scheme is called an SSST network. It is mainly intended for real-time radio scenarios in which multiple slave nodes request to send data to a master node, and it operates in the same frequency band as Bluetooth. Under SSST, slave nodes are allowed to send data transmission requests only after synchronization packets are received from the master, and slaves send their requests in an arranged order. The master transmits six synchronization packets continuously, each with a different index. The SSST protocol is collision-free when multiple slaves request to connect to the master simultaneously. A slave with higher priority can be selected to access the network first, and the response time of each successive stage is easy to predict; these are precisely the features that real-time applications require. The related theory is demonstrated and verified with experiments, and compared with the advertising and scanning of Bluetooth.
    Security Clustering Strategy Based on Particle Swarm Optimization Algorithm in Wireless Sensor Network
    JIANG Jian-feng, SUN Jin-xia, YOU Lan-tao
    Computer Science    2021, 48 (11A): 452-455.   DOI: 10.11896/jsjkx.210900131
    Abstract views: 273 | PDF (2435KB) downloads: 471
    In order to solve the problems of short network lifetime and the lack of an effective secure transmission mechanism in wireless sensor networks, a secure clustering strategy for wireless sensor networks based on a particle swarm optimization algorithm (SC_PSO) is proposed. The strategy combines polynomial hybrid key distribution technology to encrypt the communication data of nodes between clusters, which guarantees the security of data transmission. On the other hand, an optimized particle swarm algorithm is used to construct a fitness function based on the residual energy of nodes and the communication distance to select the optimal cluster heads and number of clusters. This compensates for the energy cost introduced by the encryption algorithm and maintains network performance while achieving secure data communication. Network simulation tests show that this strategy can increase network throughput by 120% and extend the life cycle of the sensor network by 30%~65% while ensuring its security.
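A fitness function of the kind described, combining residual energy and communication distances, can be sketched as a weighted sum. The weights, the normalization by `e_max`/`d_max`, and the specific terms are assumptions for illustration, not the paper's exact formula:

```python
def cluster_head_fitness(energy, e_max, d_bs, neigh_dists, d_max,
                         w=(0.4, 0.3, 0.3)):
    # weighted fitness in [0, 1]: favor high residual energy, a short
    # distance to the base station, and a compact neighborhood
    # (weights w are illustrative assumptions)
    avg_neigh = sum(neigh_dists) / len(neigh_dists)
    return (w[0] * (energy / e_max)
            + w[1] * (1.0 - d_bs / d_max)
            + w[2] * (1.0 - avg_neigh / d_max))
```

PSO then searches for the cluster-head set maximizing this score, so energy-rich, well-placed nodes are preferred over random selection.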
    PSO-GA Based Approach to Multi-edge Load Balancing
    YAO Ze-wei, LIU Jia-wen, HU Jun-qin, CHEN Xing
    Computer Science    2021, 48 (11A): 456-463.   DOI: 10.11896/jsjkx.210100191
    Abstract views: 381 | PDF (2254KB) downloads: 490
    As a new paradigm, mobile edge computing (MEC) provides an efficient way to overcome the computing and storage resource constraints of mobile devices. Through the wireless network, intensive tasks on mobile devices are migrated to edges near the users for execution, and the edges transmit the execution results back to the mobile devices. Due to the randomness of users' movement, the load on the edges deployed across a city is usually uneven. To solve the multi-edge load balancing problem, task scheduling is used to minimize the maximum response time of tasks over the edge set, thereby improving the performance of mobile devices. First, the multi-edge load balancing problem is formally defined. Then a particle swarm optimization-genetic algorithm (PSO-GA) is proposed to solve it. Finally, the performance of the algorithm is compared with a random migration algorithm and a greedy algorithm through simulation experiments. Experimental results show that PSO-GA outperforms random migration and the greedy algorithm by 51.58% and 26.34%, respectively. Therefore, PSO-GA has good potential for reducing the task response time of the edges and improving user experience.
    Research on Two-level Scheduled In-band Full-duplex Media Access Control Mechanism
    GUAN Zheng, LYU Wei, JIA Yao, YANG Zhi-jun
    Computer Science    2021, 48 (11A): 464-470.   DOI: 10.11896/jsjkx.201200026
    Abstract views: 299 | PDF (3938KB) downloads: 411
    In-band full-duplex (IBFD) wireless communication allows nodes to send and receive simultaneously in the same frequency band, which is an effective way to improve spectrum utilization. This paper proposes a two-level scheduled in-band full-duplex (TSIB-FD) mechanism for wireless networks, which divides the access process into three stages: information collection, full-duplex data transmission and acknowledgement. In the information collection stage, the AP generates a first-level schedule from node traffic requirements and mutual-interference relations. During first-level full-duplex link data transmission, the AP can build the second-level schedule dynamically according to the current transmission. ACKs are processed after the bidirectional data transmission completes. Simulation results show that TSIB-FD improves system throughput with lower delay: compared with Janus, throughput improves by 38.5%, and compared with HBPOLL, by 100%.
    Devices Low Energy Consumption Scheduling Algorithm Based on Dynamic Priority
    ZHANG Yi-wen, LIN Ming-wei
    Computer Science    2021, 48 (11A): 471-475.   DOI: 10.11896/jsjkx.210100080
    Abstract views: 167 | PDF (2052KB) downloads: 541
    Previous studies consider independent periodic task models and apply only dynamic voltage and frequency scaling (DVFS) to reduce energy consumption. To overcome this shortcoming, an algorithm is proposed that supports preemptive periodic tasks with non-preemptive shared resources. It combines DVFS and dynamic power management (DPM) techniques to reduce energy consumption, and consists of device scheduling and job scheduling. In device scheduling, DPM is used to reduce the energy consumption of I/O devices. In job scheduling, the earliest deadline first policy is used to schedule tasks, and the stack resource protocol is used as the synchronization protocol for shared resources. In addition, a task executes at low speed when not blocking and switches to high speed when blocking, to reduce the energy consumption of the processor. Experimental results show that the proposed algorithm yields significant energy savings with respect to the existing algorithm.
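The two scheduling decisions described, EDF job selection plus a blocking-aware speed switch, can be sketched as follows. The job representation and the two speed levels (0.6 and 1.0 of maximum frequency) are illustrative assumptions:

```python
def edf_schedule(jobs, t):
    # earliest deadline first: among released, unfinished jobs at time t,
    # run the one with the earliest absolute deadline
    ready = [j for j in jobs if j['release'] <= t and j['remaining'] > 0]
    if not ready:
        return None
    return min(ready, key=lambda j: j['deadline'])

def pick_speed(blocked, low=0.6, high=1.0):
    # DVFS policy from the abstract: run at low speed normally; switch to
    # high speed while the running job holds a shared resource and blocks
    # others, so the blocking interval is shortened
    return high if blocked else low
```

Under the stack resource protocol, blocking intervals are bounded, so speeding up only while blocking trims processor energy without violating EDF schedulability.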
    Link Mapping Algorithm Based on Weighted Graph
    GAO Ming, ZHOU Hui-ying, JIAO Hai, YING Li-li
    Computer Science    2021, 48 (11A): 476-480.   DOI: 10.11896/jsjkx.201200216
    Abstract views: 342 | PDF (2327KB) downloads: 446
    The service function chain (SFC), as a service deployment concept, provides higher flexibility for the network. This paper studies the mapping problem in service function deployment and proposes a link mapping algorithm based on a weighted graph for deployment on the service function chain's business choreography plane, so as to balance the load of the functional service nodes deployed onto physical nodes. A mapping algorithm for the virtual links of service functions is presented: service functions are first combined, the actual link situation is then modeled and analyzed, an initial value is obtained using an efficiency matrix, and finally the result is corrected with a heuristic algorithm. Modeling analysis, and comparison with the Eigen decomposition of adjacency matrices (Eigen) graph matching strategy that reduces link bandwidth demand, show that the algorithm can complete service requests while balancing link node load and link bandwidth; as service chain length and traffic grow, its throughput is more stable and it reduces the mapping cost on the existing physical network.
    Synchronization of Uncertain Complex Networks with Sampled-data and Input Saturation
    ZHAO Man-yu, YE Jun
    Computer Science    2021, 48 (11A): 481-484.   DOI: 10.11896/jsjkx.210100063
    Abstract views: 271 | PDF (1712KB) downloads: 438
    In actual network systems, external disturbances, parameter mutations and other uncertain phenomena are widespread; they can prevent the system from achieving synchronization and even destroy its stability. It is therefore important to study uncertain complex networks. This paper discusses the synchronization problem of nonlinear uncertain complex dynamical networks with sampled data and input saturation. First, a nonlinear uncertain complex network model is established. Second, a leader is introduced and a sampled-data control protocol with input saturation is designed. By constructing an appropriate time-dependent Lyapunov functional and applying stability theory, the integral inequality method and the linear matrix inequality method, it is proved that the nonlinear uncertain complex dynamical networks can achieve synchronization under certain conditions, that is, every follower can track the leader, and sufficient criteria for synchronization are derived. Finally, simulation examples verify the effectiveness and validity of the theoretical results.
    QoS Optimization of Data Center Network Based on MPLS-TE
    JIANG Jian-feng, YOU Lan-tao
    Computer Science    2021, 48 (11A): 485-489.   DOI: 10.11896/jsjkx.210900190
    Abstract views: 176 | PDF (3084KB) downloads: 512
    Network congestion and delay can be mitigated by making full use of path allocation and traffic rerouting. Multi-protocol label switching is one of the solutions to the QoS limitations of data center networks in IP networks. A QoS algorithm based on traffic engineering and path allocation is designed, in combination with the QoS model, to guarantee network service quality. Network test results show that the algorithm can improve data center network transmission efficiency and reduce delay and packet loss rate; for voice and video traffic in particular, network performance is greatly improved.
    TCAM Multi-field Rule Coding Technique Based on Hypercube
    WANG Yun-xiao, ZHAO Li-na, MA Lin, LI Ning, LIU Zi-yan, ZHANG Jie
    Computer Science    2021, 48 (11A): 490-494.   DOI: 10.11896/jsjkx.201100161
    Abstract views: 189 | PDF (2970KB) downloads: 535
    With the development and popularization of the Internet, the network scale, bandwidth and packet transmission speed are growing exponentially, and the rapid growth of network users puts increasing pressure on Internet infrastructure. As a key link in improving link bandwidth performance, faster packet classification plays a key role in the development of application services in high-speed network environments. Current packet classification algorithms suffer from insufficient throughput, low memory utilization, high power consumption and insufficient update performance. In packet classification, traditional TCAM cannot store range rules efficiently. To address this problem, a multi-field TCAM coding technique based on the hypercube is designed, taking advantage of the symmetry and regularity of the hypercube. Simulation comparisons show that the encoding is more than twice as efficient as other popular TCAM encoding schemes, which greatly increases the space utilization of TCAM in packet classification.
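The inefficiency being attacked here is easy to demonstrate: the conventional way to store a range rule in TCAM is prefix expansion, which can blow one range up into many entries. A sketch of that baseline (not the paper's hypercube encoding, which is designed to use fewer entries):

```python
def range_to_prefixes(lo, hi, width):
    # split integer range [lo, hi] into the minimal set of aligned
    # power-of-two blocks, each expressible as one TCAM prefix entry
    res = []
    while lo <= hi:
        # largest block aligned at lo (lo & -lo isolates the lowest set bit)
        size = lo & -lo if lo else 1 << width
        while lo + size - 1 > hi:   # shrink until the block fits in [lo, hi]
            size >>= 1
        prefix_len = width - size.bit_length() + 1
        res.append((lo, prefix_len))  # (base value, prefix length)
        lo += size
    return res
```

A w-bit range can need up to 2w-2 prefixes per field (e.g. [1, 14] in 4 bits already takes 6), and multi-field rules multiply these counts; that entry explosion is what hypercube-style encodings aim to compress.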
    Cross-layer Matching Mechanism of Link Communication Rate for Heterogeneous Communication in Power System
    XIAO Yong, JIN Xin, FENG Jun-hao
    Computer Science    2021, 48 (11A): 495-499.   DOI: 10.11896/jsjkx.200500113
    Abstract views: 160 | PDF (2088KB) downloads: 503
    Various local communication techniques are employed in the power system. The coexistence of multiple protocols in power networks leads to various communication rates, making heterogeneous fusion difficult. Aiming at this problem, a cross-layer communication rate matching mechanism between heterogeneous links is proposed. In the link layer, link utilization is improved by a rate-requirement-based access control mechanism, and a dynamic store-and-forward method matches and forwards traffic at different rates among network interfaces. Furthermore, in the network layer, data rate matching based on multipath transmission is proposed to achieve maximum data rate matching and improve communication efficiency. Simulation results demonstrate the validity of the proposed cross-layer rate matching mechanism. In addition, compared with the RBAR and ARF protocols, the proposed mechanism improves network throughput while reducing delay and packet loss rate.
    Overview of Onboard Satellite Communicating System
    DU Chen-hui, XIANG Si-si, HUANG De-gang, CHEN Lang
    Computer Science    2021, 48 (6A): 364-368.   DOI: 10.11896/jsjkx.210100154
    Abstract355)      PDF(pc) (2531KB)(685)       Save
    With the development of ground internet and the popularization of travel by flying,the passengers urgently want to surf the internet during flight.In this paper,based on the background of interconnection onboard,the necessity is discussed from three facts including requirement of passengers,interconnection onboard market and current application status.Focused on the development of satellite communications,it is found that the satellite broadband develops from brand Ku to brand Ka and brand HTS Ku (High Throughput Satellite).The analysis is carried out on the key technologies of interconnection onboard,and the results show that the technologies of satellite communications specially on the brand Ka and HTS Ku in USA and European are more mature.In order to realize the national production of interconnection onboard,it is necessary to increase investment in above two key technologies.The industry chain of onboard web interconnection is complex and long,it's not only an opportunity but also a challenge for the companies of the industry chain.Accordingly,the requirement of passenger,airlines,the third party service provider and airworthiness are concluded in this paper.
    Review of Low Power Architecture for Wireless Network Cameras
    HE Quan-qi, YU Fei-hong
    Computer Science    2021, 48 (6A): 369-373.   DOI: 10.11896/jsjkx.201100099
    Abstract195)      PDF(pc) (2810KB)(683)       Save
    At present, wireless network cameras play an increasingly important role in environmental, military, and urban monitoring. When deployed in remote or enclosed environments, a wireless network camera is battery-powered, and replacing the battery is inconvenient, so the camera must meet long battery-life requirements. Battery life is determined by the camera's power consumption and the battery capacity. Since battery technology has seen no breakthrough, low-power architecture design for wireless network cameras has become an important research direction. Firstly, hardware solutions and the power-consumption performance of wireless network cameras are surveyed and analyzed. Then, the power performance of different coding algorithms is compared. For dynamic power management, a dynamic power model of the wireless network camera is proposed and analyzed, which provides the theoretical basis for dynamic power management. The power model of camera state switching is also analyzed, and the threshold time for state switching in time-out mode is determined. Finally, the overall design process of a low-power architecture for wireless network cameras is presented.
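The threshold time for time-out state switching mentioned above is, in the standard two-state power model, the break-even idle time beyond which entering the sleep state saves energy despite the switching cost. The paper's exact model is not reproduced here; this is a generic sketch with hypothetical parameter names.

```python
def timeout_threshold(p_active_w, p_sleep_w, e_switch_j):
    """Break-even idle time in seconds for a two-state power model
    (generic sketch, not the paper's specific model).

    Sleeping during an idle period of length T costs
    e_switch_j + p_sleep_w * T, while staying active costs p_active_w * T;
    the two are equal at T = e_switch_j / (p_active_w - p_sleep_w)."""
    if p_active_w <= p_sleep_w:
        raise ValueError("active power must exceed sleep power")
    return e_switch_j / (p_active_w - p_sleep_w)
```

For example, with 2.0 W active power, 0.5 W sleep power, and a 3.0 J switching cost, sleeping only pays off for idle periods longer than 2 seconds, so the time-out should be set at or above that value.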
    Energy-aware Fault-tolerant Collaborative Task Execution Algorithm in Edge Computing
    XUE Yan-fen, GAO Ji-mei, FAN Gui-sheng, YU Hui-qun, XU Ya-jie
    Computer Science    2021, 48 (6A): 374-382.   DOI: 10.11896/jsjkx.200900027
    Abstract365)      PDF(pc) (5275KB)(763)       Save
    Edge computing has been envisioned as an effective way to enhance the computing capabilities of resource-constrained mobile devices: users meet their resource requirements by offloading heavy computing tasks to the edge cloud. However, energy consumption and reliability remain open issues. This paper first proposes an energy-aware collaborative task execution scheduling model, which combines a computation offloading model and a fault-tolerance model to reduce energy consumption while improving the reliability of edge computing within the time constraints of tasks. Then, an energy-aware fault-tolerant collaborative task execution scheduling algorithm, comprising collaborative task execution, initial scheduling, and online scheduling, is proposed. Collaborative task execution determines the execution decision for each task by partial critical path analysis and the one-climb policy. Initial scheduling selects a fault-tolerance strategy, replication or resubmission, for tasks executed on the edge cloud, ensuring that tasks complete successfully. Online scheduling adjusts the fault-tolerance strategy in real time when a fault occurs. Finally, extensive simulation experiments with three representative task topologies evaluate the performance differences among three scenarios in terms of task completion rate and energy consumption ratio. Results show that, as the deadline, data transmission rate, and fault-tolerance rate vary, the proposed method is more reliable than collaborative task execution alone and more energy-efficient than local execution.
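The replication-versus-resubmission choice can be illustrated with a toy decision rule: resubmission is cheaper in expectation but needs deadline slack for a rerun, while replication spends double energy to stay on schedule. This is only a sketch of the trade-off under simple assumptions (one failure at most, identical copies); the paper's actual scheduling rule is not given in the abstract.

```python
def choose_fault_tolerance(e_task_j, t_task_s, deadline_s, p_fail):
    """Pick a fault-tolerance strategy for one offloaded task (toy model).

    Replication runs two copies in parallel: energy 2*e, finishes in t.
    Resubmission reruns once after a failure: expected energy e*(1+p_fail),
    but the worst case takes 2*t, so it is only safe when 2*t <= deadline."""
    if 2 * t_task_s <= deadline_s:
        e_resub = e_task_j * (1 + p_fail)   # expected energy of resubmission
        e_repl = 2 * e_task_j               # energy of duplicated execution
        return "resubmission" if e_resub <= e_repl else "replication"
    return "replication"  # no slack for a rerun: must duplicate up front
```

Under these assumptions resubmission wins whenever the deadline leaves room for a rerun, which matches the intuition that replication is the strategy of tight deadlines.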
    Research on Mobile Edge Computing in Expressway
    SONG Hai-ning, JIAO Jian, LIU Yong
    Computer Science    2021, 48 (6A): 383-386.   DOI: 10.11896/jsjkx.200900212
    Abstract224)      PDF(pc) (3056KB)(620)       Save
    With the progress of expressway construction, a large number of computing devices have appeared along expressways, so mobile edge computing can be applied in expressway scenarios. Mobile edge computing can provide vehicles on expressways with low-latency, high-bandwidth, and reliable computing services, and it is an important means of realizing intelligent transportation systems. Considering the special environment of the expressway, this paper focuses on the problems of task offloading and resource allocation. Combined with the 5G mobile network, a mobile edge computing model for expressway driving tasks is established, and an efficient and reasonable resource scheduling strategy for different kinds of computing tasks is designed. A dynamic scheduling strategy combining a genetic algorithm with an ant colony algorithm is then put forward. To verify the effectiveness of the model, load balancing is used as the performance index in simulation experiments. The experimental results show that, compared with similar algorithms, the proposed algorithm can effectively reduce both the load gap and the computing cost.
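The load-gap index used for evaluation can be computed from any task-to-server assignment. The sketch below uses the simplest such metric, the spread between the most and least loaded edge servers; the paper's exact index may differ, and the names here are hypothetical.

```python
def load_gap(assignment, n_servers):
    """Max-min load gap over edge servers (illustrative metric only).

    assignment is a list of (server_index, task_cost) pairs; the gap is the
    difference between the heaviest and lightest server load, which a
    balanced schedule (e.g. the paper's GA/ACO hybrid) tries to minimize."""
    loads = [0.0] * n_servers
    for server, cost in assignment:
        loads[server] += cost
    return max(loads) - min(loads)
```

A perfectly balanced assignment yields a gap of zero; piling work onto one server drives the gap up, which is what the fitness function of a load-balancing scheduler penalizes.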
    Low-complexity Subcarrier Allocation Algorithm for Underwater OFDM Acoustic Communication Systems
    YOU Ling, GUAN Zhang-jun
    Computer Science    2021, 48 (6A): 387-391.   DOI: 10.11896/jsjkx.201100064
    Abstract195)      PDF(pc) (2558KB)(500)       Save
    In recent years, underwater acoustic communication based on OFDM modulation has developed rapidly, driven by the national Smart Ocean strategy and the demands of marine resource development. One of the key issues is subcarrier allocation for optimizing system performance. This paper proposes a low-complexity subcarrier allocation algorithm for underwater OFDM acoustic communication systems: in each round, candidate nodes are selected according to a given criterion, and the node with the worst comprehensive channel state is taken as the allocation target. The algorithm improves the overall transmission performance of the system while also accounting for the transmission performance of the worst sensor node. In addition, if a node receives no subcarrier over multiple allocation rounds, the node left idle in the last round is assigned the subcarrier with the best channel condition ahead of the other nodes. Simulation results show that this refinement solves the problem while hardly reducing the performance of the original algorithm. The proposed algorithm offers a useful reference for resource allocation in underwater multi-sensor networks.
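The worst-node-first round structure described above can be sketched as follows. This is an illustration of the idea under simple assumptions (one subcarrier per node, "comprehensive channel state" reduced to the best available gain), not the paper's exact criterion.

```python
def allocate_subcarriers(gains):
    """Round-based worst-node-first subcarrier allocation (sketch only).

    gains[n][k] is node n's channel gain on subcarrier k. In each round the
    unallocated node whose best remaining subcarrier is weakest (worst
    comprehensive channel state) chooses first, taking its best subcarrier,
    so poorly placed nodes are not starved by stronger ones."""
    n_nodes, n_sub = len(gains), len(gains[0])
    free = set(range(n_sub))
    alloc = {}
    for _ in range(min(n_nodes, n_sub)):
        pending = [n for n in range(n_nodes) if n not in alloc]
        # worst node = smallest best-available gain among pending nodes
        worst = min(pending, key=lambda n: max(gains[n][k] for k in free))
        best_k = max(free, key=lambda k: gains[worst][k])
        alloc[worst] = best_k
        free.remove(best_k)
    return alloc
```

With gains [[0.9, 0.2], [0.4, 0.3]], node 1 (best gain only 0.4) picks first and takes subcarrier 0, leaving subcarrier 1 for node 0; a greedy best-node-first order would have given node 1 a worse outcome.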
    SDN Traffic Prediction Based on Graph Convolutional Network
    SONG Yuan-long, LYU Guang-hong, WANG Gui-zhi, JIA Wu-cai
    Computer Science    2021, 48 (6A): 392-397.   DOI: 10.11896/jsjkx.200800090
    Abstract714)      PDF(pc) (2970KB)(1514)       Save
    Accurate and real-time traffic forecasting plays an important role in SDN and is of great significance for network traffic engineering and network planning. Because network traffic is constrained by the network topology and changes dynamically over time, that is, it exhibits both spatial and temporal features, network traffic prediction is a challenging scientific problem. To capture the spatial and temporal dependence simultaneously, the Graph Convolutional Gated Recurrent Unit network (GCGRU) is proposed, a neural-network-based traffic forecasting model that combines a graph convolutional network (GCN) with a gated recurrent unit (GRU). Specifically, the GCN learns the complex topological structure to capture spatial dependence, and the GRU learns the dynamic changes of traffic data to capture temporal dependence. GCGRU is compared with classic methods, with MSE, RMSE, and MAE as evaluation metrics. The experimental results show that GCGRU performs better in traffic prediction.
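The spatial half of such a model is a graph convolution step, commonly written H' = ÂHW with Â the adjacency matrix augmented with self-loops and normalized. The pure-Python sketch below shows that single step on tiny lists; GCGRU's actual layers (and its GRU temporal part) are not specified in the abstract, so this is a generic GCN illustration, not the paper's architecture.

```python
def graph_conv(adj, feats, weight):
    """One GCN propagation step, H' = A_hat * H * W (generic sketch).

    adj is an n*n adjacency matrix (lists of lists), feats the n*f node
    feature matrix, weight the f*d parameter matrix. A_hat adds self-loops
    and row-normalizes, so each node averages itself with its neighbors."""
    n = len(adj)
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
             for i in range(n)]
    for row in a_hat:                       # row-normalize A + I
        s = sum(row)
        for j in range(n):
            row[j] /= s

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
                 for j in range(len(b[0]))] for i in range(len(a))]

    return matmul(matmul(a_hat, feats), weight)
```

Stacking such steps mixes traffic features across the topology; feeding the per-timestep outputs into a GRU is what lets a GCN+GRU combination capture both dependences at once.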
    Energy Efficient Power Allocation for MIMO-NOMA Communication Systems
    CHEN Yong, XU Qi, WANG Xiao-ming, GAO Jin-yu, SHEN Rui-juan
    Computer Science    2021, 48 (6A): 398-403.   DOI: 10.11896/jsjkx.200900175
    Abstract353)      PDF(pc) (2462KB)(709)       Save
    In this paper, a power allocation algorithm is proposed for non-orthogonal multiple access (NOMA) communication systems with multiple-input multiple-output (MIMO). The base station (BS) divides the connected users into several clusters, handling inter-cluster interference by zero-forcing reception and intra-cluster interference by successive interference cancellation. To improve the energy efficiency of the MIMO-NOMA system, the transmit power allocation is optimized at the BS. The power allocation problem is formulated as a nonconvex optimization problem in fractional form, which is difficult to solve directly. First, an auxiliary variable is introduced to transform the original objective function into a non-fractional form, and its value is found by bisection search. Since the transformed problem is still nonconvex, it is then solved by a quadratic transform and iterative processing. Theoretical analysis and simulation results show that the proposed method can effectively improve the energy efficiency and sum-rate of the communication system compared with the traditional orthogonal access method. In addition, the proposed method can ensure fairness among users by adjusting the weight factor.
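The auxiliary-variable-plus-bisection step is the standard parametric treatment of a fractional objective: instead of maximizing R(p)/P(p) directly, one searches for the value lam at which F(lam) = max_p [R(p) - lam*P(p)] crosses zero. The sketch below applies it to a single-user toy instance with a grid-search inner step; the paper's multi-user MIMO-NOMA problem and its quadratic transform are not reproduced here.

```python
import math

def max_energy_efficiency(g, p_max, p_c, tol=1e-6):
    """Bisection on the auxiliary variable lam of the fractional program
    max_p log2(1 + g*p) / (p + p_c), p in [0, p_max]  (toy instance).

    F(lam) = max_p [log2(1+g*p) - lam*(p+p_c)] is decreasing in lam and
    its zero equals the optimal energy efficiency."""
    def f(lam):
        # inner maximization by grid search (a sketch, not the paper's method)
        grid = [p_max * i / 1000 for i in range(1001)]
        return max(math.log2(1 + g * p) - lam * (p + p_c) for p in grid)

    lo, hi = 0.0, math.log2(1 + g * p_max) / p_c   # F(lo) >= 0 >= F(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2   # optimal ratio of rate to total power
```

For g = 1, p_max = 1, p_c = 1 the efficiency log2(1+p)/(1+p) is maximized at p = 1, giving 0.5, and the bisection converges to that value.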
    Outdoor Fingerprint Positioning Based on LTE Networks
    LI Da, LEI Ying-ke, ZHANG Hai-chuan
    Computer Science    2021, 48 (6A): 404-409.   DOI: 10.11896/jsjkx.200700170
    Abstract318)      PDF(pc) (3400KB)(853)       Save
    Owing to its satisfactory positioning accuracy in complex environments, fingerprint-based positioning has long been a research hotspot. By leveraging long term evolution (LTE) signals, a deep-learning-based outdoor fingerprint positioning method is proposed to construct a positioning system. Inspired by computer vision techniques, the geo-tagged signals are converted into grayscale images for positioning, and the positioning accuracy is expressed as the classification accuracy over the constructed grayscale image dataset. A two-level training architecture is developed to realize the deep neural network (DNN) classification. First, a deep residual network (ResNet) is used to pre-train on the fingerprint database and obtain a coarse positioning model. Then, a transfer learning algorithm based on a back-propagation neural network (BPNN) is used to further extract signal features and obtain an accurate positioning model. The experiment is conducted in a real outdoor environment, and the results show that the proposed positioning system achieves satisfactory positioning accuracy in complex environments.
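The signal-to-image conversion can be sketched as a simple mapping from a received-signal-strength (RSS) fingerprint to an 8-bit grayscale grid. The dynamic range, image size, and padding rule below are hypothetical choices for illustration; the abstract does not fix them.

```python
def to_gray_image(rss_dbm, side, lo=-120.0, hi=-60.0):
    """Map an LTE RSS fingerprint to a side*side 8-bit grayscale image
    (hypothetical mapping; constants are illustrative, not the paper's).

    Each measured cell's RSS in dBm is clipped to [lo, hi] and scaled to
    0..255; positions with no measurement are padded with the darkest level."""
    pixels = []
    for v in rss_dbm:
        v = min(max(v, lo), hi)                  # clip to the dynamic range
        pixels.append(round((v - lo) / (hi - lo) * 255))
    pixels += [0] * (side * side - len(pixels))  # pad absent cells as black
    return [pixels[r * side:(r + 1) * side] for r in range(side)]
```

Once fingerprints are images, each reference location becomes a class label and an off-the-shelf image classifier such as a ResNet can be trained on them directly, which is what makes the computer-vision analogy useful.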
    Reliable Transmission Strategy for Underwater Wireless Sensor Networks
    HONG Chang-jian, GAO Yang, ZHANG Fan, ZHANG Lei
    Computer Science    2021, 48 (6A): 410-413.   DOI: 10.11896/jsjkx.201100048
    Abstract259)      PDF(pc) (2156KB)(517)       Save
    To overcome the limitation of the Layered-DBR algorithm, in which each node must know the residual energy of all nodes in the network and the distances to its neighbors, a reliable underwater sensor network data transmission strategy, RTS (Reliable Transmission Strategy), is proposed, which computes the energy and distance factors from the current node's depth, its residual energy, and the network layer spacing. This paper develops a network performance evaluation method to balance network lifetime against packet loss rate, determines the weights of the energy factor and the distance factor through simulation experiments, and finally gives the calculation method for the message forwarding probability. Simulation comparisons show that, compared with DBR, DMBR, and Layered-DBR, the RTS algorithm can effectively control network redundancy, reduce packet loss, and achieve a long network lifetime.
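A forwarding probability built from local quantities only, as RTS requires, might look like the sketch below: a distance factor from the depth difference to the previous hop and an energy factor from the node's own residual energy, combined with a weight. The formula and the weight value are hypothetical; the paper determines its exact expression and weighting by simulation.

```python
def forwarding_probability(depth, depth_prev, layer_gap,
                           e_res, e_init, w_energy=0.5):
    """Local forwarding probability for a depth-based routing node
    (illustrative formula, not RTS's exact one).

    Only quantities the node itself knows are used: its depth, the sending
    node's depth, the configured layer spacing, and its own energy state."""
    # distance factor: upward progress toward the surface, normalized by
    # the layer spacing and clipped to [0, 1]
    d = min(max((depth_prev - depth) / layer_gap, 0.0), 1.0)
    # energy factor: residual energy fraction, clipped to [0, 1]
    e = min(max(e_res / e_init, 0.0), 1.0)
    return w_energy * e + (1 - w_energy) * d
```

Nodes that sit a full layer above the sender with plenty of energy forward with high probability, while deep or depleted nodes mostly stay silent, which is how such schemes trade redundancy against lifetime.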