Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 49 Issue 2, 15 February 2022
  
Computer Vision: Theory and Application
Micro-expression Recognition Method Combining Feature Fusion and Attention Mechanism
LI Xing-ran, ZHANG Li-yan, YAO Shu-jing
Computer Science. 2022, 49 (2): 4-11.  doi:10.11896/jsjkx.210900028
Micro-expressions are uncontrollable facial muscle movements that occur when people try to hide or suppress their true emotions. Because they are short in duration, small in motion range, and difficult to conceal or suppress, the recognition accuracy of such emotional facial expressions is limited. To cope with these challenges, this paper proposes a novel micro-expression recognition method that combines feature fusion with an attention mechanism, considering both optical-flow features and face features and adding an attention mechanism to improve recognition performance. The method proceeds in three steps: 1) extract the optical flow and optical strain from onset to apex in each micro-expression segment, and feed the vertical optical flow, horizontal optical flow and optical strain into a shallow 3D CNN to extract optical-flow features; 2) with the deep convolutional neural network ResNet-10 as the backbone, add a convolutional attention module to extract face features; 3) combine the two feature vectors for classification. Experimental results show that the proposed method outperforms both traditional methods and existing deep learning methods in micro-expression recognition.
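As a loose PyTorch sketch of the two-stream fusion described above (the layer sizes, the simple CNN standing in for the ResNet-10 backbone, and the omitted attention module are simplifications, not the authors' implementation):

```python
import torch
import torch.nn as nn

class TwoStreamMicroExpressionNet(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        # Stream 1: shallow 3D CNN over (vertical flow, horizontal flow, optical strain).
        self.flow_stream = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),          # -> 16-d flow feature
        )
        # Stream 2: a small 2D CNN standing in for ResNet-10 with attention.
        self.face_stream = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> 32-d face feature
        )
        self.classifier = nn.Linear(16 + 32, num_classes)   # fuse by concatenation

    def forward(self, flow_clip, apex_frame):
        f1 = self.flow_stream(flow_clip)    # flow_clip: (B, 3, T, H, W)
        f2 = self.face_stream(apex_frame)   # apex_frame: (B, 3, H, W)
        return self.classifier(torch.cat([f1, f2], dim=1))

# Example: batch of 2, 8-frame flow clips and 112x112 apex frames.
net = TwoStreamMicroExpressionNet()
logits = net(torch.randn(2, 3, 8, 112, 112), torch.randn(2, 3, 112, 112))
```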
Survey on Generalization Methods of Face Forgery Detection
DONG Lin, HUANG Li-qing, YE Feng, HUANG Tian-qiang, WENG Bin, XU Chao
Computer Science. 2022, 49 (2): 12-30.  doi:10.11896/jsjkx.210900146
The rapid development of deep learning provides powerful tools for deepfake research, and forged videos and images are increasingly difficult for the human eye to distinguish from real ones. Such videos and images on the internet may have a huge negative impact on social life, for example through financial fraud, the spread of fake news and personal bullying. At present, deep-learning-based fake face detection has reached high accuracy on multiple benchmark databases such as FaceForensics++, but cross-database detection accuracy is much lower than accuracy on the source database; that is, many detection methods have difficulty generalizing to different or unknown types of forgeries, which motivates more scholars to focus on generalization methods. Research on the generalization of face forgery detection focuses on deep-learning-based methods. Firstly, the commonly used datasets for forgery detection, including real-world datasets and multi-task datasets, are discussed and compared. Secondly, generalization methods for video and image tampering detection are classified and summarized from three aspects: data, features and learning strategies. Data refers to data augmentation in deepfake detection; features include single-domain features, such as frequency-domain features, and multi-domain features; learning strategies consist of transfer learning, multi-task learning, meta-learning and incremental learning. The advantages and shortcomings of the three categories are analyzed. Finally, future development directions and challenges of face tampering detection generalization are discussed.
Generation Model of Gender-forged Face Image Based on Improved CycleGAN
SHI Da, LU Tian-liang, DU Yan-hui, ZHANG Jian-ling, BAO Yu-xuan
Computer Science. 2022, 49 (2): 31-39.  doi:10.11896/jsjkx.210600012
Deepfake techniques can combine human voices, faces and body movements into fake content, switch gender, change age, and so on. Gender-forged face images generated by image-to-image translation networks based on generative adversarial networks suffer from problems such as unwanted changes to irrelevant image regions and insufficient facial detail in the generated images. To solve these problems, a generation model of gender-forged face images based on an improved CycleGAN is proposed. Firstly, the generator is optimized with an attention mechanism and adaptive residual blocks to extract richer facial features. Then, to improve the ability of the discriminator, the loss function is modified following the idea of relative loss. Finally, a model training strategy based on age constraints is proposed to reduce the impact of age changes on the generated images. Experiments on the CelebA and IMDB-WIKI datasets show that, compared with the original CycleGAN and the UGATIT method, the proposed method generates more realistic gender-forged face images. The average content accuracy for fake male and fake female images is 82.65% and 78.83%, and the average FID score is 32.14 and 34.50, respectively.
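A hedged sketch of the "relative loss" idea, assuming a RaGAN-style relativistic discriminator (the paper's exact loss formulation may differ):

```python
import torch
import torch.nn.functional as F

def relativistic_d_loss(real_logits, fake_logits):
    # The discriminator learns that real samples are "more real" than fakes on average.
    real = F.binary_cross_entropy_with_logits(
        real_logits - fake_logits.mean(), torch.ones_like(real_logits))
    fake = F.binary_cross_entropy_with_logits(
        fake_logits - real_logits.mean(), torch.zeros_like(fake_logits))
    return real + fake

def relativistic_g_loss(real_logits, fake_logits):
    # The generator tries to invert that relation.
    real = F.binary_cross_entropy_with_logits(
        real_logits - fake_logits.mean(), torch.zeros_like(real_logits))
    fake = F.binary_cross_entropy_with_logits(
        fake_logits - real_logits.mean(), torch.ones_like(fake_logits))
    return real + fake

d_loss = relativistic_d_loss(torch.randn(4, 1), torch.randn(4, 1))
```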
Review of 3D Face Reconstruction Based on Single Image
HE Jia-yu, HUANG Hong-bo, ZHANG Hong-yan, SUN Mu-ye, LIU Ya-hui, ZHOU Zhe-hai
Computer Science. 2022, 49 (2): 40-50.  doi:10.11896/jsjkx.210500215
In the field of computer vision, 3D face reconstruction is a valuable research direction. High-quality 3D face reconstruction finds applications in face recognition, anti-spoofing, animation and medical cosmetology. In the last two decades, although great progress has been made in 3D face reconstruction from a single image, reconstructions produced by traditional algorithms still struggle with facial expression, occlusion and ambient light, and suffer from poor accuracy and robustness. With the rapid adoption of deep learning in 3D face reconstruction, various methods that outperform traditional reconstruction algorithms have emerged. Firstly, this paper focuses on deep-learning-based reconstruction algorithms, dividing them into four categories according to network architecture and describing the most popular methods in detail. Then, commonly used 3D face datasets are introduced, and the performance of representative methods is evaluated. Finally, conclusions and prospects for single-image 3D face reconstruction are given.
Research Progress of Face Editing Based on Deep Generative Model
TANG Yu-xiao, WANG Bin-jun
Computer Science. 2022, 49 (2): 51-61.  doi:10.11896/jsjkx.210400108
Face editing is widely used in public security pursuits, face beautification and other fields. Traditional statistical methods and prototype-based methods have been the main means of face editing, but these technologies face problems such as difficult operation and high computational cost. In recent years, with the development of deep learning, and especially the emergence of generative networks, a brand-new approach to face editing has become available. Face editing technology using deep generative models has the advantages of fast speed and strong model generalization ability. To summarize and review recent theories and research on using deep generative models for face editing, we first introduce the network frameworks and principles adopted by face editing technology based on deep generative models. Then, the methods used are described in detail and summarized into three categories: image translation, introduction of conditional information within the network, and manipulation of the latent space. Finally, we summarize the challenges faced by this technology, which include identity consistency, attribute decoupling and attribute editing accuracy, and point out the issues that urgently need to be resolved in the future.
Human Skeleton Action Recognition Algorithm Based on Dynamic Topological Graph
XIE Yu, YANG Rui-ling, LIU Gong-xu, LI De-yu, WANG Wen-jian
Computer Science. 2022, 49 (2): 62-68.  doi:10.11896/jsjkx.210900059
Traditional human skeleton action recognition algorithms manually construct topological graphs to model the action sequences contained in multiple video frames and learn from each video frame to reflect data changes, which may lead to high computational cost, low network generalization performance and catastrophic forgetting. To solve these problems, a human skeleton action recognition algorithm based on dynamic topological graphs is proposed, in which the human skeleton topological graph is dynamically constructed through continuous learning. Specifically, human skeleton sequence data with multi-relationship characteristics are recoded into relation triplets, and feature embeddings are learned in a decoupled manner via a long short-term memory network. When handling new skeleton relation triplets, the human skeleton topological graph is dynamically constructed by a partial update mechanism and then fed to a skeleton action recognition algorithm based on spatio-temporal graph convolutional networks for action recognition. Experimental results demonstrate that the proposed algorithm achieves 40%, 85% and 90% recognition accuracy on three benchmark datasets, namely Kinetics-Skeleton, NTU-RGB+D (X-Sub) and NTU-RGB+D (X-View), respectively, improving the accuracy of human skeleton action recognition.
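A toy sketch of recoding skeleton relations into (head joint, relation type, tail joint) triplets and embedding each triplet with an LSTM; the paper's decoupled embedding and partial-update mechanism are more elaborate than this:

```python
import torch
import torch.nn as nn

triplets = torch.tensor([[0, 1, 2], [2, 0, 3], [3, 2, 4]])  # (head, relation, tail) ids
ent, rel = nn.Embedding(25, 16), nn.Embedding(3, 16)        # 25 joints, 3 relation types
lstm = nn.LSTM(16, 32, batch_first=True)

# Each triplet becomes a 3-step sequence: head embedding, relation embedding, tail embedding.
seq = torch.stack([ent(triplets[:, 0]), rel(triplets[:, 1]), ent(triplets[:, 2])], dim=1)
_, (h, _) = lstm(seq)            # one embedding per triplet, later fed to the ST-GCN
print(h.squeeze(0).shape)        # torch.Size([3, 32])
```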
Predicting Tumor-related Indicators Based on Deep Learning and H&E Stained Pathological Images: A Survey
YAN Rui, LIANG Zhi-yong, LI Jin-tao, REN Fei
Computer Science. 2022, 49 (2): 69-82.  doi:10.11896/jsjkx.210900140
Accurate tumor diagnosis is very important for customizing treatment plans and predicting prognosis. Pathological diagnosis is considered the "gold standard" of tumor diagnosis, but pathology still faces great challenges: the shortage of pathologists, especially in underdeveloped areas and small hospitals, has led to their long-term overload. At the same time, pathological diagnosis relies heavily on the professional knowledge and diagnostic experience of pathologists, and this subjectivity has led to a surge in diagnostic inconsistencies. Breakthroughs in whole slide image (WSI) technology and deep learning methods provide new opportunities for computer-aided diagnosis and prognosis prediction. Histopathological sections stained with hematoxylin-eosin (H&E) show cell morphology and tissue structure very well, and are simple to prepare, inexpensive and widely used. What can be predicted from pathological images alone? After deep learning was applied to pathological images, this question received a new answer. In this paper, we first summarize the overall research framework for predicting tumor-related indicators based on deep learning and pathological images. In order of development, the framework can be summarized into three progressive stages: WSI prediction based on manually selected single patches, WSI prediction based on majority voting, and generally applicable WSI prediction. Secondly, four supervised or weakly supervised learning methods commonly used in WSI prediction are briefly introduced: convolutional neural networks (CNN), recurrent neural networks (RNN), graph neural networks (GNN) and multiple instance learning (MIL). Then, we review the deep learning methods used in this field, the tumor-related indicators that can be predicted from pathological images, and the latest research progress. We mainly review the literature from two aspects: predicting tumor-related indicators that pathologists can read and recognize (tumor classification, tumor grading, tumor region recognition), and predicting tumor-related indicators that pathologists cannot read and recognize (genetic variation prediction, molecular subtype prediction, treatment effect evaluation, survival time prediction). Finally, the general problems in this field are summarized, and possible future development directions are suggested.
Multi-target Category Adversarial Example Generating Algorithm Based on GAN
LI Jian, GUO Yan-ming, YU Tian-yuan, WU Yu-lun, WANG Xiang-han, LAO Song-yang
Computer Science. 2022, 49 (2): 83-91.  doi:10.11896/jsjkx.210800130
Although deep neural networks perform well in many areas, research shows that they are vulnerable to adversarial examples. There are many algorithms for attacking neural networks, but most are slow, so the rapid generation of adversarial examples has gradually become a focus of research in this area. AdvGAN is an algorithm that uses one network to attack another and can generate adversarial examples far faster than other methods. However, when carrying out a targeted attack, AdvGAN needs to train a separate network for each target, so the attack efficiency is low. In this article, we propose a multi-target attack network (MTA) based on the generative adversarial network, which can carry out multi-target attacks and quickly generate adversarial examples after being trained only once. Experiments show that MTA achieves a higher targeted-attack success rate on the CIFAR10 and MNIST datasets than AdvGAN. We also conduct adversarial-example transfer experiments and attack experiments under defense. The results show that the transferability of the adversarial examples generated by MTA is stronger than that of other multi-target attack algorithms, and MTA also achieves a higher attack success rate under defense.
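A hedged sketch of the core idea: a single generator conditioned on the desired target class, so one training run covers all targets (the architecture, sizes and perturbation bound are illustrative, not MTA's):

```python
import torch
import torch.nn as nn

class MultiTargetGenerator(nn.Module):
    def __init__(self, num_classes=10, eps=0.3):
        super().__init__()
        self.eps = eps
        self.embed = nn.Embedding(num_classes, 8)  # target-class conditioning
        self.net = nn.Sequential(
            nn.Conv2d(1 + 8, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, target):
        b, _, h, w = x.shape
        cond = self.embed(target).view(b, 8, 1, 1).expand(b, 8, h, w)
        perturb = self.eps * self.net(torch.cat([x, cond], dim=1))
        return (x + perturb).clamp(0, 1)  # bounded adversarial example

g = MultiTargetGenerator()
adv = g(torch.rand(4, 1, 28, 28), torch.tensor([0, 3, 5, 9]))  # MNIST-sized input
```

Training would pit this generator against the victim classifier with a targeted misclassification loss, exactly once for all target classes.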
Survey of Research Progress on Adversarial Examples in Images
CHEN Meng-xuan, ZHANG Zhen-yong, JI Shou-ling, WEI Gui-yi, SHAO Jun
Computer Science. 2022, 49 (2): 92-106.  doi:10.11896/jsjkx.210800087
With the development of deep learning theory, deep neural networks have made a series of breakthroughs and have been widely applied in various fields. Among them, applications in the image field, such as image classification, are the most popular. However, research suggests that deep neural networks have many security risks, especially the threat from adversarial examples, which seriously hinders the application of image classification. To address this challenge, many recent research efforts have been dedicated to adversarial examples in images, and a large number of results have come out. This paper first introduces the concepts and terms related to adversarial examples in images, then reviews adversarial attack methods and defense methods based on existing research, classifying them according to the attacker's capability and the underlying idea of the defense, and analyzes the characteristics of and connections between the different categories. Secondly, it briefly describes adversarial attacks in the physical world. In the end, it discusses the challenges of adversarial examples in images and potential future research directions.
Text-to-Image Generation Technology Based on Transformer Cross Attention
TAN Xin-yue, HE Xiao-hai, WANG Zheng-yong, LUO Xiao-dong, QING Lin-bo
Computer Science. 2022, 49 (2): 107-115.  doi:10.11896/jsjkx.210600085
In recent years, research on text-to-image generation methods based on generative adversarial networks (GAN) has continued to grow in popularity and has made some progress. The key to text-to-image generation is to build a bridge between textual and visual information and prompt the model to generate realistic images consistent with the corresponding text descriptions. The current mainstream approach encodes the input text description with a pre-trained text encoder, but such methods do not consider semantic alignment with the corresponding image in the text encoder; they encode the input text independently, ignoring the semantic gap between the language space and the image space. To address this problem, this paper proposes a generative adversarial network based on a cross-attention encoder (CAE-GAN). The network uses a cross-attention encoder to translate and align textual information with visual information and captures the cross-modal mapping between text and image information, so as to improve the fidelity of the generated images and their match with the input text description. Experimental results show that, compared with the DM-GAN model, the inception score (IS) of the CAE-GAN model increases by 2.53% and 1.54% on the CUB and COCO datasets, respectively, and the Fréchet inception distance (FID) decreases by 15.10% and 5.54%, respectively, indicating that the images generated by the CAE-GAN model have finer detail and higher quality.
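A minimal sketch of cross-attention between word tokens and image regions, the alignment mechanism such an encoder relies on (dimensions are illustrative assumptions):

```python
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
words = torch.randn(2, 12, 64)    # 12 word tokens per caption
pixels = torch.randn(2, 49, 64)   # 7x7 image regions flattened to 49 tokens

# Each image region queries the caption: the output is text-aligned visual context.
aligned, weights = attn(query=pixels, key=words, value=words)
print(aligned.shape, weights.shape)  # torch.Size([2, 49, 64]) torch.Size([2, 49, 12])
```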
Study on Super-resolution Reconstruction Algorithm of Remote Sensing Images in Natural Scene
CHEN Gui-qiang, HE Jun
Computer Science. 2022, 49 (2): 116-122.  doi:10.11896/jsjkx.210700095
Due to the lack of paired datasets in remote sensing image super-resolution, current methods obtain low-resolution images by bicubic interpolation, a degradation model that is too idealized and leads to unsatisfactory reconstruction on real low-resolution remote sensing images. This paper proposes a super-resolution reconstruction algorithm for real remote sensing images. For datasets that lack paired images, it builds a more reasonable degradation model, in which priors on the degradations of the imaging process (such as blur, noise and downsampling) are applied in randomly shuffled order to generate realistic low-resolution images for training, simulating how real low-resolution remote sensing images arise. The paper also improves a reconstruction algorithm based on generative adversarial networks (GAN) by introducing an attention mechanism to enhance texture details. Experiments on the UC Merced dataset show gains of 1.407 1 dB/0.067 2 and 0.821 1 dB/0.023 5 in PSNR/SSIM over ESRGAN and RCAN, respectively, and experiments on the Alsat2B dataset show a gain of 1.758 4 dB/0.048 5 over the baseline, demonstrating the effectiveness of the degradation model and the reconstruction architecture.
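A hedged sketch of such a randomized degradation pipeline: blur, noise and downsampling applied in shuffled order to synthesize realistic low-resolution training inputs (the operations and parameter ranges are illustrative assumptions, not the paper's tuned model):

```python
import random
import numpy as np
import cv2

def degrade(hr, scale=4):
    ops = [
        lambda im: cv2.GaussianBlur(im, (7, 7), sigmaX=random.uniform(0.2, 3.0)),
        lambda im: np.clip(im + np.random.normal(0, random.uniform(1, 10), im.shape), 0, 255),
        lambda im: cv2.resize(im, (im.shape[1] // scale, im.shape[0] // scale),
                              interpolation=random.choice([cv2.INTER_LINEAR, cv2.INTER_AREA])),
    ]
    random.shuffle(ops)          # random order mimics unknown real-world degradations
    im = hr.astype(np.float32)
    for op in ops:
        im = op(im)
    return im.astype(np.uint8)

lr = degrade(np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8))
```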
Survey on Video Super-resolution Based on Deep Learning
LENG Jia-xu, WANG Jia, MO Meng-jing-cheng, CHEN Tai-yue, GAO Xin-bo
Computer Science. 2022, 49 (2): 123-133.  doi:10.11896/jsjkx.211000007
Video super-resolution (VSR) aims to reconstruct a high-resolution video from its corresponding low-resolution version. Recently, VSR has made great progress driven by deep learning. To further promote VSR, this survey provides a comprehensive summary of the field and a taxonomy, analysis and comparison of existing algorithms. Firstly, since the framework matters greatly for VSR, we group VSR approaches into two categories, iterative-network-based and recurrent-network-based approaches, and further compare and analyze the advantages and disadvantages of the different networks. Secondly, we comprehensively introduce VSR datasets, summarize existing algorithms and compare them on several benchmark datasets. Finally, the key challenges and applications of VSR methods are analyzed and prospected.
Ray Tracing Checkerboard Rendering in Molecular Visualization
LI Jia-zhen, JI Qing-ge, ZHU Yong-lin
Computer Science. 2022, 49 (2): 134-141.  doi:10.11896/jsjkx.210900126
Using advanced ray tracing technology to render images in molecular visualization can greatly enhance researchers' observation and perception of molecular structure. However, existing ray tracing methods suffer from insufficient real-time performance and poor rendering quality. In this paper, a ray tracing checkerboard rendering method is proposed, which optimizes ray tracing with checkerboard rendering technology. The method proceeds in four phases: reprojection, rendering, reconstruction and hole filling. Within these phases, improvements to checkerboard rendering are proposed, including forward reprojection, a molecular shading bounding box, dynamic image reconstruction and an eight-neighbor interpolation hole-filling strategy. Experiments are carried out on six molecules with different atom counts. Comparisons with current advanced methods on supercomputers show that the real-time frame rate of our method is significantly higher than that of the CPU-based Tachyon-OSPRay method, reaching 1.58 to 1.86 times its frame rate. Moreover, the proposed method outperforms the GPU-accelerated Tachyon-OptiX method in frame rate when the number of atoms is relatively small.
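A minimal sketch of eight-neighbor interpolation hole filling: pixels missed by the checkerboard pattern are filled with the mean of their valid neighbors (illustrative only, not the paper's exact implementation):

```python
import numpy as np

def fill_holes(img, valid):
    """img: (H, W) float image; valid: (H, W) bool mask of rendered pixels."""
    out = img.copy()
    h, w = img.shape
    for y, x in zip(*np.nonzero(~valid)):
        ys = slice(max(y - 1, 0), min(y + 2, h))
        xs = slice(max(x - 1, 0), min(x + 2, w))
        neighbors, mask = img[ys, xs], valid[ys, xs]
        if mask.any():
            out[y, x] = neighbors[mask].mean()   # average over the valid 8-neighbors
    return out

img = np.random.rand(8, 8)
valid = (np.add.outer(np.arange(8), np.arange(8)) % 2 == 0)  # checkerboard pattern
filled = fill_holes(img, valid)
```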
Video Anomaly Detection Based on Implicit View Transformation
LENG Jia-xu, TAN Ming-pi, HU Bo, GAO Xin-bo
Computer Science. 2022, 49 (2): 142-148.  doi:10.11896/jsjkx.210900266
Existing deep-learning-based video anomaly detection methods all detect anomalies in video clips under a single view, ignoring the importance of view information in video anomaly detection. Under a single view, when anomalies are occluded or not obvious, the performance of existing algorithms drops. To avoid this problem, we first introduce the concept of view transformation into video anomaly detection, improving the robustness of the model by judging abnormality from multiple views. However, because datasets lack multi-view supervision, explicit view transformation is difficult to achieve. To realize the idea of view transformation nonetheless, we propose a video anomaly detection method based on implicit view transformation, which uses the optical flow between frames to warp the implicit view information of the previous frame to the target frame, thereby realizing an implicit view transformation from the target frame to the previous frame. The method then performs a secondary anomaly detection on the view-transformed target frame. Experimental results show that the proposed method responds more sensitively to abnormal data and fits normal data more robustly. The AUC values on the UCSD Ped2 and CUHK Avenue datasets reach 97.0% and 88.9%, respectively.
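A minimal sketch of the warping step, using flow-based backward warping with grid_sample as an illustrative stand-in for the paper's implicit view transformation:

```python
import torch
import torch.nn.functional as F

def warp(prev, flow):
    """prev: (B, C, H, W) features; flow: (B, 2, H, W) pixel offsets to the target."""
    b, _, h, w = prev.shape
    gy, gx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((gx, gy), dim=0).float().unsqueeze(0) + flow
    # Normalize sampling coordinates to [-1, 1] as grid_sample expects.
    grid[:, 0] = 2.0 * grid[:, 0] / (w - 1) - 1.0
    grid[:, 1] = 2.0 * grid[:, 1] / (h - 1) - 1.0
    return F.grid_sample(prev, grid.permute(0, 2, 3, 1), align_corners=True)

warped = warp(torch.randn(1, 16, 32, 32), torch.zeros(1, 2, 32, 32))  # zero flow = identity
```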
Graph Convolutional Skeleton-based Action Recognition Method for Intelligent Behavior Analysis
MIAO Qi-guang, XIN Wen-tian, LIU Ru-yi, XIE Kun, WANG Quan, YANG Zong-kai
Computer Science. 2022, 49 (2): 156-161.  doi:10.11896/jsjkx.220100061
Smart education is a new education model using modern information technology, and intelligent behavior analysis is its core component. In complex classroom scenarios, traditional action recognition algorithms fall seriously short in accuracy and timeliness. A graph convolutional method based on separation and attention mechanisms (DSA-GCN) is proposed to solve these problems. First, to address the inherent weakness of traditional algorithms in aggregating information in the channel domain, multidimensional channel mapping is performed by point-wise convolution, combining the ability of spatio-temporal graph convolution (ST-GC) to preserve the original spatio-temporal information with the ability of depthwise separable convolution to separate spatial and channel feature learning, enhancing the model's feature learning and abstract expressivity. Second, a multi-dimensional fused attention mechanism is used: self-attention and channel attention enhance the model's dynamic sensitivity in the spatial convolution domain, while fused temporal and channel attention enhances key-frame discrimination in the temporal convolution domain. Experimental results show that DSA-GCN achieves better accuracy and efficiency on the NTU RGB+D and N-UCLA datasets and confirm its improved ability to aggregate channel information.
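To make the separable-convolution building block concrete, here is a minimal PyTorch sketch of a depthwise (per-channel) convolution followed by a point-wise 1x1 convolution that mixes channel information; the shapes are illustrative, not DSA-GCN's actual layers:

```python
import torch
import torch.nn as nn

depthwise = nn.Conv2d(32, 32, kernel_size=3, padding=1, groups=32)  # spatial, per channel
pointwise = nn.Conv2d(32, 64, kernel_size=1)                        # channel mixing

x = torch.randn(2, 32, 25, 30)       # (batch, channels, joints, frames)-like tensor
y = pointwise(depthwise(x))
print(y.shape)                        # torch.Size([2, 64, 25, 30])
```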
Database & Big Data & Data Science
Review of K-means Algorithm Optimization Based on Differential Privacy
KONG Yu-ting, TAN Fu-xiang, ZHAO Xin, ZHANG Zheng-hang, BAI Lu, QIAN Yu-rong
Computer Science. 2022, 49 (2): 162-173.  doi:10.11896/jsjkx.201200008
The differential privacy K-means algorithm (DP K-means), a privacy-preserving data mining (PPDM) model based on differential privacy, has attracted much attention from researchers because of its simplicity, efficiency and ability to guarantee data privacy. Firstly, the principle and privacy attack model of the DP K-means algorithm are described, and the algorithm's shortcomings are analyzed. Then, the advantages and disadvantages of research on improving the DP K-means algorithm are discussed and analyzed from three perspectives, data preprocessing, privacy budget allocation and cluster partition, and the relevant datasets and common evaluation indexes are summarized. Finally, the challenging problems still to be solved in improving the DP K-means algorithm are pointed out, and future development trends of the algorithm are discussed.
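As a hedged illustration of how differential privacy typically enters a K-means iteration (a common template, not any specific algorithm surveyed above): Laplace noise is added to the per-cluster counts and sums before centroids are recomputed. The sensitivity bounds below assume data normalized to [0, 1] and split the budget evenly between counts and sums.

```python
import numpy as np

def dp_kmeans_step(X, centroids, epsilon):
    labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    updated = np.empty_like(centroids)
    for k in range(len(centroids)):
        pts = X[labels == k]
        count = len(pts) + np.random.laplace(0, 2 / epsilon)               # sensitivity 1
        total = pts.sum(0) + np.random.laplace(0, 2 / epsilon, X.shape[1])  # data in [0,1]
        updated[k] = total / max(count, 1.0)   # guard against noisy non-positive counts
    return updated

X = np.random.rand(200, 2)
c = dp_kmeans_step(X, X[:3].copy(), epsilon=1.0)
```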
Method of Domain Knowledge Graph Construction Based on Property Graph Model
LIANG Jing-ru, E Hai-hong, SONG Mei-na
Computer Science. 2022, 49 (2): 174-181.  doi:10.11896/jsjkx.210500076
With the arrival of the big data era, the number of relationships that need to be processed in various industries has increased exponentially, and there is an urgent need for a data model that can express massive, complex relationships: the domain knowledge graph. Although domain knowledge graphs have shown great potential, mature construction technologies and platforms are still lacking, and rapidly constructing a domain knowledge graph remains an important challenge. After a systematic study of domain knowledge graphs, a method is proposed to construct them based on the property graph model. Concretely, for structured and semi-structured data stored in a variety of databases, the method builds a high-quality graph model through a graph database data communication protocol, multiple configuration methods for entity and relation schemas, and so on. Then, the data from the original database are extracted, transformed and loaded into the property graph database HugeGraph, completing the construction of the domain knowledge graph. Finally, experiments on multiple datasets and tests with Gremlin statements show that the proposed method is complete and reliable.
Competitive-Cooperative Coevolution for Large Scale Optimization with Computation Resource Allocation Pool
PAN Yan-na, FENG Xiang, YU Hui-qun
Computer Science. 2022, 49 (2): 182-190.  doi:10.11896/jsjkx.201200012
Through the strategy of divide and conquer, cooperative co-evolution (CC) has shown great promise in evolutionary algorithms for solving large-scale optimization problems. In CC, sub-problems contribute unevenly to the improvement of the best overall solution depending on their evolution states, so evenly allocating computing resources leads to waste. In response to this problem, a novel competitive-cooperative coevolution framework is proposed with an adaptive resource allocation pool and competitive swarm optimization. Because the sub-problems are imbalanced, the dynamic contribution of each sub-problem is used as the criterion for allocating computing resources. To adapt to the evolution states of the sub-problems, a pool model is exploited for adaptive allocation instead of a fixed resource allocation unit. In particular, the framework saves computing resources by avoiding repeated evaluation of individuals in successive iterations of the same sub-problem. Competitive swarm optimization (CSO) is then combined with the cooperative coevolution framework to improve efficiency. Compared with five other algorithms, experimental results on the CEC 2010 and CEC 2013 benchmark suites for large-scale optimization demonstrate that the computation resource allocation pool is significant and that the framework integrated with CSO is highly competitive in solving large-scale optimization problems.
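A hedged sketch of the allocation criterion only: sub-problems draw evaluations from a shared pool in proportion to their recent contributions to the best overall solution (bookkeeping only; the numbers are illustrative, not the framework's exact rule):

```python
def allocate(contributions, pool_size):
    total = sum(contributions)
    if total == 0:                       # no signal yet: fall back to an even split
        return [pool_size // len(contributions)] * len(contributions)
    # Proportional draw from the pool, with at least one evaluation per sub-problem.
    return [max(1, round(pool_size * c / total)) for c in contributions]

print(allocate([0.5, 0.1, 0.4], pool_size=100))  # [50, 10, 40]
```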
Robust Joint Sparse Uncorrelated Regression
LI Zong-ran, CHEN Xiu-hong, LU Yun, SHAO Zheng-yi
Computer Science. 2022, 49 (2): 191-197.  doi:10.11896/jsjkx.210300034
Common unsupervised feature selection methods consider only the selection of discriminative features, ignoring feature redundancy and failing to consider the small-class problem, both of which hurt classification performance. Against this background, a robust uncorrelated regression algorithm is proposed. First, uncorrelated regression is studied: uncorrelated orthogonal constraints are used to find uncorrelated but discriminative features. The uncorrelated constraints keep the data structure on the Stiefel manifold, giving the model a closed-form solution and avoiding the trivial solutions that the traditional ridge regression model may produce. Second, the loss function and the regularization term use the L2,1 norm to ensure the robustness of the model and obtain a sparse projection matrix. At the same time, the small-class problem is taken into account, so that the number of projection matrices is not limited by the number of classes, yielding enough projection matrices to improve the classification performance of the model. Theoretical analysis and experimental results on multiple datasets show that the proposed method outperforms other feature selection methods.
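For readers unfamiliar with the L2,1 norm used for both the loss and the regularizer, the following minimal sketch computes it; the zero cost of all-zero rows is what makes the norm induce row-sparse projection matrices:

```python
import numpy as np

def l21_norm(M):
    # Sum of the l2 norms of the rows.
    return np.sqrt((M ** 2).sum(axis=1)).sum()

W = np.array([[3.0, 4.0], [0.0, 0.0], [1.0, 0.0]])
print(l21_norm(W))  # 5 + 0 + 1 = 6.0 -- zero rows cost nothing, encouraging row sparsity
```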
Mining Causality via Information Bottleneck
QIAO Jie, CAI Rui-chu, HAO Zhi-feng
Computer Science. 2022, 49 (2): 198-203.  doi:10.11896/jsjkx.210100053
Causal discovery from observational data is a fundamental problem in many disciplines. However, existing methods, such as constraint-based methods and causal-function-based methods, make strong assumptions about the causal mechanism of the data, are only applicable to low-dimensional data, and cannot be applied in scenarios with hidden variables. To this end, we propose a causality discovery method using information bottlenecks, called the causal information bottleneck. This method divides the causal mechanism into two stages: compression and extraction. In the compression stage, we assume there is a compressed hidden variable in the middle, while in the extraction stage we extract as much correlated information from the effect variable as possible. Based on the causal information bottleneck, by deriving its variational upper bound, a causality discovery method based on the variational autoencoder is designed. Experimental results show that the information-bottleneck-based method improves accuracy by 10% on synthetic data and 4% on real-world data.
Maximum Likelihood-based Method for Locating Source of Negative Influence Spreading Under Independent Cascade Model
SHAO Yu, CHEN Ling, LIU Wei
Computer Science. 2022, 49 (2): 204-215.  doi:10.11896/jsjkx.201100190
Nowadays, the spread of negative influences such as internet rumors, infectious diseases and computer viruses creates huge hidden dangers for social stability, human health and information security, so identifying the source of their propagation is of great significance for controlling the harm they cause. However, most existing methods focus on locating a single propagation source, while in real-world networks negative influence often comes from multiple sources; these methods also require time-consuming simulation of the propagation process, and because they ignore differences in topological features between nodes, their source-locating accuracy is low and they require a large amount of computation time. To solve these problems, a maximum-likelihood-based method is proposed to locate multiple sources using the information provided by a small number of observation points. Firstly, the concept of a propagation graph is defined, and a method for constructing it is proposed: nodes in the network are divided into several levels according to their degrees and edge weights, edges with low propagation probability are removed, and the propagation graph is formed by incorporating the observation nodes. Then, the activation probability of each node in each layer of the propagation graph is calculated, and the k nodes with the maximum likelihood relative to the observation points are selected to form the source node set. Simulation results show that the proposed method can accurately identify multiple propagation sources in a network, and its source-locating accuracy is higher than that of other similar algorithms. The experiments also verify that the selection of observation points and the network structure affect the source-locating results to varying degrees.
Link Prediction Method for Directed Networks Based on Path Connection Strength
ZHAO Xue-lei, JI Xin-sheng, LIU Shu-xin, LI Ying-le, LI Hai-tao
Computer Science. 2022, 49 (2): 216-222.  doi:10.11896/jsjkx.210100107
Link prediction aims to predict unknown links using available network topology information. Path-based prediction methods perform well in undirected networks. However, in directed networks, paths of the same length can imply different node connection strengths because of the different types of links along the path, and traditional methods have difficulty distinguishing this path heterogeneity. Given this, the differences in strength among the three types of directed links are first quantified in a link weight matrix; then the connection strengths of the different heterogeneous paths between nodes are calculated, distinguishing the effects of different paths of the same length. Finally, a directed network link prediction method based on path connection strength is proposed by integrating the contributions of multi-order paths of different lengths. Validation on nine real networks shows that accounting for differences in path connection strength effectively improves prediction performance under the AUC and Precision metrics.
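A hedged sketch of the general path-strength idea (the link-type weights below are arbitrary placeholders, not the paper's quantification): build a weighted matrix from the directed adjacency, then accumulate 2- and 3-hop path strengths with matrix powers.

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)

W = 1.0 * A + 0.4 * A.T                 # e.g. forward links weighted more than reverse ones
score = W @ W + 0.5 * (W @ W @ W)       # 2-hop paths plus damped 3-hop paths
print(np.round(score, 2))               # score[i, j]: predicted strength of link i -> j
```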
Artificial Intelligence
Negative-emotion Opinion Target Extraction Based on Attention and BiLSTM-CRF
DING Feng, SUN Xiao
Computer Science. 2022, 49 (2): 223-230.  doi:10.11896/jsjkx.210100046
Aspect-based sentiment analysis (ABSA) is a popular topic in natural language processing, and opinion target extraction and sentiment polarity classification of opinion targets are among its basic subtasks. However, few studies directly extract opinion targets of a specific emotional polarity, especially negative-emotion opinion targets, which have greater potential value. A new ABSA subtask, negative-emotion opinion target extraction (NE-OTE), is proposed, along with a BiLSTM-CRF model based on an attention mechanism and mixed character and word embeddings (AB-CE). On the basis of a bi-directional long short-term memory (BiLSTM) network learning textual semantic information and capturing long-distance bi-directional semantic dependencies, the attention mechanism lets the model better attend to key parts of the input sequence and capture implicit features related to the opinion target and its emotional tendency. Finally, a CRF layer predicts the optimal sentence-level tag sequence, extracting the negative-emotion opinion targets. This paper builds three NE-OTE task datasets based on mainstream ABSA baseline datasets and conducts extensive experiments on them. Experimental results show that the proposed model can effectively identify negative-emotion opinion targets and significantly outperforms other baseline models, verifying the effectiveness of the proposed method.
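A minimal sketch of the tagging backbone described above (embedding -> BiLSTM -> attention -> per-token emissions); the class name and sizes are illustrative, and the CRF decoding layer is omitted for brevity (a real implementation would place it on top of the emission scores):

```python
import torch
import torch.nn as nn

class ABCEBackbone(nn.Module):  # illustrative name, not the authors' code
    def __init__(self, vocab=5000, dim=64, tags=5):  # e.g. BIO tags for opinion targets
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * dim, num_heads=4, batch_first=True)
        self.emit = nn.Linear(2 * dim, tags)

    def forward(self, tokens):
        h, _ = self.lstm(self.emb(tokens))
        ctx, _ = self.attn(h, h, h)       # attend to target-relevant positions
        return self.emit(ctx)             # (B, T, tags) emission scores for the CRF

logits = ABCEBackbone()(torch.randint(0, 5000, (2, 20)))
```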
Dynamic Task Scheduling Method for Space Crowdsourcing
SHEN Biao, SHEN Li-wei, LI Yi
Computer Science. 2022, 49 (2): 231-240.  doi:10.11896/jsjkx.210400249
Space crowdsourcing addresses offline crowdsourcing tasks with time and space constraints and has developed rapidly in recent years. Task scheduling is an important research direction in space crowdsourcing; its difficulty lies in the dynamic uncertainty of tasks and workers during scheduling. To schedule tasks efficiently, a dynamic task scheduling method for space crowdsourcing that simultaneously considers the uncertainty of tasks and workers is proposed. The method improves on prior work in three respects. First, it expands the factors considered in scheduling: in addition to the uncertain temporal and spatial attributes of newly added tasks, it considers the uncertain transportation modes and temporal and spatial attributes of newly added workers. Second, it improves the scheduling strategy: using an aggregate scheduling strategy, dynamically added tasks are first aggregated, and then task allocation and path optimization are performed; compared with traditional non-aggregated scheduling, computation time is significantly reduced. Third, it improves the scheduling algorithm: building on the traditional genetic algorithm, task allocation and path optimization are performed iteratively, which improves the quality of the optimal results compared with scheduling algorithms that first allocate tasks and then optimize paths. In addition, a simulation platform for dynamic scheduling of space crowdsourcing task paths based on real map navigation is designed and implemented, and the method is verified on this platform.
Comparative Analysis of Robustness of Resting Human Brain Functional Hypernetwork Model
ZHANG Cheng-rui, CHEN Jun-jie, GUO Hao
Computer Science. 2022, 49 (2): 241-247.  doi:10.11896/jsjkx.201200067
As a kind of dynamic behavior, robustness is a research hotspot in the field of hypernetworks and has important practical significance for the construction of robust networks. Although research on hypernetworks is growing, dynamic studies remain relatively scarce, especially in neuroimaging. Most existing research on brain functional hypernetworks concerns the static topological properties of the networks, and the robustness of their dynamic characteristics has not been studied. To address this, lasso, group lasso and sparse group lasso methods are used to solve a sparse linear regression model and construct hypernetworks. Then, under two deliberate-attack models, node-degree attack and node-betweenness attack, the robustness of the brain functional hypernetworks in response to node failure is explored using global efficiency and the relative size of the largest connected subgraph. Finally, a comparative analysis is made to find the more stable network. The experimental results show that the hypernetworks constructed by group lasso and sparse group lasso are more robust under intentional attack, with the hypernetwork constructed by the group lasso method being the most stable.
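A minimal sketch of the attack experiment itself, assuming a toy random graph in place of a brain functional hypernetwork: remove the highest-degree nodes and track global efficiency and the relative size of the largest connected component.

```python
import networkx as nx

G = nx.erdos_renyi_graph(60, 0.1, seed=1)
n0 = G.number_of_nodes()
for frac in range(0, 5):
    H = G.copy()
    k = int(n0 * frac / 10)              # remove 0%, 10%, ..., 40% of nodes
    victims = sorted(H.degree, key=lambda d: d[1], reverse=True)[:k]
    H.remove_nodes_from(n for n, _ in victims)
    eff = nx.global_efficiency(H)
    giant = max((len(c) for c in nx.connected_components(H)), default=0) / n0
    print(f"removed {k:2d} hubs: efficiency={eff:.3f}, giant component={giant:.2f}")
```

A betweenness-based attack would replace the degree ranking with `nx.betweenness_centrality(H)`.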
Scene Text Detection Algorithm Based on Enhanced Feature Pyramid Network
SHAO Hai-lin, JI Yi, LIU Chun-ping, XU Yun-long
Computer Science. 2022, 49 (2): 248-255.  doi:10.11896/jsjkx.201100072
Scene text detection helps machines understand image content and is widely used in fields such as intelligent transportation, scene understanding and intelligent navigation. Existing scene text detection algorithms do not make full use of high-level semantic information and spatial information, which limits the model's ability to classify complex background pixels and to detect and locate text instances of different scales. To solve these problems, a scene text detection algorithm based on an enhanced feature pyramid network is proposed. The algorithm includes a ratio-invariant feature enhanced (RIFE) module and a rebuild spatial resolution (RSR) module. As a residual branch, the RIFE module enhances the transmission of high-level semantic information through the network, improving classification ability and reducing the false positive and false negative rates. The RSR module rebuilds multi-layer feature resolution and uses rich spatial information to improve boundary localization. Experimental results show that the proposed algorithm improves detection on the multi-directional text dataset ICDAR2015, the curved text dataset Total-Text, and the long text dataset MSRA-TD500.
Improved Topic Sentiment Model with Word Embedding Based on Gaussian Distribution
LI Yu-qiang, ZHANG Wei-jiang, HUANG Yu, LI Lin, LIU Ai-hua
Computer Science. 2022, 49 (2): 256-264.  doi:10.11896/jsjkx.201200082
In recent years, the topic sentiment model, an important line of research in unsupervised learning, has been used in text topic mining and sentiment analysis. However, Weibo poses challenges for topic sentiment models because its texts are short and structurally incomplete, so the research and improvements in this paper center on topic sentiment models for Weibo. We introduce word vector techniques into the popular TSMMF model (topic sentiment model based on multi-feature fusion), using a multivariate Gaussian distribution to quickly sample neighboring words from the word embedding space and replace words generated by the Dirichlet multinomial distribution, so that words with low co-occurrence frequency and little information are transformed into words with prominent topics and clear information. At the same time, a nearest-neighbor search algorithm further improves the running speed of the model on large-scale Weibo corpora. On this basis, the GWE-TSMMF model is proposed. The experimental results show that the average F1 value of the GWE-TSMMF model is about 0.718, and its sentiment polarity analysis is better than that of the original model and existing mainstream word-embedding topic sentiment models (WS-TSWE and HST-SCW).
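A hedged toy sketch of the Gaussian replacement step, assuming a tiny vocabulary: perturb a word's embedding with multivariate Gaussian noise and back off to the nearest vocabulary word (a real system would use an approximate nearest-neighbor index rather than a linear scan):

```python
import numpy as np

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=8) for w in ["price", "cost", "cheap", "hotel", "view"]}

def sample_neighbor(word, sigma=0.3):
    center = emb[word]
    draw = rng.multivariate_normal(center, sigma ** 2 * np.eye(len(center)))
    # Nearest-neighbor search over the vocabulary.
    return min(emb, key=lambda w: np.linalg.norm(emb[w] - draw))

print(sample_neighbor("price"))
```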
Ensemble Regression Decision Trees-based lncRNA-disease Association Prediction
REN Shou-peng, LI Jin, WANG Jing-ru, YUE Kun
Computer Science. 2022, 49 (2): 265-271.  doi:10.11896/jsjkx.201100132
Long non-coding RNA (lncRNA) plays an important role in various complex human diseases. Developing effective prediction methods to infer potential associations between lncRNAs and diseases will not only help biologists understand disease pathogenesis, but also contribute to the diagnosis, prevention and treatment of human diseases. In this paper, an ensemble regression decision tree-based lncRNA-disease association method (ERDTLDA) is proposed. First, ERDTLDA uses open-source lncRNA data to construct an lncRNA similarity matrix, a disease similarity matrix and an lncRNA-disease association matrix, from which lncRNA and disease feature representations are obtained. Principal component analysis is further exploited for feature extraction. Finally, CART regression decision trees are used to yield association scores, and an ensemble strategy over multiple decision trees is proposed to further improve accuracy. LOOCV experiments show that the AUC of our method on three real lncRNA-disease datasets is 0.905 5, 0.896 9 and 0.912 9, respectively, which is 6.46%, 5.4% and 6.02% higher than existing methods. Additionally, case studies on breast cancer, lung cancer and gastric cancer further verify the accuracy and effectiveness of ERDTLDA.
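A hedged sketch of the final stage of such a pipeline on synthetic data: PCA features scored by a bootstrap ensemble of CART regression trees whose predictions are averaged into association scores (all parameters are illustrative, not ERDTLDA's):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))                   # lncRNA-disease pair features
y = (X[:, :3].sum(1) > 0).astype(float)          # toy association labels

Xp = PCA(n_components=10).fit_transform(X)       # feature extraction via PCA
trees = []
for i in range(25):                              # bootstrap ensemble of CART trees
    idx = rng.integers(0, len(Xp), len(Xp))
    trees.append(DecisionTreeRegressor(max_depth=5, random_state=i).fit(Xp[idx], y[idx]))

scores = np.mean([t.predict(Xp) for t in trees], axis=0)  # ensemble association score
print(scores[:5])
```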
Review Question Generation Based on Product Profile
XIAO Kang, ZHOU Xia-bing, WANG Zhong-qing, DUAN Xiang-yu, ZHOU Guo-dong, ZHANG Min
Computer Science. 2022, 49 (2): 272-278.  doi:10.11896/jsjkx.201200208
Automatic question generation, which aims to generate natural questions from text, is a research hotspot in natural language processing. With the continuous development of the internet, a large number of commodity reviews are generated in e-commerce. Faced with this massive review information, quickly mining the key reviews related to product information has great research value and matters to both customers and merchants. Most existing question generation models are built on reading-comprehension corpora and use sequence-to-sequence networks to generate questions. However, for question generation based on product reviews, existing models fail to incorporate the product information that users and businesses care about into the learning process. To make the generated questions better match the attributes of the goods, a question generation model based on product profiles is proposed in this paper. Through joint learning and training with product attribute recognition, the model strengthens attention to feature information related to the product. Compared with existing question generation models, this model not only strengthens the recognition of product attributes but also generates content more accurately. Experiments on product review datasets from JD and Amazon show that, on the review-based question generation task, the model achieves a clear improvement over existing question generation models: 3.26% and 2.01% on BLEU, and 2.33% and 2.10% on ROUGE, respectively.
Graph Convolutional Networks with Long-distance Words Dependency in Sentences for Short Text Classification
ZHANG Hu, BAI Ping
Computer Science. 2022, 49 (2): 279-284.  doi:10.11896/jsjkx.201200062
With the wide application of graph neural network technology in natural language processing, text classification based on graph neural networks has received more and more attention. Building a graph for a text corpus is an important task when applying graph neural networks to text classification, but existing methods cannot effectively capture the dependencies between long-distance words in sentences when building the graph. Short text classification is a special type of text classification in which the classified texts are generally short, so traditional text representations are usually sparse and lack rich semantic information. On this basis, this paper proposes a short text classification method based on graph convolutional neural networks that incorporates long-distance word dependencies. Firstly, a text graph is constructed for the entire corpus using the co-occurrence relationships between words, the containment relationships between documents and words, and the long-distance word dependencies within sentences. Then, the text graph is fed into a graph convolutional neural network, and category labels are predicted for each document node after two layers of convolution. Experimental results on three datasets, online_shopping_10_cats, summaries of Chinese papers, and hotel reviews, show that the proposed method achieves better results than existing baselines.
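A minimal sketch of the 2-layer GCN classification step on a toy dense adjacency matrix (the paper's graph mixes word and document nodes; the sizes here are illustrative):

```python
import torch
import torch.nn as nn

class GCN(nn.Module):
    def __init__(self, in_dim=32, hid=16, classes=4):
        super().__init__()
        self.w1, self.w2 = nn.Linear(in_dim, hid), nn.Linear(hid, classes)

    def forward(self, A, X):
        # Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}.
        A_hat = A + torch.eye(A.size(0))
        d = A_hat.sum(1).pow(-0.5)
        A_norm = d.unsqueeze(1) * A_hat * d.unsqueeze(0)
        h = torch.relu(self.w1(A_norm @ X))
        return self.w2(A_norm @ h)       # class logits for every node

n = 50                                   # word + document nodes together
A = (torch.rand(n, n) > 0.9).float(); A = ((A + A.t()) > 0).float()
logits = GCN()(A, torch.randn(n, 32))    # predictions are read off the document nodes
```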
Ensemble Learning Method for Nucleosome Localization Prediction
CHEN Wei, LI Hang, LI Wei-hua
Computer Science. 2022, 49 (2): 285-291.  doi:10.11896/jsjkx.201100195
Nucleosome localization refers to the position of the DNA double helix relative to histones and plays an important regulatory role in DNA transcription. Detecting nucleosome localization through biological experiments takes a lot of time and resources, so predicting it from DNA sequences with computational methods is an important research direction. Addressing the shortcomings of single models and single encodings for representing and learning DNA sequence features in nucleosome localization prediction, this paper proposes an end-to-end ensemble deep learning model, FuseENup, which uses three encoding methods to represent DNA data from multiple dimensions. Different models extract the key features hidden in the data along different dimensions, forming a new representation model for DNA sequences. In 20-fold cross-validation on four datasets, compared with CORENup, the model with the best overall performance on the nucleosome localization prediction problem, the accuracy and precision of FuseENup improve by 3% and 9% on the HS dataset, 2% and 6% on the DM dataset, and 1% and 4% on the E dataset. FuseENup also outperforms other machine learning and deep learning benchmark models. Experiments show that FuseENup improves the prediction accuracy of nucleosome localization, demonstrating the effectiveness and soundness of the method.
Computer Network
Survey on the Application of Forward Error Correction Coding in Network Transmission Protocols
LIN Li-xiang, LIU Xu-dong, LIU Shao-teng, XU Yue-dong
Computer Science. 2022, 49 (2): 292-303.  doi:10.11896/jsjkx.210500104
Forward error correction (FEC) coding is a technique for coping with packet loss in network transmission. By adding redundant data during transmission, the receiver can recover the original data directly from the redundancy when packets are lost. In scenarios with high packet loss and high latency, adding an appropriate amount of forward error correction coding can save much of the waiting time for timeout retransmission and improve the quality of service of network transmission. Too much redundancy wastes bandwidth, while insufficient redundancy fails to recover lost data, so the practical difficulty of using FEC is properly controlling the proportion of redundant data. Most FEC research to date is based on traditional network protocols, but with the rise of the QUIC (quick UDP internet connections) protocol, more FEC research has begun to incorporate QUIC, thanks to features such as 0-RTT (round-trip time) connection establishment, multiplexing and seamless connection migration, to further improve transmission performance. This paper gives an overview of forward error correction coding, introducing its application scenarios, basic policies and adaptive redundancy control strategies. It then surveys the state of FEC research in traditional protocols in unicast and multicast scenarios. Finally, it presents the current research status and challenges of FEC in the QUIC protocol.
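For intuition, the simplest FEC scheme is a single XOR parity packet per block, which lets the receiver rebuild any one lost packet without retransmission:

```python
def make_parity(packets):
    # XOR all equal-length packets together, byte by byte.
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

block = [b"pkt0<data>", b"pkt1<data>", b"pkt2<data>"]
parity = make_parity(block)

# Suppose pkt1 is lost in transit: XOR of the survivors and the parity restores it.
recovered = make_parity([block[0], block[2], parity])
assert recovered == block[1]
```

Real deployments use stronger codes (e.g. Reed-Solomon) and, as the survey discusses, the hard part is adaptively choosing how much redundancy to send.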
Task Offloading,Migration and Caching Strategy in Internet of Vehicles Based on NOMA-MEC
ZHANG Hai-bo, ZHANG Yi-feng, LIU Kai-jian
Computer Science. 2022, 49 (2): 304-311.  doi:10.11896/jsjkx.210100157
In internet of vehicles systems that combine mobile edge computing (MEC) with non-orthogonal multiple access (NOMA) technology, to solve the high-latency problem when users process computation-intensive and latency-sensitive tasks, a strategy for task offloading, migration and cache optimization based on game theory and Q-learning is proposed. Firstly, models of the offloading delay, migration delay and cache delay of internet of vehicles tasks based on NOMA-MEC are established. Secondly, a cooperative game method is used to obtain the optimal user group, optimizing the offloading delay. Finally, to avoid local optima, the Q-learning algorithm is used to optimize the joint migration and cache delay within the user group. Simulation results show that, compared with other solutions, the proposed algorithm effectively improves offloading efficiency and reduces task delay by about 22% to 43%.
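A hedged sketch of the tabular Q-learning loop for the migration/caching decision; the state and action spaces and the reward are placeholders, not the paper's MDP formulation:

```python
import numpy as np

n_states, n_actions = 8, 3          # e.g. congestion levels x {local, migrate, cache}
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def step(s, a):                     # stand-in environment: returns (next_state, reward)
    return rng.integers(n_states), -rng.random()   # reward = negative delay

s = 0
for _ in range(1000):
    # Epsilon-greedy exploration, then the standard Bellman update.
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2
```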
Study on Scientific Workflow Scheduling Based on Fuzzy Theory Under Edge Environment
LIN Chao-wei, LIN Bing, CHEN Xing
Computer Science. 2022, 49 (2): 312-320.  doi:10.11896/jsjkx.201000102
As a novel computing paradigm, edge computing has become a significant approach to running large-scale scientific applications. In scientific workflow scheduling under edge environments, task computation time and data transmission time are uncertain due to fluctuations in server processing performance and bandwidth, respectively. To capture and reflect this uncertainty during workflow execution, task computation time and data transmission time are represented as triangular fuzzy numbers (TFN) based on fuzzy theory. Simultaneously, an adaptive discrete fuzzy GA-based particle swarm optimization (ADFGA-PSO) is proposed to minimize the fuzzy execution cost of a workflow while satisfying its deadline constraint. In addition, a two-point crossover operator, neighborhood mutation and an adaptive multi-point mutation operator from the genetic algorithm (GA) are introduced to keep particles from being trapped in local optima. Experimental results show that, compared with other approaches, the ADFGA-PSO-based scheduling strategy more effectively reduces the fuzzy execution cost of deadline-constrained scientific workflow scheduling under edge environments.
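A minimal sketch of triangular fuzzy number arithmetic, the representation the scheduler manipulates: a TFN (a, m, b) spans pessimistic to optimistic estimates, times along a schedule add component-wise, and a centroid defuzzification yields a crisp value for comparison (the defuzzification rule here is a common choice, not necessarily the paper's):

```python
from dataclasses import dataclass

@dataclass
class TFN:
    a: float   # lower bound
    m: float   # most likely value
    b: float   # upper bound

    def __add__(self, o):                 # times along a schedule add component-wise
        return TFN(self.a + o.a, self.m + o.m, self.b + o.b)

    def defuzzify(self):                  # centroid defuzzification
        return (self.a + self.m + self.b) / 3

compute, transfer = TFN(2, 3, 5), TFN(1, 1.5, 2.5)
total = compute + transfer
print(total, total.defuzzify())           # TFN(a=3, m=4.5, b=7.5) 5.0
```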
BBR Unilateral Adaptation Algorithm for Improving Empty Window Phenomenon in STARTUP Phase
MA Li-wen, ZHOU Ying
Computer Science. 2022, 49 (2): 321-328.  doi:10.11896/jsjkx.201200266
To solve the delay oscillation and empty-window problems caused by the bottleneck bandwidth and round-trip time (BBR) congestion control algorithm failing to receive acknowledgements (ACKs) during the STARTUP phase in campus networks, a BBR unilateral adaptation algorithm is proposed. The algorithm runs only on the sender and is not restricted by network protocols or upper-layer applications. By improving the weighting coefficient of the delay estimator, we design an instantaneous mean-deviation estimator for the delay and use its estimate as an oscillation-smoothing factor for the delay estimator, improving the delay estimator's ability to cope with severe delay jitter. To handle the unavoidable empty-window problem and sequence number wraparound as far as possible, a flow state machine and a STARTUP state machine are designed at the sending end to maintain high link throughput. According to the specific transmission situation, a flow is placed in one of six states: new, blocked, waiting, time_waiting, running and terminated; according to traffic feedback, the transmission performance of the STARTUP phase is classified into three states: GOOD, NORMAL and BAD. Experimental results show that the improved BBR achieves better transmission performance in the STARTUP phase than the original BBR algorithm and outperforms current passive congestion control algorithms (Reno, CUBIC).
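The paper's exact estimator is not reproduced here; as a hedged sketch of the general idea, the following EWMA delay estimator tracks an instantaneous mean deviation and uses it to damp its own gain under severe jitter (the coefficients and the damping rule are illustrative assumptions):

```python
class DelayEstimator:
    def __init__(self, g=0.125, h=0.25):
        self.g, self.h = g, h            # gains for the mean and deviation estimators
        self.srtt = None                 # smoothed round-trip delay
        self.dev = 0.0                   # smoothed mean deviation

    def update(self, sample):
        if self.srtt is None:
            self.srtt = sample
            return self.srtt
        err = sample - self.srtt
        self.dev += self.h * (abs(err) - self.dev)
        # Large deviation -> trust new samples less, damping delay oscillation.
        gain = self.g / (1.0 + self.dev / max(self.srtt, 1e-9))
        self.srtt += gain * err
        return self.srtt

est = DelayEstimator()
for rtt in [100, 102, 250, 101, 99]:     # ms samples with a jitter spike
    print(round(est.update(rtt), 1))
```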
Single Node Failure Routing Protection Algorithm Based on Hybrid Software Defined Networks
GENG Hai-jun, WANG Wei, YIN Xia
Computer Science. 2022, 49 (2): 329-335.  doi:10.11896/jsjkx.210100051
Abstract PDF(2493KB) ( 489 )   
References | Related Articles | Metrics
Software defined network (SDN) is a new network architecture proposed by the Clean Slate research group at Stanford University. Its defining feature is decoupling the control plane from the forwarding plane so that network traffic can be forwarded flexibly. Based on this, Internet service providers have deployed SDN technology in their backbone networks to maximize the utilization of network resources. However, due to the limitations of economic cost and technical conditions, the backbone networks of Internet service providers will remain hybrid SDN networks for a long time. Studies have shown that single network node failures are inevitable and occur frequently. Therefore, studying routing protection methods for single network node failures in hybrid SDN networks is a key scientific problem. In this paper, the routing protection problem for single network node failures in hybrid SDN networks is formulated, and two heuristic methods are used to solve it. Finally, the proposed heuristic algorithms are evaluated on real and simulated topologies. The experimental results show that, in a traditional backbone network, only a part of the traditional devices need to be upgraded to SDN devices for the proposed algorithms to handle all possible single network node failure cases in the network.
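The abstract does not specify the heuristics, but the upgrade-selection problem has the general flavor of set cover. The sketch below shows one plausible greedy heuristic of that kind, purely as an illustration and not the paper's algorithm: repeatedly upgrade the legacy node whose conversion to SDN protects the most still-unprotected (source, failed-node) pairs.

    def greedy_upgrade(cover_sets, demands):
        """Greedy set-cover-style upgrade selection (illustrative only).
        cover_sets: {candidate_node: set of (source, failed_node) pairs it would protect}
        demands: iterable of all (source, failed_node) pairs needing protection."""
        uncovered = set(demands)
        upgraded = []
        while uncovered:
            node, covered = max(cover_sets.items(),
                                key=lambda kv: len(kv[1] & uncovered))
            if not covered & uncovered:
                break                       # remaining demands cannot be protected
            upgraded.append(node)
            uncovered -= covered
        return upgraded, uncovered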
Load Scheduling Algorithm for Distributed On-board RTs System Based on Machine Learning
TAN Shuang-jie, LIN Bao-jun, LIU Ying-chun, ZHAO Shuai
Computer Science. 2022, 49 (2): 336-341.  doi:10.11896/jsjkx.201200126
Abstract PDF(2212KB) ( 525 )   
References | Related Articles | Metrics
The tasks of a distributed on-board multi-RT (remote terminal) system are mainly distributed by function, and the burstiness of data processing tasks often leads to unbalanced load among computers. A flexible load scheduling mechanism can effectively adjust the load difference between computers, thereby improving the overall performance of the computer system to a certain extent. A load scheduling algorithm for distributed on-board RT systems based on machine learning is proposed in this paper, which consists of four steps: sample collection, task throughput prediction model construction, throughput prediction and load scheduling. When constructing the task throughput prediction model, the weights of the model are obtained through the normal equation of linear regression, which reduces the time spent on model construction. In the load scheduling step, if the total throughput rate of the RTs is greater than the total load data volume of the system, data is allocated to each RT in proportion to its throughput rate; otherwise, only a limited amount of data is allocated to those RTs whose load data volume is less than their own throughput rate. Test results on a ground simulation system built from the electrical performance units of multiple on-board computers show that the algorithm increases the average CPU utilization of all system nodes by 23.78%, reduces the variance of CPU utilization between nodes to 34.59%, and significantly increases the total task throughput of the system by 225.97%. In other words, this method can effectively improve system resource utilization while ensuring system load balance, and improve the real-time data processing performance of the on-board computer system.
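The two computational ingredients named above, the closed-form normal equation and the proportional allocation rule, can be sketched as follows. This is a simplified reading of the abstract, not the paper's code; feature choice and the insufficient-throughput branch are assumptions.

    import numpy as np

    def fit_throughput_model(X, y):
        """Closed-form linear regression via the normal equation
        w = (X^T X)^(-1) X^T y, with a bias column of ones prepended."""
        Xb = np.hstack([np.ones((X.shape[0], 1)), X])
        return np.linalg.solve(Xb.T @ Xb, Xb.T @ y)

    def schedule(loads, rates):
        """Proportional allocation sketch: if total predicted throughput covers
        the total load, split the load in proportion to each RT's rate;
        otherwise cap every RT at its own predicted rate (a simplification
        of the rule described in the abstract)."""
        total_load, total_rate = sum(loads), sum(rates)
        if total_rate >= total_load:
            return [total_load * r / total_rate for r in rates]
        return [min(l, r) for l, r in zip(loads, rates)]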
Load-balanced Geographic Routing Protocol in Aerial Sensor Network
HUANG Xin-quan, LIU Ai-jun, LIANG Xiao-hu, WANG Heng
Computer Science. 2022, 49 (2): 342-352.  doi:10.11896/jsjkx.201000155
Abstract PDF(3342KB) ( 448 )   
References | Related Articles | Metrics
The unbalanced burden on the nodes near the ground station poses challenges for multi-hop data transmission in aerial sensor networks (ASNs). In order to achieve reliable and efficient multi-hop data transmission in ASNs, a reinforcement-learning-based queue-efficient geographic routing (RLQE-GR) protocol is proposed. The RLQE-GR protocol maps the routing problem into the general reinforcement learning (RL) framework, where each UAV is treated as a state and each successful packet forwarding as an action. On this basis, the RLQE-GR protocol designs a reward function related to geographic location, link quality and available transmission queue length. The Q-function is then employed to converge all state-action values (Q-values), and each packet is forwarded according to these values. To converge all Q-values while minimizing performance deterioration during the convergence process, a beacon mechanism is employed in the RLQE-GR protocol. In contrast to existing geographic routing protocols, RLQE-GR simultaneously takes queue utilization, link quality and relative distance into consideration when forwarding packets. This allows RLQE-GR to achieve load balancing without severely degrading routing hop count or link quality. Moreover, owing to the near-optimality of RL theory, the RLQE-GR protocol can optimize routing performance in terms of packet delivery ratio and end-to-end delay.
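A minimal sketch of the state-action machinery described above follows: a composite forwarding reward and a Q-value update over (node, next-hop) pairs. The weights and the linear reward form are illustrative assumptions, not the paper's reward function.

    def reward(dist_gain, link_quality, queue_free, w=(0.4, 0.3, 0.3)):
        """Composite forwarding reward combining geographic progress, link
        quality and remaining queue space (weights are illustrative)."""
        return w[0] * dist_gain + w[1] * link_quality + w[2] * queue_free

    def choose_next_hop(Q, node, neighbors):
        """Greedy next-hop selection over learned state-action values."""
        return max(neighbors, key=lambda n: Q.get((node, n), 0.0))

    def update_q(Q, node, nxt, r, nxt_neighbors, alpha=0.1, gamma=0.9):
        """Standard Q-update after a forwarding attempt with reward r."""
        best = max((Q.get((nxt, n), 0.0) for n in nxt_neighbors), default=0.0)
        old = Q.get((node, nxt), 0.0)
        Q[(node, nxt)] = old + alpha * (r + gamma * best - old)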
Efficiency Model of Intelligent Cloud Based on BP Neural Network
XIA Jing, MA Zhong, DAI Xin-fa, HU Zhe-kun
Computer Science. 2022, 49 (2): 353-367.  doi:10.11896/jsjkx.201100140
Abstract PDF(4660KB) ( 519 )   
References | Related Articles | Metrics
Cloud computing is increasingly facing large and complex intelligent applications. Establishing an effective quality model of cloud service is an important methodology for evaluating cloud service quality. However, due to the diversity and dynamic characteristics of intelligent cloud resources, it is very difficult to evaluate the service efficiency of an intelligent cloud, and the field of intelligent cloud computing currently lacks a standard, unified cloud service quality evaluation and cloud service model. In this paper, the abstract service quality of the intelligent cloud is embodied as cloud service efficiency, which is defined in terms of the service availability, reliability and performance that reflect service efficiency; that is, the overall service capability of the intelligent cloud is quantitatively evaluated through the output of cloud service efficiency. Moreover, this paper proposes an efficiency model of the intelligent cloud based on a BP neural network. The complex nonlinear relationship between the input characteristics of the intelligent cloud and its output service efficiency is simulated by the BP neural network: once the input characteristics are determined, the output service efficiency can be computed. The efficiency model is responsible for accurately predicting the service level of the current system in real time according to the system's input characteristics. The experimental results show that the BP neural network, as a modeling tool for the service efficiency model, has good computing efficiency and accuracy.
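For reference, a BP (back-propagation) network of the kind used here can be written in a few lines. The sketch below trains a one-hidden-layer regressor by gradient descent; the topology, activation and hyperparameters are illustrative assumptions, since the abstract does not specify them.

    import numpy as np

    def train_bp(X, y, hidden=8, lr=0.01, epochs=2000, seed=0):
        """Tiny one-hidden-layer BP network for efficiency regression (sketch)."""
        rng = np.random.default_rng(seed)
        W1 = rng.normal(0, 0.1, (X.shape[1], hidden)); b1 = np.zeros(hidden)
        W2 = rng.normal(0, 0.1, (hidden, 1));          b2 = np.zeros(1)
        for _ in range(epochs):
            h = np.tanh(X @ W1 + b1)                 # forward pass
            out = h @ W2 + b2
            err = out - y.reshape(-1, 1)             # gradient of squared error
            gW2 = h.T @ err / len(X); gb2 = err.mean(0)
            dh = (err @ W2.T) * (1 - h ** 2)         # backpropagate through tanh
            gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
            W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
        return lambda Xn: np.tanh(Xn @ W1 + b1) @ W2 + b2   # trained predictor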
Gating Mechanism for Real-time Network I/O Requests Based on Para-virtualization Virtio Framework
SHEN Hao-xi, NIU Bao-ning
Computer Science. 2022, 49 (2): 368-376.  doi:10.11896/jsjkx.210100110
Abstract PDF(2521KB) ( 605 )   
References | Related Articles | Metrics
Response time is an important performance indicator in service level objectives (SLO) and is closely related to resource usage. If resources are sufficient to ensure the normal execution of a request, the response time is short; if resources are insufficient, the request has to wait for resources and the response time is long. In a cloud computing virtualization environment, the control of resource access includes both control of overall resources and control of individual resources such as CPU and network bandwidth. However, there is currently little direct control of network I/O requests to guarantee response time. To achieve better performance, virtualization technology mostly adopts the para-virtualization framework Virtio. Network I/O requests are transmitted through the Virtio shared channel, making it possible to set up a gating mechanism for network I/O requests in Virtio. Therefore, this study uses the two-end aggregation method (TAM) to propose a gating mechanism for real-time network I/O requests (GMRNR), which controls when a network I/O request passes through Virtio so as to guarantee the response time of various requests. GMRNR is set up in the virtio-net module of the Virtio front end and classifies requests according to their response time indicators. It uses timers and aggregation queue length to control the passing time and aggregation frequency of different levels of requests through Virtio. Experimental tests show that GMRNR can distinguish the priority of network I/O requests: when resources are sufficient, network I/O requests of different levels are completed within their respective required times; when resources are insufficient, the response time of high-priority network I/O requests is guaranteed first. Meanwhile, GMRNR achieves high resource utilization efficiency.
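The timer-plus-queue-length gating idea can be illustrated with a small per-priority aggregation gate: a batch is released either when the hold timer expires or when the queue reaches its length cap. This is a sketch of the general TAM-style idea only; the class name, thresholds and batching policy are assumptions, not GMRNR's actual front-end logic.

    import time
    from collections import deque

    class GatedQueue:
        """Per-priority aggregation gate (illustrative sketch)."""
        def __init__(self, max_len=32, max_hold=0.002):
            self.q, self.max_len, self.max_hold = deque(), max_len, max_hold
            self.first_at = None            # arrival time of the oldest queued request

        def push(self, request, now=None):
            now = time.monotonic() if now is None else now
            if not self.q:
                self.first_at = now
            self.q.append(request)

        def drain_if_ready(self, now=None):
            now = time.monotonic() if now is None else now
            timed_out = bool(self.q) and (now - self.first_at) >= self.max_hold
            if len(self.q) >= self.max_len or timed_out:
                batch, self.q = list(self.q), deque()
                return batch                # forward this batch through Virtio
            return []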
Accelerating Forwarding Rules Issuance with Fast-Deployed-Segment-Routing(FDSR) in SD-MANET
ZHANG Geng-qiang, XIE Jun, YANG Zhang-lin
Computer Science. 2022, 49 (2): 377-382.  doi:10.11896/jsjkx.210800045
Abstract PDF(2268KB) ( 444 )   
References | Related Articles | Metrics
Aiming at the problem that deploying a transmission path in a MANET (mobile ad hoc network) based on SDN (software defined network) requires the controller to issue related flow entries to all nodes on the path, which forces the transmission to wait for a long period, a mechanism for quickly issuing forwarding rules, FDSR (fast-deployed segment routing), is proposed. The controller issues data forwarding rules by attaching labels corresponding to the forwarding path to data packets in the manner of segment routing. To deal with the max stack depth (MSD) problem, label adhesion technology is used to divide the entire path into multiple label stacks algorithmically, which are sent to corresponding forwarding nodes that can quickly interact with the controller node, thus reducing path configuration time. Experiments show that, compared with the OpenFlow distribution method, FDSR in SD-MANET (software defined mobile ad hoc network) reduces path deployment time and flow table overhead, effectively deals with the MSD problem of SR, and improves the controller's deployment speed for long paths.
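To make the MSD handling concrete, the sketch below splits a label path into stacks of bounded depth, overlapping one node per stack so each chunk can be stitched to the next. It is an illustrative reading of "label adhesion", not the paper's exact algorithm.

    def split_into_stacks(path, msd):
        """Split a segment-routing label path into stacks of at most `msd` labels;
        the last label of each stack doubles as the node that attaches the next
        stack (illustrative label-stitching sketch)."""
        if msd < 2:
            raise ValueError("need room for at least one hop plus a stitch label")
        stacks, i = [], 0
        while i < len(path):
            stacks.append(path[i:i + msd])
            if i + msd >= len(path):
                break
            i += msd - 1          # overlap one node to stitch the next stack on
        return stacks

    # e.g., split_into_stacks(["A", "B", "C", "D", "E", "F", "G"], 3)
    # -> [["A", "B", "C"], ["C", "D", "E"], ["E", "F", "G"]]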