Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP

Featured Articles

  • Volume 48 Issue 4, 15 April 2021
      Computer Science Theory
      Survey of Constrained Evolutionary Algorithms and Their Applications
      LI Li, LI Guang-peng, CHANG Liang, GU Tian-long
      Computer Science. 2021, 48 (4): 1-13.  doi:10.11896/jsjkx.200600151
      Constrained optimization problems exist widely in scientific research and engineering practice, and the corresponding constrained evolutionary algorithms have become an important research direction in the field of evolutionary computation. The essential problem of constrained evolutionary algorithms is how to effectively use the information of infeasible and feasible solutions and balance the objective function and constraints to make the algorithm more efficient. Firstly, this paper defines the constrained optimization problem. Then it analyzes the current mainstream constrained evolutionary algorithms in detail. Based on their different constraint handling mechanisms, these algorithms are divided into methods that separate constraints and objectives, penalty function methods, multi-objective optimization methods, hybrid methods and so on, and these methods are analyzed and summarized comprehensively. Next, it points out the urgent problems that need to be solved as well as future research directions. Finally, the applications of constrained evolutionary algorithms in engineering optimization, electronic and communication engineering, mechanical design, environmental resource allocation, scientific research and management allocation are introduced.
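As background for the penalty-function category surveyed above, a minimal static-penalty sketch follows; the quadratic penalty form, the value of rho, the grid search, and all names are illustrative, not taken from the survey:

```python
def penalized_fitness(x, objective, constraints, rho=100.0):
    """Static quadratic penalty: objective plus rho times the sum of
    squared constraint violations. Feasibility means g(x) <= 0."""
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return objective(x) + rho * violation

# Toy problem: minimize x^2 subject to x >= 1 (i.e. 1 - x <= 0);
# the constrained optimum is x = 1.
f = lambda x: x * x
g = [lambda x: 1.0 - x]

# A crude grid search stands in for the evolutionary search loop.
best = min((x / 100.0 for x in range(-300, 301)),
           key=lambda x: penalized_fitness(x, f, g))
```

With a finite penalty weight the minimizer sits slightly inside the infeasible region (here near x ≈ 0.99), which is exactly the objective/constraint balancing issue the survey discusses.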
      Efficient Computation of MPE in Causal Bayesian Networks
      LI Chao, QIN Biao
      Computer Science. 2021, 48 (4): 14-19.  doi:10.11896/jsjkx.200500155
      This paper investigates the efficient computation of most probable explanations (MPE) in causal Bayesian networks (CBNs). From the perspective of a directed acyclic graph, researchers find that every CBN has a corresponding Bayesian network. By comparing the semantics of intervention with that of differentiation, this paper reveals the differential semantics of a full atomic intervention on MPE. Using the differential semantics, it reduces the computation of an MPE instantiation for an atomic intervention on a CBN to that of an MPE instantiation in the corresponding Bayesian network. Next, it proposes a jointree algorithm called BJT to compute the best atomic intervention, which includes the BMPE probability and a corresponding instantiation, by building only one jointree for a CBN. The BMPE probability is the highest probability over all atomic interventions on MPE in a CBN. BJT can use the causal effect to compute the MPE probability and an MPE instantiation for the corresponding Bayesian network. Finally, experimental results show that BJT performs more than one order of magnitude faster than the state-of-the-art algorithm for computing the best atomic intervention in most CBNs.
      Integer Decomposition Based on Grover Search Algorithm
      SONG Hui-chao, LIU Xiao-nan, WANG Hong, YIN Mei-juan, JIANG Duo
      Computer Science. 2021, 48 (4): 20-25.  doi:10.11896/jsjkx.200800117
      One of the most fundamental problems in computer science is unstructured search, and the Grover quantum search algorithm is designed for the unstructured search problem. The Grover quantum search algorithm can be used to solve graph coloring, shortest path sorting and other problems, and it can also effectively attack cipher systems. In this paper, the Grover search algorithm combined with classical preprocessing is proposed to realize integer decomposition. Firstly, quantum circuits of the Grover algorithm with different numbers of qubits are simulated on the IBMQ cloud platform, and the prime factors P and Q of N are obtained using the Grover algorithm. Then, the simplified equation is transformed into a Boolean logic relation to build the Oracle in the Grover algorithm. Finally, the probability of finding the solution is adjusted by changing the number of iterations. The experimental results of the simulated circuits verify the feasibility of solving for the prime factors P and Q with the Grover algorithm, and the target item is found with 78% success probability for a search space of size 16 and one Grover iteration. The differences in qubit number and asymptotic time complexity between the Grover algorithm and the Shor algorithm for decomposing some numbers are compared. The experiment on integer decomposition by the Grover quantum search algorithm expands the application field of this algorithm, and the speedup of the Grover algorithm is especially obvious in large search problems.
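For context on the iteration-count tuning described above: the textbook success probability of Grover search after k iterations is sin²((2k+1)θ) with sin θ = √(M/N), for M marked items in a search space of size N. The sketch below evaluates this standard formula; it is a generic illustration, not a reproduction of the paper's IBMQ circuit or its 78% figure:

```python
import math

def grover_success_probability(n_items, n_solutions, iterations):
    """Probability that measuring after `iterations` Grover steps yields
    a marked item: sin^2((2k+1)*theta), where sin(theta) = sqrt(M/N)."""
    theta = math.asin(math.sqrt(n_solutions / n_items))
    return math.sin((2 * iterations + 1) * theta) ** 2

def optimal_iterations(n_items, n_solutions):
    """Iteration count maximizing success: round(pi/(4*theta) - 1/2)."""
    theta = math.asin(math.sqrt(n_solutions / n_items))
    return round(math.pi / (4 * theta) - 0.5)
```

For one marked item among four, a single iteration already succeeds with certainty, which is why the iteration count must be chosen carefully rather than simply maximized.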
      Community Structure of Regular (3,4)-CNF Formula
      HE Bin, XU Dao-yun
      Computer Science. 2021, 48 (4): 26-30.  doi:10.11896/jsjkx.201000178
      By constructing an appropriate minimum unsatisfiable formula, a 3-CNF formula can be reduced to a regular (3,4)-CNF formula in polynomial time. The converted regular (3,4)-CNF formula has the same satisfiability as the original 3-CNF formula, while the structure of the formula changes accordingly. The community structure of a graph reflects its modular characteristics. A CNF formula can be transformed into a corresponding graph to study the relationship between the modular characteristics of the formula graph and certain properties of the formula. The two types of formulas before and after the reduction are converted into corresponding graphs, and their modular characteristics are studied. It is found that the regular (3,4)-CNF formula obtained after conversion has high modularity. In addition, in the process of solving a CNF formula with the DPLL (Davis-Putnam-Logemann-Loveland) algorithm, when a conflict is encountered, the conflict-driven clause learning strategy is used to obtain a learnt clause and add it to the original formula, which reduces the modularity of the original formula. The research finds that when the DPLL algorithm combined with the conflict-driven clause learning strategy is applied to a regular (3,4)-CNF formula, most of the variables contained in a learnt clause are located in different communities.
      Fuzzy Safety and Liveness Properties
      SHI Tie-zhu, QIAN Jun-yan, PAN Hai-yu
      Computer Science. 2021, 48 (4): 31-36.  doi:10.11896/jsjkx.200500036
      Formal specification constructs specifications of the software and hardware systems under development using formal languages, and describes their models and properties. Among these, the specification of properties, which includes the specification of branching-time properties, plays an important role in the verification of systems. In the classical setting, the specification of properties is based on two-valued logic and hence cannot describe inconsistent or uncertain information. Consequently, extending the specification languages of properties to the fuzzy setting helps to verify fuzzy systems. In this paper, first, a formal definition of branching-time properties, especially the safety and liveness properties in the fuzzy setting, is given. Then, two types of closure operations are defined, resulting in four types of properties: universal safety, universal liveness, existential safety, and existential liveness. Finally, it is shown that any branching-time property is the intersection of an existential safety property and an existential liveness property, or of a universal safety property and a universal liveness property, or of an existential safety property and a universal liveness property.
      Approximate Ratios Analysis of New Algorithm for Classical Parallel Scheduling
      GAO Ji-ji, YUE Xue-rong, CHEN Zhi-bin
      Computer Science. 2021, 48 (4): 37-42.  doi:10.11896/jsjkx.200600064
      Given m parallel machines (identical machines) and n jobs, we need to find an allocation scheme such that the overall completion time is as short as possible after the n jobs are allocated to the m machines. This NP-hard problem is called the classical scheduling problem. If the processing time of each job satisfies certain conditions, the optimal allocation scheme can be obtained efficiently. Yue et al. consider a new algorithm for the classical scheduling problem in which the processing times satisfy the divisional property; it always obtains the optimal allocation in this special case. The algorithm obtains the optimal allocation in polynomial time and serves as an approximation algorithm for the general classical scheduling problem. On this basis, the approximation ratio of the algorithm on general instances is considered. There are two versions of the proposed algorithm, and approximation ratios of 3/2 and 2-1/2q (q∈Z+) are obtained respectively. The tight examples provided in this paper show that our analysis of the two versions of the algorithm is optimal.
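For readers unfamiliar with the problem, the classic LPT (Longest Processing Time first) list-scheduling heuristic for identical machines can be sketched as follows; note that this is the standard textbook baseline with Graham's 4/3 - 1/(3m) guarantee, not the divisional-property algorithm analyzed in the paper:

```python
import heapq

def lpt_makespan(jobs, m):
    """Assign jobs in decreasing processing-time order, always to the
    currently least-loaded of the m machines; return the makespan."""
    loads = [0] * m          # min-heap of current machine loads
    heapq.heapify(loads)
    for p in sorted(jobs, reverse=True):
        lightest = heapq.heappop(loads)
        heapq.heappush(loads, lightest + p)
    return max(loads)
```

On jobs (3, 3, 2, 2, 2) with two machines, LPT yields makespan 7 while the optimum is 6 (3+3 versus 2+2+2), illustrating why worst-case approximation-ratio analysis of such algorithms is of interest.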
      Subnetwork Reliability of (n,k)-bubble-sort Networks
      FENG Kai, MA Xin-yu
      Computer Science. 2021, 48 (4): 43-48.  doi:10.11896/jsjkx.201100139
      Topological properties of the interconnection network of a parallel computer system play an important role in realizing the functions of the system. In order to measure the fault-tolerance abilities of subnetworks in a parallel computer system built on the (n,k)-bubble-sort network, a one-to-one relation between (n-m,k-m)-bubble-sort subnetworks of the (n,k)-bubble-sort network and specific strings is constructed, and the reliability of (n-m,k-m)-bubble-sort subnetworks in the (n,k)-bubble-sort network is studied under the node fault model. Assuming that 2≤k≤n-2 and 1≤m≤k-1, the probability that at least one (n-m,k-m)-bubble-sort subnetwork is fault-free in an (n,k)-bubble-sort network under the probabilistic fault condition is first given, and simulation experiments demonstrate the accuracy of the results. Then the calculation formula of the mean time to failure for maintaining the fault-free status of different numbers of (n-m,k-m)-bubble-sort subnetworks is obtained, and the theoretical results are shown to be in accordance with the simulation results.
      Relationship Among Three Types of Rough Approximation Pairs
      LU Xun, LI Yan-yan, QIN Ke-yun
      Computer Science. 2021, 48 (4): 49-53.  doi:10.11896/jsjkx.200900089
      In generalized approximation spaces, approximation operators can be constructed based on elements, knowledge granules and subsystems. This paper is devoted to a discussion of the basic properties of and relationships among these three types of approximation operators. Some necessary and sufficient conditions for the approximation operators to coincide with each other are provided. In addition, different approximation spaces may generate the same granule-based or subsystem-based rough approximation operators. Some necessary and sufficient conditions for different approximation spaces to generate the same approximation operators are surveyed.
      Attribute Exploration Algorithm Based on Unrelated Attribute Set
      SHEN Xia-jiong, YANG Ji-yong, ZHANG Lei
      Computer Science. 2021, 48 (4): 54-62.  doi:10.11896/jsjkx.200800082
      As an important tool in the theory of formal concept analysis, the attribute exploration algorithm is problem-oriented and can interactively discover system knowledge step by step, which plays a central role in knowledge discovery and acquisition. However, if the size of the formal context is large, the calculation process of the attribute exploration algorithm takes too much time, which seriously restricts the promotion and application of the algorithm in the current era of big data. The time bottleneck mainly lies in “finding the next problem to interact with experts”; traditional algorithms perform a lot of redundant computation in this process. Aiming at this problem, three theorems are put forward and proved based on analyzing the logical relations among pseudo-intents, intents and implication sets. According to these theorems, an attribute exploration algorithm based on an unrelated attribute set is given. During pseudo-intent and intent calculation, this algorithm, by means of the proposed theorems, can skip the process of determining whether an attribute set that violates the logical relationship is a pseudo-intent or an intent, so as to reduce the search space and time complexity of the algorithm. The best-case time complexity is O(mn²P²) and the worst-case time complexity is O(mn³P²). The experimental results show that the proposed algorithm has an obvious time performance advantage over the traditional algorithm.
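The interaction loop of attribute exploration repeatedly closes candidate attribute sets under the implications accepted so far. A minimal sketch of that closure subroutine is shown below (a naive fixpoint iteration; this is not the paper's optimized algorithm, and the attribute names are illustrative):

```python
def implication_closure(attrs, implications):
    """Close `attrs` under a set of implications (premise -> conclusion).

    implications: iterable of (frozenset, frozenset) pairs. Repeatedly
    fire any implication whose premise holds until a fixpoint is reached.
    """
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if premise <= closed and not conclusion <= closed:
                closed |= conclusion
                changed = True
    return closed

# Hypothetical implication set: a -> b, and {b, c} -> d.
imps = [(frozenset('a'), frozenset('b')), (frozenset('bc'), frozenset('d'))]
```

Attribute exploration asks the expert about implications whose premise is a pseudo-intent, so speeding up exactly this kind of closure computation is where the paper's three theorems apply.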
      Database & Big Data & Data Science
      Indexing Bi-temporal RDF Model
      WANG Yin-di, ZHANG Zhe-qing, YAN Li
      Computer Science. 2021, 48 (4): 63-69.  doi:10.11896/jsjkx.200600084
      RDF (Resource Description Framework) has been widely used for semantic representation and processing of big data. Traditional RDF can only represent static semantics and cannot meet the need to process semantics that change over time in time-sensitive scenarios. Therefore, many temporal RDF models have been proposed, including an RDF model for transaction time, an RDF model for valid time, and a bi-temporal RDF model that supports both transaction time and valid time. To support efficient processing of large-scale temporal RDF data, this paper proposes a three-level index structure based on the bi-temporal RDF model. Specifically, in the first level of this index structure, the dataset is divided into different subsets according to the update times of the temporal RDF data. In the second level, a quadtree is built for indexing time information in each subset, and in the third level, a bitmap with three composite keys is used to index the subject, predicate and object of RDF triples. Experiments are conducted from three aspects: the time of building the index, the index size, and the required query time. Experimental results show that the proposed indexing scheme can reduce the query time effectively and improve query performance.
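As a rough illustration of composite-key triple indexing at the third level, the toy dictionary-based index below supports the same kind of lookups over (subject, predicate, object); the paper's actual structure is a bitmap index, which this sketch does not attempt to reproduce, and all entity names are invented:

```python
from collections import defaultdict

class TripleIndex:
    """Toy composite-key index over RDF triples: each pair of triple
    components keys a set of the remaining component's values."""

    def __init__(self):
        self.sp = defaultdict(set)   # (subject, predicate) -> objects
        self.po = defaultdict(set)   # (predicate, object) -> subjects
        self.so = defaultdict(set)   # (subject, object) -> predicates

    def add(self, s, p, o):
        self.sp[(s, p)].add(o)
        self.po[(p, o)].add(s)
        self.so[(s, o)].add(p)

    def objects(self, s, p):
        """All objects of triples matching (s, p, ?)."""
        return self.sp.get((s, p), set())

idx = TripleIndex()
idx.add('alice', 'worksAt', 'acme')
idx.add('alice', 'worksAt', 'globex')
```

A bitmap index replaces each of these sets with a bit vector over the triple store, which is what makes the third level compact at scale.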
      Destination Prediction of Moving Objects Based on Convolutional Neural Networks and Long-Short Term Memory
      LI Bing-rong, PI De-chang, HOU Meng-ru
      Computer Science. 2021, 48 (4): 70-77.  doi:10.11896/jsjkx.200200024
      Destination prediction of moving objects is an important part of location-based services. There are some persistently difficult problems in this field, such as sparse data and long-term dependence. In order to solve these problems effectively, firstly, a trajectory segmentation method based on the minimum description length (MDL) strategy is introduced, which can obtain the best trajectory segmentation, improve the similarity between trajectories and realize the simplification of trajectories. Then, the segmented data are processed by image processing and local extraction, and the trajectory destinations are clustered to add labels to the trajectory data. Finally, this paper proposes a deep learning framework, CNN-LSTM, based on convolution and long short-term memory. In this framework, local image data and labels are first taken as the input of the CNN model, and the effective information is preserved through deep extraction of spatial features. Then, the LSTM algorithm is used for training and destination prediction. Extensive experiments are carried out on a real trajectory dataset of moving objects. The results demonstrate that the CNN-LSTM method proposed in this paper has a strong learning ability and can better capture the spatiotemporal correlation of trajectories. In comparison to state-of-the-art and latest prediction algorithms, this method has high accuracy of destination prediction.
      Recommendation Algorithm Based on Bipartite Graph Convolution Representation
      XIONG Xu-dong, DU Sheng-dong, XIA Wan-jun, LI Tian-rui
      Computer Science. 2021, 48 (4): 78-84.  doi:10.11896/jsjkx.200400023
      With the rapid development of data-driven intelligent technology, personalized intelligent recommendation algorithms and related applications have become research hotspots. Recommendation can be regarded as a matching problem between users and items. Because of the semantic gap between users and items, it is difficult to match them directly. Many existing recommendation methods based on deep learning use the idea of mapping entities from different spaces into a unified semantic space and calculate the matching degree by embedding representation. With the emergence of network representation learning, a bipartite graph can be formed between users and items, and the embedding representations in a recommendation algorithm can also be regarded as node representations in the bipartite graph. Many recommendation algorithms based on bipartite graph node representation have been proposed. However, existing algorithms still find it hard to extract high-order interaction information effectively. To solve this problem, a bipartite graph convolution representation-based recommendation algorithm (BGCRRA) is developed in this paper. Interactions between users and items are first regarded as a bipartite graph; nodes are then represented by adaptively fusing multi-order and multi-level graphs; finally, the matching degree of users and items is calculated and the recommendation is realized. Comparative experiments are carried out on 3 open datasets, and the effectiveness of the proposed algorithm is verified by comparing the HR and NDCG (Normalized Discounted Cumulative Gain) indexes of our algorithm and the state-of-the-art algorithms.
      Study on Financial Credit Information Based on Graph Neural Network
      LI Si-di, GUO Bing-hui, YANG Xiao-bo
      Computer Science. 2021, 48 (4): 85-90.  doi:10.11896/jsjkx.200500109
      The evaluation of the credit of users who apply for loans from financial institutions is one of the frontier directions in the field of Internet finance. Firstly, based on the historical data of an Internet financial loan network, the loan relationships between users are modeled as a complex loan-correlation network that integrates the interactions between user nodes and surrounding relationship nodes. Secondly, by introducing a graph neural network model based on the structural characteristic index of node centrality, a personal credit evaluation model with adjacent circle-layer information and loan credit information is proposed. Finally, the model is implemented on a historical dataset containing 756 100 transaction records, and is compared with the BP neural network algorithm and the RF-Logistic model. The results show that the proposed model has higher evaluation accuracy.
      Relief Feature Selection Algorithm Based on Label Correlation
      DING Si-fan, WANG Feng, WEI Wei
      Computer Science. 2021, 48 (4): 91-96.  doi:10.11896/jsjkx.200800025
      Feature selection plays a vital role in machine learning and data mining. Relief, as an efficient filtering feature selection algorithm, is widely used because it can process multiple types of data and has a strong tolerance for noise. However, the classic Relief algorithm provides a relatively simple evaluation of discrete features. In actual feature selection, the potential relationship between features and class labels is not fully explored, and there is much room for improvement. Aiming at the shortcomings of the classic Relief algorithm's simple evaluation method for discrete features, a discrete feature evaluation method based on label correlation is proposed. The algorithm fully considers the characteristics of different features and gives a distance measurement method for mixed features. At the same time, starting from the correlation between discrete features and labels, it redefines the Relief algorithm's evaluation system for discrete features. Experimental results show that, compared with the classic Relief algorithm and some existing feature selection algorithms for mixed data, the classification accuracy of the improved Relief algorithm is improved to varying degrees, and it has a good performance.
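For reference, the classic two-class Relief weight update that the paper improves upon can be sketched as follows; this is a simplified stdlib-only version assuming numeric features scaled to [0, 1], and it does not reproduce the paper's label-correlation evaluation of discrete features:

```python
import random

def relief(samples, labels, n_iter=40, seed=0):
    """Classic two-class Relief: for a random sample, reward features
    that differ on the nearest miss (other class) and penalize features
    that differ on the nearest hit (same class)."""
    rng = random.Random(seed)
    n_feat = len(samples[0])
    w = [0.0] * n_feat

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    for _ in range(n_iter):
        i = rng.randrange(len(samples))
        x, y = samples[i], labels[i]
        hit = min((s for j, s in enumerate(samples)
                   if j != i and labels[j] == y), key=lambda s: dist(x, s))
        miss = min((s for j, s in enumerate(samples)
                    if labels[j] != y), key=lambda s: dist(x, s))
        for f in range(n_feat):
            w[f] += abs(x[f] - miss[f]) - abs(x[f] - hit[f])
    return w

# Toy data: feature 0 separates the classes, feature 1 is noise.
X = [(0.0, 0.3), (0.1, 0.8), (0.9, 0.2), (1.0, 0.9)]
y = [0, 0, 1, 1]
```

Relief ranks feature 0 above the noise feature here; the abstract's point is that this plain hit/miss difference is too crude for discrete features, motivating the label-correlation redefinition.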
      Customs Commodity HS Code Classification Integrating Text Sequence and Graph Information
      DU Shao-hua, WAN Huai-yu, WU Zhi-hao, LIN You-fang
      Computer Science. 2021, 48 (4): 97-103.  doi:10.11896/jsjkx.200900053
      Customs commodity HS code classification is an important international procedure for the cross-border trade of enterprises and individuals. HS code classification can be regarded as a text classification problem, that is, given a paragraph of description for a commodity, determining the category of the commodity represented by its HS code. However, this task is more challenging than general text classification. First, commodity description texts are organized with special hierarchical structures. Second, commodity description texts present sequential features at two levels. In addition, the key information in a commodity description text is scattered and the description forms are diverse. Most existing classification methods cannot comprehensively consider the above factors to capture the key information in the commodity description text. In this paper, we propose a Text Sequence and Graph Information combination Neural Network (TSGINN) to solve the problem of customs commodity HS code classification. TSGINN defines the HS code classification problem as a subgraph classification problem based on a word co-occurrence network, models associations between non-contiguous words through a graph attention network, and captures multi-level sequential information through a hierarchical long short-term memory network. Experiments on real-world customs datasets show that the classification effect of the TSGINN model is better than that of other methods.
      Top-N Recommendation Method for Graph Attention Based on Multi-level and Multi-view
      LIU Zhi-xin, ZHANG Ze-hua, ZHANG Jie
      Computer Science. 2021, 48 (4): 104-110.  doi:10.11896/jsjkx.200800027
      Recommendation systems are a research hotspot in the field of data mining. Due to the emergence of massive data, recommendation methods based on multi-source information fusion receive great attention. However, the existing recommendation methods based on heterogeneous information fusion often ignore the interaction information between users and items, as well as the interaction between meta-paths in feature representation. Therefore, considering the influence of different perspectives of attribute node embedding and structural meta-paths, a network recommendation method with multi-level graph attention is proposed. This method granulates the multi-source information network structure into multiple independent coarse-grained networks by constructing different meta-paths. Then, based on the graph attention mechanism and local node attribute embedding, the method learns the potential features of users and items separately. Finally, it gives a fine-grained network recommendation after fusion. Horizontal and vertical evaluations are conducted on real large-scale datasets, and the experimental results show that this method can effectively improve recommendation performance.
      Fast Symbolic Data Clustering Algorithm Based on Symbolic Relation Graph
      ZHANG Yan-jin, BAI Liang
      Computer Science. 2021, 48 (4): 111-116.  doi:10.11896/jsjkx.200800011
      Since a large amount of symbolic data is generated in practical applications, clustering of symbolic data has become an important research area of cluster analysis. Currently, many symbolic data clustering algorithms have been proposed. When they are applied in a big data environment, there are still problems such as high computational cost and slow operation speed. This paper proposes a fast symbolic data clustering algorithm based on symbolic relation graphs. It effectively solves this problem by replacing the original data with a symbolic relation graph and reducing the size of the dataset. A large number of experiments show that the new algorithm is more effective than other algorithms.
      Computer Graphics & Multimedia
      Video Character Relation Extraction Based on Multi-feature Fusion and Fine-granularity Analysis
      LYU Jin-na, XING Chun-yu , LI Li
      Computer Science. 2021, 48 (4): 117-122.  doi:10.11896/jsjkx.200800160
      Video character relation extraction is an important task of information extraction. It is valuable for video description, video retrieval, character search, public security supervision, etc. Due to the huge gap between the underlying pixels of video data and the semantics of high-level relations, it is difficult to extract the relations accurately. Most existing studies are based on coarse-granularity analysis, such as the co-occurrence of characters, which ignores fine-granularity information. In order to solve the problem that it is difficult to accurately and completely extract the relations among video characters, this paper proposes a new method for extracting relations of video characters based on multi-feature fusion and fine-granularity analysis. First, a new character entity recognition model, named CRMF (Character Recognition based on Multi-feature Fusion), is proposed. In this manner, we can generate a more complete character set using face and body feature fusion. Second, we exploit a character relationship recognition model based on fine-granularity features, named FGAG (Fine-Granularity Analysis based on GCN), which not only fuses the spatio-temporal features, but also considers the fine-granularity object information related to the characters. Thus a better mapping can be established to accurately identify the character relations. Comprehensive evaluations are conducted on movie videos and the SRIV character relationship recognition dataset, and the experimental results demonstrate that the proposed method outperforms the state-of-the-art methods on character entity and relation recognition; the F1 value increases by 14.4% and accuracy increases by 10.1%.
      Object Tracking Algorithm Based on Temporal-Spatial Attention Mechanism
      CHENG Xu, CUI Yi-ping, SONG Chen, CHEN Bei-jing, ZHENG Yu-hui, SHI Jin-gang
      Computer Science. 2021, 48 (4): 123-129.  doi:10.11896/jsjkx.200800164
      Object tracking technology is widely used in intelligent monitoring, human-computer interaction, unmanned driving and many other fields. In recent years, many efficient tracking methods have been proposed. However, object tracking methods still face great challenges in complex scenarios such as occlusion, illumination variations and background clutter, which lead to tracking failure. To solve the above problems, an effective object tracking algorithm based on a temporal-spatial attention mechanism is proposed in this paper. Firstly, we utilize the Siamese network architecture to improve the discriminative ability of object features. Then, improved channel attention and spatial attention modules are introduced into the Siamese network, which impose different weights on the features of different channels and spatial positions and focus on the features that are beneficial to object tracking. In addition, an efficient online object template updating mechanism is developed, which combines the features of the first frame and the features of subsequent frames with high confidence to reduce the risk of object drift. Finally, the proposed tracking algorithm is tested on the OTB2013 and OTB2015 benchmarks. Experimental results show that the performance of the proposed algorithm improves by 6.3% compared with current mainstream tracking algorithms.
      Novel Deep Learning Algorithm for Monocular Vision:H_SFPN
      SHI Xian-rang, SONG Ting-lun, TANG De-zhi, DAI Zhen-yong
      Computer Science. 2021, 48 (4): 130-137.  doi:10.11896/jsjkx.200400090
      This paper proposes a single-stage deep learning based algorithm, H_SFPN, for monocular visual object detection. Compared with the existing YOLOv3 and CenterNet algorithms, the proposed algorithm can effectively improve the accuracy of small object detection without sacrificing real-time performance. This paper designs a new backbone network architecture, which uses an improved Hourglass network model to extract feature maps in order to make full use of the high resolution of low-level features and the rich semantic information of high-level features. Then, in the feature map fusion stage, SFPN, a method based on the weighted fusion of feature maps, is proposed. Finally, the proposed H_SFPN algorithm improves the loss function for object position and size, which can effectively reduce the training error and accelerate convergence. According to the experimental results on the MSCOCO dataset, the proposed H_SFPN algorithm is significantly better than existing mainstream deep learning object detection algorithms such as Faster-RCNN, YOLOv3 and EfficientDet. In particular, the small object detection index APs of this algorithm is the highest, reaching 32.7.
      Multi-person Activity Recognition Based on Bone Keypoints Detection
      LI Meng-he, XU Hong-ji, SHI Lei-xin, ZHAO Wen-jie, LI Juan
      Computer Science. 2021, 48 (4): 138-143.  doi:10.11896/jsjkx.200300042
      Human activity recognition (HAR) technology is a research hotspot in the field of computer vision, but there are still many technical difficulties in the research of multi-person HAR. Inaccurate judgment of the number of people and the difficulty of feature extraction in multi-person activity recognition may lead to low accuracy. A multi-person activity recognition system based on bone keypoint detection is proposed in this paper, which combines the extraction of skeleton points with action recognition. Firstly, image frames are extracted from the original video. Secondly, the OpenPose algorithm is used to obtain keypoint data of the human skeleton to detect the number of people in the image and mark activity information. Then, human posture features are extracted according to the characteristics of the skeleton points. Meanwhile, in order to accurately describe the relationship between posture features, a feature description method based on a frame window matrix is proposed. Finally, a support vector machine (SVM) is used as the classifier to complete multi-person activity recognition. 10 types of typical daily activities from the UT-Interaction and HMDB51 datasets are taken as test objects, and experimental results prove that the proposed method can effectively extract the keypoints of multiple human bones in an image. Its average recognition accuracy over the 10 activities is 86.25%, which is higher than that of the compared methods.
      Underwater Image Enhancement Based on Color Correction and Deblurring
      WEI Dong, LIU Hao, CHEN Gen-long, GONG Xiao-hui
      Computer Science. 2021, 48 (4): 144-150.  doi:10.11896/jsjkx.200800185
      Abstract ( 17 )   PDF(2708KB) ( 25 )   
      Due to the absorption and scattering of light propagating underwater, underwater images often exhibit color shift, low contrast, blur and uneven illumination. According to the imaging model of underwater images, many seabed images are severely degraded; such low-quality images cannot fully provide ocean scene information and can hardly meet practical application requirements. Therefore, this paper proposes an underwater image enhancement method based on color correction and deblurring, which effectively combines the two stages. In the color correction stage, the contrast of each original image is first stretched. Since some images may be over- or under-stretched, gamma correction guided by the gray-world prior is then used to further adjust the contrast and color, so that the sums of the gray values of the R, G and B channels of each image tend to be equal. In the deblurring stage, the dark channel prior is used to deblur the color-corrected image and obtain the final enhanced image. Experimental results show that the proposed method has a good overall restoration effect: it can effectively restore image information and achieves good enhancement performance in both subjective and objective evaluations. In addition, the proposed method can be used as a pre-processing step for computer vision tasks such as underwater image classification, improving classification accuracy on an underwater image set by about 16% in our experiment.
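The gray-world gamma-correction step can be sketched as choosing a per-channel exponent that moves each channel's mean gray value toward a common target, so the channel sums tend to become equal. This is an approximate sketch of the idea, not the paper's exact formulation.

```python
import numpy as np

def gray_world_gamma(img):
    """Per-channel gamma correction toward the gray-world assumption.

    img: float array in [0, 1] with shape (H, W, 3). For each channel a
    gamma exponent is chosen so that the corrected channel mean moves
    toward the mean gray level of the whole image (exact for uniform
    channels, approximate otherwise).
    """
    img = np.clip(img, 1e-6, 1.0)
    means = img.reshape(-1, 3).mean(axis=0)      # per-channel means
    target = means.mean()                        # gray-world target level
    gammas = np.log(target) / np.log(means)      # m ** gamma == target for uniform m
    return np.clip(img ** gammas, 0.0, 1.0)
```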
      Transverse Section Recognition Algorithm Based on BCNN for Fetal Craniocerebral Ultrasound
      SHU Xin, CHANG Feng, ZHANG Xin, DU Rui, YU Zhuan
      Computer Science. 2021, 48 (4): 151-156.  doi:10.11896/jsjkx.200500049
      Abstract ( 9 )   PDF(2219KB) ( 18 )   
      Ultrasound examination during pregnancy is an important step in discovering fetal abnormalities, so accurate and efficient examination and diagnosis of the fetus is of great value. In this paper, a bilinear convolutional neural network (BCNN) is used to identify transverse sections of the fetal head, and on this basis we propose BCNN-R and BCNN-S. The BCNN model takes fetal craniocerebral ultrasound images as input and first preprocesses the input data. Then two parallel sub-networks extract cross-sectional features that are highly discriminative and robust, after which the model integrates the extracted features, which helps to extract fine features for recognition. Finally, a linear layer outputs the classification result. To verify the effectiveness of the proposed algorithms, comparison experiments are conducted on the self-built fetal ultrasound dataset JTU19. The experimental results show that the proposed algorithms obviously improve classification performance compared with basic networks (GoogleNet, DenseNet, SeNet, etc.): the overall accuracy of BCNN-S reaches 88.95%, and the precision and recall of BCNN-R on horizontal cross-sections reach 97.22% and 88.61%. In addition, we also conduct classification experiments on the public dataset HC18. The accuracy, precision and recall of BCNN reach 89.48%, 87.66% and 87.71% respectively, which further verifies the effectiveness of the proposed algorithms.
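The feature-integration step of a BCNN is bilinear pooling of the two parallel streams: location-wise outer products are summed, then signed-sqrt and L2 normalization are applied, following standard bilinear-CNN practice. The shapes below are illustrative, not this paper's exact networks.

```python
import numpy as np

def bilinear_pool(feat_a, feat_b):
    """Bilinear pooling of two feature streams (the core BCNN operation).

    feat_a, feat_b: (C1, N) and (C2, N) features from two CNN streams over
    the same N spatial locations.
    """
    b = feat_a @ feat_b.T                        # (C1, C2) summed outer products
    b = np.sign(b) * np.sqrt(np.abs(b))          # signed square-root
    norm = np.linalg.norm(b)
    return b / norm if norm > 0 else b
```

The resulting (C1, C2) matrix, flattened, is the feature vector a linear layer can classify.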
      Image Caption Generation with Joint Self-attention and Recurrent Neural Network
      WANG Xi, ZHANG Kai, LI Jun-hui, KONG Fang, ZHANG Yi-tian
      Computer Science. 2021, 48 (4): 157-163.  doi:10.11896/jsjkx.200300146
      Abstract ( 10 )   PDF(2378KB) ( 26 )   
      At present, most image caption generation models consist of a convolutional neural network (CNN) based image encoder and a recurrent neural network (RNN) based caption decoder. The image encoder extracts visual features from images, while the caption decoder generates captions from those features with an attention mechanism. Although the decoder uses an RNN with attention to model the interaction between image features and captions, it ignores the self-attention within images or within captions. Therefore, this paper proposes a novel model that combines the advantages of RNNs and self-attention networks for image caption generation. On the one hand, the model can simultaneously capture intra- and inter-modal interactions within a unified attention area through self-attention; on the other hand, it retains the inherent advantages of RNNs. Experimental results on the MSCOCO dataset show that the proposed model outperforms the baseline, improving CIDEr from 1.135 to 1.166.
      Study on Super-resolution Image Reconstruction of Leukocytes
      WANG Wei, HU Tao, LI Xin-wei, SHEN Si-wan, JIANG Xiao-ming, LIU Jun-yuan
      Computer Science. 2021, 48 (4): 164-168.  doi:10.11896/jsjkx.200100099
      Abstract ( 14 )   PDF(2083KB) ( 18 )   
      In recent years, computer vision has become a research focus in various disciplines and has gradually been applied to numerous scientific research scenarios. Medical workers often use blood cell image analysis systems to automatically count and classify white blood cell images when performing routine blood tests in the clinic, and the quality of the white blood cell images affects the counting and classification performance of such systems. This paper focuses on the problem of blurred details in white blood cell images under the microscope and introduces a super-resolution method to solve it. The method builds on the Super-Resolution Generative Adversarial Network (SRGAN): it introduces a Residual-in-Residual Dense Block (RRDB) to improve the network structure, removes the batch normalization layers in the standard residual block to improve network performance, and improves the loss function of the discriminator. Experimental results show that, compared with 3 interpolation methods and 4 learning-based super-resolution methods, the proposed method (SRGAN+) improves the resolution while producing images with richer textures and more realistic visuals. Compared with SRGAN, the proposed algorithm achieves a 1.008 dB improvement in peak signal-to-noise ratio (PSNR) and a 1.07% improvement in structural similarity (SSIM).
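The PSNR metric quoted above is standard and easy to compute; a minimal sketch for 8-bit images:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

A 1 dB gain, as reported for SRGAN+, corresponds to roughly a 20% reduction in mean squared error.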
      Semantic-contrast Generative Adversarial Network Based Highly Undersampled MRI Reconstruction
      MA Feng-fei, LIN Su-zhen, LIU Feng, WANG Li-fang, LI Da-wei
      Computer Science. 2021, 48 (4): 169-173.  doi:10.11896/jsjkx.200600047
      Abstract ( 11 )   PDF(2607KB) ( 15 )   
      Exploiting the sparse nature of data to reconstruct images from randomly undersampled K-space is the main approach to the problem that the application of magnetic resonance imaging (MRI) is limited by long acquisition times. However, existing methods lose textures severely when reconstructing highly undersampled MRI. Aiming at this problem, and drawing on the adversarial learning idea of generative adversarial networks (GAN), a novel method for highly undersampled MRI reconstruction based on a semantic-contrast generative adversarial network (SC-GAN) is proposed. The method consists of two successive parts. In the first part, the MRI image with highly random Cartesian undersampling is input into a U-NET-based generator that competes with a discriminator to generate the initial reconstructed image, constituting the reconstruction subnet. The other part is the semantic-contrast subnet, which compares the semantic information between the initial reconstructed image and its fully-sampled counterpart with a VGG-16 network; the comparison results are fed back to the first part for parameter adjustment until the best reconstructed image is generated. Experimental results show excellent reconstruction, in both subjective and objective evaluations, at acceleration factors up to 7 (14% sampling). Compared with state-of-the-art methods, the proposed SC-GAN has lower memory consumption, faster convergence and richer textures, and can provide algorithmic support for the development of next-generation MRI machines.
      Object Detection in Remote Sensing Images Based on Saliency Feature and Angle Information
      YUAN Xing-xing, WU Qin
      Computer Science. 2021, 48 (4): 174-179.  doi:10.11896/jsjkx.191200027
      Abstract ( 11 )   PDF(2119KB) ( 21 )   
      Multi-class object detection in remote sensing images is a challenging subject since objects in remote sensing images are dense, multi-oriented and multi-scale. This paper presents a new framework for object detection in remote sensing images. The framework enhances object information, suppresses non-object information, and improves feature representation by extracting salient features and the relationships between convolutional channels. Meanwhile, multi-scale features are added to the convolution module to capture more context information without adding extra parameters to the model. To handle the variable object angles in remote sensing images, we add angle information to the Region Proposal Network (RPN) to obtain oriented rectangular object proposals. In the training stage, an attention loss function is added to guide saliency learning. The proposed framework is validated on public remote sensing image datasets, and the experimental results on both the horizontal-box task and the oriented-box task prove the effectiveness of the proposed method.
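Decoding an angle-augmented proposal (cx, cy, w, h, angle) into an oriented quadrilateral is typically done by rotating the axis-aligned corners about the box center; a generic sketch of this common representation, not the paper's network:

```python
import numpy as np

def rotated_box_corners(cx, cy, w, h, angle):
    """Corner points of an oriented proposal given (cx, cy, w, h, angle).

    angle is in radians; the four corners of the axis-aligned box are
    rotated about the box center and then translated to (cx, cy).
    """
    dx, dy = w / 2.0, h / 2.0
    corners = np.array([[-dx, -dy], [dx, -dy], [dx, dy], [-dx, dy]])
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])            # 2-D rotation matrix
    return corners @ rot.T + np.array([cx, cy])
```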
      Resource-aware Based Adaptive-scaling Image Target Detection Under Multi-resolution Scenario
      ZHANG Kai-qiang, JIANG Cong-feng, CHENG Xiao-lan, JIA Gang-yong, ZHANG Ji-lin, WAN Jian
      Computer Science. 2021, 48 (4): 180-186.  doi:10.11896/jsjkx.201200116
      Abstract ( 12 )   PDF(2398KB) ( 21 )   
      Edge video processing can reduce the video transmission delay, processing overhead and storage cost of cloud-platform video processing systems, but the diversity of video parameters (resolution, frame rate, etc.) can make edge video processing unsatisfactory. Usually, in the image pre-processing stage, image scaling and transformation are applied to ensure the best image processing effect. However, in uncertain scenes such as video monitoring, directly scaling down the resolution of all images can largely reduce the target detection rate. To address these problems, this paper chooses the scaling ratio of horizontal and vertical pixels as the image scaling factor. For video data with different resolutions, it analyzes the influence of the image scaling factor on video processing performance and puts forward a scheme for setting the image scaling factor dynamically. In this scheme, system performance indexes (server-side power consumption and memory utilization) are taken as constraints on the video processing performance index (face detection rate), so as to obtain the image scaling factor corresponding to the optimal face detection rate at each resolution. Experimental results show that, for video data with different resolutions, the dynamic setting of the image scaling factor can reduce system power consumption and memory utilization and improve video processing efficiency and capacity while guaranteeing video processing performance.
      Concrete Pavement Crack Detection Algorithm Based on Full U-net
      QU Zhong, XIE Yi
      Computer Science. 2021, 48 (4): 187-191.  doi:10.11896/jsjkx.200100113
      Abstract ( 11 )   PDF(2274KB) ( 22 )   
      Aiming at the insufficient precision and robustness of existing crack detection algorithms in complex environments, a new model, the full U network, is proposed based on deep learning theory and the U-net model. Firstly, the network is constructed on the basis of the U-net model. Then, upsampling is performed at every pooling layer to restore the feature map to its size before that pooling layer and fuse it with the convolution layer before pooling. Finally, the new feature map is concatenated with the corresponding layer after upsampling in the U-net. To verify the effectiveness of the algorithm, experiments are performed on the test set. Experimental results show that the average precision of the proposed algorithm reaches 83.48%, the recall rate 85.08%, and F1 84.11%, which are 1.48%, 4.68% and 3.29% higher than the precision, recall and F1 of U-net respectively. This shows that in complex environments the full U network can still extract complete cracks and remain robust.
      3D Point Cloud Shape Completion GAN
      ZHAO Xin-can, CHANG Han-xing, JIN Ren-biao
      Computer Science. 2021, 48 (4): 192-196.  doi:10.11896/jsjkx.200100048
      Abstract ( 9 )   PDF(2913KB) ( 20 )   
      In real scanning environments, due to line-of-sight occlusion or improper operation by technicians, actual point cloud models are often incomplete in shape, which seriously affects subsequent applications. Therefore, this paper proposes a 3D point cloud shape completion generative adversarial network to complete point cloud models. The point cloud reconstruction part of the network combines the T-Net structure used for data alignment in PointNet with a 3D point cloud AutoEncoder network to predict and fill in the missing data. The discriminator uses the encoder part of the 3D point cloud AutoEncoder to distinguish completed 3D point cloud data from real 3D point cloud data. Finally, the above network is trained on ShapeNet, and the trained model is verified and compared qualitatively with other benchmark methods. The experimental results show that the proposed network can complete a point cloud model with missing data into a full 3D point cloud. On ShapeNet's three sub-datasets chair, table and bed, compared with the method based on the 3D point cloud AutoEncoder, the F1 score improves by 3.0%, 3.3% and 3.1%, and compared with the voxel-based 3D-EPN method, the F1 score increases by 9.9%, 5.8% and 4.3%, respectively.
      Optimization of GPU-based Eigenface Algorithm
      LI Fan, YAN Xing, ZHANG Xiao-yu
      Computer Science. 2021, 48 (4): 197-204.  doi:10.11896/jsjkx.200600033
      Abstract ( 14 )   PDF(3066KB) ( 25 )   
      The eigenface algorithm is one of the commonly used face recognition methods based on facial representation. When the amount of training data is large, both the training and testing modules are very time-consuming. Therefore, the CUDA parallel computing architecture is used to implement a GPU-accelerated eigenface algorithm. The effect of GPU parallel computing depends on the hardware specification, the complexity and parallelism of the algorithm itself, and the parallelization method chosen by the developer. This paper first uses the GPU to optimize and accelerate the calculation steps of the training phase (computing the average value, zero-mean data and normalized eigenfaces) and of the testing phase (projection onto the eigenface space and computation of Euclidean distances). Secondly, different parallelization methods are applied to the corresponding calculation steps and their performance is evaluated. Experimental results show that, with face training data ranging from 320 to 1920 images, the acceleration of each calculation step is obvious. Compared with an Intel i7-5960X, a GTX1060 display adapter achieves an average speedup of about 71.7 times in the training module and about 34.1 times in the testing module.
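The steps listed above (mean face, zero-mean data, eigenface projection, Euclidean-distance matching) can be sketched on the CPU with NumPy; the GPU version parallelizes exactly these steps. A minimal sketch, not the paper's CUDA implementation:

```python
import numpy as np

def train_eigenfaces(faces, k):
    """Eigenface training: mean face, zero-mean data, top-k eigenfaces.

    faces: (N, D) matrix with one flattened face image per row. The SVD of
    the zero-mean data yields the eigenface basis; training projections
    are stored for later matching.
    """
    mean = faces.mean(axis=0)
    centered = faces - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:k]                          # (k, D) orthonormal basis
    weights = centered @ eigenfaces.T            # (N, k) training projections
    return mean, eigenfaces, weights

def identify(face, mean, eigenfaces, weights):
    """Project a probe face and return the index of the nearest training face."""
    w = (face - mean) @ eigenfaces.T
    dists = np.linalg.norm(weights - w, axis=1)  # Euclidean distances
    return int(np.argmin(dists))
```

Each of these matrix operations maps naturally onto GPU kernels, which is why the training and testing modules benefit from CUDA.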
      FPGA-based CNN Image Recognition Acceleration and Optimization
      QI Yan-rong, ZHOU Xia-bing, LI Bin, ZHOU Qing-lei
      Computer Science. 2021, 48 (4): 205-212.  doi:10.11896/jsjkx.200600089
      Abstract ( 9 )   PDF(2611KB) ( 17 )   
      Currently, CNNs are widely used in many application scenarios, including image classification, speech recognition, video analysis and document analysis. Because CNNs are computationally intensive, they are often accelerated with GPUs. However, GPUs have high power consumption and are not well suited to the CNN inference stage. Therefore, this paper studies FPGA-based acceleration and optimization of CNN image recognition. The OpenCL SDK provided by Intel FPGA is used to design and optimize the CNN forward model on an FPGA board. First of all, for the computation problem, functional modules are divided so as to make full use of the FPGA's high computing efficiency. Secondly, the core algorithm is optimized to improve running speed: feature-map processing operations are analyzed, a parameter-sharing strategy reduces the amount of data storage, and pipelined data transfer reduces the number of accesses to off-chip storage. Finally, the data cache, data flow and loops are optimized to alleviate the FPGA's on-chip resource constraints, and the parameters are quantized to reduce the FPGA memory resources occupied. Experimental results show that the FPGA has lower power consumption: CPU power consumption is 2.1 times and GPU power consumption 6.5 times that of the FPGA. Compared with methods proposed in the related literature in recent years, the proposed method achieves higher throughput and computational performance.
      Artificial Intelligence
      Review Progress of Learner Knowledge Tracing
      ZHANG Nuan, JIANG Bo
      Computer Science. 2021, 48 (4): 213-222.  doi:10.11896/jsjkx.200600044
      Abstract ( 16 )   PDF(1899KB) ( 32 )   
      Learner modeling is one of the supporting techniques of adaptive learning systems, among which learner knowledge state modeling, represented by knowledge tracing, is the most widely studied. Three representative knowledge tracing techniques are Bayesian Knowledge Tracing (BKT) based on the hidden Markov model, the Additive Factor Model (AFM) based on logistic regression, and Deep Knowledge Tracing (DKT) based on recurrent neural networks. It is found that BKT is suitable for knowledge tracing in learning tasks that involve only a single knowledge skill, while AFM and DKT can trace students' knowledge states in learning tasks that involve more than one knowledge skill. However, DKT is hard to interpret from the perspective of pedagogy. Based on a review of existing research and inspired by knowledge space theory, this paper argues that integrating the prerequisite relationships among knowledge skills into knowledge tracing is a promising research direction. Finally, this paper proposes a prototype of an additive factor model integrating knowledge prerequisite relationships.
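A single BKT update, the hidden-Markov-model technique named above, can be sketched in a few lines; the slip, guess and learn values here are illustrative, not fitted parameters:

```python
def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15):
    """One Bayesian Knowledge Tracing step.

    p_know: prior probability that the skill is mastered. The Bayesian
    posterior is computed from the observed answer (correct/incorrect),
    then the learning transition is applied.
    """
    if correct:
        num = p_know * (1 - slip)
        den = num + (1 - p_know) * guess
    else:
        num = p_know * slip
        den = num + (1 - p_know) * (1 - guess)
    posterior = num / den
    return posterior + (1 - posterior) * learn   # learning transition
```

Iterating this update over a student's answer sequence yields the mastery estimate BKT reports per skill.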
      DQN Algorithm Based on Averaged Neural Network Parameters
      HUANG Zhi-yong, WU Hao-lin, WANG Zhuang, LI Hui
      Computer Science. 2021, 48 (4): 223-228.  doi:10.11896/jsjkx.200600177
      Abstract ( 11 )   PDF(2714KB) ( 33 )   
      In the field of deep reinforcement learning, how to explore the environment efficiently is a hard problem. The deep Q-network algorithm explores the environment with an epsilon-greedy policy whose magnitude and decay need manual tuning, and unsuitable tuning causes poor performance. Moreover, the epsilon-greedy policy is inefficient and cannot solve the deep exploration problem. In this paper, to solve this problem, a deep reinforcement learning algorithm based on averaged neural network parameters (AP-DQN) is proposed. At the beginning of an episode, the algorithm averages multiple online network parameter sets learned by the agent to obtain a perturbed neural network, and then selects actions through the perturbed network, which improves the agent's exploration efficiency. Experimental results show that the exploration efficiency of AP-DQN is better than that of DQN on a deep exploration problem, and that AP-DQN achieves higher scores than DQN in five Atari games, with normalized scores increasing by 112.50% at most and 19.07% at least compared with DQN.
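The parameter-averaging step at the heart of AP-DQN can be sketched as an element-wise mean over recent online-network snapshots; this shows only the averaging, not the full DQN training loop, and the dict layout is illustrative:

```python
import numpy as np

def averaged_parameters(snapshots):
    """Average several online-network parameter snapshots.

    snapshots: list of dicts mapping layer name -> weight array (e.g. the
    last K learned parameter sets). Returns one dict with the element-wise
    mean, used as the perturbed network that selects exploratory actions.
    """
    keys = snapshots[0].keys()
    return {k: np.mean([s[k] for s in snapshots], axis=0) for k in keys}
```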
      Community Detection Algorithm in Complex Network Based on Network Embedding and Local Resultant Force
      YANG Xu-hua, WANG Chen
      Computer Science. 2021, 48 (4): 229-236.  doi:10.11896/jsjkx.200200102
      Abstract ( 13 )   PDF(3077KB) ( 29 )   
      Community detection can reveal the inherent structure and dynamic behavior of complex networks and is a current research hotspot. In this paper, we propose a community detection algorithm based on network embedding and local resultant force. The network topological space is transformed into Euclidean space, and network nodes are converted into vector data points. First, based on the gravity model and the network topology, a local resultant force and a local-resultant-force cosine centrality index (LFC) are proposed, and the center node of each initial small community is determined by the LFC of the nodes and the distances between nodes. Then the remaining non-central nodes are assigned to the nearest central node to form the initial small communities. Finally, communities are merged by optimizing the modularity to find the optimal community structure. Compared with 6 well-known community detection algorithms on 6 real-world networks and artificial networks with adjustable parameters, the proposed algorithm shows good community detection performance.
      Multi-granularity Medical Entity Recognition for Chinese Electronic Medical Records
      ZHOU Xiao-jin, XU Chen-ming, RUAN Tong
      Computer Science. 2021, 48 (4): 237-242.  doi:10.11896/jsjkx.200100036
      Abstract ( 13 )   PDF(1622KB) ( 33 )   
      In existing named entity recognition tasks for Chinese clinical electronic medical records, the annotation granularity is usually either too fine or too coarse: it is difficult to find practical application scenarios for overly fine annotation results, while overly coarse annotation results usually need complex post-processing to clarify the standard form and semantic type of entities for subsequent data mining applications. To simplify post-processing, 9 kinds of fine-grained analytical entities are defined to explain coarse-grained entities according to the characteristics of 7 common coarse-grained clinical entities. In addition, according to the characteristics of multi-granularity entities, a multi-granularity clinical entity recognition model based on multi-task learning and the self-attention mechanism is proposed, and 5 000 texts containing multi-granularity entities from real hospital electronic medical records are annotated to verify the model. Experimental results show that this model outperforms mainstream sequence labeling models: in the coarse- and fine-grained entity recognition tasks, its F1 scores reach 92.88 and 85.48, respectively.
      Rumor Source Detection in Social Networks via Maximum-a-Posteriori Estimation
      BAO Zhi-qiang, CHEN Wei-dong
      Computer Science. 2021, 48 (4): 243-248.  doi:10.11896/jsjkx.200400053
      Abstract ( 9 )   PDF(2359KB) ( 24 )   
      With the popularization of the Internet, information can be transmitted to the public at an extremely rapid rate, but at the same time abnormal information such as rumors floods the network through its cascade effect. How to quickly and accurately identify the source of a rumor spreading in a complex network is an urgent problem. This paper proposes a source localization algorithm for social networks. Different from existing methods based on maximum-a-posteriori (MAP) probability estimation, the method first considers the influence of global and local infected and non-infected nodes and proposes a better MAP prior probability estimation (PPE) calculation mode. Then, the social network is sparsified through a greedy algorithm based on minimum spanning trees, which makes the likelihood estimation (LE) in MAP more consistent with the real propagation structure. Finally, the new MAP value is used to estimate the possibility of each node being the propagation source, so as to locate the rumor source more accurately. The proposed method is compared with existing methods in experiments on model networks and real networks, and the results show that it is superior to these existing rumor source localization methods.
      Chinese Event Detection with Joint Representation of Characters and Words
      WU Fan, ZHU Pei-pei, WANG Zhong-qing, LI Pei-feng, ZHU Qiao-ming
      Computer Science. 2021, 48 (4): 249-253.  doi:10.11896/jsjkx.200300156
      Abstract ( 13 )   PDF(1536KB) ( 39 )   
      As a sub-task of event extraction, event detection is a research hotspot in information extraction. It plays an important role in many NLP applications, such as knowledge graphs, question answering and reading comprehension. Different from English, a Chinese character can be regarded as a single-character word with its own meaning, and Chinese words have specific internal structures. Therefore, this paper proposes a Chinese event detection model based on graph convolutional neural networks, called JRCW-GCN (Joint Representation of Characters and Words by Graph Convolution Neural Network), which integrates Chinese character and word representations. It uses the latest BERT and Transformer to encode the semantic information of words and characters respectively, then uses the relationships between words and characters to construct the corresponding edges, and finally uses the graph convolution model to detect Chinese events by integrating semantic information at both the character and word levels. The experimental results on the ACE2005 Chinese corpus show that our model outperforms the state-of-the-art models.
      Two Types of Leaders Salp Swarm Algorithm
      YU Jia-shan, WU Lei
      Computer Science. 2021, 48 (4): 254-260.  doi:10.11896/jsjkx.200600181
      Abstract ( 8 )   PDF(2252KB) ( 28 )   
      To improve the solution accuracy and global search capability of the salp swarm algorithm (SSA), an improved salp swarm algorithm based on normal-process search and the differential evolution algorithm is proposed, called the two types of leaders salp swarm algorithm (TTLSSA). Two types of leaders and two following groups are set up in the algorithm. The leader performing the normal-process search carries out normal-process migration, crossover, selection and other operations, which are mainly used for global exploration. Under the influence of a gap parameter that varies with the number of iterations, the leader near the current optimal solution combines global search and local exploitation. Eighteen standard test functions of different types are used to test the performance of the proposed algorithm against DE, SSA, the sine cosine algorithm (SCA), the grey wolf optimizer (GWO) and the whale optimization algorithm (WOA). TTLSSA ranks first or joint first in average precision on 16 test functions, second on the other 2, and second among the 6 algorithms in average running time, indicating that TTLSSA significantly improves the optimization ability of SSA without increasing its time cost.
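For reference, one iteration of the baseline SSA that TTLSSA improves upon can be sketched as follows (standard leader/follower updates on a simplified follower rule; this is not the improved two-leader variant):

```python
import numpy as np

def ssa_step(positions, food, lb, ub, it, max_it, rng):
    """One iteration of a baseline salp-swarm update.

    positions: (n, d) salp positions; food: (d,) best solution found so
    far. The first salp is the leader moving around the food source; each
    remaining salp follows its predecessor. Results are clipped to bounds.
    """
    c1 = 2.0 * np.exp(-(4.0 * it / max_it) ** 2)   # exploration coefficient
    n, d = positions.shape
    new = positions.copy()
    c2 = rng.random(d)
    c3 = rng.random(d)
    step = c1 * ((ub - lb) * c2 + lb)
    new[0] = np.where(c3 >= 0.5, food + step, food - step)
    for j in range(1, n):                           # followers average with predecessor
        new[j] = 0.5 * (new[j] + new[j - 1])
    return np.clip(new, lb, ub)
```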
      Computer Network
      Survey of Resource Scheduling for Serverless Platforms
      MA Ze-hua, LIU Bo, LIN Wei-wei, LI Jia-wei
      Computer Science. 2021, 48 (4): 261-267.  doi:10.11896/jsjkx.200800023
      Abstract ( 8 )   PDF(1478KB) ( 23 )   
      Serverless computing is emerging as a promising paradigm for deploying cloud applications. It truly implements pay-as-you-go billing without wasting resources: developers don't have to worry about the low-level details of the computing platform and only pay when requests or events are processed, which lowers the threshold for developers. Although the serverless model brings opportunities, it also brings problems such as the cold-start delay of platform functions and insufficient resource utilization. Therefore, this paper makes an in-depth investigation and analysis of resource scheduling technology for serverless computing platforms. The technical principles and research status of serverless resource scheduling based on resource utilization, response time delay and multi-objective optimization are discussed. Finally, this paper points out trends for future research: scheduling for different application types, the trade-off between response time and resource utilization, joint scheduling between virtual machines and serverless platforms, and hybrid algorithms for serverless resource scheduling.
      Congestion Control Based on Message Quality and Node Reliability in DTN
      CUI Jian-qun, HUANG Dong-sheng, CHANG Ya-nan, WU Shu-qing
      Computer Science. 2021, 48 (4): 268-273.  doi:10.11896/jsjkx.200500011
      Abstract ( 8 )   PDF(2356KB) ( 15 )   
      Delay Tolerant Networks (DTN) are characterized by intermittent connections, limited resources and randomly changing topology, and the limited network resources and topology uncertainty can easily lead to network congestion. To solve this problem, this paper puts forward a congestion control strategy based on message quality degree and node reliability (CCMQ). In this strategy, messages are prioritized according to their quality: when forwarding, messages with higher priority are forwarded first. When the next-hop node is selected, the node with high reliability is chosen for message forwarding, fully considering the attributes of the relay node. When congestion occurs, messages with low quality are discarded first, and an S-ACK message confirmation and deletion mechanism is added to release node cache space, effectively alleviating node congestion. Simulation results show that, compared with traditional congestion control algorithms, CCMQ performs better in message delivery rate, network load rate and average delay.
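The drop-lowest-quality buffer policy described above can be sketched with a min-heap keyed by message quality; the field names and quality scale are illustrative, and quality scoring itself is out of scope:

```python
import heapq

class MessageBuffer:
    """Fixed-capacity DTN buffer that evicts the lowest-quality message.

    Messages live in a min-heap keyed by quality degree, so on overflow
    the message with the lowest quality is discarded first.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []                       # entries are (quality, message_id)

    def add(self, quality, message_id):
        heapq.heappush(self.heap, (quality, message_id))
        if len(self.heap) > self.capacity:
            return heapq.heappop(self.heap)[1]   # id of the evicted message
        return None

    def contents(self):
        return {mid for _, mid in self.heap}
```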
      RFID Indoor Positioning Algorithm Based on Proximal Policy Optimization
      LI Li, ZHENG Jia-li, LUO Wen-cong, QUAN Yi-xuan
      Computer Science. 2021, 48 (4): 274-281.  doi:10.11896/jsjkx.200300028
      Abstract ( 14 )   PDF(4390KB) ( 19 )   
      In a Radio Frequency Identification (RFID) dynamic indoor positioning environment, the positioning error and computational complexity of traditional indoor positioning models increase with the number of positioning targets. This paper proposes an RFID positioning algorithm based on Proximal Policy Optimization (PPO), which treats positioning as a Markov decision process. Firstly, action evaluation is combined with random actions, and the return of the actions is maximized to select the best coordinate value. Meanwhile, with actions limited to a certain range, the algorithm introduces clipped probability ratios, alternates between post-sample and pre-sample actions, updates the policy over multiple epochs of minibatches with stochastic gradient ascent, and evaluates actions with the critic network. Finally, the PPO positioning model is obtained by training. This method effectively reduces the positioning error, improves positioning efficiency and converges faster; especially when dealing with a large number of positioning targets, it greatly reduces the computational complexity. Experimental results show that, compared with other RFID indoor positioning algorithms such as Twin Delayed Deep Deterministic policy gradient (TD3), Deep Deterministic Policy Gradient (DDPG) and Actor-Critic using Kronecker-Factored Trust Region (ACKTR), the average positioning error of the proposed method decreases by 36.361%, 30.696% and 28.167% respectively, the positioning stability improves by 46.691%, 34.926% and 16.911%, and the computational complexity decreases by 84.782%, 70.213% and 63.158%.
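The clipped probability ratio the abstract refers to is the standard PPO surrogate objective L = min(rA, clip(r, 1-eps, 1+eps)A); a minimal sketch independent of the RFID setting:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate objective, averaged over a batch.

    ratio: new-policy / old-policy action probabilities; advantage: the
    corresponding advantage estimates. Clipping bounds how far a single
    update can move the policy.
    """
    ratio = np.asarray(ratio, dtype=float)
    advantage = np.asarray(advantage, dtype=float)
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    return np.minimum(ratio * advantage, clipped * advantage).mean()
```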
      MUSIC Beam-forming Method Based on Temporal and Spatial Union Estimation of Noise Subspaces
      WANG Si-xiu, GUO Wen-qiang, WANG Xiao-jie, ZHANG Chuan-peng
      Computer Science. 2021, 48 (4): 282-287.  doi:10.11896/jsjkx.200300029
      Abstract ( 9 )   PDF(2613KB) ( 24 )   
      References | Related Articles | Metrics
      For the instability of noise-subspace estimation in frequency-domain MUSIC beam-forming under broadband short-pulse conditions,a MUSIC beam-forming method based on temporal and spatial union estimation of noise subspaces is proposed.Firstly,this method constructs complex analytic data from the linear-array receiving data in the time domain.Then,using the construction of time-domain augmented data together with the spatial sliding method,it stably estimates the noise subspaces from shorter data lengths.Lastly,based on the orthogonality of the noise subspaces,it obtains the corresponding beam via the identity matrix and the noise eigenvectors.Numerical simulation and measured-data processing results show that,compared with frequency-domain MUSIC beam-forming,the new approach reduces the number of snapshots needed to stably obtain the noise subspace,has better stability and detection performance,and improves the robustness of MUSIC beam-forming in practical applications.
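The orthogonality property exploited in the last step can be illustrated with the classic MUSIC spatial spectrum for a uniform linear array: the steering vector a(θ) is projected onto the noise subspace, and at the true direction the projection vanishes, producing a peak. This sketch assumes the noise-subspace eigenvectors have already been estimated (e.g. by the paper's temporal-spatial union method); sensor spacing of half a wavelength is assumed:

```python
import cmath

def music_spectrum(theta, noise_vecs, n_sensors, d_over_lambda=0.5):
    """MUSIC spatial spectrum P(theta) = 1 / ||E_n^H a(theta)||^2 for a
    uniform linear array. noise_vecs: list of noise-subspace eigenvectors
    (each a length-n_sensors sequence). theta: direction in radians."""
    phase = -2j * cmath.pi * d_over_lambda * cmath.sin(theta)
    a = [cmath.exp(phase * m) for m in range(n_sensors)]   # steering vector
    # Denominator: sum over noise eigenvectors of |e^H a|^2
    denom = sum(abs(sum(e[m].conjugate() * a[m] for m in range(n_sensors))) ** 2
                for e in noise_vecs)
    return 1.0 / max(denom, 1e-12)   # guard against exact orthogonality
```

At broadside (θ = 0) a noise eigenvector orthogonal to a(0) drives the denominator toward zero, so the spectrum peaks there.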
      Information Security
      Study of Universal Shellcode Generation Technology
      CHEN Tao, SHU Hui, XIONG Xiao-bing
      Computer Science. 2021, 48 (4): 288-294.  doi:10.11896/jsjkx.200300151
      Abstract ( 12 )   PDF(1812KB) ( 20 )   
      References | Related Articles | Metrics
      Shellcode generation technology is a program transformation technology that converts a program from source form to binary form.It can be used to generate Shellcode,including Shellcode used in exploitation and functional Shellcode used in the post-penetration period.This paper formally describes the relationship between code and data in a program and proposes an LLVM-based program transformation technique that can generate system-independent Shellcode.By constructing a built-in global data table and adding dynamic relocation code,this technique converts the code's data accesses from absolute memory addresses to relative memory addresses,eliminates the dependence on the relocation mechanism provided by the operating system during code execution,and gives the generated Shellcode good position-independent characteristics.In the experimental part,we test the function of our Shellcode generation system with source code of different sizes under different operating systems.We also compare the consistency of the code's function before and after Shellcode generation,as well as the file size,number of functions and execution time.Experimental results show that the Shellcode generation system functions correctly and has strong compatibility and versatility.
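The layout idea behind the built-in global data table can be sketched abstractly: data items are packed directly after the code, and each item is referenced by its offset within the combined image rather than by an absolute address, so no OS-provided relocation is needed. This is an illustrative model only, not the paper's LLVM pass:

```python
def build_shellcode_image(code_len, data_items):
    """Toy layout sketch: append a global data table after the code and
    record each item's position-relative offset (byte position in the
    image). Real shellcode would access items as base + offset."""
    image = bytearray(b"\x90" * code_len)   # placeholder code bytes (NOPs)
    offsets = {}
    for name, blob in data_items.items():
        offsets[name] = len(image)          # relative offset, not absolute address
        image += blob
    return bytes(image), offsets

img, offs = build_shellcode_image(4, {"msg": b"hi"})
```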
      IPSec VPN Encrypted Traffic Identification Based on Hybrid Method
      ZHOU Yi-min, LIU Fang-zheng , WANG Yong
      Computer Science. 2021, 48 (4): 295-302.  doi:10.11896/jsjkx.200700189
      Abstract ( 12 )   PDF(3786KB) ( 21 )   
      References | Related Articles | Metrics
      This paper proposes a hybrid method that combines fingerprint identification with machine learning to realize the identification of IPSec VPN encrypted traffic.Firstly,the method selects IPSec VPN traffic from the network traffic based on payload characteristics.Secondly,based on time-related flow features,it uses the random forest algorithm to establish an IPSec VPN traffic classification model.Through parameter optimization and feature selection,the overall traffic identification accuracy reaches 93%.The experimental results verify the feasibility of identifying IPSec VPN traffic by machine learning based on time-related flow features,and show that the proposed method effectively balances recognition accuracy and recognition speed,achieving efficient identification of IPSec VPN encrypted traffic.
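Time-related flow features of the kind fed to the random forest can be computed from packet timestamps alone. The exact feature set is not listed in the abstract; flow duration and inter-arrival-time statistics below are representative examples:

```python
from statistics import mean, stdev

def flow_time_features(timestamps):
    """Representative time-related flow features from a sorted list of
    packet arrival times: flow duration plus inter-arrival-time (IAT)
    mean, standard deviation and maximum."""
    iats = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {
        "duration": timestamps[-1] - timestamps[0],
        "iat_mean": mean(iats),
        "iat_std": stdev(iats) if len(iats) > 1 else 0.0,
        "iat_max": max(iats),
    }

features = flow_time_features([0.0, 0.1, 0.3, 0.6])
```

Each flow's feature dictionary would then become one row of the training matrix for the random forest classifier.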
      Prevention of Dishonest Behavior in Supply-Demand Matching
      ZHANG Shao-jie, LU Xu-dong, GUO Wei, WANG Shi-peng, HE Wei
      Computer Science. 2021, 48 (4): 303-308.  doi:10.11896/jsjkx.200900090
      Abstract ( 24 )   PDF(1371KB) ( 22 )   
      References | Related Articles | Metrics
      The supply-demand matching problem can be solved by crowdsourcing in social networks (SN).However,due to non-cooperative constraints in practical applications and the privacy protection mechanisms of social networks,crowdsourcing participants have both the motivation and the conditions to profit from dishonest behavior.Such behavior violates the fairness principle and can lead to the collapse of the trust chain in the network.To address dishonest behavior in the crowdsourcing supply-demand matching method,this paper adopts distributed public accounting to ensure that members truthfully report their individual behavior and status,and identifies two types of dishonest individuals by checking the public information.This paper also designs a reputation-based punishment mechanism to counter dishonest behavior.Finally,the validity and feasibility of the mechanism are proved by theoretical analysis.Under the mechanism,the best strategy for crowdsourcing participants is to be honest.
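A reputation-based punishment mechanism of the general shape described above can be sketched as an asymmetric update rule: dishonest behavior found on the public record is penalized much harder than honest behavior is rewarded, so honesty dominates as a strategy. The penalty and reward constants here are illustrative, not the paper's:

```python
def update_reputation(rep, honest, penalty=0.5, reward=0.05, floor=0.0, cap=1.0):
    """Asymmetric reputation update: a dishonest act costs far more
    reputation than an honest act earns, clamped to [floor, cap].
    Constants are hypothetical, for illustration only."""
    rep = rep + reward if honest else rep - penalty
    return min(cap, max(floor, rep))
```

Because one dishonest act undoes many honest ones, a participant maximizing long-run reputation (and the access it grants) is best off reporting truthfully.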
      Study on Malicious Behavior Graph Construction and Matching Algorithm
      WANG Le-le, WANG Bin-qiang, LIU Jian-gang, MIAO Qi-guang
      Computer Science. 2021, 48 (4): 309-315.  doi:10.11896/jsjkx.201100171
      Abstract ( 12 )   PDF(1992KB) ( 21 )   
      References | Related Articles | Metrics
      Malware is a highly threatening security problem in the Internet age.With the constant emergence of new malicious programs and their accelerating propagation,detecting them has become more difficult.Most firewalls and antivirus software identify malicious code by matching characteristic byte sequences,but malware authors use code obfuscation techniques to evade this kind of detection.Researchers therefore use dynamic analysis to counter such malware,but the time efficiency and matching accuracy of existing dynamic methods are not satisfactory.This paper proposes an effective malicious behavior graph construction and matching algorithm,covering the storage method of the two-dimensional association graph,the construction of behavior graphs and behavior association rules,the design of a behavior graph parser,and the behavior matching algorithm.Finally,experimental verification proves that this method has a high detection accuracy rate:except for the AutoRun category,the recognition rates for other types of malware are all above 90%.
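The core matching step can be illustrated as graph containment: a runtime behavior graph (API-call nodes, dependency edges) matches a malicious signature if every node and edge of the signature appears in it. The paper's actual matching uses behavior association rules and a graph parser; this containment check, with hypothetical Windows API names, only shows the idea:

```python
def graph_contains(behavior_graph, signature_graph):
    """Toy behavior-graph matching: a (nodes, edges) runtime graph matches a
    malicious signature if the signature's nodes and edges are all present."""
    nodes, edges = behavior_graph
    sig_nodes, sig_edges = signature_graph
    return sig_nodes <= nodes and sig_edges <= edges

# Hypothetical runtime trace and signature (API names for illustration only).
run = ({"CreateFile", "WriteFile", "RegSetValue", "CreateProcess"},
       {("CreateFile", "WriteFile"), ("WriteFile", "CreateProcess")})
sig = ({"CreateFile", "WriteFile"}, {("CreateFile", "WriteFile")})
```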
      Attention-based Hot Block and Saliency Pixel Convolutional Neural Network Method for Face Anti-spoofing
      WU Xiao-li, HU Wei
      Computer Science. 2021, 48 (4): 316-324.  doi:10.11896/jsjkx.200300128
      Abstract ( 10 )   PDF(2986KB) ( 25 )   
      References | Related Articles | Metrics
      Face anti-spoofing verifies whether the subject is a real person.The diversity of attack methods and the deployment of face recognition on embedded and mobile devices with low computing capability make face anti-spoofing a very challenging task.For this task,an attention-based hot block and saliency pixel convolutional neural network method is proposed.The hot block method replaces discrimination of the entire face with the judgment of five hot blocks,which not only reduces the amount of computation,but also forces the network to focus on regions with more discriminative information,so as to improve the accuracy of the network.On the other hand,the saliency pixel method performs saliency pixel prediction on the input face image to determine whether the saliency prediction map matches the depth characteristics of a real face,distinguishing live faces from attacks.This method fuses the results of hot blocks and saliency pixels to give full play to both local and global features,further enhancing the effect of face anti-spoofing.Compared with existing methods,the proposed method achieves good results on the CASIA-MFSD,Replay-Attack and SiW datasets.
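The fusion of local (hot block) and global (saliency pixel) evidence can be sketched as a weighted score combination; the weighting factor and decision threshold below are illustrative, not the paper's learned fusion:

```python
def fuse_scores(hot_block_scores, saliency_score, alpha=0.5, threshold=0.5):
    """Toy fusion of face anti-spoofing cues: average the five hot-block
    liveness scores (local cue) and blend with the saliency-pixel depth
    score (global cue). alpha and threshold are hypothetical constants."""
    local = sum(hot_block_scores) / len(hot_block_scores)
    score = alpha * local + (1 - alpha) * saliency_score
    return score, score >= threshold   # (fused score, live-face decision)

score, is_live = fuse_scores([0.9, 0.8, 0.9, 0.85, 0.95], saliency_score=0.9)
```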
      Classification of Application Type of Encrypted Traffic Based on Attention-CNN
      CHEN Ming-hao, ZHU Yue-fei, LU Bin, ZHAI Yi, LI Ding
      Computer Science. 2021, 48 (4): 325-332.  doi:10.11896/jsjkx.200900155
      Abstract ( 20 )   PDF(2293KB) ( 33 )   
      References | Related Articles | Metrics
      With the development of traffic encryption technology,encrypted traffic has gradually replaced non-encrypted traffic and become the dominant part of the current network environment.While protecting users’ privacy,encryption is also exploited by malicious software to evade traditional intrusion detection systems based on ports or payload keywords,which brings serious threats to network security.In view of the limitations of conventional classification methods,researchers have tried to classify the application type of encrypted traffic with artificial intelligence methods,but existing studies usually do not make full use of the characteristics of encrypted traffic,resulting in poor performance in real,complex network environments.To solve these problems,this paper proposes an encrypted traffic classification method based on an Attention-CNN model.After preliminary feature extraction of the encrypted traffic,BiLSTM+Attention and 1D-CNN models are used to compress and further extract its temporal and spatial features respectively.Finally,a fully connected neural network performs the final classification on the resulting mixed features.Experiments are carried out on the ISCXVPN2016 dataset,a widely used open-source dataset for encrypted traffic classification.Experimental results show that the overall classification precision of the Attention-CNN model reaches 98.7% and the F1 score is significantly improved compared with several existing studies.
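The attention step applied to the BiLSTM outputs can be sketched as attention pooling: per-timestep scores are softmaxed into weights, and the weighted sum of feature vectors forms the flow representation, so informative packets dominate. In the real model the scores are learned; here they are given:

```python
import math

def attention_pool(features, scores):
    """Attention pooling sketch: softmax the per-timestep scores, then
    return the weighted sum of the timestep feature vectors.
    features: list of equal-length vectors; scores: one float per timestep."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]   # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(features[0])
    return [sum(w * f[i] for w, f in zip(weights, features)) for i in range(dim)]
```

With equal scores this reduces to plain mean pooling; a large score on one timestep makes the output approach that timestep's feature vector.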