Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Volume 47 Issue 2, 15 February 2020
  
Contents
Computer Science. 2020, 47 (2): 0-0. 
Database & Big Data & Data Science
Survey on Representation Learning of Complex Heterogeneous Data
JIAN Song-lei, LU Kai
Computer Science. 2020, 47 (2): 1-9.  doi:10.11896/jsjkx.190600180
With the coming of the era of artificial intelligence and big data, various kinds of complex heterogeneous data emerge continuously, becoming the basis of data-driven artificial intelligence methods and machine learning models. The quality of data representation directly affects the performance of subsequent learning algorithms, so representation learning for complex heterogeneous data is an important research area. Firstly, multiple types of data representations were introduced and the challenges of representation learning methods were summarized. Then, according to data modality, the data were categorized into single-type data and multi-type data. For single-type data, the research progress and typical representation learning algorithms for categorical data, network data, text data and image data were introduced respectively. Further, multi-type data compounded from multiple single-type data were detailed, including mixed data containing both categorical and continuous features, attributed network data containing node content and network topology, cross-domain data derived from different domains, and multimodal data containing multiple modalities. Based on these data, the research progress and state-of-the-art representation learning models were introduced. Finally, the development trends in representation learning of complex heterogeneous data were discussed.
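The survey treats categorical data as the simplest single-type modality. As a point of reference for what a "representation" is in that baseline case, here is a minimal one-hot encoding sketch (illustrative code, not from the paper; the learned representations the survey covers are what improve on this):

```python
def one_hot(categories, value):
    """One-hot representation of a categorical value: a sparse, fixed-length
    vector with a single 1 at the value's index."""
    vec = [0] * len(categories)
    vec[categories.index(value)] = 1
    return vec
```

Learned categorical embeddings replace this sparse vector with a dense, lower-dimensional one, which is the starting point of the methods surveyed.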
Review on Community Detection in Complex Networks
ZHAO Wei-ji,ZHANG Feng-bin,LIU Jing-lian
Computer Science. 2020, 47 (2): 10-20.  doi:10.11896/jsjkx.190100214
In recent years, with the rapid development of modern network communication and social media technologies, complex networks have become one of the frontier hotspots of interdisciplinary research. As an important problem in complex network research, community detection has important theoretical significance and application value, and has attracted increasing attention. Many community detection algorithms and reviews have been proposed. However, most existing reviews on community detection focus on a particular direction or field. On the basis of previous work, this paper studied community detection algorithms in depth and reviewed the research progress of community detection. Firstly, this paper gave the definition of community detection and evaluation measures for different network structures. Then, it introduced classic community detection algorithms on different network structures, including global and local community detection algorithms on homogeneous networks, community detection on heterogeneous networks, and community detection combining link structure with node content, as well as dynamic network community detection and community evolution. Finally, it briefly introduced typical applications of community detection in the real world, including influence maximization, link prediction and sentiment analysis, and discussed the challenges in the current community detection field. This paper tries to draw a clearer and more comprehensive outline of the community detection research field and to provide a good guide for beginners in community detection.
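Among the evaluation measures such a review covers, Newman's modularity is the most widely used. A minimal sketch for an undirected, unweighted graph (illustrative code, not from the paper):

```python
from collections import defaultdict

def modularity(edges, communities):
    """Newman modularity Q = (fraction of intra-community edges)
    - (expected fraction under the degree-preserving null model)."""
    m = len(edges)
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    intra = sum(1 for u, v in edges if communities[u] == communities[v]) / m
    # expected fraction: sum over communities of (degree sum / 2m)^2
    deg_sum = defaultdict(int)
    for node, d in degree.items():
        deg_sum[communities[node]] += d
    expected = sum((s / (2.0 * m)) ** 2 for s in deg_sum.values())
    return intra - expected

# two triangles joined by a single bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
parts = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
```

Splitting the two triangles into separate communities yields a clearly positive Q, while merging everything into one community yields Q = 0.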
Survey of Link Prediction Algorithms in Signed Networks
LIU Miao-miao,HU Qing-cui,GUO Jing-feng,CHEN Jing
Computer Science. 2020, 47 (2): 21-30.  doi:10.11896/jsjkx.190600104
Link prediction in signed networks includes predicting the possibility of the existence or establishment of a link between two nodes, and predicting the sign of an unknown link. Related research is of great significance for analyzing and understanding the topological structure, function and evolutionary behaviors of signed networks, and has great application value in fields such as personalized recommendation, attitude prediction and protein interaction research. This paper reviewed the research results of link prediction in signed networks, and introduced related concepts, theoretical foundations, commonly used datasets and evaluation indexes of link prediction accuracy in signed networks. According to their design ideas, link prediction algorithms in signed networks were divided into two categories, namely supervised and unsupervised machine learning methods, and the main idea of each algorithm was elaborated in detail. The characteristics, rules and existing problems of link prediction in signed networks were discussed, and the challenges and possible future directions were pointed out, which can provide a useful reference for researchers in fields such as informatics, biology and sociology.
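As one concrete instance of the unsupervised family the survey covers, structural balance theory suggests voting over length-2 paths: "the friend of my friend is my friend, the enemy of my friend is my enemy". A minimal sketch (illustrative; the surveyed algorithms are considerably more elaborate):

```python
def predict_sign(signs, u, v):
    """Balance-theory heuristic for the sign of link (u, v): multiply the
    signs along every length-2 path u-w-v and take the majority vote.
    signs: dict mapping frozenset({a, b}) -> +1 or -1 for known links."""
    nodes = {n for pair in signs for n in pair}
    total = 0
    for w in nodes - {u, v}:
        s1 = signs.get(frozenset({u, w}))
        s2 = signs.get(frozenset({w, v}))
        if s1 is not None and s2 is not None:
            total += s1 * s2  # +1 * +1 and -1 * -1 both support a positive link
    return 1 if total >= 0 else -1

# a trusts c and e, distrusts d; c, d and e all trust b
known = {frozenset({'a', 'c'}): 1, frozenset({'c', 'b'}): 1,
         frozenset({'a', 'd'}): -1, frozenset({'d', 'b'}): 1,
         frozenset({'a', 'e'}): 1, frozenset({'e', 'b'}): 1}
```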
New Similarity Measure Based on Extreme Rating Behavior
FENG Chen-jiao,LIANG Ji-ye,SONG Peng,WANG Zhi-qiang
Computer Science. 2020, 47 (2): 31-36.  doi:10.11896/jsjkx.190500130
With the rapid development of Internet technology, the drastic explosion of Internet information makes information overload an increasingly serious problem. Faced with massive Internet information, users spend a lot of time searching for information or products, yet the search results are often unsatisfactory. Recommender systems were hence proposed to address the problem of information overload. A recommender system uses users' historical behaviors to infer their needs and interests, and recommends information and products they may be interested in. As an important type of recommendation approach, memory-based collaborative filtering methods establish the rating prediction function based on neighbor information of the user or product; the essence of the function is to precisely measure the similarity between users or products. Traditional similarity measures, such as the Pearson, Cosine and Spearman rank correlation coefficients, only take into account the linear relationship between users, while heuristic similarities, such as the PIP measure based on three special factors and its improved versions, only depict the non-linear relationship between users. Indeed, in recommender systems, neither the linear relation nor the non-linear relation alone is adequate for measuring the similarity between users. In order to describe the similarity among users more finely, this paper proposed a similarity index of the correlation level that considers extreme rating behaviors based on a nonlinear function. By integrating this index with the traditional linear correlation coefficients, this paper constructed a novel similarity measure. Comparative experiments were conducted on the Ml (100k) and Ml-latest-small datasets to test the practicability and validity of the proposed approach. The results demonstrate that the proposed method performs better in terms of MAE and RMSE.
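The Pearson coefficient that the paper builds on is computed over the items two users have co-rated. A minimal sketch of that baseline (illustrative code; the paper's contribution is the nonlinear extreme-rating index layered on top of it):

```python
import math

def pearson_sim(ratings_u, ratings_v):
    """Pearson correlation over items co-rated by two users.
    ratings_*: dict mapping item id -> rating."""
    common = set(ratings_u) & set(ratings_v)
    if len(common) < 2:
        return 0.0
    mu_u = sum(ratings_u[i] for i in common) / len(common)
    mu_v = sum(ratings_v[i] for i in common) / len(common)
    num = sum((ratings_u[i] - mu_u) * (ratings_v[i] - mu_v) for i in common)
    den_u = math.sqrt(sum((ratings_u[i] - mu_u) ** 2 for i in common))
    den_v = math.sqrt(sum((ratings_v[i] - mu_v) ** 2 for i in common))
    if den_u == 0 or den_v == 0:
        return 0.0  # a constant rater carries no linear signal
    return num / (den_u * den_v)

u = {"i1": 5, "i2": 3, "i3": 4}
v = {"i1": 4, "i2": 2, "i3": 3}  # same ranking shifted down: perfectly correlated
w = {"i1": 2, "i2": 4, "i3": 3}  # reversed ranking: perfectly anti-correlated
```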
Support Vector Machine Model Based on Grey Wolf Optimization Fused with Asymptotic Property
WU Yu-kun,XIAO Jie,Wei William LEE,LOU Ji-lin
Computer Science. 2020, 47 (2): 37-43.  doi:10.11896/jsjkx.190100092
The development of big data requires higher accuracy of data classification. The wide application of the support vector machine (SVM) requires an efficient method to construct an SVM classifier with strong classification ability. The kernel parameter, penalty parameter and feature subset of a dataset have an important impact on the complexity and prediction accuracy of the model. In order to improve the classification performance of SVM, the asymptotic property of SVM was integrated into the grey wolf optimization (GWO) algorithm, and a new SVM classifier model was proposed. The model optimizes feature selection and parameter optimization of SVM at the same time. A new grey wolf individual integrating the asymptotic property of SVM directs the search of the grey wolf optimization algorithm toward the optimal region of the hyper-parameter space, so the optimal solution can be obtained faster. In addition, a new fitness function was proposed, which combines the classification accuracy, the number of chosen features and the number of support vectors. The new fitness function and GWO fused with the asymptotic property lead the search to the optimal solution. This paper used several classical UCI datasets to verify the proposed model. Compared with the grid search algorithm, the grey wolf optimization algorithm without the asymptotic property and other methods in the literature, the classification accuracy of the proposed algorithm improves to different degrees on different datasets. The experimental results show that the proposed algorithm can find the optimal parameters and the smallest feature subset of SVM, with higher classification accuracy and less average processing time.
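A fitness function of the kind the abstract describes rewards accuracy while penalizing large feature subsets and many support vectors. A minimal sketch (the weights and function name are illustrative assumptions, not the paper's actual values):

```python
def svm_fitness(accuracy, n_selected, n_features, n_sv, n_train,
                w_acc=0.95, w_feat=0.03, w_sv=0.02):
    """Fitness of one grey wolf (a candidate hyper-parameter setting plus
    feature mask): higher accuracy, fewer features and fewer support
    vectors all increase fitness. Weights are illustrative only."""
    return (w_acc * accuracy
            + w_feat * (1 - n_selected / n_features)
            + w_sv * (1 - n_sv / n_train))
```

With equal accuracy, the candidate using fewer features scores higher, which is what drives the search toward small feature subsets.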
Feature Selection Method Based on Rough Sets and Improved Whale Optimization Algorithm
WANG Sheng-wu,CHEN Hong-mei
Computer Science. 2020, 47 (2): 44-50.  doi:10.11896/jsjkx.181202285
With the development of the Internet and Internet of Things technologies, data collection has become easier, but it is often necessary to reduce the dimensionality of high-dimensional data. High-dimensional data contain many redundant and unrelated features, which increase the computational complexity of a model and may even reduce its performance. Feature selection can reduce the computational cost and remove redundant features by reducing feature dimensionality, thereby improving the performance of a machine learning model, while retaining the original features of the data with good interpretability. It has become one of the important data preprocessing steps in machine learning. Rough set theory is an effective method for feature selection; it preserves the characteristics of the original features by removing redundant information. However, it is difficult to find the globally optimal feature subset with traditional rough-set-based feature selection methods, because the cost of evaluating all feature subset combinations is very high. To overcome the above problems, a feature selection method based on rough sets and an improved whale optimization algorithm was proposed. The whale optimization algorithm was improved by employing policies of population optimization and disturbance so as to avoid local optima. The algorithm first randomly initializes a set of feature subsets, then uses an objective function based on rough-set attribute dependency to evaluate the goodness of each subset, and finally uses the improved whale optimization algorithm to find an acceptable approximately optimal feature subset by iteration. The experimental results on UCI datasets show that, when a support vector machine is used as the classifier for evaluation, the proposed algorithm can find a feature subset with less information loss and higher classification accuracy. Therefore, the proposed algorithm has a certain advantage in feature selection.
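The rough-set attribute dependency used as the objective is the fraction of samples whose equivalence class under the chosen features is label-consistent (the positive region). A minimal sketch for discrete features (illustrative code, not the paper's implementation):

```python
from collections import defaultdict

def dependency_degree(samples, labels, feature_subset):
    """gamma_B(D): |POS_B(D)| / |U| for a discrete decision table.
    samples: list of dicts (feature -> value); labels: decision values."""
    classes = defaultdict(set)
    for idx, s in enumerate(samples):
        classes[tuple(s[f] for f in feature_subset)].add(idx)
    pos = 0
    for members in classes.values():
        # an equivalence class is in the positive region if all its
        # members share the same decision label
        if len({labels[i] for i in members}) == 1:
            pos += len(members)
    return pos / len(samples)

samples = [{'a': 0, 'b': 0}, {'a': 0, 'b': 1}, {'a': 1, 'b': 0}]
labels = [0, 1, 0]
```

Dropping feature 'b' merges the first two samples into one inconsistent class, so the dependency degree falls from 1 to 1/3.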
Academic Paper Recommendation Method Combined with Researcher Tag
WU Lei,YUE Feng,WANG Han-ru,WANG Gang
Computer Science. 2020, 47 (2): 51-57.  doi:10.11896/jsjkx.190300121
In recent years, the rise of scientific social networks has changed the original mode of exchange and cooperation among researchers to some extent, making scientific social networks well received by researchers. With the surge of research findings on scientific social networks, it is difficult for researchers to find the research papers they are really interested in. Consequently, recommending papers that researchers are interested in has become an important task. Considering the particularity of researchers' reading data, this paper conducted paper recommendation from the perspective of one-class collaborative filtering. On the one hand, researchers' tag information is used to extract negative instances precisely; on the other hand, based on the researcher-paper matrix with negative instances incorporated, the researcher-tag matrix and paper similarity information are jointly integrated into probability matrix factorization to alleviate the data sparsity problem. Finally, experiments were carried out on a scientific social network, ScholarMate. Four evaluation metrics, namely precision, recall, MAP and MRR, were adopted to verify the recommendation accuracy as well as the recommendation order. The experimental results show that the proposed method performs better than the baselines, with an improvement of 4.19% in precision, which demonstrates the effectiveness of treating paper recommendation on scientific social networks as a one-class collaborative filtering problem, the effectiveness of introducing extra social information to improve the recommendation results, and the scalability of the proposed method.
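Of the four metrics listed, precision (at a cutoff k) is the simplest to state: the fraction of the top-k recommended papers that the researcher actually found relevant. A minimal sketch (illustrative code, not the paper's evaluation harness):

```python
def precision_at_k(recommended, relevant, k):
    """Precision@k: fraction of the top-k ranked recommendations that
    appear in the researcher's relevant set."""
    top = recommended[:k]
    return sum(1 for paper in top if paper in relevant) / k
```

Recall, MAP and MRR refine this by also crediting coverage of the full relevant set and the rank positions of the hits.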
Community Detection Algorithm Based on Local Similarity of Feature Vectors
YANG Xu-hua,SHEN Min
Computer Science. 2020, 47 (2): 58-64.  doi:10.11896/jsjkx.181202433
Community discovery and analysis is a hot topic in the study of complex network structures and functions. At present, widely used community partitioning algorithms have problems such as high time complexity, inaccurate quantification of community cores, and low partitioning accuracy. Therefore, this paper proposed ELSC, a community detection algorithm based on local similarity of eigenvectors. The algorithm first calculates the eigenvector centrality of each node in the network. On this basis, two indicators, eigenvector local similarity (ELS) and eigenvector attractiveness (EA), were proposed. The ELS index measures the similarity between nodes and is used to form initial communities, so that the similarity between nodes within the same community is high and the similarity between nodes of different communities is low. The EA index combines local similarity with the ratio of eigenvector centralities to indicate the attraction between nodes, and is used to optimize the initial communities and complete the community division of the network. The algorithm assigns nodes according to maximum values, avoiding the problem of choosing an uncertain node threshold. The modularity and normalized mutual information of the proposed algorithm were compared with six well-known algorithms on seven real networks. Numerical simulation results show that the algorithm has high accuracy and low time complexity.
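The eigenvector centrality the algorithm starts from is the dominant eigenvector of the adjacency matrix, usually computed by power iteration. A minimal sketch (illustrative code, not the paper's implementation; iterating on A + I rather than A is a standard shift that avoids oscillation on bipartite graphs):

```python
def eigenvector_centrality(adj, iters=100):
    """Eigenvector centrality by power iteration on A + I.
    adj: dict mapping node -> set of neighbours (undirected graph)."""
    x = {n: 1.0 for n in adj}
    for _ in range(iters):
        # multiply by (A + I): own score plus the sum of neighbours' scores
        new = {n: x[n] + sum(x[m] for m in adj[n]) for n in adj}
        top = max(new.values())
        x = {n: v / top for n, v in new.items()}  # normalize to max 1
    return x

# star graph: the hub should be the most central node
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
c = eigenvector_centrality(star)
```

For the star graph the leaves converge to 1/sqrt(3) of the hub's score, the ratio given by the dominant eigenvector.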
Mining Deep Semantic Features of Reviews for Amazon Commodity Recommendation
LI Ke,CHEN Guang-ping
Computer Science. 2020, 47 (2): 65-71.  doi:10.11896/jsjkx.190200362
Review mining plays an important role in the field of recommender systems (RS). However, conventional mining methods cannot explicitly mine the deep semantic features of reviews, so a major challenge in RS is how to mine the deep semantics of reviews. This paper utilized Skip-Thought Vectors (STV) to learn latent semantic features of reviews. In addition, in order to enhance the semantic representation ability for reviews, it introduced the Long Short-Term Memory (LSTM) network into STV, and proposed a deeply hierarchical bi-directional feature-extraction model combining a bi-directional information mining method, a user preference mining method and a deeply hierarchical model. The introduced model can mine not only the deep semantic features of reviews but also users' emotional preferences. The proposed model is then combined with the Singular Value Decomposition (SVD) model. Experiments on two Amazon datasets show that the proposed model performs better than conventional models due to its strong ability to mine the deep semantics of reviews.
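The SVD side of the combination reduces, at prediction time, to a biased latent-factor formula. A minimal sketch (the biased-SVD form below is the standard variant and an assumption about the exact one used; in the paper the review-derived semantics additionally inform the factors):

```python
def svd_predict(mu, b_u, b_i, p_u, q_i):
    """Biased SVD rating prediction:
    r_hat(u, i) = mu + b_u + b_i + <p_u, q_i>,
    where mu is the global mean, b_u / b_i are user and item biases,
    and p_u / q_i are the learned latent-factor vectors."""
    return mu + b_u + b_i + sum(a * b for a, b in zip(p_u, q_i))
```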
Computer Graphics & Multimedia
Rough Uncertain Image Segmentation Method
RAO Meng,MIAO Duo-qian,LUO Sheng
Computer Science. 2020, 47 (2): 72-75.  doi:10.11896/jsjkx.190500177
Image segmentation is a fundamental problem in the field of computer vision, underlying image retrieval, object detection, object recognition, pedestrian tracking and many other follow-up tasks. At present, there are many research results, including traditional methods based on thresholding, clustering and region growing, and popular algorithms based on neural networks. Due to the boundary uncertainty of image regions, existing algorithms are not well suited to images with gradual gray-level transitions. Granular computing is one of the effective tools for solving complex problems, and has achieved good results on uncertain and fuzzy problems. Aiming at the limitation of existing image segmentation algorithms on uncertainty problems, and based on the idea of granular computing, a rough uncertain image segmentation method was proposed in this paper. Based on the K-means algorithm and the neighborhood rough set model, the algorithm granulates the pixels at the edges of clusters, and uses the neighborhood matrix to calculate the inclusion degree of each cluster for the granulated pixels, finally optimizing the cluster assignment of edge pixels. In the Matlab 2019 programming environment, an equestrian training picture and a building picture from the BSDS500 dataset were selected to test the algorithm. Firstly, the color image is converted to grayscale, and the K-means algorithm is used to segment the image. Then, the value of the neighborhood factor is set, and edge pixels are re-assigned according to their neighborhood information. Compared with the K-means algorithm, this algorithm achieves better results. The experimental results show that the proposed method outperforms the K-means algorithm in the evaluation of roughness, can effectively reduce the blurring of image region boundaries, and realizes the segmentation of image gradient regions with blurred gray boundaries.
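The K-means step that the method starts from is one-dimensional clustering on gray levels. A minimal sketch of that first stage only (illustrative code; the paper's contribution is the subsequent neighborhood-rough-set re-examination of edge pixels, which is not shown here):

```python
def kmeans_gray(pixels, k=2, iters=20):
    """1-D K-means on gray levels: returns cluster centers and a label
    per pixel. Centers are initialized evenly across the intensity range
    so the result is deterministic."""
    lo, hi = min(pixels), max(pixels)
    centers = [lo + (hi - lo) * c / (k - 1) for c in range(k)]
    labels = [0] * len(pixels)
    for _ in range(iters):
        for i, p in enumerate(pixels):
            labels[i] = min(range(k), key=lambda c: abs(p - centers[c]))
        for c in range(k):
            members = [p for p, l in zip(pixels, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return centers, labels

# two clearly separated gray-level populations
centers, labels = kmeans_gray([10, 12, 11, 200, 205, 198])
```

Pixels near the decision boundary between the final centers are exactly the ones the rough-set stage would granulate and re-assign.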
Video Compression Algorithm Combining Frame Rate Up-conversion with HEVC Standard Based on Motion Vector Refinement
CAI Yu-han,XIONG Shu-hua,SUN Wei-heng,Karn PRADEEP,HE Xiao-hai
Computer Science. 2020, 47 (2): 76-82.  doi:10.11896/jsjkx.190500092
Combining frame rate up-conversion technology with the HEVC standard can improve video compression efficiency. Aiming at the non-ideal results of frame rate up-conversion that directly uses the motion vectors of low-frame-rate video extracted from the HEVC coding bit stream, this paper proposed a compression algorithm combining frame rate up-conversion and HEVC based on motion vector refinement. Firstly, at the encoding end, the even frames of the original video are extracted to reduce the frame rate, and the low-frame-rate video is encoded and decoded by HEVC. At the decoding end, the motion vectors extracted from the HEVC coding bit stream are further refined by forward-backward joint motion estimation, which makes them closer to the real motion of objects. Finally, motion-compensated frame rate up-conversion is used to restore the video to its original frame rate. Experimental results show that, compared with the HEVC standard, the proposed algorithm saves some bitrate. Meanwhile, at the same bitrate saving, the proposed algorithm increases the PSNR of reconstructed videos by 0.5 dB on average compared with other algorithms.
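The motion-compensated interpolation in the final step places each intermediate pixel halfway along its motion trajectory. A toy 1-D sketch of that idea (illustrative only: uniform global motion, one row of pixels; the paper works on 2-D blocks with refined per-block vectors):

```python
def interpolate_frame(prev, nxt, mv):
    """Toy 1-D motion-compensated interpolation: the intermediate pixel at
    position i averages prev sampled half a motion vector behind and nxt
    sampled half a motion vector ahead (motion of mv pixels assumed)."""
    n = len(prev)
    half = mv // 2
    clamp = lambda j: max(0, min(n - 1, j))  # hold border pixels at the edge
    return [(prev[clamp(i - half)] + nxt[clamp(i + half)]) / 2
            for i in range(n)]

# an 'object' (value 9) moves two pixels to the right between frames
prev = [0, 0, 9, 0, 0, 0]
nxt = [0, 0, 0, 0, 9, 0]
mid = interpolate_frame(prev, nxt, 2)
```

With an accurate motion vector the object lands sharply at its halfway position; an inaccurate vector smears it, which is why the paper refines the decoded vectors before interpolating.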
Classification Net Based on Angular Feature
WANG Li-hua,DU Ming-hui,LIANG Ya-ling
Computer Science. 2020, 47 (2): 83-87.  doi:10.11896/jsjkx.190500077
The excellent performance of Convolutional Neural Networks (CNN) in image classification tasks has made CNN models widely used in various fields of computer vision. Besides changes in network structure, a large part of the year-by-year increase in the accuracy and efficiency of image classification models comes from normalization techniques and improvements of the classification loss function. In face recognition, as precision increased, the classification loss function changed from Softmax Loss to Triplet Loss, and from L-Softmax Loss to Arcface Loss, while the measurement method developed from geometric metrics to angle metrics. The change of metric is in fact a change of feature form, from general features to angular features. Feature points trained on the Mnist dataset with an angle-metric loss function are angularly distributed, and the accuracy is higher than with geometric metrics. If the angle metric is represented by more direct angular features, the feature points of the same class are linearly distributed after training, and the accuracy is also higher than with the general angle metric. This raises the question of whether angular features can replace general features in CNN classification models. In a CNN classification model, the main structure usually consists of multiple convolutional layers and one or several fully connected layers. By unifying the normalization operations of the convolutional layers and the fully connected layers, the layers in the model become angular convolutional layers and angular fully connected layers. On the basis of a common classification network, replacing the convolutional layers with angular convolutional layers and the fully connected layers with angular fully connected layers yields an angular classification network composed of angular features. The accuracy of the angular classification network constructed on ResNet-32 is 2% higher than that of the original classification network on the Cifar-100 dataset, which demonstrates the validity of angular features in classification networks.
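The unifying normalization the abstract describes makes a layer's output depend only on angle: normalizing both the input feature and each weight row turns the logit into cos(theta). A minimal sketch of such an angular fully connected layer's forward pass (illustrative code; the paper's exact layer definition may differ):

```python
import math

def angular_logits(features, weight_rows):
    """Logits of a normalized ('angular') fully connected layer:
    logit_j = cos(theta_j) = <x / ||x||, w_j / ||w_j||>,
    so magnitude carries no information, only direction."""
    def unit(v):
        n = math.sqrt(sum(x * x for x in v)) or 1.0
        return [x / n for x in v]
    x = unit(features)
    return [sum(a * b for a, b in zip(x, unit(w))) for w in weight_rows]

# a weight row parallel to the feature scores +1, an anti-parallel row -1
logits = angular_logits([3.0, 4.0], [[6.0, 8.0], [-3.0, -4.0]])
```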
Verification and Evaluation of Modified Social Force Model Considering Relative Velocity of Pedestrians
ZHONG Zhen-wei,JI Qing-ge
Computer Science. 2020, 47 (2): 88-94.  doi:10.11896/jsjkx.190500055
In the field of crowd simulation, the social force model proposed by Helbing is a classic microscopic simulation model that can reproduce some self-organization phenomena. However, the social force model still has shortcomings such as pedestrian oscillation and pedestrian overlapping. Therefore, many scholars have enriched and improved the social force model in terms of parameter setting, force range and algorithm optimization. Gao et al. proposed a modified social force model considering the relative velocity of pedestrians, which remains a basis and important reference for scholars studying improved social force models and various simulation experiments. Since Gao et al. demonstrated the advantages of their modified model through only two experiments, which somewhat lacks reliability, and provided no further self-organization experiments showing that their modified model retains the original model's ability to simulate self-organization phenomena, this paper conducted confirmatory and evaluation experiments on the modified social force model of Gao et al. The two experiments conducted by Gao et al. were verified by the confirmatory experiments, which confirm the advantages of their modified model. The results of the evaluation experiments confirm that the modified social force model of Gao et al. retains the ability to simulate self-organization phenomena. This paper also discovered and analyzed the pedestrian overlap problem of the improved social force model of Gao et al.
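The driving term of Helbing's original model relaxes a pedestrian's velocity toward a desired speed and direction over a relaxation time tau. A minimal sketch of that term (this is the classic formulation, not Gao et al.'s modified relative-velocity repulsion; parameter values are illustrative):

```python
def driving_force(desired_speed, desired_dir, velocity, tau=0.5):
    """Helbing driving force per unit mass:
    F = (v0 * e - v) / tau,
    pulling the current velocity v toward the desired velocity v0 * e
    within the relaxation time tau (seconds)."""
    return [(desired_speed * e - v) / tau
            for e, v in zip(desired_dir, velocity)]

# a pedestrian at rest who wants to walk right at 1.34 m/s
f = driving_force(1.34, (1.0, 0.0), (0.0, 0.0), tau=0.5)
```

Gao et al.'s modification alters the repulsive interaction term between pedestrians; the driving term above is shared by both models.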
Automatic Detection Algorithm of Nasal Leak in Cleft Palate Speech Based on Recursive Plot Analysis
LIU Xin-yi,TIAN Wei-wei,LIANG Wen-ru,HE Ling,YIN Heng
Computer Science. 2020, 47 (2): 95-101.  doi:10.11896/jsjkx.181001848
Nasal leak is a typical symptom of patients with velopharyngeal insufficiency. This paper studied the characteristics of nasal leak in cleft palate speech. The recursive plot, based on the nonlinear dynamics method, is used to explore the features. Combining the recursive trend analysis method with region distribution processing based on the recursive plot, quantitative parameters and minimum regions of recursive plot analysis are extracted as the feature matrix. Combined with a classifier, automatic detection of nasal leak in cleft palate speech is achieved. The experiments analyze the detection performance with respect to factors such as the number of downsampling points, delay time, critical distance, speech unit and classifier type, and then comprehensively weigh the influence of each factor on detection accuracy in order to select the optimal values. The experimental results show that when the KNN classifier is used, the number of downsampling points is 30000, the delay time is 3 ms, the critical distance is 5 units, and the speech unit is 4 frames, the detection accuracy of nasal leak in cleft palate speech is 84.63%. The automatic detection algorithm for nasal leak in cleft palate speech aims to provide an effective and objective auxiliary diagnostic basis for clinical velopharyngeal function assessment.
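The recursive plot (usually called a recurrence plot in the nonlinear dynamics literature) is a binary matrix marking which pairs of delay-embedded signal vectors lie within a critical distance of each other. A minimal sketch (illustrative code; the paper's actual embedding settings are the optimal values listed above):

```python
def recurrence_matrix(signal, delay, dim, eps):
    """Binary recurrence plot of a 1-D signal.
    Phase-space vectors x_i = (s_i, s_{i+delay}, ..., s_{i+(dim-1)*delay});
    R[i][j] = 1 when the Euclidean distance ||x_i - x_j|| <= eps."""
    n = len(signal) - (dim - 1) * delay
    vecs = [[signal[i + d * delay] for d in range(dim)] for i in range(n)]
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [[1 if dist(vecs[i], vecs[j]) <= eps else 0 for j in range(n)]
            for i in range(n)]

# a perfectly periodic signal recurs every two samples
R = recurrence_matrix([0, 1, 0, 1, 0, 1], delay=1, dim=2, eps=0.1)
```

The quantitative parameters the paper extracts (recurrence rate, determinism, and so on) are statistics computed over such a matrix.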
Recognition Algorithm of Red and White Cells in Urine Based on Improved BP Neural Network
LIU Xiao-tong,WANG Wei,LI Ze-yu,SHEN Si-wan,JIANG Xiao-ming
Computer Science. 2020, 47 (2): 102-105.  doi:10.11896/jsjkx.191100195
Analyzing components of urine microscopic images, such as red and white blood cells, can help doctors evaluate patients with kidney and urinary diseases. According to characteristics such as the low contrast and fuzzy edges of red and white cells in non-stained, unlabeled urine images, a recognition method based on an improved BP neural network was proposed in this paper. Firstly, the method combines a genetic algorithm with the BP neural network to optimize weights and thresholds, which solves the problem of local extrema in the training process and improves the recognition accuracy of the BP neural network. Secondly, it uses momentum gradient descent to eliminate the oscillation of the network during gradient descent, accelerating convergence and improving the learning rate of the BP neural network. Compared with the basic BP neural network, the improved algorithm improves the recognition rates of red and white blood cells by 6.9% and 9.5%, and shortens the recognition time by 19.3 s and 42.1 s respectively. Compared with a CNN recognition algorithm, the improved algorithm improves the recognition rate of white blood cells by 1.7%. Compared with an SVM recognition algorithm, the improved algorithm improves the recognition rates of red and white blood cells by 12.9% and 12.7%. The results of the verification and control tests show that the improved method can recognize red and white cells with higher accuracy and faster speed.
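The momentum update used to damp the oscillation of plain gradient descent keeps a velocity term that accumulates past gradients. A minimal sketch on a toy quadratic (illustrative code; the learning rate and momentum coefficient are assumptions, not the paper's values):

```python
def momentum_step(params, grads, velocity, lr=0.1, beta=0.9):
    """One gradient-descent-with-momentum update:
    v <- beta * v - lr * grad ;  w <- w + v.
    The velocity smooths successive gradients, damping oscillation."""
    for i in range(len(params)):
        velocity[i] = beta * velocity[i] - lr * grads[i]
        params[i] += velocity[i]
    return params, velocity

# minimize f(w) = w^2 (gradient 2w) starting from w = 5
w, v = [5.0], [0.0]
for _ in range(300):
    w, v = momentum_step(w, [2 * w[0]], v)
```

The same update applied to the BP network's weight matrices is what accelerates its convergence.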
Single Image De-raining Method Based on Deep Adjacently Connected Networks
FU Xue-yang,SUN Qi,HUANG Yue,DING Xing-hao
Computer Science. 2020, 47 (2): 106-111.  doi:10.11896/jsjkx.190100228
Rain streaks occlude image content, seriously affecting human visual perception and the performance of subsequent processing systems. Existing deep-learning-based methods improve de-raining performance at the expense of complex network structures and heavy parameter burdens, which makes them difficult to deploy in practical applications. Therefore, a deep adjacently connected network structure was proposed in this paper. By focusing on the relationship between learned feature maps in deep networks, a fusion operation is designed to connect adjacent features and obtain richer and more effective feature representations. Experiments on three public synthetic datasets and real-world rainy images show that the proposed method improves de-raining performance in both subjective and objective evaluations. The average structural similarity (SSIM) value on the synthetic dataset Rain100H is 0.84, and the average SSIM values on the synthetic datasets Rain100L and Rain1200 are 0.96 and 0.91. On real-world rainy images, the proposed method effectively removes foreground rain streaks while preserving background image information, obtaining better visual quality. Compared with JORDER, the proposed method achieves comparable de-raining results while reducing the model parameters and CPU runtime by one and two orders of magnitude, respectively. The experimental data demonstrate that fusing adjacent features in a deep network generates more effective representations. Therefore, although the proposed method has relatively few parameters and a simple network structure, it still achieves better image de-raining performance and addresses the parameter burden and structural complexity of existing methods. Moreover, the network structure design in this paper can also provide a reference for related deep-learning-based image restoration tasks.
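The SSIM numbers quoted above compare luminance, contrast and structure between the de-rained output and the clean reference. A minimal single-window sketch of the metric (illustrative; standard SSIM averages this over local sliding windows, and the constants assume an 8-bit intensity range with K1 = 0.01, K2 = 0.03):

```python
def ssim_global(x, y, c1=6.5025, c2=58.5225):
    """Global (one-window) SSIM between two equal-size gray images,
    flattened to 1-D lists of intensities."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx * mx + my * my + c1) * (vx + vy + c2)))

ref = [10, 20, 30, 40]
```

An image identical to the reference scores exactly 1; any deviation in mean, variance or covariance lowers the score.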
Face Liveness Detection Based on Image Diffusion Speed Model and Texture Information
LI Xin-dou,GAO Chen-qiang,ZHOU Feng-shun,HAN Hui,TANG Lin
Computer Science. 2020, 47 (2): 112-117.  doi:10.11896/jsjkx.181202339
To solve the problem of fraud in face authentication, this paper proposed a face liveness detection algorithm based on an image diffusion speed model and texture information. The spatial structures of real and fake face images are different. In order to extract the difference features, anisotropic diffusion is used to enhance the edge information of the image. The difference between the original image and the diffused image is then taken as the image diffusion speed, and a diffusion speed model is constructed. Next, the local binary pattern algorithm is used to extract diffusion speed features and train a classifier. There are many other differences between real and fake face images; in order to further improve the generalization ability of face liveness detection, the blur degree and color features of the face image are extracted simultaneously, combined through a cascaded feature matrix, and used to train another classifier. Finally, a judgment is made based on the weighted fusion of the probabilities output by the two classifiers. Experimental results show that the proposed algorithm can detect spoofing faces quickly and efficiently.
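The local binary pattern operator used on the diffusion speed map encodes each pixel by thresholding its 3x3 neighborhood against the center. A minimal sketch of the basic operator (illustrative code; the paper may use a histogrammed or multi-scale LBP variant):

```python
def lbp_pixel(img, r, c):
    """Basic 3x3 LBP code of pixel (r, c) on a 2-D gray image
    (list of lists). Each neighbour >= center contributes one bit."""
    center = img[r][c]
    # neighbourhood visited clockwise from the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]       # uniform patch
peak = [[1, 1, 1], [1, 9, 1], [1, 1, 1]]       # bright center
```

Histograms of these codes over the image form the texture feature vector fed to the classifier.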
Detection Method of Chip Surface Weak Defect Based on Convolution Denoising Auto-encoders
LUO Yue-tong,BIAN Jing-shuai,ZHANG Meng,RAO Yong-ming,YAN Feng
Computer Science. 2020, 47 (2): 118-125.  doi:10.11896/jsjkx.190100141
Chip surface defects affect both the appearance and the performance of a chip, so surface defect detection is an important part of the chip production process. Automatic detection methods based on machine vision attract much attention because of their low cost and high efficiency. Weak defects, such as defects with low contrast against the background and small defects, challenge traditional detection methods. Because deep learning has shown strong capabilities in machine vision in recent years, this paper studied the detection of weak defects on chip surfaces using a deep-learning-based method in which chip surface defects are regarded as noise. Firstly, convolutional denoising auto-encoders (CDAE) are applied to reconstruct a defect-free image. Then, the reconstructed defect-free image is subtracted from the input image to obtain a residual image carrying the defect information. Because the influence of the background has been eliminated from the residual image, it is easier to detect defects based on it. Because of random noise in reconstructing a defect-free image from the chip background with CDAE, weak defects may be lost in the reconstruction noise; therefore, this paper proposed an overlapping-block strategy to suppress the reconstruction noise and better detect weak defects. Because CDAE is an unsupervised learning network, no large amount of manual data annotation is needed during training, which further enhances the applicability of the method. Using real chip surface data provided by the paper's partner, the effectiveness of the proposed method for chip surface detection is verified.
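Once the CDAE has produced a defect-free reconstruction, defect detection on the residual reduces to thresholding the absolute difference. A minimal sketch of that final step (illustrative code; the CDAE itself and the overlapping-block noise suppression are not shown):

```python
def defect_mask(image, reconstruction, threshold):
    """Residual-based defect map for 2-D gray images (lists of lists):
    a pixel is flagged when |input - defect-free reconstruction|
    exceeds the threshold."""
    return [[1 if abs(p - q) > threshold else 0
             for p, q in zip(row_i, row_r)]
            for row_i, row_r in zip(image, reconstruction)]

# one bright defect pixel the reconstruction has 'repaired' away
image = [[10, 10], [10, 90]]
recon = [[10, 10], [10, 12]]
mask = defect_mask(image, recon, threshold=20)
```

Averaging residuals over overlapping blocks, as the paper proposes, lowers the reconstruction noise floor so this threshold can be set low enough to catch weak defects.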
Image Semantic Segmentation Based on Deep Feature Fusion
ZHOU Peng-cheng,GONG Sheng-rong,ZHONG Shan,BAO Zong-ming,DAI Xing-hua
Computer Science. 2020, 47 (2): 126-134.  doi:10.11896/jsjkx.190100119
Abstract PDF(3116KB) ( 1262 )   
References | Related Articles | Metrics
When feature extraction is performed with convolutional networks in image semantic segmentation, the repeated combination of max pooling and downsampling reduces the feature resolution and loses context information, so the segmentation result loses sensitivity to object location. Although networks based on the encoder-decoder architecture gradually refine the output through skip connections while restoring resolution, simply summing adjacent features ignores the differences between them and easily leads to local misidentification of objects and other issues. To this end, an image semantic segmentation method based on deep feature fusion was proposed. It adopts a network structure in which multiple fully convolutional VGG16 models are combined in parallel, efficiently processes multi-scale pyramid images in parallel with atrous convolutions, extracts multi-level context features, and fuses them layer by layer in a top-down manner to capture as much context information as possible. At the same time, a layer-by-layer label supervision strategy based on an improved loss function serves as auxiliary support, with a dense conditional random field modeling pixels at the back end, which offers certain benefits in terms of the difficulty of model training and the accuracy of the predicted output. Experimental data show that the algorithm improves the classification of target objects and the localization of spatial details by fusing deep features that characterize context information at different scales layer by layer. The results obtained on the PASCAL VOC 2012 and PASCAL CONTEXT datasets show that the proposed method achieves mIoU accuracies of 80.5% and 45.93%, respectively. The experimental data fully demonstrate that deep feature extraction, layer-by-layer feature fusion and layer-by-layer label supervision within the parallel framework can jointly optimize the algorithm architecture. Feature comparison shows that the model can capture rich context information and obtain more detailed image semantic features. Compared with similar methods, it has obvious advantages.
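As a toy illustration of the top-down, layer-by-layer fusion the abstract describes (not the paper's actual network: the learned atrous-convolution branches are replaced by fixed feature maps, and the fusion weight `w` is a hypothetical stand-in for a learned combination):

```python
def upsample2x(feat):
    """Nearest-neighbour 2x upsampling of a 2-D feature map."""
    out = []
    for row in feat:
        wide = [v for v in row for _ in range(2)]
        out.append(wide)
        out.append(list(wide))
    return out

def fuse(fine, coarse, w):
    """Top-down fusion: upsample the coarse (context) map and combine it
    with the fine map; w stands in for the learned weighting the paper
    argues for instead of plain summation."""
    up = upsample2x(coarse)
    return [[f + w * c for f, c in zip(fr, cr)] for fr, cr in zip(fine, up)]

fine = [[1, 1, 1, 1] for _ in range(4)]   # 4x4 high-resolution features
coarse = [[2, 0], [0, 2]]                 # 2x2 low-resolution context
fused = fuse(fine, coarse, w=1.0)
```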
Zernike Moment Based Approach for Local Feature Detection
HE Chao-lei,BI Xiu-li,XIAO Bin
Computer Science. 2020, 47 (2): 135-142.  doi:10.11896/jsjkx.181202403
Abstract PDF(5819KB) ( 859 )   
References | Related Articles | Metrics
In order to obtain local feature regions that are more robust to geometric and quality deformations of images, a novel local feature detector based on Zernike moments, with rotation and scale invariance, was proposed. For an input image, a Hessian matrix derived from Zernike moments (ZM-Hessian) is used to detect interest points. Firstly, candidate interest points are located approximately from the determinant and trace of the ZM-Hessian matrix. Then, non-maximum suppression is applied to capture the maximum corner response under multi-scale masks. After that, 2D parabolic interpolation is employed to locate the interest points precisely at the sub-pixel level. Finally, principal curvature is used to eliminate edge points, and a gradient histogram is used to obtain the dominant orientation. The descriptor vector is constructed from 8 gradient directions over the 4-by-4 neighborhood of each interest point. The proposed detector was compared with traditional detectors under Mikolajczyk's framework. Experimental results show that the proposed method is effective under various image deformations, such as angle transformation, rotation and zoom, image blur, image compression and illumination change, and has good anti-noise performance.
Fast Simple Linear Iterative Clustering for Image Superpixel Algorithm
LEI Tao,LIAN Qian,JIA Xiao-hong,LIU Peng
Computer Science. 2020, 47 (2): 143-149.  doi:10.11896/jsjkx.190400121
Abstract PDF(6214KB) ( 1455 )   
References | Related Articles | Metrics
Simple linear iterative clustering (SLIC) takes a long time in the superpixel clustering process. To address this drawback, this paper proposed a fast SLIC algorithm for image superpixels. Firstly, the algorithm removes the pixels in a superpixel area that clearly differ from the cluster center, and then uses the remaining pixels to update the cluster center. This operation ensures that the cluster center converges quickly and prevents error propagation. Secondly, by initializing grids on the original image, the edge pixels of each superpixel area are treated as active pixels while the non-edge pixels are treated as stable pixels that belong to one fixed class. Finally, fast superpixel image segmentation is achieved by labeling the unstable pixels iteratively. Six comparative algorithms and the proposed algorithm were evaluated on the BSD500 benchmark under MATLAB. Compared with the SLIC algorithm, the segmentation error rate of the proposed algorithm is reduced by 5%, the segmentation accuracy is improved by 0.5%, and the running time is 0.18 s shorter. The experimental results show that, compared with popular superpixel algorithms, the proposed algorithm improves the quality of superpixel segmentation while effectively reducing computational complexity.
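The clustering step being accelerated is driven by SLIC's joint color-spatial distance. A minimal sketch of that standard measure (S is the grid interval, m the compactness weight; the pruning and edge/stable-pixel logic of the proposed variant is not shown):

```python
import math

def slic_distance(pixel, center, S, m):
    """SLIC's joint distance between a pixel and a cluster center, both
    given as (l, a, b, x, y): colour distance dc combined with spatial
    distance ds scaled by the grid interval S and compactness m."""
    dc = math.sqrt(sum((pixel[i] - center[i]) ** 2 for i in range(3)))
    ds = math.sqrt(sum((pixel[i] - center[i]) ** 2 for i in range(3, 5)))
    return math.sqrt(dc ** 2 + (ds / S) ** 2 * m ** 2)

# Identical colour, spatial offset (3, 4) -> ds = 5; with S = 5, m = 10
# the combined distance is 10.
d = slic_distance((0, 0, 0, 0, 0), (0, 0, 0, 3, 4), S=5, m=10)
```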
Artificial Intelligence
Survey on Computerized Neurocognitive Assessment System
ZHANG Zheng-lin,ZHANG Li-wei,WANG Wen-juan,XIA Li,FU Hao,WANG Hong-zhi,YANG Li-zhuang,LI Hai
Computer Science. 2020, 47 (2): 150-156.  doi:10.11896/jsjkx.190100150
Abstract PDF(1536KB) ( 2000 )   
References | Related Articles | Metrics
Neurological and psychiatric disorders, surgical injuries, tumors and aging can lead to neurocognitive decline in humans. Intervention and rehabilitation require specialized neurocognitive assessment tools to evaluate cognitive decline and track cognitive changes. In order to promote the research and development of computerized neurocognitive assessment systems suitable for the Chinese population and improve the clinical application of cognitive neuropsychology in China, firstly, the research background and development history of neurocognitive assessment were summarized. Secondly, computerized neurocognitive assessment systems from the past decade, at home and abroad, with empirical evidence supporting their reliability and validity were surveyed and their characteristics compared. Then, this paper demonstrated the necessity of developing a computerized neurocognitive assessment system suitable for the Chinese people, introduced its development process and put forward a computerized adaptive testing strategy for the assessment system. Subsequently, the main problems of current computerized neurocognitive assessment systems were analyzed. On this basis, the application of modern information technologies such as virtual reality, brain cognitive computing models and online assessment in future computerized neurocognitive assessment systems was discussed. Finally, a commercial model for computerized neurocognitive assessment systems based on artificial intelligence, mobile medicine and digital medicine was proposed.
Distant Supervised Relation Extraction Based on Densely Connected Convolutional Networks
QIAN Xiao-mei,LIU Jia-yong,CHENG Peng-sen
Computer Science. 2020, 47 (2): 157-162.  doi:10.11896/jsjkx.190100167
Abstract PDF(2007KB) ( 845 )   
References | Related Articles | Metrics
The densely connected convolutional network (DenseNet) is a new deep convolutional neural network architecture. By using identity mappings as shortcut connections between different layers, it ensures maximum information flow through the network. In the distant supervised relation extraction task, previous models use shallow convolutional neural networks to extract sentence features, which can represent only partial semantic information. To enhance the representation power of the network, a deep convolutional neural network model based on dense connectivity was designed to encode sentences. The proposed model consists of five densely connected convolutional layers. It can capture more semantic information by combining lexical, syntactic and semantic features at different levels. At the same time, it alleviates the vanishing-gradient phenomenon of deep neural networks, giving the network a stronger ability to characterize natural language. Experimental results on the NYT-Freebase dataset show that the mean accuracy of the proposed model reaches 82.5% and the area under the PR curve reaches 0.43, indicating that the proposed model can effectively exploit features and improve the accuracy of distant supervised relation extraction.
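The dense connectivity pattern can be sketched abstractly: each layer receives the concatenation of all preceding feature maps. The one-dimensional "features" and toy "layers" below are illustrative, not the paper's five-layer sentence encoder:

```python
def dense_block(x, layers):
    """Dense connectivity: every layer receives the concatenation of the
    block input and all preceding layers' outputs."""
    features = [list(x)]
    for layer in layers:
        concatenated = [v for feat in features for v in feat]
        features.append(layer(concatenated))
    return [v for feat in features for v in feat]

# Toy "layers": each maps its (growing) input to a 1-element feature.
layers = [lambda xs: [sum(xs)] for _ in range(3)]
out = dense_block([1.0, 2.0], layers)
```

Note how each layer's input length grows, so later layers see every earlier level of representation at once.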
Neural Machine Translation Combining Source Semantic Roles
QIAO Bo-wen,LI Jun-hui
Computer Science. 2020, 47 (2): 163-168.  doi:10.11896/jsjkx.190100048
Abstract PDF(1551KB) ( 881 )   
References | Related Articles | Metrics
With the rapid development of deep learning in recent years, neural machine translation combining deep learning has gradually replaced statistical machine translation and become the mainstream machine translation method in academia. However, traditional neural machine translation regards the source-side sentence as a word sequence and does not take the implicit semantic information of the sentence into account, resulting in inconsistency between the translation and the source-side semantics. To solve this problem, linguistic knowledge such as syntax and semantics has been applied to neural machine translation and achieved good experimental results. Semantic roles can also express the semantic information of sentences and have application value in neural machine translation. This paper proposed two neural machine translation encoding models that incorporate the semantic role information of sentences. On the one hand, semantic role labels are added to the word sequence to mark the semantic role played by each word in the sentence; the semantic role labels and the source-side words together constitute the input word sequence. On the other hand, by constructing the semantic role tree of the source sentence, the position of each word in the semantic role tree is obtained and spliced with the word vector as a feature vector, forming a word vector containing semantic role information. Experimental results on large-scale Chinese-English translation show that, compared with the baseline system, the two proposed methods improve performance by 0.9 and 0.72 BLEU points on average over all test sets, respectively, and also improve other evaluation indexes such as TER (Translation Edit Rate) and RIBES (Rank-based Intuitive Bilingual Evaluation Score). Further analysis shows that the proposed encoding models combining semantic roles achieve better translation of long sentences and better translation adequacy than the baseline system.
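The first encoding scheme can be sketched as a simple preprocessing step; the label set and the label-before-word ordering below are illustrative assumptions, not the paper's exact tagging convention:

```python
def add_role_labels(words, roles):
    """Interleave each source word with its semantic-role label so that
    labels and words together form the encoder's input sequence."""
    seq = []
    for word, role in zip(words, roles):
        seq.append(role)   # label placed before the word it marks
        seq.append(word)
    return seq

src = add_role_labels(["the", "cat", "sleeps"], ["B-A0", "I-A0", "B-V"])
```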
Traffic Signal Control Method Based on Deep Reinforcement Learning
SUN Hao,CHEN Chun-lin,LIU Qiong,ZHAO Jia-bao
Computer Science. 2020, 47 (2): 169-174.  doi:10.11896/jsjkx.190600154
Abstract PDF(2295KB) ( 3169 )   
References | Related Articles | Metrics
The control of traffic signals has always been a hotspot in intelligent transportation systems research. In order to adapt to and coordinate traffic in a more timely and effective manner, a novel traffic signal control algorithm based on distributional deep reinforcement learning was proposed. The model improves performance with a deep neural network framework composed of a target network, a double Q network and a value distribution. Taking the discretized high-dimensional real-time traffic information at intersections together with waiting time, queue length, delay time and phase information as the state, and with appropriate definitions of actions and rewards, the algorithm learns the traffic signal control strategy online and realizes adaptive control of traffic signals. It was compared with three typical deep reinforcement learning algorithms in SUMO (Simulation of Urban Mobility) under the same settings. The results show that the distributional deep reinforcement learning algorithm is more efficient and robust, and performs better in terms of average delay, travel time, queue length and waiting time of vehicles.
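The "value distribution" idea can be sketched as follows: the critic outputs, for each signal phase, a probability distribution over a fixed support of returns, and the greedy phase is chosen by the distribution's expectation. The atoms and probabilities below are illustrative; the paper's full model with target network and double Q updates is not reproduced:

```python
def expected_q(atoms, probs):
    """Scalar Q-value of a return distribution: its expectation over a
    fixed support of 'atoms'."""
    return sum(z * p for z, p in zip(atoms, probs))

def select_phase(atoms, phase_dists):
    """Greedy signal phase: the one whose predicted return distribution
    has the highest expectation."""
    qs = [expected_q(atoms, d) for d in phase_dists]
    return qs.index(max(qs))

atoms = [0.0, 1.0, 2.0]                    # support of possible returns
phase_dists = [[1.0, 0.0, 0.0],            # phase 0: certain return 0
               [0.0, 0.0, 1.0]]            # phase 1: certain return 2
best = select_phase(atoms, phase_dists)
```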
Convolutional Neural Networks Based on Time-Frequency Characteristics for Modulation Classification
XU Mao,HOU Jin,WU Pei-jun,LIU Yu-ling,LV Zhi-liang
Computer Science. 2020, 47 (2): 175-179.  doi:10.11896/jsjkx.181202361
Abstract PDF(2469KB) ( 1052 )   
References | Related Articles | Metrics
As the communication environment becomes increasingly dense and signal modulation patterns keep emerging, modulation classification becomes more and more difficult. Seeking a new automatic modulation classification (AMC) method with high accuracy and good timeliness is very important for radio communication applications. On this basis, a novel convolutional neural network based on time-frequency characteristics (TFC-CNN) was proposed for AMC. Firstly, a large number of modulation signals are collected, and the time-frequency features of the signals are converted into image features by the short-time Fourier transform and used as the input of the network. Secondly, a convolutional neural network with stronger feature extraction ability and fewer parameters is designed, and the feature extraction ability of the network is enhanced by improving the connection mode between different layers. At the same time, the model parameters are reduced by shrinking the convolution kernels and using global average pooling, which improves the timeliness of the model. Finally, batch normalization layers are added to the network to increase the stability of the model and prevent overfitting. The experimental results show that the proposed method has significantly fewer parameters and shorter training time than traditional methods, as well as higher accuracy, which demonstrates its superiority.
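The preprocessing step, turning a signal into a time-frequency image via the short-time Fourier transform, can be sketched with a naive windowed DFT (rectangular non-overlapping windows; purely illustrative, not the paper's exact parameters):

```python
import cmath

def stft_magnitude(signal, win, hop):
    """Naive short-time Fourier transform: the magnitude spectrum of each
    rectangular-windowed frame, computed with a direct DFT."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = signal[start:start + win]
        mags = []
        for k in range(win // 2 + 1):      # non-negative frequency bins
            s = sum(frame[n] * cmath.exp(-2j * cmath.pi * k * n / win)
                    for n in range(win))
            mags.append(abs(s))
        frames.append(mags)
    return frames

# A constant signal has all its energy in the DC bin of every frame.
frames = stft_magnitude([1.0] * 8, win=4, hop=4)
```

The resulting frames-by-bins magnitude matrix is what gets treated as an image and fed to the CNN.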
Population Distribution-based Self-adaptive Differential Evolution Algorithm
LI Zhang-wei,WANG Liu-jing
Computer Science. 2020, 47 (2): 180-185.  doi:10.11896/jsjkx.181202356
Abstract PDF(2085KB) ( 935 )   
References | Related Articles | Metrics
Differential evolution is a simple yet powerful heuristic global optimization algorithm. However, its performance is strongly influenced by the mutation strategies and the values of the control parameters; inappropriate strategies and parameters may cause the algorithm to converge prematurely. Aiming at the problem of selecting strategies and parameters during the search process of differential evolution, a population distribution-based self-adaptive differential evolution algorithm was proposed. Firstly, an adaptive factor is established to measure the distribution of the current population, from which the evolution stage of the algorithm is determined adaptively. Then, according to the characteristics of the different evolution stages, stage-specific mutation strategies and control parameters are designed, and a self-adaptive mechanism dynamically adjusts the strategies and parameters to balance the global exploration and local search capabilities of the algorithm and improve its search efficiency. Finally, the proposed algorithm is compared with six mainstream differential evolution variants. Numerical experiments on fifteen typical test functions show that the proposed algorithm is superior to the six mainstream variants in terms of average number of function evaluations, solution accuracy and convergence speed, demonstrating its advantages in computational cost, optimization performance and convergence performance.
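The building block being adapted is the classical DE mutation. A sketch of DE/rand/1 (the paper selects among stage-specific strategies and parameter settings, which is not reproduced here):

```python
import random

def de_rand_1(pop, i, F):
    """Classical DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3),
    with r1, r2, r3 distinct indices different from i."""
    candidates = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = random.sample(candidates, 3)
    return [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
            for d in range(len(pop[i]))]

# Sanity check: with an identical population every difference vector is
# zero, so the mutant must equal the shared vector regardless of F.
pop = [[1.0, 2.0] for _ in range(5)]
mutant = de_rand_1(pop, 0, F=0.5)
```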
Protected Zone-based Population Migration Dynamics Optimization Algorithm
HUANG Guang-qiu,LU Qiu-qin
Computer Science. 2020, 47 (2): 186-194.  doi:10.11896/jsjkx.181202338
Abstract PDF(1856KB) ( 662 )   
References | Related Articles | Metrics
To find the global optimum of some complex optimization problems, a new swarm intelligence optimization algorithm called PZPMDO was proposed. In this algorithm, many biological populations are assumed to live in an ecosystem, and the ecosystem is divided into two regions: a non-protected zone and a protected zone. Populations in the protected zone receive various kinds of protection, and a population migration channel connects the two zones. If the density of a population in one region is too high, the population spontaneously migrates to the low-density region, where the migrated population then influences the populations living there. The greater the proportion of a population, the greater its influence; the stronger a population is, the more it spreads its advantages to other populations. The survival of and competition among populations in different zones influence one another, which is reflected in the interaction among the features of the populations, and this influence varies with time. The ZGI index is used to describe the strength of a population. The protected zone-based population migration dynamic model, population migration and the interaction of populations are used to construct operators. PZPMDO has 8 operators, and only 1/1000 to 1/100 of the variables are handled in each evolution step. The algorithm features fast search speed and global convergence, and is suitable for solving global optimization problems of higher dimensions.
Product Review Summarization Using Discourse Hierarchical Structure
ZHANG Yi-fei,WANG Zhong-qing,WANG Hong-ling
Computer Science. 2020, 47 (2): 195-200.  doi:10.11896/jsjkx.181202410
Abstract PDF(1461KB) ( 758 )   
References | Related Articles | Metrics
Product review summarization aims to extract a series of relevant sentences that represent the overall opinions on a product. Discourse hierarchical structure analysis aims to analyze the hierarchical structure and semantic relationships between the semantic units in a discourse. Clearly, such analysis helps determine the semantic information and importance of each semantic unit, which is very useful for extracting the important content of the discourse. Therefore, this paper proposed a product review summarization method based on discourse hierarchical structure. The method builds an LSTM-based product review summarization model with an attention mechanism, and integrates the discourse hierarchical structure into the model to extract the important content of product reviews. Experiments were conducted on the Yelp 2013 dataset and evaluated with the ROUGE index. The experimental results show that the ROUGE-1 value of the model after adding the discourse hierarchical structure is 0.3608, which is 1.57% higher than the standard LSTM method that uses only sentence information from the product review. This shows that introducing discourse hierarchical structure into the product review summarization task can effectively improve performance.
Belief Coordination for Multi-agent System Based on Possibilistic Answer Set Programming
WU Tian-tian,WANG Jie
Computer Science. 2020, 47 (2): 201-205.  doi:10.11896/jsjkx.190100101
Abstract PDF(1369KB) ( 600 )   
References | Related Articles | Metrics
The multi-agent system (MAS) is a very active research direction in the field of artificial intelligence. In multi-agent systems, action conflicts inevitably occur due to differences in beliefs between agents. The rigorous coordination method proposed by Sakama et al. is only applicable to situations where the agents share a common belief; when there is no common belief, it yields no solution. To solve this problem, this paper proposed a belief coordination method based on possibilistic answer set programming (PASP). Firstly, according to the different belief sets of the agents, a weighted quantification method is used to compute the satisfaction degree of a PASP answer set relative to an agent's beliefs, so as to weaken some beliefs, and default decision theory is introduced to derive a consistent solution for belief coordination. Then, a consistent coordination program is constructed from the consistent solution, serving as a background knowledge base commonly recognized by the agents. Finally, the multi-agent belief coordination algorithm is implemented on top of the DLV solver, enabling belief coordination among agents to be completed autonomously. The example of a tourism recommendation system shows that this algorithm overcomes the limitations of the rigorous coordination method and effectively solves the coordination problem when the agents share no common belief.
Sine Cosine Algorithm Based on Logistic Model and Stochastic Differential Mutation
XU Ming,JIAO Jian-jun,LONG Wen
Computer Science. 2020, 47 (2): 206-212.  doi:10.11896/jsjkx.181102197
Abstract PDF(2073KB) ( 686 )   
References | Related Articles | Metrics
In view of the slow convergence, the tendency to fall into local optima and the low precision of the standard sine cosine algorithm, an improved sine cosine algorithm (LS-SCA) with a nonlinear conversion parameter and a stochastic differential mutation strategy was proposed for global optimization problems. Firstly, a nonlinear conversion parameter based on the Logistic model is designed to balance global exploration and local exploitation. Secondly, a stochastic differential mutation strategy is introduced to maintain population diversity and avoid getting trapped in local optima. Finally, the nonlinear conversion parameter and the stochastic differential mutation strategy are fused. On the one hand, 12 standard test functions are selected for global optimization experiments. The results show that, with the same number of fitness function evaluations, LS-SCA outperforms other SCA variants and several recent comparison algorithms in convergence accuracy and convergence speed, and that the stochastic differential mutation strategy in particular improves the global optimization ability of LS-SCA. On the other hand, LS-SCA is used to optimize the parameters of a neural network for two classical classification problems. Compared with the traditional BP algorithm and other intelligent algorithms, the neural network based on LS-SCA achieves higher classification accuracy.
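The core SCA position update that LS-SCA builds on looks like this; `r1` is the conversion parameter that the paper replaces with a Logistic-model schedule (the schedule itself and the mutation strategy are not reproduced):

```python
import math
import random

def sca_update(x, dest, r1):
    """One sine cosine algorithm position update toward the destination
    (best-so-far) point `dest`; r1 is the conversion parameter that decays
    over iterations, while r2, r3, r4 are drawn per dimension as usual."""
    new = []
    for xi, pi in zip(x, dest):
        r2 = random.uniform(0.0, 2.0 * math.pi)
        r3 = random.uniform(0.0, 2.0)
        r4 = random.random()
        step = r1 * (math.sin(r2) if r4 < 0.5 else math.cos(r2)) * abs(r3 * pi - xi)
        new.append(xi + step)
    return new

# With r1 = 0 the step vanishes, so the position must stay unchanged.
unmoved = sca_update([1.0, 2.0], [0.0, 0.0], r1=0.0)
```

Because the step size is proportional to `r1`, how `r1` decays directly controls the exploration-exploitation balance the abstract discusses.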
Computer Network
Survey on Application of Blockchain in VANET
ZHOU Chang,LU Hui-mei,XIANG Yong,WU Jing-bang
Computer Science. 2020, 47 (2): 213-220.  doi:10.11896/jsjkx.190600001
Abstract PDF(1604KB) ( 1583 )   
References | Related Articles | Metrics
The vehicular ad hoc network (VANET) is a kind of mobile ad hoc network (MANET) that supports vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications, and is one of the core technologies in intelligent transport systems (ITS). The distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms of blockchain can guarantee the security and reliability of VANET. However, the strong consistency and chain structure of blockchain do not match two main characteristics of VANET: the rapid mobility of vehicle nodes and network instability. This survey focuses on technical aspects of blockchain such as nodes, storage, cross-chain operation and consensus under the conditions of highly mobile vehicle nodes and a volatile network, analyzes the existing problems and summarizes proposed solutions. It also looks ahead to future applications and research on blockchain in VANET, which can serve as a reference and basis for further work.
P2P Network Search Mechanism Based on Node Interest and Q-learning
LI Long-fei,ZHANG Jing-zhou,WANG Peng-de,GUO Peng-jun
Computer Science. 2020, 47 (2): 221-226.  doi:10.11896/jsjkx.190400002
Abstract PDF(1847KB) ( 592 )   
References | Related Articles | Metrics
Adding smartphone devices to resource sharing systems based on unstructured P2P networks can satisfy people's requirements for diverse, convenient, high-frequency, real-time and efficient resource sharing. However, the expansion of the network scale and the increasing heterogeneity of network nodes inevitably lead to lower resource search efficiency, a sharp increase in redundant information and more invalid network traffic. To solve these problems, an improved resource search mechanism based on node interest and Q-learning was designed. Firstly, nodes are clustered according to interest similarity and divided into interest sets; then interest trees are constructed according to the capability values within each interest set. This structure avoids message loops, which greatly reduces redundant information. During resource search, a flooding algorithm forwards messages within an interest tree, while a Q-learning-based forwarding mechanism is used between interest trees; this mechanism continually reinforces the paths most likely to reach the target resources, so that query messages are propagated preferentially along those paths. In addition, for the "hot spot" resource problem, an adaptive hot-spot resource index mechanism was designed to reduce repeated path searching and redundant messages. To handle node failures, a root-node redundancy mechanism and a piggyback-detection strategy were given; analysis shows that they reduce the message redundancy caused by root node failures and common node failures, respectively. Simulation results show that, compared with the GBI-BI and Interest CN algorithms, the proposed search algorithm improves the hit rate, shortens the response time, reduces redundant information and has better overall performance, thereby solving the problems of low resource search efficiency and high network traffic overhead caused by adding smartphone devices to P2P networks.
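The inter-tree reinforcement can be sketched with the standard tabular Q-learning update; the state and action encodings below are hypothetical placeholders, not the paper's exact state design:

```python
def q_update(Q, state, action, reward, next_state, alpha, gamma):
    """Standard tabular Q-learning update; repeated positive rewards for
    forwards that reach the resource raise the Q-value of that path."""
    nxt = Q.get(next_state, {})
    best_next = max(nxt.values()) if nxt else 0.0
    old = Q.setdefault(state, {}).get(action, 0.0)
    Q[state][action] = old + alpha * (reward + gamma * best_next - old)

# One successful forward from interest tree 1 toward tree 2.
Q = {}
q_update(Q, "tree1", "forward_to_tree2", reward=1.0, next_state="tree2",
         alpha=0.5, gamma=0.9)
```

Queries are then routed preferentially along the actions with the highest Q-values at each tree.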
Communication Satellite Fault Detection Based on Recurrent Neural Network
LIU Yun,YIN Chuan-huan,HU Di,ZHAO Tian,LIANG Yu
Computer Science. 2020, 47 (2): 227-232.  doi:10.11896/jsjkx.190600147
Abstract PDF(1523KB) ( 1015 )   
References | Related Articles | Metrics
With the rapid development of the modern space industry, the structure of communication satellites is becoming more and more complex and their faults are gradually increasing, so fault detection for communication satellites has become a key issue in the aerospace field. At present, fault detection at major space agencies is still based on simple upper and lower threshold checks; this approach is too simple and can detect only a small number of specific faults, and early studies using traditional machine learning can only detect faults with quantitative characteristics. Aiming at the problem that traditional machine learning algorithms find it difficult to learn the trends in telemetry data effectively, this paper proposed a thresholding method based on the long short-term memory (LSTM) network. An LSTM prediction model is used to learn the trends of the satellite telemetry data, and, by maximizing the correlation coefficient and the F1 score, an appropriate threshold is determined for fault judgment on the multi-dimensional telemetry data. This method can effectively identify faults from the trends of the satellite telemetry data. The experiments use 2 years of 24-dimensional communication satellite telemetry data provided by a space agency, and the core LSTM model was trained on an NVIDIA GTX TITAN X. The final model accuracy is 99.34%, the precision is 81.93%, and the recall is 94.62%. Compared with traditional machine learning algorithms and an LSTM-based method without thresholding, the accuracy of the model is significantly higher. The experimental results show that the LSTM network can efficiently learn the trend characteristics of satellite telemetry data and, with an appropriate threshold selection method, effectively detect communication satellite faults, addressing the fault detection problem in the aerospace field.
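The threshold-selection idea, picking the residual threshold that maximizes F1 on labeled data, can be sketched as follows (toy residuals and labels; the paper additionally optimizes the correlation coefficient, which is omitted here):

```python
def f1_score(tp, fp, fn):
    """F1 from raw counts, with safe handling of empty denominators."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def best_threshold(residuals, labels, candidates):
    """Pick the residual threshold whose fault predictions (residual >
    threshold) maximise F1 against the labeled faults."""
    best_t, best_f = candidates[0], -1.0
    for t in candidates:
        preds = [r > t for r in residuals]
        tp = sum(p and l for p, l in zip(preds, labels))
        fp = sum(p and not l for p, l in zip(preds, labels))
        fn = sum(not p and l for p, l in zip(preds, labels))
        f = f1_score(tp, fp, fn)
        if f > best_f:
            best_t, best_f = t, f
    return best_t

t = best_threshold([0.1, 0.2, 0.9, 1.1], [False, False, True, True],
                   [0.05, 0.5, 1.0])
```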
RFID Indoor Positioning Algorithm Based on Asynchronous Advantage Actor-Critic
LI Li,ZHENG Jia-li,WANG Zhe,YUAN Yuan,SHI Jing
Computer Science. 2020, 47 (2): 233-238.  doi:10.11896/jsjkx.190100070
Abstract PDF(2240KB) ( 653 )   
References | Related Articles | Metrics
In view of the fact that the accuracy of existing RFID indoor positioning algorithms is easily affected by environmental factors and their robustness is weak, this paper proposed an RFID indoor positioning algorithm based on asynchronous advantage actor-critic (A3C). The main steps are as follows. Firstly, the RSSI signal strength of the RFID signal is used as the input. Multiple threads of sub-actor networks sample and learn interactively in parallel, while the sub-critic networks evaluate the quality of the action values, so that the model is continuously optimized to find the best RSSI signal strength while the positioning model is trained. The sub-thread networks regularly update their parameters to the global network, and the global network finally outputs the specific locations of the reference tags, completing the training of the asynchronous advantage actor-critic positioning model. Secondly, in the online positioning stage, when a target enters the area to be tested, its RSSI signal strength is recorded and input into the trained model; the sub-thread networks obtain the latest positioning information from the global network, locate the target, and finally output its specific position. The proposed algorithm was compared with traditional RFID indoor positioning algorithms based on support vector machines (SVM), extreme learning machines (ELM) and multi-layer perceptrons (MLP). Experimental results show that the mean positioning error of the proposed algorithm is decreased by 66.114%, 50.316% and 44.494%, respectively, and the average positioning stability is increased by 59.733%, 53.083% and 43.748%, respectively, demonstrating that the proposed algorithm has better positioning performance when dealing with a large number of indoor positioning targets.
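The "advantage" each A3C worker computes before pushing gradients to the global network can be sketched with n-step discounted returns (generic A3C machinery, not the paper's RSSI-specific networks):

```python
def n_step_advantages(rewards, values, bootstrap, gamma):
    """Advantages A_t = R_t - V(s_t), with R_t the n-step discounted
    return bootstrapped from the critic's value of the last state."""
    R = bootstrap
    adv = [0.0] * len(rewards)
    for t in reversed(range(len(rewards))):
        R = rewards[t] + gamma * R
        adv[t] = R - values[t]
    return adv

# Two steps of reward 1, zero critic values, gamma = 0.5:
# returns are [1 + 0.5*1, 1] = [1.5, 1.0].
adv = n_step_advantages([1.0, 1.0], [0.0, 0.0], bootstrap=0.0, gamma=0.5)
```

The actor is updated in the direction of the advantage-weighted log-probabilities, asynchronously across the worker threads.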
Address Assignment Algorithm for Tree Network Based on Address Space
LIU Ning-ning,FAN Jian-xi,LIN Cheng-kuan
Computer Science. 2020, 47 (2): 239-244.  doi:10.11896/jsjkx.190400130
Abstract PDF(2205KB) ( 683 )   
References | Related Articles | Metrics
A wireless sensor network (WSN) is a multi-hop, self-organizing network composed of a large number of micro sensor nodes deployed in a monitoring area and communicating wirelessly. Distributed environment awareness and simple, flexible deployment make WSN an important factor in our daily life. With the continuous development of microelectronics and communication technology, WSN has been widely used in national defense, the military, environmental monitoring, medical health, smart homes and industrial manufacturing. ZigBee is a global wireless personal area network standard that supports low-rate transmission, low power consumption, security and reliability for available products and applications. Unlike other wireless personal area network standards such as Bluetooth and Wi-Fi, ZigBee provides low-power wireless tree and mesh networking and supports up to thousands of wireless sensor devices in a single network. Isolated nodes exist under the distributed address assignment mechanism (DAAM) of ZigBee, which leaves idle addresses unavailable and wastes resources. To solve this problem, a novel tree-based Address Assignment Algorithm for Tree Networks (AAN) was proposed in this paper. The algorithm decreases the idle address space and the number of isolated nodes in the network, optimizes the network topology, and reduces the time and storage needed to establish and maintain routing tables. Simulation results show that the algorithm outperforms DAAM and one of its existing improvements in terms of address assignment success rate, number of isolated nodes and network depth.
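The DAAM address arithmetic that gives rise to the isolated-node problem is driven by the Cskip function from the ZigBee specification. A sketch (Cm = max children, Rm = max router children, Lm = max network depth; integer division assumes the usual spec parameter choices):

```python
def cskip(Cm, Rm, Lm, d):
    """Size of the address block reserved for each router child of a node
    at depth d, per ZigBee's distributed address assignment (DAAM)."""
    if Rm == 1:
        return 1 + Cm * (Lm - d - 1)
    # Equivalent rearrangement of (1 + Cm - Rm - Cm*Rm^(Lm-d-1)) / (1 - Rm).
    return (Cm * Rm ** (Lm - d - 1) + Rm - Cm - 1) // (Rm - 1)

def router_child_address(parent, Cm, Rm, Lm, d, k):
    """Address of the k-th router child (k = 1..Rm) of `parent` at depth d."""
    return parent + (k - 1) * cskip(Cm, Rm, Lm, d) + 1

block = cskip(6, 4, 3, 0)   # Cm=6, Rm=4, Lm=3: 31 addresses per router child
```

A node whose depth or child index exceeds these fixed limits receives no address under DAAM and becomes isolated, which is what AAN sets out to avoid.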
Information Security
Malware Name Recognition in Tweets Based on Enhanced BiLSTM-CRF Model
GU Xue-mei,LIU Jia-yong,CHENG Peng-sen,HE Xiang
Computer Science. 2020, 47 (2): 245-250.  doi:10.11896/jsjkx.190500063
Abstract PDF(1644KB) ( 926 )   
References | Related Articles | Metrics
To address problems in the malware name recognition task on Twitter,such as short and informal text,a single entity category and the need for entity disambiguation,this paper proposed an entity recognition method based on BERT-BiLSTM-Self-attention-CRF to automatically recognize malware names in tweets.Based on the BiLSTM-CRF model,BERT is used to encode context information,improving the contextual semantic quality of word embeddings and enhancing semantic disambiguation.At the same time,a self-attention mechanism is used to learn weighted representations,improving single-category entity recognition by capturing long-range relations between words and sentence structure.To evaluate the proposed method,this paper constructed a labeled dataset of tweets containing malware name entities.Experimental results show that the proposed method achieves better performance,attaining 86.38% precision,84.73% recall and an 85.55% F-score,and outperforms the baseline model with an F-score improvement of 12.61%.
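The CRF layer on top of the encoder picks the best tag sequence by Viterbi decoding over per-token emission scores and tag-transition scores. A minimal sketch of that decoding step, with illustrative scores standing in for the BiLSTM/BERT outputs:

```python
def viterbi(emissions, transitions):
    """CRF decoding: emissions[t][k] is the encoder's score for tag k at token t;
    transitions[i][j] scores moving from tag i to tag j.
    Returns the highest-scoring tag sequence (list of tag indices)."""
    K = len(emissions[0])
    score = list(emissions[0])            # best score ending in each tag so far
    back = []                             # backpointers per step
    for em in emissions[1:]:
        new_score, ptr = [], []
        for j in range(K):
            best_i = max(range(K), key=lambda i: score[i] + transitions[i][j])
            ptr.append(best_i)
            new_score.append(score[best_i] + transitions[best_i][j] + em[j])
        back.append(ptr)
        score = new_score
    path = [max(range(K), key=lambda k: score[k])]
    for ptr in reversed(back):            # follow backpointers to recover path
        path.append(ptr[path[-1]])
    return path[::-1]
```

With all transition scores zero the decoder reduces to per-token argmax; a strongly negative transition (e.g. O cannot precede I-MALWARE in a BIO scheme) forces the decoder onto a valid tag sequence even when the emissions prefer an invalid one.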
Fake Account Detection Method in Online Social Network Based on Improved Edge Weighted Paired Markov Random Field Model
SONG Chang,YU Ke,WU Xiao-fei
Computer Science. 2020, 47 (2): 251-255.  doi:10.11896/jsjkx.190600172
Abstract PDF(1465KB) ( 656 )   
References | Related Articles | Metrics
Social media systems provide a convenient platform for sharing,communication and collaboration.While people enjoy the openness and convenience of social media,many malicious acts may occur,such as bullying,terrorist attacks and fraudulent information dissemination.Therefore,it is very important to detect these anomalous activities as accurately and early as possible to prevent disasters and attacks.The success of online social networks (OSN) such as Twitter,Facebook,Google+ and LinkedIn in recent years has made them targets of attackers due to their rich profit resources.The openness of social networks makes them particularly vulnerable to fake account attacks.Most existing classification models first assign weights to the edges of the graph,iteratively propagate the reputation scores of the nodes in the weighted graph,and use the final posterior scores to classify the nodes.One important task is the setting of edge weights,a parameter that directly affects the accuracy of the detection results.For the task of fake account detection in social media,this paper analyzed the global structure of the social graph and improved the edge weight algorithm in the paired Markov random field model so that it can be adaptively optimized during the iterative process.Three algorithms with higher accuracy,GANG+LW,GANG+LOGW and GANG+PLOGW,were proposed,each using a different method to improve the edge weight computation.Experiments show that the proposed methods obtain more accurate fake account detection results than the basic paired Markov random field model,with GANG+PLOGW performing best among the three.The results prove that the improved model solves the problem of detecting fake accounts in social networks more effectively.
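The propagate-then-classify pipeline described above can be sketched with a linearized loopy belief propagation pass, the style of inference used by GANG-like pairwise MRF detectors. The edge-weight function is exposed as a parameter since that is the knob the paper tunes; the graph, priors and sign convention are illustrative assumptions.

```python
def propagate(edges, priors, weight, iters=20):
    """Linearized belief propagation over an undirected social graph.
    edges: list of (u, v) friendship edges.
    priors: dict node -> prior probability of being fake (0.5 = unknown).
    weight: callable (u, v) -> edge weight (the quantity the paper adapts).
    Returns residual posteriors: > 0 suggests fake, < 0 suggests benign."""
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, []).append(v)
        nbrs.setdefault(v, []).append(u)
    post = {n: p - 0.5 for n, p in priors.items()}   # residual (centered) form
    for _ in range(iters):
        post = {n: (priors[n] - 0.5) +
                   sum(weight(n, u) * post[u] for u in nbrs.get(n, []))
                for n in post}
    return post
```

A node with an unknown prior inherits its neighbors' reputation: linked to a likely-fake account it drifts positive, linked to a likely-benign one it drifts negative. Too-large weights make the iteration diverge, which is one reason adaptive weighting matters.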
Protocol of Dynamic Provable Data Integrity for Cloud Storage
LI Shu-quan,LIU Lei,ZHU Da-yong,XIONG Chao,LI Rui
Computer Science. 2020, 47 (2): 256-261.  doi:10.11896/jsjkx.181202371
Abstract PDF(1569KB) ( 754 )   
References | Related Articles | Metrics
Cloud storage is a novel data storage architecture,and the security and manageability of data in cloud storage face new challenges.Because users no longer store any copies of the data in local memory,they cannot fully ensure that the outsourced data remain intact.How to protect data integrity in the cloud has become a hot topic in academic research.The protocol of Provable Data Integrity (PDI) is considered the main method to solve this problem.This paper presented a lattice-based provable data integrity scheme for checking the integrity of data in the cloud.The proposed scheme realizes dynamic data verification by combining the idea of the Ranked Merkle Hash Tree (RMHT) with lattice-based techniques.The scheme realizes fine-grained signatures and reduces the computational cost required by the user to generate authentication tags.It introduces the RMHT to verify modifications of the data and supports dynamic updates.It has strong privacy protection capability:the user's original data are blinded during the verification process,so a third party cannot obtain the user's real data.Moreover,in order to prevent malicious third parties from launching denial-of-service attacks on cloud servers,only authorized third parties can verify the integrity of user data.Finally,security analysis and performance analysis show that the proposed scheme not only has the characteristics of unforgeability and privacy protection,but also greatly reduces the computational cost of signing.
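The RMHT that supports dynamic verification is a Merkle hash tree in which every node also records the number of leaf blocks beneath it, so both the content and the position of a block are authenticated by the root. A small sketch of that idea (the node layout and rank encoding here are illustrative, not the paper's exact construction):

```python
import hashlib

class Node:
    def __init__(self, digest, rank, left=None, right=None):
        self.digest, self.rank = digest, rank    # rank = leaf blocks below
        self.left, self.right = left, right

def build_rmht(blocks):
    """Build a rank-based Merkle hash tree over data blocks; the rank is
    hashed into each internal node so block positions are verifiable."""
    level = [Node(hashlib.sha256(b).digest(), 1) for b in blocks]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            l, r = level[i], level[i + 1]
            digest = hashlib.sha256(
                l.digest + r.digest + str(l.rank + r.rank).encode()).digest()
            nxt.append(Node(digest, l.rank + r.rank, l, r))
        if len(level) % 2:                       # odd node carried upward
            nxt.append(level[-1])
        level = nxt
    return level[0]
```

Modifying any block changes the root digest, so a verifier holding only the root can detect tampering; a dynamic update (modify/insert/delete) only recomputes hashes along one root-to-leaf path.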
Easy-to-deploy Dynamic Monitoring Scheme for Android Applications
SU Xiang,HU Jian-wei,CUI Yan-peng
Computer Science. 2020, 47 (2): 262-268.  doi:10.11896/jsjkx.190100117
Abstract PDF(1550KB) ( 1591 )   
References | Related Articles | Metrics
Android application dynamic monitoring schemes are usually implemented in three ways:1)building a custom ROM;2)after obtaining root permission on the device,modifying system files or using ptrace to inject code into the target process;3)repackaging the APK to add monitoring code.All three methods are intrusive,depend on the system environment and are difficult to deploy across different devices.To solve these problems,a non-intrusive dynamic monitoring scheme based on plug-in technology was proposed.The scheme releases the monitoring system in the form of a host App and installs it on the target device.The application to be monitored is loaded and run as a plug-in inside the host App environment,and the host App loads the corresponding monitoring modules when loading the plug-in,so the App is monitored.Before the application to be monitored runs as a plug-in,a process is started in advance.The Binder proxy objects in this process are replaced by dynamic proxies,and Binder service requests in the process are redirected to virtual services in a virtual service process,so that the components of the application to be monitored can run in the pre-started process.When the Application object of the monitored application is initialized,the Java-layer and native-layer monitoring modules are loaded to complete the monitoring.Based on this scheme,the prototype system AndroidMonitor was implemented on the VirtualApp sandbox and tested on a Nexus 5 device.The experimental results show that,compared with other schemes,although the startup time of the monitored application increases by about 1.4s,the scheme does not need to acquire root authority on the device and can simultaneously monitor Java-layer and native-layer sensitive APIs.The system introduces a device information protection module to prevent device information from leaking while monitoring applications.The system is distributed in the form of an app,is easy to deploy to different devices and supports multiple application scenarios.
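The core trick — replacing a Binder proxy with a dynamic proxy that logs every call and can redirect it to a virtual service — can be sketched language-agnostically. This Python stand-in mirrors the idea only (the real scheme uses Java dynamic proxies against Android's Binder; class and method names here are hypothetical):

```python
class BinderProxy:
    """Wrap a service object so every call is recorded (monitoring) and
    optionally redirected to a virtual service (device-info protection),
    without modifying the target object itself."""

    def __init__(self, target, call_log, redirect=None):
        self._target = target
        self._call_log = call_log
        self._redirect = redirect       # virtual service, if any

    def __getattr__(self, name):
        def wrapper(*args, **kwargs):
            self._call_log.append((name, args))          # monitor the call
            backend = self._redirect or self._target     # possibly redirect
            return getattr(backend, name)(*args, **kwargs)
        return wrapper

class RealService:
    def get_device_id(self):
        return "real-id"

class VirtualService:
    def get_device_id(self):
        return "fake-id"        # leaked value is sanitized

calls = []
proxy = BinderProxy(RealService(), calls, redirect=VirtualService())
result = proxy.get_device_id()
```

The monitored app keeps calling the same interface; the proxy transparently records the sensitive-API call and returns sanitized device information.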
Improvement of DPoS Consensus Mechanism Based on Positive Incentive
CHEN Meng-rong,LIN Ying,LAN Wei,SHAN Jin-zhao
Computer Science. 2020, 47 (2): 269-275.  doi:10.11896/jsjkx.190400013
Abstract PDF(1874KB) ( 1588 )   
References | Related Articles | Metrics
Consensus mechanism is the key of blockchain technology.In the DPoS consensus mechanism,each node can independently determine its trusted authorization nodes,and these authorization nodes take turns generating new blocks for rapid consensus verification.However,DPoS still has security problems such as inactive voting and node corruption.Aiming at these two problems,this paper proposed an improved DPoS scheme based on reward incentives.A “voting reward” is used to encourage nodes to actively participate in the voting process,and a “reporting reward” is used to encourage common nodes to report bribing nodes.Matlab simulation experiments show that the introduction of the voting reward improves the voting enthusiasm of nodes.Compared with the original DPoS consensus mechanism,in which voting nodes account for 45% to 50% of all nodes,the introduction of two different voting reward methods increases the proportion of voting nodes to 65%-70% and 55%-60% respectively.In the original DPoS consensus mechanism,the proportion of nodes that do not accept bribes decreases as the bribery of malicious nodes increases;with the reporting reward,the proportion of nodes that choose to report increases significantly,reaching 54% when the number of voting rounds is 20.The experimental results show that the improved DPoS mechanism can not only make more nodes vote,but also enhance the bribery resistance of common nodes,so that the probability of malicious nodes becoming the “trustee” becomes smaller,thus ensuring the security of the network.
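The mechanism being improved works roughly as follows: nodes cast stake-weighted votes, the top-n candidates become block producers, and under a positive-incentive scheme the voters of elected delegates share a reward. A generic sketch of that flow — the election rule is standard DPoS, while the reward-splitting policy shown is an illustrative assumption, not the paper's exact formula:

```python
from collections import Counter
from typing import Dict, List

def elect_delegates(votes: Dict[str, List[str]], stake: Dict[str, float],
                    n: int) -> List[str]:
    """Stake-weighted DPoS election: the n candidates with the most received
    stake become the authorized block producers."""
    tally = Counter()
    for voter, chosen in votes.items():
        for candidate in chosen:
            tally[candidate] += stake[voter]
    return [c for c, _ in tally.most_common(n)]

def voting_rewards(votes, stake, delegates, reward_per_delegate):
    """Positive incentive (illustrative policy): voters of each elected
    delegate split a fixed reward in proportion to their stake."""
    out = Counter()
    for d in delegates:
        supporters = [v for v, chosen in votes.items() if d in chosen]
        total = sum(stake[v] for v in supporters)
        for v in supporters:
            out[v] += reward_per_delegate * stake[v] / total
    return dict(out)
```

Because only voters of winners are paid, abstaining has an opportunity cost — which is exactly the lever the paper uses to raise voting participation.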
Cryptanalysis of Medical Image Encryption Algorithm Using High-speed Scrambling and Pixel Adaptive Diffusion
YU Feng,GONG Xin-hui,WANG Shi-hong
Computer Science. 2020, 47 (2): 276-280.  doi:10.11896/jsjkx.190100051
Abstract PDF(2010KB) ( 587 )   
References | Related Articles | Metrics
Security is essential for every image encryption algorithm,and medical image encryption is a means to protect patients’ privacy.Analyzing the security of medical image encryption algorithms is meaningful for their design,for enhancing their security and for promoting their application.Recently,Hua et al.proposed a medical image encryption algorithm using high-speed scrambling and pixel adaptive diffusion.The key operations of the scheme are inserting a random sequence around the image,dispersing the random values over the whole image by scrambling,and finally spreading them by diffusion.Because different random values are generated in each encryption,even an unchanged image yields a different cipher-image every time,so Hua et al.’s scheme resembles a one-time pad system.In this paper,the security of the algorithm was analyzed in detail by differential cryptanalysis and chosen-ciphertext attack.The decryption process is analyzed theoretically by differential cryptanalysis,and a linear relationship is constructed between plain-images and cipher-images.Based on this linear relationship,a codebook is established,and the codebook attack breaks Hua et al.’s algorithm.The size of the codebook,i.e.,the number of plain-image/cipher-image pairs it must contain,is determined by the size of the cipher-image.The experimental results verify the theoretical analysis.To improve the security of Hua et al.’s algorithm and to resist differential cryptanalysis,an improved scheme was proposed,in which plaintext-related permutation matrices are introduced.The simulation and statistical results show that the improved scheme not only inherits the advantages of the original algorithm,but also resists differential cryptanalysis and the codebook attack.
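The attack exploits the linearity that arises when permutation and keystream do not depend on the plaintext: differentials between ciphertexts cancel the keystream, so chosen plaintexts let an attacker build a codebook. A toy demonstration on a permute-then-XOR cipher standing in for the real scheme (everything here is an illustrative model, not Hua et al.'s algorithm):

```python
import random

def toy_encrypt(p, key):
    """Toy cipher: permute the 'image' p, then XOR a keystream. Both the
    permutation and keystream depend only on the key -- the weakness at issue."""
    rng = random.Random(key)
    perm = list(range(len(p)))
    rng.shuffle(perm)
    ks = [rng.randrange(256) for _ in p]
    return [p[perm[i]] ^ ks[i] for i in range(len(p))]

def recover(encrypt_oracle, n):
    """Chosen-plaintext codebook attack: E(p1) ^ E(p2) = p1[perm] ^ p2[perm],
    so the all-zero image leaks the keystream and unit images leak perm."""
    ks = encrypt_oracle([0] * n)                 # E(0) is exactly the keystream
    perm = [0] * n
    for j in range(n):                           # chosen image: single 1 at j
        c = encrypt_oracle([0] * j + [1] + [0] * (n - j - 1))
        delta = [a ^ b for a, b in zip(c, ks)]   # keystream cancels
        perm[delta.index(1)] = j                 # where position j landed
    return perm, ks
```

Once perm and ks are known, any ciphertext c decrypts as p[perm[i]] = c[i] ^ ks[i] — the "codebook" fully replaces the key.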
Malicious Web Request Detection Technology Based on CNN
CUI Yan-peng,LIU Mi,HU Jian-wei
Computer Science. 2020, 47 (2): 281-286.  doi:10.11896/jsjkx.181202455
Abstract PDF(1518KB) ( 1052 )   
References | Related Articles | Metrics
At present,Web malicious request detection based on convolutional neural networks detects malicious requests only in the URL part,and different studies use different digital representations of the raw data,which results in low detection efficiency and accuracy.In order to improve the performance of convolutional neural networks in Web malicious request detection,this paper introduced other HTTP request parameters to be merged with the URL,and used the HTTP dataset CSIC 2010 and DEV_ACCESS as raw data.The comparative experiment first used six digital representation methods to encode the raw string input,fed them to the designed convolutional neural network,and obtained six different models.At the same time,the classical algorithms HMM,SVM and RNN were trained on the same training set to obtain control models.Finally,the nine models were evaluated on the same test set.The experimental results show that,in the multi-parameter Web malicious request detection task,the convolutional neural network using the combination of vocabulary mapping and an internal embedding layer to represent the raw data achieves 99.87% accuracy and a 98.92% F1 score;the accuracy is thus improved by 0.4 to 7.7 percentage points and the F1 score by 0.3 to 13 percentage points.The experiments fully demonstrate that multi-parameter Web malicious request detection based on convolutional neural networks has obvious advantages,and that representing the raw data with vocabulary mapping and the network’s internal embedding layer yields the best detection performance.
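The winning "vocabulary mapping" representation turns each request string into a fixed-length sequence of token indices, which an embedding layer then maps to dense vectors inside the network. A sketch of that front end — the tokenizer, reserved indices and sequence length are common conventions assumed here, not necessarily the paper's exact choices:

```python
import re
from collections import Counter

def tokenize(request: str):
    """Split an HTTP request string into alphanumeric runs and punctuation."""
    return re.findall(r"[A-Za-z0-9]+|[^A-Za-z0-9\s]", request)

def build_vocab(samples, min_freq=1):
    """Vocabulary mapping: index 0 reserved for padding, 1 for out-of-vocab."""
    counts = Counter(tok for s in samples for tok in tokenize(s))
    vocab = {"<pad>": 0, "<unk>": 1}
    for tok, c in counts.most_common():
        if c >= min_freq:
            vocab[tok] = len(vocab)
    return vocab

def encode(request, vocab, maxlen=16):
    """Map a request to a fixed-length index sequence for the embedding layer."""
    ids = [vocab.get(t, 1) for t in tokenize(request)][:maxlen]
    return ids + [0] * (maxlen - len(ids))
```

The CNN then consumes these padded index sequences; unseen tokens at test time collapse to the `<unk>` index rather than breaking the encoding.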
DoS Anomaly Detection Based on Isolation Forest Algorithm Under Edge Computing Framework
CHEN Jia,OUYANG Jin-yuan,FENG An-qi,WU Yuan,QIAN Li-ping
Computer Science. 2020, 47 (2): 287-293.  doi:10.11896/jsjkx.190100047
Abstract PDF(2206KB) ( 1174 )   
References | Related Articles | Metrics
With the rapid development of network technology,network attacks have brought huge negative impacts,so network security issues need to be resolved urgently.Aiming at denial-of-service (DoS) attacks in networks,an isolation forest anomaly detection method based on an edge computing framework was proposed.According to the characteristics of each edge node,the method distributes the model training tasks reasonably and effectively improves the utilization of edge nodes.Meanwhile,edge computing is used to offload model training tasks from the cloud center,reducing system time consumption and the burden of the cloud center.In order to verify the effectiveness of the proposed method,the 10%-KDDCUP99 network dataset was preprocessed and part of the data was used for experiments.Experimental results show that,compared with the Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP) methods,the time consumption of the proposed method is reduced by 90% and 60% respectively,and the area under the curve (AUC) can reach more than 0.9,which indicates that the method can effectively reduce system time consumption while ensuring high detection performance.
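The detector's core principle is that anomalous points are isolated by fewer random splits than normal points, so their average path length across many random trees is shorter. A deliberately tiny 1-D sketch of that scoring idea (real isolation forests work on multi-dimensional features and subsampled trees; this is only the intuition):

```python
import random

def path_length(x, data, depth=0, max_depth=10, rng=None):
    """Follow x down one random isolation tree: pick a random split between
    the current min and max, keep the side containing x, and count splits."""
    rng = rng or random.Random(0)
    if len(data) <= 1 or depth >= max_depth:
        return depth
    lo, hi = min(data), max(data)
    if lo == hi:
        return depth
    split = rng.uniform(lo, hi)
    side = [v for v in data if (v < split) == (x < split)]
    return path_length(x, side, depth + 1, max_depth, rng)

def iforest_score(x, data, trees=50):
    """Average path length over many random trees; shorter = more anomalous."""
    return sum(path_length(x, data, rng=random.Random(t))
               for t in range(trees)) / trees
```

A far-out DoS-like point gets cut off almost immediately (average path near 1), while a point inside the normal cluster survives many splits — this per-point score is what gets thresholded into an anomaly decision.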
Secure and Efficient Electronic Health Records for Cloud
TU Yuan-fei,ZHANG Cheng-zhen
Computer Science. 2020, 47 (2): 294-299.  doi:10.11896/jsjkx.181202256
Abstract PDF(1846KB) ( 623 )   
References | Related Articles | Metrics
With the development and popularity of mobile devices,Electronic Health Records based on Body Area Networks (BAN) are becoming more and more popular.People can back up the medical data acquired by the BAN to the cloud,which makes it possible for medical workers to access a user’s medical data from mobile terminals almost anywhere.However,for some patients these medical data are personal privacy,and they want them to be accessed only by someone with certain rights.This paper proposed an efficient and secure fine-grained access control scheme,which not only enables authorized users to access medical data stored in the cloud,but also allows some privileged doctors to write records.In order to improve the efficiency of the whole system,a match-before-decryption method is added to perform decryption tests without actual decryption.In addition,the scheme can outsource bilinear pairing operations to the gateway without leaking data content,eliminating computation overhead for the user.Performance evaluation shows that the efficiency of the proposed solution in computation,communication and storage is significantly improved.
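The "match before decryption" step is essentially a cheap structural test: check whether the user's attribute set satisfies the ciphertext's access policy before running any expensive pairing operations. A sketch of such a test over a simple and/or policy tree (the tree encoding and attribute names are illustrative, not the paper's ciphertext-policy construction):

```python
def satisfies(policy, attrs):
    """Return True if the attribute set satisfies the access policy.
    A policy is either an attribute string (leaf) or a tuple
    ('and' | 'or', [subpolicies])."""
    if isinstance(policy, str):
        return policy in attrs          # leaf: attribute must be held
    op, children = policy
    results = (satisfies(c, attrs) for c in children)
    return all(results) if op == "and" else any(results)
```

Only when this fast check passes does the client (or the gateway, for the outsourced pairings) attempt real decryption, so mismatched users cost almost nothing.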
Trust Based Energy Efficient Opportunistic Routing Algorithm in Wireless Sensor Networks
SU Fan-jun,DU Ke-yi
Computer Science. 2020, 47 (2): 300-305.  doi:10.11896/jsjkx.190100172
Abstract PDF(1691KB) ( 607 )   
References | Related Articles | Metrics
In order to prevent potential malicious nodes in the network from being added to the candidate forwarding set of opportunistic routing,reduce network energy consumption and ensure reliable data transmission,a trust-based energy-efficient opportunistic routing algorithm for wireless sensor networks (TBEEOR) was proposed.The algorithm calculates the algebraic connectivity of nodes according to the network topology,then computes connectivity sincerity,and combines it with the forwarding sincerity and ACK sincerity of nodes to calculate a comprehensive trust degree using the concept of information entropy.Finally,the comprehensive trust of nodes is used to calculate the energy consumption caused by communication and cooperation between nodes,thereby obtaining the expected cost of the network.In addition,the algorithm can effectively identify malicious nodes in the network,further reducing their impact on network performance.The experimental results show that the TBEEOR algorithm effectively guarantees the reliability of data transmission and helps to prolong the network life cycle,thereby improving network throughput and reducing network energy consumption.
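Combining several sincerity measures "using the concept of information entropy" typically means the entropy weight method: criteria whose values vary more across nodes carry more information and receive larger weights. A sketch under that assumption (the paper's exact weighting may differ):

```python
import math

def entropy_weights(matrix):
    """Entropy weight method. matrix[i][j] = sincerity of node i under
    criterion j (e.g. forwarding, ACK), values in (0, 1]. Criteria that are
    nearly constant across nodes get weight close to 0."""
    n, m = len(matrix), len(matrix[0])
    weights = []
    for j in range(m):
        col = [row[j] for row in matrix]
        s = sum(col)
        ps = [v / s for v in col]                       # normalize the column
        e = -sum(p * math.log(p) for p in ps if p > 0) / math.log(n)
        weights.append(1 - e)                           # low entropy -> high weight
    total = sum(weights)
    return [w / total for w in weights]

def comprehensive_trust(matrix):
    """Per-node comprehensive trust: entropy-weighted sum of its sincerities."""
    w = entropy_weights(matrix)
    return [sum(wj * row[j] for j, wj in enumerate(w)) for row in matrix]
```

A node whose forwarding sincerity is low is then excluded from the candidate forwarding set, since its comprehensive trust falls below that of honest neighbors.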
Optimization of Aircraft Taxiing Strategy Based on Multi-agent
ZHANG Hong-ying,SHEN Rong-miao,LUO Qian
Computer Science. 2020, 47 (2): 306-312.  doi:10.11896/jsjkx.181202400
Abstract PDF(2427KB) ( 727 )   
References | Related Articles | Metrics
The rapid development of civil aviation has led to capacity shortages at many airports.In order to alleviate this situation at large airports,the problem of aircraft taxiing strategy optimization was studied.Taxiing path optimization is the optimal management of the distance between the runway and the gate for arriving and departing flights in a specific time period,according to airport resource information and the ground operation management system.Through in-depth analysis of the airport ground network structure and comprehensive consideration of factors such as taxiing conflicts and ground operation rules,a multi-agent taxiing strategy optimization method is proposed to improve the utilization of airport resources.An aircraft taxiing strategy optimization model is established based on the link structure of the ground network.Combined with basic multi-agent theory,a selection probability function for runway exits and a multi-agent path optimization algorithm are designed to find the optimal taxiing path of aircraft.Aircraft taxiing strategy experiments were carried out based on the actual situation of a large domestic airport.The results show that the optimization effect of the multi-agent taxiing strategy is more significant than that of previous algorithms.Given the speed at the runway entrance and the minimum separation distance between aircraft at the same intersection,an aircraft can effectively adjust its original taxiing path and shorten its taxiing time on the airport surface through runway exit selection and interactive negotiation among agents.The total taxiing distance,the taxiway density and the average waiting time of aircraft are significantly better than those of the contrast optimization algorithm,taxiway resources are allocated more reasonably than with the shortest path algorithm,and the average waiting time of aircraft at nodes is reduced by 8.26%.The method alleviates airport traffic congestion and improves surface operating efficiency,which is of great significance for reducing aircraft delays and ensuring airport operation safety.
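The shortest-path algorithm used as the comparison baseline can be sketched directly: Dijkstra over the ground network's link structure, with taxiway links weighted by taxi time. Conflict avoidance, runway-exit probabilities and agent negotiation from the paper are deliberately not modeled; node names are hypothetical.

```python
import heapq

def shortest_taxi_path(graph, src, dst):
    """Dijkstra over a taxiway link graph: node -> [(neighbor, taxi_time)].
    Returns (path, total_time)."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst                # walk predecessors back to src
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]
```

On a static graph this gives each aircraft its individually fastest route; the multi-agent method improves on it precisely because simultaneous aircraft contend for the same links, which a per-aircraft shortest path ignores.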
Replica Dynamic Storage Based on RBEC
HONG Hai-cheng,CHEN Dan-wei
Computer Science. 2020, 47 (2): 313-319.  doi:10.11896/jsjkx.181102161
Abstract PDF(1604KB) ( 517 )   
References | Related Articles | Metrics
With the rapid development of cloud storage technology,existing cloud storage architectures and storage patterns are presented in a static way to users and attackers,making the data face more security threats.This paper proposed a replica dynamic storage scheme based on Random Binary Extension Code (RBEC).The scheme uses a network code to store data blocks on cloud nodes,and the data held by each node can be changed randomly over time through node data transformations based on the random binary extension code.By changing the attack surface,the scheme increases the complexity and cost for an attacker,reduces vulnerability exposure and the probability of being attacked,and improves the flexibility of the system.Theoretical analysis and simulation results show that the coding computation cost of this method is low across the whole dynamic transformation;its main time cost is the transmission of encoded data blocks between nodes.In addition,the performance of this method was compared with general regenerating-code mimetic transformation schemes.Because of a characteristic of RBEC,namely that a regenerated encoding matrix satisfies the MDS property with probability close to 1,the performance overhead of this method is better than that of general regenerating codes,which may have to transform many times during encoding.
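The property the abstract leans on — that a randomly regenerated binary encoding matrix is almost always usable, so few retries are needed — can be illustrated with an invertibility check over GF(2). This is a simplified stand-in for the MDS-like condition (a random n×n binary matrix is invertible with probability approaching ≈0.2888 as n grows, and rejection sampling quickly finds one); it is not the paper's exact RBEC construction.

```python
import random

def gf2_invertible(rows, n):
    """Gaussian elimination over GF(2); each row is an n-bit integer.
    Returns True iff the matrix has full rank (is invertible)."""
    rows = rows[:]
    for col in range(n):
        bit = 1 << col
        pivot = next((i for i in range(col, len(rows)) if rows[i] & bit), None)
        if pivot is None:
            return False                   # no pivot -> rank deficient
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i] & bit:
                rows[i] ^= rows[col]       # XOR = addition over GF(2)
    return True

def random_usable_matrix(n, rng):
    """'Regenerate the encoding matrix' step: resample until invertible."""
    while True:
        rows = [rng.getrandbits(n) for _ in range(n)]
        if gf2_invertible(rows, n):
            return rows
```

Because the acceptance probability per draw is bounded well away from zero, the expected number of regenerations is a small constant — which is why the dynamic transformation's cost is dominated by block transmission, not coding.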