Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 45 Issue 6A, 20 June 2018
Review
Research Progress and Challenges on Association Graph
YIN Liang, YUAN Fei, XIE Wen-bo, WANG Dong-zhi, SUN Chong-jing
Computer Science. 2018, 45 (6A): 1-10. 
With the development of web technology and the launch of projects such as Linked Open Data, the association graph has made significant contributions in many areas, such as intelligent Internet search, library bibliographic management, medicine and intelligent manufacturing. This paper reviewed the key topics of the association graph, including its definition, framework and construction. The research progress on entity extraction, relationship extraction and knowledge fusion is discussed thoroughly. Furthermore, some remaining challenges for the association graph are summarized.
Review of Principle and Application of Deep Learning
FU Wen-bo, SUN Tao, LIANG Ji, YAN Bao-wei, FAN Fu-xin
Computer Science. 2018, 45 (6A): 11-15. 
As an important technique of machine learning, deep learning has broad application prospects. This article briefly described the development of deep learning, introduced the convolutional neural network, the restricted Boltzmann machine, the auto-encoder and the series of models derived from them, and six mainstream deep learning frameworks such as Caffe, TensorFlow and Torch. This paper also discussed the application of deep learning in image, speech, video, text and data analysis, analyzed the existing problems and future trends of deep learning, and provides beginners with comprehensive methodological guidance and literature index support.
Research and Development of Feature Dimensionality Reduction
HUANG Xuan
Computer Science. 2018, 45 (6A): 16-21. 
The quality of data characteristics directly impacts the accuracy of a model. In the field of pattern recognition, dimensionality reduction has always been a focus of researchers. In the era of big data, massive data needs to be processed while the dimensionality of the data keeps rising, and the performance of traditional data mining methods degrades, or they lose efficiency, when processing high-dimensional data. Studies show that dimensionality reduction can effectively avoid the "curse of dimensionality" in data analysis, and thus it enjoys wide application. This paper gave a detailed description of the two families of dimensionality reduction methods, feature selection and feature extraction, and thoroughly compared their characteristics. Feature selection algorithms were summarized and analyzed in terms of their two key steps, the search strategy and the evaluation criterion. Finally, directions for future research on dimensionality reduction were discussed based on its practical applications.
Overview of Imbalanced Data Classification
ZHAO Nan, ZHANG Xiao-fang, ZHANG Li-jun
Computer Science. 2018, 45 (6A): 22-27. 
Imbalanced data classification has drawn significant attention from the research community in the last decade. Because of the assumption of a relatively balanced class distribution and equal misclassification costs, most standard classifiers do not perform well on imbalanced data. For the various phases of data classification, different imbalanced data classification methods have been proposed. The relevant research achievements over the years were analyzed, and various approaches to imbalanced data were introduced from the perspectives of feature selection, adjustment of the data distribution, classification algorithms and classifier evaluation. Future trends and open research issues in imbalanced data classification were discussed at the end.
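As a minimal illustration of the "adjustment of the data distribution" family of methods surveyed above, the following sketch balances a binary dataset by random oversampling of the minority class; it is a generic example under that assumption, not one of the specific techniques evaluated in the surveyed work.

```python
import numpy as np

def random_oversample(X, y):
    """Duplicate random minority-class samples until both classes
    have equal counts. X: (n_samples, n_features); y: 0/1 labels."""
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    deficit = counts.max() - counts.min()
    idx = np.where(y == minority)[0]
    extra = np.random.choice(idx, size=deficit, replace=True)
    return np.vstack([X, X[extra]]), np.concatenate([y, y[extra]])
```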
Survey of Symbolic Execution
YE Zhi-bin, YAN Bo
Computer Science. 2018, 45 (6A): 28-35. 
As an important program analysis method, symbolic execution can generate high-coverage tests that trigger deep vulnerabilities. This paper firstly introduced the principle of classical symbolic execution, and elaborated three dynamic symbolic execution methods known as concolic testing, execution-generated testing and selective symbolic execution. Meanwhile, the essence of the main challenges of symbolic execution and the current major solutions were discussed. Symbolic execution has been incorporated into dozens of tools, which were described and compared in this paper. Finally, the development directions of symbolic execution were forecast.
Review of Trust Declassification for Software System
ZHU Hao, CHEN Jian-ping
Computer Science. 2018, 45 (6A): 36-40. 
The non-interference model is the baseline security model of information flow control. It ensures zero leakage of secret information, but its security condition is too restrictive. Software systems inevitably violate the non-interference model and release appropriate information as required by their functionality. To prevent attackers from obtaining extra information through the release channel, the channel should be kept under control, and trusted declassification policies and enforcement mechanisms should be established. Existing declassification policies are classified along the WHAT, WHO, WHERE and WHEN dimensions, and existing enforcement mechanisms are classified into static enforcement, dynamic enforcement and secure multi-execution. The characteristics and deficiencies of these mechanisms were compared, the challenges for further study were discussed, and directions for future research were outlined.
Research Progress on Max Restricted Path Consistency Constraint Propagation Algorithms
ZHANG Yong-gang, CHENG Zhu-yuan
Computer Science. 2018, 45 (6A): 41-45. 
Constraint propagation technology is critical to the performance of constraint satisfaction problem solving. Constraint propagation completely removes some locally inconsistent values during preprocessing, or efficiently prunes the search tree during search. The max restricted path consistency algorithm (maxRPC) is a recently proposed strong consistency constraint propagation algorithm, which can remove more inconsistent values and achieves good results on complex problems. In this paper, algorithms related to the arc consistency algorithm AC and the max restricted path consistency algorithm maxRPC, such as AC3, AC3rm, maxRPC1, maxRPC2, maxRPCrm and maxRPC3, were introduced and compared with each other. Their performance is verified by experimental results obtained with the Mistral solver.
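For orientation, here is a minimal sketch of the classic AC-3 arc consistency algorithm, the baseline from which the stronger maxRPC variants surveyed here depart. The data layout (domains as sets of values, constraints as predicates over ordered variable pairs) is an illustrative assumption.

```python
from collections import deque

def ac3(domains, constraints):
    """domains: {var: set(values)}; constraints: {(x, y): predicate}.
    Prunes domains in place; returns False on a domain wipe-out."""
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        check = constraints[(x, y)]
        # Values of x with no support in the domain of y are inconsistent.
        dead = {vx for vx in domains[x]
                if not any(check(vx, vy) for vy in domains[y])}
        if dead:
            domains[x] -= dead
            if not domains[x]:
                return False
            # Removing values from x may break support for its neighbours.
            queue.extend((z, w) for (z, w) in constraints if w == x and z != y)
    return True
```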
Research Status of Sentiment Analysis for Short Text
——From Social Media to Scarce Resource Language
YONG Tso, SHI Xiao-dong, Nyima Trashi
Computer Science. 2018, 45 (6A): 46-49. 
With the gradual maturation of social networks, texts in various languages appear on them. These short texts contain people's praise and demands, and provide an important reference for governments and enterprises to understand public opinion, so they have significant research and application value. First of all, the current research methods of sentiment analysis for Internet short texts were summarized, including neural networks, cross-lingual methods and applied linguistic knowledge. Secondly, the state of the art in the hot research areas of short-text sentiment analysis was analyzed. Finally, the research trends of sentiment analysis for short texts were summarized and future prospects were given.
Overview of Deep Neural Network Based Classification Algorithms for Remote Sensing Images
CUI Lu, ZHANG Peng, CHE Jin
Computer Science. 2018, 45 (6A): 50-53. 
Accurate and efficient remote sensing image classification is one of the important research topics in remote sensing image analysis. In recent years, with the development of machine learning technology, deep neural networks have become an effective processing method for remote sensing image classification. This paper analyzed some problems existing in remote sensing image classification and the principles and structures of several typical deep neural networks. The research status of remote sensing image classification, and of remote sensing image classification based on deep neural networks, was introduced, and the trend of deep neural networks in remote sensing image classification was summarized.
Research Review and Development for Automatic Reading Recognition Technology of Pointer Instruments
HAN Shao-chao, XU Zun-yi, YIN Zhong-chuan, WANG Jun-xue
Computer Science. 2018, 45 (6A): 54-57. 
Automatically determining the reading of pointer instruments has been a hotspot of machine vision in recent years, and it is also an important research topic and advanced technology in the field of pattern recognition. After a general overview of pointer meter recognition technology, this paper introduced the basic concepts, fundamentals and main research contents of automatic reading recognition for pointer meters based on machine vision. The research status of this technology at home and abroad was reviewed, along with the latest progress in image correction, detection of circular contours, pointer detection and angle calculation. Finally, the key technologies and development directions of automatic reading recognition were pointed out.
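The dial-detection / pointer-detection / angle-calculation pipeline described above can be sketched with standard OpenCV primitives. The Hough parameters, the assumed 270-degree scale starting at 135 degrees, and the linear angle-to-value mapping are illustrative assumptions, not any specific paper's method.

```python
import cv2
import numpy as np

def read_pointer_meter(gray, v_min=0.0, v_max=100.0):
    """gray: preprocessed 8-bit grayscale image of a single dial."""
    # 1. Detect the circular dial contour.
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, 100,
                               param1=100, param2=50)
    cx, cy, r = circles[0, 0]
    # 2. Detect the pointer as the line passing closest to the centre.
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 60,
                            minLineLength=int(r * 0.5), maxLineGap=5)
    x1, y1, x2, y2 = min(
        lines[:, 0],
        key=lambda l: min(np.hypot(l[0] - cx, l[1] - cy),
                          np.hypot(l[2] - cx, l[3] - cy)))
    tip = ((x1, y1) if np.hypot(x1 - cx, y1 - cy) > np.hypot(x2 - cx, y2 - cy)
           else (x2, y2))
    # 3. Angle calculation, then a linear map to the reading.
    angle = np.degrees(np.arctan2(tip[1] - cy, tip[0] - cx))
    frac = ((angle - 135) % 360) / 270.0      # assumed scale geometry
    return v_min + frac * (v_max - v_min)
```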
Intelligent Computing
Learning Effect Evaluation Method Based on Fine-granularity Learning Emotion Ontology
——Taking Algorithm Design and Analysis Course as Example
ZHANG Chun-xia, NIU Zhen-dong, SHI Chong-yang, SHANG Jian-yun
Computer Science. 2018, 45 (6A): 58-62. 
Education goals include cognitive domain goals, motor skill domain goals and emotional domain goals. The emotional domain goal has received increasing attention and research from pedagogues and scholars in many domains. The emotions of learners play an important role in both traditional and online education, as they affect learners' initiative, enthusiasm, creativity and learning effects. Based on the authors' many years of teaching algorithm-related courses to undergraduates and master's students, this paper built a fine-granularity learning emotion ontology, and proposed a learning effect evaluation method based on it. The characteristics of the fine-granularity learning emotion ontology are that it introduces multiple semantic relations among the knowledge points of courses, and constructs a classification of teachers' emotion feedback actions. The traits of the evaluation method are that it builds an evolutional model of learning emotion based on relational paths of knowledge points in the ontology, and this model can be used to evaluate learners' learning effects.
Dynamic Crowding Distance-based Hybrid Immune Algorithm for Multi-objective Optimization Problem
MA Yuan-feng, LI Ang-ru, YU Hui-min, PAN Xiao-ying
Computer Science. 2018, 45 (6A): 63-68. 
The goal of research on multi-objective immune optimization algorithms is to make the population uniformly distributed over the Pareto-optimal domain while making the algorithm converge fast. To improve the diversity and convergence of the non-dominated solution set, a dynamic crowding distance-based hybrid immune algorithm for multi-objective optimization was presented in this paper. The algorithm uses a dynamic crowding distance calculation to compare and update individuals in each subpopulation. Meanwhile, it borrows the mutation-guiding operator of differential evolution to strengthen the local search ability and improve the search precision of the immune optimization algorithm. Simulation results on five benchmark test problems, compared against three other efficient multi-objective optimization algorithms, indicate that the algorithm performs better in approximation, uniformity and coverage, and converges significantly faster than the compared algorithms.
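The crowding distance that drives the comparison of individuals can be sketched as below; in the dynamic variant used by this algorithm it would be recomputed after each removal rather than once per subpopulation. This is a generic sketch, not the paper's exact procedure.

```python
import numpy as np

def crowding_distance(objs):
    """objs: (n, m) array of objective values for one subpopulation.
    Returns one distance per individual; boundary individuals get
    infinity so they are always retained."""
    n, m = objs.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(objs[:, j])
        span = objs[order[-1], j] - objs[order[0], j] or 1.0
        dist[order[0]] = dist[order[-1]] = np.inf
        dist[order[1:-1]] += (objs[order[2:], j] - objs[order[:-2], j]) / span
    return dist
```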
Generation Mechanism and Interpretations of Paradoxes
WU Mei-hua, WANG Yong-jun, YANG Yi-chuan, WANG Xiao-yang
Computer Science. 2018, 45 (6A): 69-71. 
Based on some concrete paradoxes in computer science, this paper used diagonal arguments to explain the generation mechanism of a class of paradoxes, and pointed out that a deep reason for some paradoxes is self-reference. Moreover, from two new perspectives, quantum mechanics and category theory, this paper gave interpretations that accommodate paradoxes, rather than following the traditional method of avoiding a paradox by prohibiting self-reference. It shows that these models not only provide a reasonable explanation for some paradoxes, but also offer new ideas for understanding the essence of paradoxes.
Answering Word Sense Judgement Questions in Chinese Reading Comprehension
TAN Hong-ye, WU Yu-fei
Computer Science. 2018, 45 (6A): 72-74. 
Reading comprehension tasks require computers to answer relevant questions according to a given text. This paper studied word sense judgment in the context of reading comprehension questions from the Beijing Chinese college entrance examination, and proposed a framework based on support values calculated by n-gram statistics, PMI and sentence similarity. The experimental results show that all three methods perform well on both real and automatically generated data. Among them, the PMI-based support value performs best on real data, with the accuracy reaching 75%.
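A minimal sketch of the PMI-based support value, assuming raw unigram and co-occurrence counts over a background corpus; all names here are illustrative.

```python
import math

def pmi(w1, w2, unigram, cooccur, total):
    """unigram: {word: count}; cooccur: {(w1, w2): count};
    total: corpus size. Returns 0 when any count is missing."""
    p_xy = cooccur.get((w1, w2), 0) / total
    p_x = unigram.get(w1, 0) / total
    p_y = unigram.get(w2, 0) / total
    if 0 in (p_xy, p_x, p_y):
        return 0.0
    return math.log(p_xy / (p_x * p_y))
```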
Modeling and Decision-making of Futures Market Price Prediction with DBN
CHEN Jun-hua, HAO Yan-hui, ZHENG Ding-wen, CHEN Si-yu
Computer Science. 2018, 45 (6A): 75-78. 
Deep learning algorithms can approximate complex functions by learning deep nonlinear network structures, and can learn the essential characteristics of data sets from large numbers of unlabeled samples. The deep belief network (DBN) is a deep learning model: a Bayesian probabilistic generative model composed of multiple layers of random hidden variables. A DBN can be used as a pre-training stage for deep neural networks, providing initial weights for the network. This learning algorithm not only overcomes slow training, but also produces very good initial parameters, greatly enhancing the model's modeling capability. The financial market is a multi-variable, nonlinear system, and the DBN model can address problems, such as initial weight selection, that make other prediction methods difficult to apply. This paper used oil futures market price forecasting as an example to demonstrate the feasibility of using a DBN model to predict futures market prices.
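Greedy layer-wise pre-training of the kind described can be sketched with scikit-learn's BernoulliRBM: each RBM is trained on the hidden activations of the previous one, and its weights can then seed a deep network. The layer sizes and learning rate are illustrative assumptions.

```python
from sklearn.neural_network import BernoulliRBM

def pretrain_dbn(X, layer_sizes=(64, 32), epochs=10):
    """X: (n_samples, n_features) with values scaled to [0, 1].
    Returns the trained RBM stack; rbm.components_ hold the
    weights that would initialise the corresponding layers."""
    rbms, h = [], X
    for n_hidden in layer_sizes:
        rbm = BernoulliRBM(n_components=n_hidden,
                           learning_rate=0.05, n_iter=epochs)
        h = rbm.fit_transform(h)   # activations feed the next layer
        rbms.append(rbm)
    return rbms
```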
Structure Identification of Belief-rule-base Based on AR Model
CHEN Ting-ting, WANG Ying-ming
Computer Science. 2018, 45 (6A): 79-84. 
In applications of belief-rule-based reasoning to system control, the traditional belief K-means clustering algorithm cannot make full use of the dynamic temporal correlation information in the data. Therefore, based on the fuzzy clustering algorithm, the autoregressive (AR) model was introduced to dynamically cluster the uncertain demand in aggregate production planning as a set of time series. Compared with the traditional algorithm, the new algorithm has the following characteristics: it not only makes full use of the correlations within the aggregate demand data of the production plan, but also uses membership functions to fuzzily adjust the AR model's prediction process, so as to obtain a more ideal belief rule base structure and improve the accuracy of reasoning and decision-making.
Research on UAV Multi-point Navigation Algorithm Based on MB-RRT*
CHEN Jin-yin, HU Ke-ke, LI Yu-wei
Computer Science. 2018, 45 (6A): 85-90. 
With the widening application of UAVs, their automatic navigation capability plays an increasingly important role. As one of the more complex UAV missions, multi-point navigation requires an optimal or sub-optimal path, e.g. the fastest or shortest one, that traverses all specified destinations without colliding with known obstructions. Aiming at the problem of unordered multi-point path planning, the greedy MB-RRT* algorithm was proposed, which is based on MB-RRT* combined with the greedy strategy used to solve the TSP. The algorithm improves the speed of multi-point navigation by sacrificing a certain amount of path quality. Finally, the effectiveness of the algorithm was verified by simulation experiments in two-dimensional and three-dimensional environments.
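The greedy ordering component can be sketched as follows; here each leg is costed by straight-line distance, whereas in the algorithm above each leg would be planned (and costed) by MB-RRT* around the known obstacles.

```python
import numpy as np

def greedy_visit_order(start, targets):
    """Repeatedly fly to the nearest unvisited waypoint (greedy TSP)."""
    pos = np.asarray(start, dtype=float)
    remaining = [np.asarray(t, dtype=float) for t in targets]
    order = []
    while remaining:
        i = min(range(len(remaining)),
                key=lambda k: np.linalg.norm(remaining[k] - pos))
        pos = remaining.pop(i)
        order.append(tuple(pos))
    return order
```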
Automatic Keyword Extraction Based on BiLSTM-CRF
CHEN Wei, WU You-zheng, CHEN Wen-liang, ZHANG Min
Computer Science. 2018, 45 (6A): 91-96. 
Automatic keyword extraction is an important task of natural language processing (NLP), which provides technical support for personalized recommendation, online shopping and other applications. For this task, a new keyword extraction method based on a bidirectional long short-term memory network with a conditional random field (BiLSTM-CRF) was proposed. In this method, the extraction task is treated as a sequence labeling problem. Firstly, the input text is represented as low-dimensional dense vectors. Then, a classification layer predicts a tag for each word. Finally, a CRF layer decodes the whole sequence to obtain the tagging result. Experiments were conducted on large-scale real data, and the results show that the proposed method improves performance by about 1% over the baseline system.
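A skeleton of such a tagger, assuming PyTorch and the pytorch-crf package; the embedding/hidden sizes and a three-tag B/I/O scheme are illustrative assumptions, not the paper's exact configuration.

```python
import torch.nn as nn
from torchcrf import CRF   # pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    """Embeddings -> BiLSTM emissions -> CRF decoding over keyword tags."""
    def __init__(self, vocab_size, num_tags=3, emb=128, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden // 2, bidirectional=True,
                            batch_first=True)
        self.fc = nn.Linear(hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, tokens, tags, mask):
        emissions, _ = self.lstm(self.emb(tokens))
        return -self.crf(self.fc(emissions), tags, mask=mask)

    def predict(self, tokens, mask):
        emissions, _ = self.lstm(self.emb(tokens))
        return self.crf.decode(self.fc(emissions), mask=mask)
```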
Word Segmentation Based on Adaptive Hidden Markov Model in Oilfield
GONG Fa-ming, ZHU Peng-hai
Computer Science. 2018, 45 (6A): 97-100. 
Chinese word segmentation is the first step in constructing a petroleum-field ontology. Documents in the petroleum field have unique characteristics that make word segmentation more complex, and until now there has been no effective segmentation algorithm for them. Based on the hidden Markov model, an adaptive hidden Markov word segmentation model was proposed in this paper, which combines a domain-knowledge dictionary and user-defined information by introducing a terminology set. The proposed algorithm calibrates word segmentation under semantic and word-meaning constraints, and can accurately identify professional terms and character combinations in the petroleum field. Experiments also show that, compared with the NLPIR Chinese word segmentation system developed by the Chinese Academy of Sciences, the proposed algorithm achieves remarkable improvements in both precision and recall.
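The decoding core of any HMM segmenter is standard Viterbi over B/M/E/S character tags (Begin/Middle/End of a multi-character word, Single). The probability tables below are assumed to be estimated in log space from the domain corpus and terminology set.

```python
STATES = ["B", "M", "E", "S"]

def viterbi(chars, start_p, trans_p, emit_p):
    """chars: character sequence; start_p[s], trans_p[s1][s2] and
    emit_p[s][ch] are log-probabilities. Returns the best tag path."""
    V = [{s: start_p[s] + emit_p[s].get(chars[0], -1e9) for s in STATES}]
    path = [{s: [s] for s in STATES}]
    for ch in chars[1:]:
        V.append({})
        path.append({})
        for s in STATES:
            prev = max(STATES, key=lambda p: V[-2][p] + trans_p[p].get(s, -1e9))
            V[-1][s] = (V[-2][prev] + trans_p[prev].get(s, -1e9)
                        + emit_p[s].get(ch, -1e9))
            path[-1][s] = path[-2][prev] + [s]
    best = max(STATES, key=lambda s: V[-1][s])
    return path[-1][best]
```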
Language Understanding Model Based on Ontological Semantics Network
WANG Fei, YI Mian-zhu, TAN Xin
Computer Science. 2018, 45 (6A): 101-105. 
Traditional knowledge representations have a limited scope of knowledge and incomplete formal semantic descriptions, which hinders a computer's accurate portrayal of natural language. This paper proposed an ontological semantics network at the concept and lexical levels for semantic analysis. Its brain-like neural language network, drawing inspiration from the working principles of human brain neural cells, can accurately portray the different meanings of a word in different domains, understand text meaning, and cover the elements and characteristics of the process by which words are composed into sentences. Matrices are employed to further formalize the model, with singular value decomposition used to reduce the scale complexity, which makes it more convenient to describe the relationships between lexical semantics.
Text Similarity Calculation Algorithm Based on SA_LDA Model
QIU Xian-biao, CHEN Xiao-rong
Computer Science. 2018, 45 (6A): 106-109. 
Many information processing techniques rely on computing the similarity of texts. However, the traditional similarity calculation method based on the vector space model suffers from high dimensionality and poor semantic sensitivity, so its performance is not satisfactory. This paper proposed a self-adaptive LDA (SA_LDA) model based on the traditional LDA model, which can automatically determine the number of topics. Applying it to text similarity calculation solves the problem of high-dimensional, sparse representations. Experiments show that this method improves the accuracy of similarity calculation and the quality of text clustering compared with the VSM.
Time Series Similarity Based on Moving Average and Piecewise Linear Regression
FENG Yu-bo, DING Cheng-jun, GAO Xue, ZHU Xue-hong, LIU Qiang
Computer Science. 2018, 45 (6A): 110-113. 
Aiming at the problems that the Euclidean distance is sensitive to anomalous data and that the DTW distance algorithm is inefficient, a time series similarity method based on the moving average and piecewise linear regression was proposed. Firstly, the moving-average algorithm and piecewise linear regression are used to transform the original time series, and the parameters of the piecewise linear regression (intercept and slope) are taken as the features of the time series, so that feature extraction and dimensionality reduction are achieved. The distance is then calculated using dynamic time warping. The accuracy of the method is similar to that of the DTW algorithm, but the proposed method is almost 96% more efficient. The experimental results verify the effectiveness and accuracy of the method.
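Two of the building blocks can be sketched directly: the moving-average smoothing, and the DTW distance that is finally applied to the short per-segment feature sequences rather than to the raw series (which is where the reported speed-up comes from). The window size is illustrative, and the piecewise fitting step is omitted.

```python
import numpy as np

def moving_average(x, w=5):
    """Smooth the raw series before piecewise linear fitting."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

def dtw(a, b):
    """Plain dynamic time warping distance between 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```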
New Swarm Intelligence Algorithm: Lions Algorithm
ZHANG Cong-ming, LIU Li-qun, MA Li-qun
Computer Science. 2018, 45 (6A): 114-116. 
As optimization problems become nonlinear, high-dimensional and multi-objective, traditional optimization methods can no longer obtain ideal results. Intelligent algorithms are a good remedy for the shortcomings of traditional optimization methods. This paper proposed a new intelligent algorithm, called the lions algorithm. The lions algorithm places few demands on initial values, and has a fast optimization speed and strong global convergence ability. In this paper, the principle of the lions algorithm was given, its convergence performance and the influence of its parameters on convergence were analyzed, and it was compared with the artificial bee colony algorithm. Finally, the algorithm was applied to maximum power point tracking for photovoltaics, and its practical ability was verified by experiment and simulation.
Attribute Transfer and Knowledge Discovery Based on Formal Context
ZHENG Shu-fu, YU Gao-feng
Computer Science. 2018, 45 (6A): 117-119. 
The theory of concept lattices is an effective tool for knowledge representation and knowledge discovery, and is the basis of knowledge representation, knowledge discovery and knowledge acquisition. Based on the information entropy of the formal context and the theory of attribute importance, this paper discussed the characteristics of knowledge transfer for formal context attributes, obtained an attribute transfer principle based on the formal context, and presented the resulting knowledge discovery and its applications.
Hybrid Particle Swarm Optimization with Multiple Strategies
YU Wei-wei, XIE Cheng-wang
Computer Science. 2018, 45 (6A): 120-123. 
A hybrid particle swarm optimization with multiple strategies (HPSO) was proposed to address the tendency of particle swarm optimization (PSO) to fall into local optima and converge slowly on complicated optimization problems. The HPSO uses an opposition-based learning strategy to generate opposition-based solutions, which enlarges the search range of the particle swarm and enhances the global exploration ability of the algorithm. At the same time, in order to escape local optima, the HPSO applies Cauchy mutation to some of the poorer particles to generate individuals far from the local optimum, and differential evolution (DE) mutation is applied to the remaining individuals to improve local exploitation. These strategies are combined to balance global exploration and local exploitation, and are expected to solve hard optimization problems better. The HPSO was compared experimentally with three other well-known PSO variants on 10 benchmark test instances. The results show that the HPSO holds significant advantages over the compared algorithms in solution accuracy and convergence speed.
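The two escape mechanisms named above, opposition-based learning and Cauchy mutation, can be sketched as follows; the bounds, mutation scale and the choice of "poorer" particles are illustrative assumptions.

```python
import numpy as np

def opposition(X, lo, hi):
    """Opposition-based learning: reflect each particle within the
    search bounds, enlarging the explored region."""
    return lo + hi - X

def cauchy_mutate(x, scale=0.1):
    """Cauchy mutation: heavy tails give occasional long jumps that
    help poorer particles leave a local optimum."""
    return x + scale * np.random.standard_cauchy(size=x.shape)

# Illustrative use on a swarm of 20 particles in [-5, 5]^10:
X = np.random.uniform(-5, 5, size=(20, 10))
pool = np.vstack([X, opposition(X, -5, 5)])           # keep the fitter half
pool[-3:] = np.clip(cauchy_mutate(pool[-3:]), -5, 5)  # stand-in "poor" ones
```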
Group Search Optimization with Opposition-based Learning and Differential Evolution
ZOU Hua-fu, XIE Cheng-wang, ZHOU Yang-ping, WANG Li-ping
Computer Science. 2018, 45 (6A): 124-129. 
In general, the standard group search optimization algorithm (GSO) easily falls into local optima and converges slowly when solving some complex optimization problems. A group search optimization algorithm based on opposition-based learning and differential evolution (OBDGSO) was proposed in this paper. The OBDGSO uses the opposition-based learning operator to generate an opposite population, expanding the global exploration range. In addition, the differential evolution (DE) operator is used for local exploitation to improve solution accuracy. These two strategies are integrated into the GSO to better balance global convergence and local search. The OBDGSO was tested on 12 benchmark functions along with four other peer algorithms, and the experimental results show that it has significant performance advantages in solution accuracy and convergence speed.
Prediction of Uncertain Trajectory Based on Moving Object Motion in Complex Obstacle Space
GONG Hai-yan, GENG Sheng-ling
Computer Science. 2018, 45 (6A): 130-134. 
Most existing moving-object trajectory prediction operates in road network space. In real geographic environments, however, obstacles exist, and the movement of objects basically takes place in obstacle space. In recent years there have been many studies on queries in obstacle space, such as obstructed range queries and nearest neighbor queries, but no research on uncertain trajectory prediction for moving objects in obstacle space. For this reason, this paper proposed an uncertain trajectory prediction algorithm based on moving-object motion in obstacle space. Firstly, the obstacle space is pruned using the regional relations among obstacles. Secondly, the concept of expected obstructed distance is proposed, and the trajectory data in obstacle space are clustered to mine the hot-spot regions of moving objects. Next, according to the obstructed distances and the historical visiting habits of each hot-spot region, a Markov trajectory prediction algorithm based on motion laws is proposed. Finally, the accuracy and efficiency of the algorithm were verified by experiments.
Low Complexity Bayesian Sparse Signal Algorithm Based on Stretched Factor Graph
BIAN Xiao-li
Computer Science. 2018, 45 (6A): 135-139. 
A linear mathematical model with additive white Gaussian noise was established, and message passing algorithms based on sparse Bayesian learning (SBL) were studied for this model. The factor graph is modified by adding extra hard constraints, which enables the combined use of belief propagation (BP) and mean field (MF) message passing. This paper proposed a low-complexity BP-MF SBL algorithm, and on this basis an approximate BP-MF SBL algorithm was developed to further reduce complexity. The BP-MF SBL algorithms show their merits compared with state-of-the-art MF SBL algorithms: they deliver even better performance with much lower complexity than the vector-form MF SBL algorithm, and they significantly outperform the scalar-form MF SBL algorithm at similar complexity.
Pattern Recognition & Image Processing
Diagnosis of Alzheimer’s Disease Based on 3D-PCANet
LI Shu-tong, XIAO Bin, LI Wei-sheng, WANG Guo-yin
Computer Science. 2018, 45 (6A): 140-142. 
Deep learning technologies play an increasingly important role in computer-aided diagnosis (CAD) in medicine. However, they often face the problem that insufficient labeled data is available for learning millions of weights. This paper adopted an unsupervised approach to the problem of limited labels, and proposed a 3D-PCANet method based on unsupervised deep learning for computer-aided AD prediction on MRI images with limited labels. The proposed method uses the full 3D view of the MRI images. Experimental results show that it achieves promising performance in AD prediction.
Research of Image Classification Algorithm Based on GPU
LI Si-yao, ZHOU Hai-fang, FANG Min-quan
Computer Science. 2018, 45 (6A): 143-145. 
This paper introduced three classical image classification algorithms implemented on the GPU: Bayes, KNN and SNN. Co-processing between GPU and CPU is a frequently used pattern: the computation-heavy parts of the programs run on the GPU while the CPU performs control. The programs were tested, and the measured GPU speedups are 72.472, 149.536 and 125.39 respectively, on a Tesla K20c. The Bayes, KNN and SNN algorithms are all supervised classification methods. The experiments show the image classification results and running times, which meet the requirements.
Salient Object Detection Based on Dictionary and Weighted Low-rank Recovery
MA Xiao-di, WU Xi-yin, JIN Zhong
Computer Science. 2018, 45 (6A): 146-150. 
Salient object detection aims to identify salient areas in natural images. In order to improve detection results, a salient object detection method based on a dictionary and weighted low-rank recovery was proposed. Firstly, a dictionary is incorporated into the low-rank recovery model to better separate the low-rank matrix from the sparse matrix. Secondly, sparse matrices corresponding to the color, location and boundary connectivity priors are obtained, and adaptive coefficients are generated from their saliency values. Finally, a weight matrix is constructed from the adaptive coefficients of the three priors and merged into the low-rank recovery model. Compared with eleven state-of-the-art methods on four challenging databases, the experimental results show that the proposed approach outperforms them.
Novel Method of Improved Low Rank Linear Regression
YU Chuan-bo, NIE Ren-can, ZHOU Dong-ming, HUANG Fan, DING Ting-ting
Computer Science. 2018, 45 (6A): 151-156. 
Low-rank linear regression models are robust to influences such as occlusion and illumination changes. LRRR (Low Rank Ridge Regression) and DENLR (Discriminative Elastic-Net Regularized Linear Regression) reduce the overfitting of LRLR (Low Rank Linear Regression) to some extent by regularizing the coefficient matrix. However, because the error of approximating the data in the subspace is ill-considered, the data can hardly be mapped accurately to the target space via the projection matrix. This paper proposed a low-rank linear regression classification method that is faster and more discriminative. Firstly, a 0-1 constitutive matrix is used as the target value of the linear regression. Secondly, the nuclear norm is used as a convex surrogate of the low-rank constraint. Thirdly, the class-wise distance matrices and the model output matrix are regularized to reduce overfitting while enhancing the discriminability of the projection subspace. The augmented Lagrange multiplier (ALM) method is then used to optimize the objective function, and finally a nearest-neighbor classifier performs classification in the subspace. The related algorithms were compared on the AR and FERET face databases, the Stanford 40 Actions database, the Caltech-UCSD Birds database and the Oxford 102 Flowers database. The experimental results show that the proposed algorithm is effective.
Target Detection Algorithm Based on 9_7 Lifting Wavelet and Region Growth
CHEN Yong-fei, CUI Yan-peng, HU Jian-wei
Computer Science. 2018, 45 (6A): 157-161. 
For fast-moving targets in images, a target detection algorithm based on the 9_7 lifting wavelet and region growing was proposed. The algorithm first performs a 9_7 lifting wavelet transform on the image, which enlarges the illumination difference between the target and the background. It then filters out ambiguous targets, uses a region growing algorithm to find suspicious target areas in the image, and makes a coarse judgment of the target. Finally, according to the target's geometric features, combined with the background light intensity, it determines the target location in a single frame. The proposed algorithm not only simplifies the traditional algorithm, reduces the amount of code and improves detection accuracy, but also offers a large set of image processing interfaces and good applicability.
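A minimal sketch of the region-growing step, assuming a grayscale image, a single seed pixel and 4-connectivity; the intensity tolerance is illustrative.

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol=10):
    """Grow a region from `seed` (row, col): a pixel joins when its
    intensity is within `tol` of the seed intensity."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    base = int(img[seed])
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if not (0 <= y < h and 0 <= x < w) or mask[y, x]:
            continue
        if abs(int(img[y, x]) - base) > tol:
            continue
        mask[y, x] = True
        queue.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    return mask
```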
Infrared and Visible Images Fusion Using Visual Saliency and Dual-PCNN
HOU Rui-chao, ZHOU Dong-ming, NIE Ren-can, LIU Dong, GUO Xiao-peng
Computer Science. 2018, 45 (6A): 162-166. 
Aiming at the problems of uneven brightness, inconspicuous objects, low contrast and detail loss in existing infrared and visible image fusion methods, an image fusion method based on NSST and a visual-saliency-guided dual-channel PCNN was proposed. It combines the nonsubsampled shearlet transform (NSST), which offers multi-scale transformation and a maximally sparse representation; saliency detection, which has the advantage of highlighting infrared objects; and the dual-channel pulse coupled neural network (Dual-PCNN), which offers coupling and pulse synchronization. Firstly, the infrared and visible images are decomposed by NSST into high-frequency and low-frequency sub-band coefficients in each direction. The low-frequency coefficients are then fused by the Dual-PCNN guided by the saliency maps of the images, while for the high-frequency sub-band coefficients a modified spatial frequency is adopted as the input to motivate the Dual-PCNN. Finally, the fused image is reconstructed by the inverse NSST. The experimental results demonstrate that infrared objects in the fused image are highlighted and the details of the visible background are rich. Compared with other fusion algorithms, the proposed method shows a measurable improvement in both subjective and objective evaluation.
Research of Cloud Segmentation in Space Target Observation
WANG En-wang, WANG En-da, FAN Liang, ZHANG Jin-wei, HUANG Xue-hai
Computer Science. 2018, 45 (6A): 167-170. 
In the process of space debris observation, interference from clouds is a bottleneck for optical telescopes. This paper presented a new method for calculating cloud cover, dividing the detected cloud image into twenty-four angular sectors. The pixel values of each sector are counted; if the statistic exceeds a given threshold T, the sector is heavily clouded, while if it is below T, the sector is essentially free of cloud cover and the telescope can be guided to the corresponding position. The research shows that this method is simple and easy to implement; it enables automatic guidance of the telescope and improves observation efficiency.
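A sketch of the sector statistic, assuming a binary cloud-detection mask and sectors measured around the image centre; using an occupancy fraction rather than a raw pixel count, and the 20% threshold, are illustrative choices.

```python
import numpy as np

def clear_sectors(cloud_mask, n_sectors=24, threshold=0.2):
    """Return the indices of angular sectors whose cloud occupancy is
    below `threshold`, i.e. candidate pointing directions."""
    h, w = cloud_mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    ang = (np.arctan2(yy - h / 2, xx - w / 2) + np.pi) / (2 * np.pi)
    sector = np.minimum((ang * n_sectors).astype(int), n_sectors - 1)
    clear = []
    for s in range(n_sectors):
        pix = cloud_mask[sector == s]
        if pix.size and pix.mean() < threshold:
            clear.append(s)
    return clear
```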
Multi-feature Fusion Mean-Shift Tracking Algorithm Based on Prediction
GUO Yu, HAO Xiao-yan, ZHANG Xing-zhong
Computer Science. 2018, 45 (6A): 171-173. 
Video surveillance is now used widely in daily life, and tracking a designated target is one of its main applications as well as a difficult problem in computer vision. Real video scenes contain many complex changes in target appearance, such as partial occlusion and lighting changes, which strongly affect the Mean-Shift tracking algorithm. To solve the inaccurate tracking caused by such complex environments, this paper fused color and Gabor-LBP edge features into the Mean-Shift tracking algorithm, and introduced a quadratic polynomial to predict the position of the video target, improving tracking accuracy.
Research on Splicing Recovery of Broken Files Based on Intelligent Algorithms
HUO Min-xia, XUE Bo-huan
Computer Science. 2018, 45 (6A): 174-178. 
The splicing of broken files has important applications in recovering judicial evidence, restoring historical documents and acquiring military intelligence. With the development of science and technology, automatic splicing of broken files has become a research hotspot. For unidirectionally shredded files, this paper built two kinds of models based on the length and size of the fragments and on whether the text is English or Chinese, using 0-1 matching and Pearson-correlation-based gray matching. For files shredded in both the transverse and longitudinal directions, models were built and processed with cluster analysis and gray matching to realize fully automatic and semi-automatic recovery of the broken files.
Forward Vehicle Detection Research Based on Improved FAST R-CNN Network
SHI Kai-jing, BAO Hong
Computer Science. 2018, 45 (6A): 179-182. 
Current research on vehicle detection is mainly based on machine learning, but occlusion and false detection remain difficult to handle. In this paper, deep learning methods are used to detect forward vehicles more effectively. This paper first adopts the selective search method to obtain candidate regions of the sample images, and then uses an improved FAST R-CNN training network to detect forward vehicles on the road. The method was tested on the KITTI public vehicle dataset. The experimental results show that its detection rate is higher than that of direct detection based on a CNN, and the problems of occlusion and false detection are largely alleviated. Moreover, compared on the TSD-MAX traffic scene dataset with the widely used approach of extracting Haar-like features and applying an AdaBoost classifier, the proposed method achieves higher performance. The results show that this method improves the accuracy and robustness of vehicle detection.
License Plate Recognition Method Based on GMP-LeNet Network
LIN Zhe-cong, ZHANG Jiang-xin
Computer Science. 2018, 45 (6A): 183-186. 
As the core of intelligent traffic management systems, license plate recognition technology has important commercial prospects. Traditional license plate character recognition methods suffer from complex feature extraction. As an efficient recognition algorithm, the convolutional neural network has unique advantages in processing two-dimensional license plate images, but when the traditional convolutional neural network LeNet-5 recognizes license plate images, it faces a series of problems such as scarce training data, redundancy in the fully connected layers, and overfitting. A global mean pooling (GMP-LeNet) network was designed, which replaces the fully connected layers with convolutional layers. A 1*1 convolution kernel, borrowed from the NIN network, is used to reduce the channel dimension, and the global mean pooling layer then feeds the reduced feature maps directly to the output layer. Experiments show that the GMP-LeNet network suppresses overfitting effectively, with faster recognition and higher robustness. The final license plate recognition rate is close to 98.5%.
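The architectural idea (a NIN-style 1*1 convolution to set the channel count, then global mean pooling in place of the fully connected layers) can be sketched in PyTorch; the filter counts, class count and a 32x32 single-channel input are illustrative assumptions, not the paper's exact network.

```python
import torch.nn as nn

class GMPLeNet(nn.Module):
    """LeNet-style features, 1x1 channel reduction, global mean pooling."""
    def __init__(self, n_classes=36):     # illustrative class count
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, n_classes, 1),   # 1x1 conv sets channel count
        )
        self.gap = nn.AdaptiveAvgPool2d(1) # global mean pooling

    def forward(self, x):                  # x: (N, 1, 32, 32)
        return self.gap(self.features(x)).flatten(1)   # (N, n_classes)
```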
SD-OCT CSC NRD Region Segmentation Based on Region Restricted 3D Region Growing
HE Xiao-jun, WU Meng-lin, FAN Wen, YUAN Song-tao, CHEN Qiang
Computer Science. 2018, 45 (6A): 187-192. 
It is important to segment the neurosensory retinal detachment (NRD) region of central serous chorioretinopathy (CSC), because the volume of the CSC region plays a very important role in the diagnosis and study of CSC, and NRD is the most common and serious form of CSC. This paper presented an automated spectral-domain optical coherence tomography (SD-OCT) NRD segmentation method that segments the NRD lesion directly in 3D space. The segmentation of lesions in two-dimensional images is transformed into a three-dimensional segmentation problem, which makes full use of the three-dimensional structure information of the data and improves segmentation precision. Experimental results on 18 SD-OCT cubes indicate that the proposed method can segment the NRD accurately, with an average area coverage as high as 89.5%. Compared with four other segmentation methods, the proposed algorithm achieves the highest accuracy and the lowest time cost, giving it great advantages in clinical application and research.
Collision Detection Algorithm Based on Semi-transparent Color Overlay and Depth Value
LI Pu, SUN Chang-le, XIONG Wei, WANG Hai-tao
Computer Science. 2018, 45 (6A): 193-197. 
A fast image-space collision detection algorithm was proposed for verifying the assemblability of parts in virtual assembly. Firstly, it filters out non-colliding parts by overlaying translucent colors, identifying the potential collision areas. Then it calculates the minimum separation distance between the occluded objects and the assembly objects along the direction of motion, which compensates for the drawback of image-space collision algorithms that can only judge whether a collision occurred but cannot calculate the distance. Finally, this paper put forward a pixel-region partitioning strategy for the distance computation, in order to improve the detection precision of the algorithm. Test results show that the algorithm satisfies the real-time and accuracy requirements of a virtual assembly system on the whole.
Method of Facial Motion Capture and Data Virtual Reusing Based on Clue Guided
ZHANG Chao, JIN Long-bin, HAN Cheng
Computer Science. 2018, 45 (6A): 198-201. 
Most markerless facial expression capture methods capture only the planar position changes of facial expression movements and do not describe depth changes. To address this, a clue-guided facial motion capture and data virtual reusing method was proposed. Firstly, the method uses a monocular vision system to locate the main body of the face. Then, the facial landmark features are refined by a cascade regression model. The positional transformation relationships of the feature points in three-dimensional space are obtained from the active landmark cues of the facial features and the depth cues of the landmark features. Finally, facial skeleton node data are used to reconstruct facial expression motions. Experiments on online real-time facial expression capture show that this method not only achieves exact matching of the corresponding landmark features across different viewpoints, but also transfers real facial motion convincingly to virtual characters.
Graph-based Ratio Cut Model for Classification of High-dimensional Data and Fast Algorithm
ZHENG Shi-xiu, PAN Zhen-kuan, XU Zhi-lei
Computer Science. 2018, 45 (6A): 202-205. 
Data classification is an important part of data mining. With the growth of both data volume and data dimensionality, processing large-scale, high-dimensional data has become a key problem. In order to improve the accuracy of data classification, inspired by image segmentation algorithms in computer vision, an algorithm based on nonlocal operators was proposed for the classic Ratio Cut classification model. A new energy functional is formulated by introducing Lagrange multipliers, and it is solved by the alternating optimization method. Numerical experiments show that the accuracy and computational efficiency of the proposed algorithm are greatly improved compared with traditional classification methods.
Algorithm for Human Dorsal Vein Feature Identification
YAN Jiao-jiao, CHONG Lan-xiang, LI Ting
Computer Science. 2018, 45 (6A): 206-209. 
Current hand vein recognition methods that extract structural features through operations such as thinning and skeletonization easily lose vein structure details and misjudge feature points. This paper therefore proposed a hand vein recognition algorithm based on the histogram of oriented gradients (HOG). Following the general biometric identification process, the algorithm preprocesses the dorsal hand vein image with gray-level normalization and filtering enhancement, then extracts HOG texture features from the low-frequency sub-band image obtained by a two-level wavelet packet decomposition. Personal identity is then recognized with a K-nearest-neighbor classifier. The algorithm was verified on a self-built dorsal vein image database. The experimental results show that the proposed algorithm is effective, with a correct recognition rate of 95%, and it has broad application prospects.
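A generic HOG-plus-kNN identification sketch, assuming scikit-image and scikit-learn; here HOG is computed on the preprocessed image directly, standing in for the low-frequency wavelet-packet sub-band used in the paper, and the HOG cell sizes are illustrative.

```python
import numpy as np
from skimage.feature import hog
from sklearn.neighbors import KNeighborsClassifier

def hog_features(images):
    """HOG texture descriptors for preprocessed vein images."""
    return np.array([hog(im, orientations=9, pixels_per_cell=(16, 16),
                         cells_per_block=(2, 2)) for im in images])

def identify(train_imgs, train_ids, test_imgs, k=1):
    """Match test images against the enrolled gallery with k-NN."""
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(hog_features(train_imgs), train_ids)
    return clf.predict(hog_features(test_imgs))
```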
Pedestrian Detection Based on Objectness and Space-Time Covariance Features
LIU Chun-yang, WU Ze-min, HU Lei, LIU Xi
Computer Science. 2018, 45 (6A): 210-214. 
In order to fuse spatio-temporal information and avoid excessive detection regions in pedestrian detection, a pedestrian detection method based on objectness and space-time covariance features was proposed. Firstly, the binarized normed gradients algorithm is applied to a test image to obtain objectness scores, forming a candidate area for pedestrian detection. Secondly, the spatial and temporal features are extracted. Finally, a space-time detector based on covariance information is used to improve accuracy. Experimental results on the INRIA and Caltech datasets demonstrate that the proposed method outperforms state-of-the-art pedestrian detectors in accuracy.
Texture Synthesis Based on Self-similarity Matching
ZHU Rui-chao, QIAN Wen-hua, PU Yuan-yuan, XU Dan
Computer Science. 2018, 45 (6A): 215-219. 
The Image Quilting algorithm is a classic example-based texture synthesis algorithm, but its speed and seam quality still leave room for improvement. Starting from the block matching error, an improved method based on self-similarity matching was proposed, which effectively increases stitching speed, widens the range of application and improves synthesis quality. The algorithm first determines the matching block size dynamically according to the sample size. Then, on the principle of self-similarity matching, it sets the block boundary matching error and retains the boundary information of stitched blocks. During stitching, a greedy algorithm selects the block with the highest degree of agreement as the next block to be stitched. The experimental results show that the improved algorithm improves the time efficiency of synthesis, enhances the stitching between blocks, and improves the final synthesis result.
Research on Multi Video Vehicle Tracking Based on Mean Shift
ZHU Hao-nan, XU Ming-min, SHEN Ying
Computer Science. 2018, 45 (6A): 220-226. 
In order to improve the accuracy of target tracking across multiple videos, a vehicle tracking method based on Mean Shift combined with visual words was proposed. The method uses Mean Shift with contour and color information for initial matching and tracking. For differences in vehicle viewing angle and environment across videos, a scale-invariant identification method was proposed that uses bag-of-visual-words features as the vehicle features for re-matching. The method can determine the specific location of the target vehicle using the cameras of a highway network. Experimental results show that the Mean Shift based multi-video vehicle tracking method improves the accuracy of vehicle tracking.
Yawning Detection Algorithm Based on Convolutional Neural Network
MA Su-gang, ZHAO Chen, SUN Han-lin, HAN Jun-gang
Computer Science. 2018, 45 (6A): 227-229. 
Yawning detection can warn drivers of fatigued driving behavior, thereby reducing traffic accidents. A yawning detection algorithm based on a convolutional neural network was proposed. The driver's facial image is used directly as the network input, avoiding complex explicit feature extraction from the facial image. A Softmax classifier classifies the features extracted by the network to determine whether the behavior is a yawn. The algorithm achieves 92.4% accuracy on the YawDD dataset. Compared with other existing algorithms, the proposed method offers high detection accuracy and simple implementation.
Vehicle Recognition Based on Super-resolution and Deep Neural Networks
LEI Qian, HAO Cun-ming, ZHANG Wei-ping
Computer Science. 2018, 45 (6A): 230-233. 
Vehicle recognition plays a key role in traffic video surveillance systems. In this paper, deep neural networks and super-resolution were used to realize vehicle recognition in traffic surveillance. The deep convolutional neural network CaffeNet performs vehicle recognition using the Caffe deep learning framework on a computationally powerful GPU. In the image preprocessing stage, an image super-resolution reconstruction algorithm based on deep learning and sparse representation is used to enhance the detail information of the images. First, based on auto-encoders, an improved model of non-negative sparse denoising auto-encoders (NSDAE) was proposed to realize joint dictionary learning. Then, sparse representation was used to achieve super-resolution reconstruction of the vehicle images. Experimental results show that the accuracy of vehicle recognition improves obviously after adding super-resolution processing.
Improved Sparsity Adaptive Matching Pursuit Algorithm
WANG Fu-chi, ZHAO Zhi-gang, LIU Xin-yue, LV Hui-xian, WANG Guo-dong, XIE Hao
Computer Science. 2018, 45 (6A): 234-238. 
The sparsity adaptive matching pursuit (SAMP) algorithm is a widely used compressive sensing reconstruction algorithm for the case where the sparsity is unknown. To optimize the performance of SAMP, an improved sparsity adaptive matching pursuit (ISAMP) algorithm was proposed. The proposed algorithm introduces the generalized Dice coefficient as the matching criterion, improving its ability to select the atom of the measurement matrix that best matches the residual signal. Meanwhile, it uses a threshold method to select the preliminary set and adopts an exponentially varying step size during iteration. Experimental results show that the proposed algorithm improves reconstruction quality and computation time.
Image Edge Detection Method Based on Kernel Density Estimation
ZHOU Jian, XU Hai-qin
Computer Science. 2018, 45 (6A): 239-241. 
There are many algorithms for image edge detection; among them, the detectors based on the Sobel, Laplacian and Canny operators are classic. The proposed method differs from these differential-operator methods: kernel density estimation is performed over a small window around each pixel to obtain a kernel density map, and an appropriate bandwidth or threshold is then chosen on this map to control edge detection. Experimental results show that this method is feasible, simple and fast.
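One plausible reading of the windowed estimate, under the assumptions that a Gaussian kernel is used and that edge pixels show a low density of their own intensity among their neighbours; the bandwidth and window size are illustrative, and the paper's exact construction may differ.

```python
import numpy as np
from scipy.ndimage import generic_filter

def kde_edge_map(img, bw=10.0, size=5):
    """Per-pixel kernel density of the centre intensity within a small
    window; inverting the map makes likely edge pixels bright, and a
    threshold on it yields the edge detection."""
    def centre_density(window):
        c = window[window.size // 2]
        return np.exp(-0.5 * ((window - c) / bw) ** 2).mean()
    dens = generic_filter(img.astype(float), centre_density, size=size)
    return 1.0 - dens / dens.max()
```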
Study and Application of Improved Retinex Algorithm in Image Defogging
LIU Yang, ZHANG Jie, ZHANG Hui
Computer Science. 2018, 45 (6A): 242-243. 
Images obtained in foggy weather are indistinct and their overall brightness is high. The Retinex algorithm is an image enhancement algorithm with many advantages, such as color constancy and fast processing, but it also has disadvantages, such as poor results when processing bright images. The experimental results prove that the improved algorithm overcomes the above disadvantages and produces better image enhancement; it is an algorithm with strong adaptability and high robustness.
Projection Image Library Design Method for Aircraft CAD Model with Accurate Pose
FU Tai, YANG Li, WANG Bin
Computer Science. 2018, 45 (6A): 244-246. 
With the development of CAD technology and 3D scene understanding in recent years, accurate pose estimation based on a target's 3D CAD model has become an important method. However, using a CAD model directly often requires invoking commercial CAD software from the program, which not only requires configuring a large number of compatibility files and places high demands on graphics hardware, but also suffers low efficiency from frequent software-level data exchange. Therefore, an image library with ground-truth pose values can be obtained by projecting the CAD model, greatly simplifying the problem by transforming it from three dimensions into two. For ease of program integration, OpenGL is used as the display tool for the CAD model. Firstly, the internal and external parameters of the virtual camera are calibrated. Then, exploiting the relativity of motion, the virtual camera is kept fixed while the pitch and azimuth of the target model are swept over 0~90 degrees and 0~360 degrees respectively, projecting the model at 1-degree intervals at the set distance. The result is a target projection image library with accurate poses.
Face Recognition Using 2D Gabor Feature and 3D NP-3DHOG Feature
WANG Xue-qiao,QI Hua-shan, YUAN Jia-zheng, LIANG Ai-hua, SUN Li-hong
Computer Science. 2018, 45 (6A): 247-251. 
Abstract PDF(1565KB) ( 628 )   
References | RelatedCitation | Metrics
Face recognition algorithms based on 2D images extract texture features for recognition, but lighting, facial expressions and facial poses can affect them adversely. 3D face features accurately describe the geometric structure of the face and are barely affected by makeup and lighting, but they lack texture information, so this paper fused two kinds of features for face recognition: a Gabor-based 2D face feature and a new-partitioning 3D histograms of oriented gradients (NP-3DHOG) feature. Firstly, the Gabor feature of the 2D face is extracted, and then the new-partitioning 3D histograms of oriented gradients feature is extracted to obtain a discriminative 3D face feature. Secondly, the linear discriminant analysis subspace algorithm is used to train the two subspaces respectively. Finally, the sum rule is used to fuse the two similarity matrices, and the nearest neighbor classifier is applied to finish the recognition process.
Study of FCM Fusing Improved Gravitational Search Algorithm in Medical Image Segmentation
FENG Fei, LIU Pei-xue,LI Li,CHEN Yu-jie
Computer Science. 2018, 45 (6A): 252-254. 
Abstract PDF(1527KB) ( 754 )   
References | RelatedCitation | Metrics
In order to improve the performance of the fuzzy c-means clustering algorithm for medical image segmentation, this paper presented a new hybrid method. The method uses the fuzzy c-means clustering algorithm (FCM) to divide the image pixel space into homogeneous areas, and the gravitational search algorithm is fused into FCM to find the optimal clustering centers and minimize the fitness function value of fuzzy c-means clustering. Experimental results show that, compared with the traditional clustering algorithm, this method is more effective in segmenting different types of images.
Improved Neighborhood Preserving Embedding Algorithm
LOU Xue, YAN De-qin, WANG Bo-lin, WANG Zu
Computer Science. 2018, 45 (6A): 255-258. 
Abstract PDF(1642KB) ( 731 )   
References | RelatedCitation | Metrics
Neighborhood preserving embedding (NPE) is a subspace learning algorithm that preserves the local neighborhood structure of the sample set while reducing dimensionality. In order to further improve the recognition performance of NPE in face recognition and speech recognition, this paper proposed an improved neighborhood preserving embedding algorithm (RNPE). On the basis of NPE, an inter-class weight matrix is introduced so that the between-class scatter is maximized and the within-class scatter is minimized, adding a distribution constraint between classes. Classification experiments were conducted with an extreme learning machine (ELM) classifier on the Yale face database, the Umist face database and the Isolet speech database. The results show that the recognition rate of RNPE is significantly higher than that of NPE and other traditional algorithms.
Low-contrast Crack Extraction Method Based on Image Enhancement and Watershed Segmentation
ZHOU Li-jun
Computer Science. 2018, 45 (6A): 259-261. 
Abstract PDF(1533KB) ( 867 )   
References | RelatedCitation | Metrics
In tunnel crack detection in real scenes, there exist small, low-contrast and stain-interfered cracks that are difficult to extract by conventional methods. To solve this problem, a crack detection method based on image enhancement and watershed segmentation was proposed. In this method, interfering stains are removed to balance the image background contrast, and the image is further enhanced by top-hat and bottom-hat transformations. Segmentation lines are then obtained by the watershed algorithm, and the crack edges are extracted according to the difference between the gray value along a segmentation line and that of its surroundings. Experimental results show that the proposed method detects tunnel cracks accurately and effectively and is also robust to noise.
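The enhancement and segmentation steps above map naturally onto standard OpenCV primitives. The sketch below is one plausible reading of the pipeline on a synthetic low-contrast crack image, not the author's implementation; the structuring-element size and thresholding choices are assumptions.

    import numpy as np
    import cv2

    # synthetic low-contrast crack image standing in for a real tunnel image
    rng = np.random.default_rng(2)
    img = np.full((128, 128), 120, np.uint8)
    cv2.line(img, (10, 10), (110, 120), 100, 1)                  # faint crack
    img = cv2.add(img, rng.integers(0, 10, img.shape, dtype=np.uint8))

    # top-hat/bottom-hat contrast enhancement
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    tophat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel)
    bottomhat = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, kernel)
    enhanced = cv2.subtract(cv2.add(img, tophat), bottomhat)

    # watershed segmentation; boundary pixels (-1) are candidate crack edges
    _, binary = cv2.threshold(enhanced, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    _, markers = cv2.connectedComponents(binary)
    markers = cv2.watershed(cv2.cvtColor(enhanced, cv2.COLOR_GRAY2BGR),
                            markers + 1)
    crack_edges = markers == -1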
Network & Communication
Performance Evaluation and Optimization of Inter-cores Communication for Heterogeneous
Multi-core Processor Unit
LUO Shu-yan, ZHU Yi-an, ZENG Cheng
Computer Science. 2018, 45 (6A): 262-265. 
Abstract PDF(1622KB) ( 1332 )   
References | RelatedCitation | Metrics
With the continuous development of embedded technology, more and more systems perform high-performance computing with heterogeneous multi-processor units (HMPU), but the efficiency of inter-processor communication seriously restricts this capability. To address the difficulty of quantifying inter-processor communication performance, this paper presented a stage-oriented assessment model based on three influence factors: communication granularity, communication cache and message transmission mechanism. The influence of these factors on inter-processor communication performance at different stages was verified by experiments. Because the environment of an embedded system is changeable and its resources are limited, a static communication strategy limits system performance optimization. To solve this problem, this paper proposed the dynamic communication strategy optimization model (DCSOM) based on memory constraints, time constraints and performance goals. Experiments show that the dynamic communication strategy is more advantageous for multi-core processor units with small data volumes and long periods.
Novel Energy Detection Method and Detection Performance Analysis
CAO Kai-tian, HANG Yi-ling
Computer Science. 2018, 45 (6A): 266-269. 
Abstract PDF(1608KB) ( 670 )   
References | RelatedCitation | Metrics
To overcome the disadvantage that existing small-sample-size-based energy detection (ED) methods only obtain approximations of the detection performance of ED in AWGN (additive white Gaussian noise), a more tractable and accurate closed-form expression for the detection probability of ED over a Rayleigh fading channel was derived by exploiting recent results on the generalized Marcum Q-function, and its performance was analyzed. Both theoretical analysis and simulation results show that, compared with approximate analyses of ED detection performance such as the CLT (central limit theorem)-based approach, the CoG (cube-of-Gaussian)-based method and other approximations, the proposed scheme yields more robust and accurate detection performance.
Workload Forecasting Method in Cloud
JIANG Wei,CHEN Yu-zhong,HUANG Qi-cheng,LIU Zhang-hui,LIU Geng-geng
Computer Science. 2018, 45 (6A): 270-274. 
Abstract PDF(1554KB) ( 1414 )   
References | RelatedCitation | Metrics
Cloud computing is a model of computing and service based on information networks; it provides information technology resources to users in a dynamic and flexible way, and users consume them on demand. Due to host startup time, resource allocation time, task scheduling time and other factors, service provisioning in the cloud suffers from delays, so workload prediction is an important means of energy optimization in cloud environments. In addition, the strong fluctuation of cloud workloads increases the difficulty of prediction. This paper presented a prediction model, HARMA-E (Hybrid Auto Regressive Moving Average model and Elman neural network), based on the autoregressive moving average model and the Elman neural network. Firstly, the ARMA model makes a prediction; then an ENN model predicts the errors of the ARMA model, and the final prediction value is obtained by using the predicted error to correct the ARMA value. Experimental results show that the proposed method can effectively improve the prediction accuracy of host workloads.
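A small end-to-end sketch of this two-stage idea on synthetic data, assuming the common error-correction form (final forecast = ARMA forecast + predicted ARMA error). An MLP regressor stands in for the Elman network, which has no off-the-shelf scikit-learn implementation; the series, lag depth and model orders are all invented.

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    load = 50 + 10 * np.sin(np.arange(300) / 10) + rng.normal(0, 2, 300)

    # step 1: ARMA fit and in-sample predictions
    arma = ARIMA(load, order=(2, 0, 2)).fit()
    errors = load - arma.predict()

    # step 2: learn to predict the ARMA error from its recent history
    lag = 5
    X = np.column_stack([errors[i:len(errors) - lag + i] for i in range(lag)])
    y = errors[lag:]
    nn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                      random_state=0).fit(X, y)

    # step 3: corrected forecast = ARMA forecast + predicted error
    next_error = nn.predict(errors[-lag:].reshape(1, -1))[0]
    forecast = arma.forecast(1)[0] + next_error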
TOPSIS Based Model for Satellite Resource Selection
YUAN Wei-wei,MENG Fan-lun,PENG Jun,SUN Xiao,LIU Ri-chu
Computer Science. 2018, 45 (6A): 275-278. 
Abstract PDF(1538KB) ( 656 )   
References | RelatedCitation | Metrics
Resource selection is one of the key problems in satellite resource planning and allocation, and the choice among resources affects both system resource efficiency and the customer's experience. Based on analysis and survey, a satellite resource selection evaluation index system was constructed, with advantages such as wide coverage, strong objectivity and ease of automatic measurement. A TOPSIS-based evaluation model was then built and used to produce a fully ordered result, which helps to find the most suitable frequency resource. Good results have been obtained in real work.
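TOPSIS itself is a standard multi-criteria ranking procedure; the sketch below implements the generic method on a toy decision matrix. The criteria, weights and benefit/cost directions are placeholders, since the paper's actual index system is not reproduced in the abstract.

    import numpy as np

    def topsis(X, w, benefit):
        # rank alternatives (rows of X) by closeness to the ideal solution;
        # w: criterion weights, benefit: True where larger is better
        V = X / np.linalg.norm(X, axis=0) * w        # weighted normalized matrix
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - worst, axis=1)
        return d_neg / (d_pos + d_neg)               # higher score is better

    # hypothetical satellite resources scored on 3 criteria
    X = np.array([[0.7, 30.0, 5.0], [0.9, 45.0, 3.0], [0.6, 20.0, 4.0]])
    w = np.array([0.5, 0.3, 0.2])
    scores = topsis(X, w, benefit=np.array([True, False, True]))
    best = int(scores.argmax())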
Study of Sub-channel and Power Allocation in Ultra-dense Networks
TAN Bo-wen,WANG Gang,YAO Wen
Computer Science. 2018, 45 (6A): 279-282. 
Abstract PDF(1544KB) ( 552 )   
References | RelatedCitation | Metrics
In ultra-dense networks, severe inter-cell interference restricts users' data rates. To address this problem, a new priority-based clustering resource allocation scheme was proposed. The scheme has three steps. First, it clusters femtocell access points (FAPs) with a graph-coloring algorithm. Second, it takes the amount of data to be transmitted, the queuing delay and the interference intensity of each femtocell user equipment (FUE) in a cluster as the priority and computes the priority of each cluster; for example, clusters with high priority receive sub-channels with good channel gain first. Finally, it allocates power to FUEs using the Karush-Kuhn-Tucker (KKT) conditions in a water-filling fashion. Simulation results show that the proposed scheme can effectively reduce inter-femtocell interference and largely satisfy users' needs while improving throughput and spectrum efficiency.
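The final power-allocation step is the classic water-filling solution that falls out of the KKT conditions. Below is a textbook sketch of that step alone; channel gains, noise power and total power budget are made-up inputs, and the paper's clustering and priority logic are not reproduced.

    import numpy as np

    def water_filling(gains, p_total, noise=1.0):
        # allocate p_k = max(0, mu - noise/g_k) with sum(p_k) = p_total
        order = np.argsort(gains)[::-1]                 # strongest channels first
        g = gains[order]
        p = np.zeros_like(g)
        for k in range(len(g), 0, -1):
            mu = (p_total + np.sum(noise / g[:k])) / k  # candidate water level
            if mu - noise / g[k - 1] >= 0:              # all k channels active
                p[:k] = mu - noise / g[:k]
                break
        out = np.zeros_like(p)
        out[order] = p
        return out

    powers = water_filling(np.array([2.0, 0.5, 1.2, 0.1]), p_total=4.0)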
Total Communication and Efficiency Analysis of Large Scale Networks
YAN Jia-qi, CHEN Jun-hua, LENG Jing
Computer Science. 2018, 45 (6A): 283-289. 
Abstract PDF(1614KB) ( 671 )   
References | RelatedCitation | Metrics
The centrality measure of nodes has always been a hot topic in complex network research. This paper studied the concept of total communicability, defined through sums of functions of the network adjacency matrix, mainly the matrix exponential and the resolvent, which have natural interpretations in terms of walks on the underlying graph. The research showed that these quantities can be computed very quickly even for large networks. In addition, this paper proposed the sum of node communicabilities as a valid measure of network connectivity, measuring the degree of communication between each node and the other nodes in the network. The node centrality measures were compared with related methods on synthetic and real network data. The results show that total communicability can serve as a connectivity measure for the overall flow of information on a given network, and has broad application prospects.
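On a small graph the exponential-based quantity can be computed directly; the sketch below evaluates per-node total communicability as row sums of exp(A). For large networks one would use Krylov-type methods rather than a dense matrix exponential, consistent with the fast computation the paper emphasizes; the adjacency matrix here is a toy example.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    node_tc = expm(A) @ np.ones(A.shape[0])   # per-node total communicability
    network_tc = node_tc.sum()                # network-level total communicability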
Task Scheduling Scheme Based on Sharing Mechanism and Swarm Intelligence
Optimization Algorithm in Cloud Computing
FU Xiao
Computer Science. 2018, 45 (6A): 290-294. 
Abstract PDF(1554KB) ( 658 )   
References | RelatedCitation | Metrics
In order to improve virtual machine (VM) utilization in cloud computing and reduce task completion time, a hybrid intelligent optimization algorithm with a solution-sharing mechanism was proposed for dynamic scheduling of cloud tasks. First, virtual machine schedules are encoded as bees, ants and genetic individuals. Then artificial bee colony (ABC), ant colony optimization (ACO) and the genetic algorithm (GA) each search for the optimal solution in their own neighborhoods. Finally, through the sharing mechanism, the three algorithms regularly exchange solutions, and the best among them is taken as the current optimum for the next iteration, accelerating convergence and improving its accuracy. CloudSim simulation results show that the proposed hybrid algorithm schedules tasks reasonably and effectively, with superior task completion time and stability.
Simulation for Integrated Space-Ground TT&C and Communication Network Routing Algorithm
LI Zhi-yuan,LI Jing,ZHANG Jian
Computer Science. 2018, 45 (6A): 295-299. 
Abstract PDF(1589KB) ( 867 )   
References | RelatedCitation | Metrics
Based on a heterogeneous space network of GEO, MEO and LEO satellites, and considering the heterogeneity, self-organization, self-healing and cooperation characteristics of ubiquitous networks, this paper proposed a definition of the integrated space-ground TT&C and communication network, built a multi-layer satellite network on a "backbone and access" model, and studied a routing technique for when the satellite network is partitioned. Finally, network performance was analyzed and simulated under CGR, random and flooding routing algorithms with different numbers of failed nodes. The results verify that the network is more self-healing and survivable when the CGR routing algorithm is used.
Task Scheduling Algorithm Based on DO-GAPSO under Cloud Environment
SUN Min, CHEN Zhong-xiong, LU Wei-rong
Computer Science. 2018, 45 (6A): 300-303. 
Abstract PDF(1547KB) ( 575 )   
References | RelatedCitation | Metrics
In cloud computing task scheduling, optimizing the scheduling strategy from a single aspect cannot satisfy user demands, while optimizing it from several aspects raises weight assignment problems. Focusing on these problems and considering completion time, cost and service quality, a dynamic-objective algorithm based on particle swarm optimization and the genetic algorithm (DO-GAPSO) was proposed, in which a dynamic linear weighting policy is introduced into the fitness function modeling. Simulation experiments were conducted on the CloudSim platform, comparing discrete particle swarm optimization (DPSO) and the double fitness genetic algorithm (DFGA) with the proposed algorithm under the same conditions. The experimental results show that the proposed algorithm outperforms the other two in execution efficiency and optimization ability, and is an effective task scheduling algorithm for cloud computing environments.
Workflow Energy-efficient Scheduling Algorithm in Cloud Environment with QoS Constraint
LI Ting-yuan, WANG Bo-yan
Computer Science. 2018, 45 (6A): 304-309. 
Abstract PDF(1632KB) ( 583 )   
References | RelatedCitation | Metrics
Cloud platforms provide a high-efficiency, reliable execution environment for scheduling large-scale workflows. However, the high energy consumption resulting from workflow execution not only increases the economic cost for cloud resource providers, but also influences system reliability and has a negative effect on the environment. To meet a user-defined deadline QoS requirement while reducing the energy consumption of workflow scheduling in the cloud, a workflow energy-efficient scheduling algorithm, QCWES, was proposed. QCWES divides energy-efficient workflow scheduling into three phases: deadline redistribution, scheduled-task ordering, and DVFS-based best resource selection. The deadline redistribution phase redistributes the user-defined overall workflow deadline among all tasks; the task ordering phase obtains the scheduling order of tasks by top-down task leveling; the DVFS-based selection phase chooses, for each task, the best available resource with an appropriate voltage/frequency level so that total energy consumption is minimized while the task's sub-deadline is met. Simulation experiments on random workflows and a real-world workflow based on Gaussian elimination were used to evaluate the algorithm. The results show that QCWES can reduce the energy consumption of workflow scheduling while meeting the deadline constraint, achieving a trade-off between users' QoS requirements and resource energy consumption.
Blind Recognition of RSC Code Generated Polynomial Based on Variable Step Size of Gradient Search
WU Zhao-jun, ZHANG Li-min, ZHONG Zhao-gen
Computer Science. 2018, 45 (6A): 310-313. 
Abstract PDF(1680KB) ( 609 )   
References | RelatedCitation | Metrics
For blind recognition of RSC code generator polynomials, a nonlinear function relating step size to the gradient was established in the M-step of the EM algorithm, yielding a novel variable-step-size gradient search algorithm derived from analysis of the signal model. Compared with the fixed-step algorithm, the new algorithm has better recognition performance at low SNR and strong noise resistance, and its parameter estimation curve converges considerably faster. Computer simulations show that the proposed algorithm converges to the true value by about the 4th iteration, while the fixed-step-size algorithm needs more than 20 iterations. As for noise resistance, Monte Carlo trials show that the correct recognition ratio exceeds 80% at an SNR of 0 dB.
Community Label Detection Algorithm Based on Potential Background Information
SONG Yan-qiu, LI Gui-jun, LI Hui-jia
Computer Science. 2018, 45 (6A): 314-317. 
Abstract PDF(1639KB) ( 555 )   
References | RelatedCitation | Metrics
In recent years, community structure analysis has attracted much attention in many fields. It aims to partition the nodes of a graph into clusters such that each cluster is densely connected internally and homogeneous in attribute values. Existing methods mainly assume that nodes cooperate to optimize a given objective function, ignoring their background information in real-life contexts. Based on potential theory, this paper proposed a new semi-supervised community detection algorithm that uses the electrostatic field generated by labeled nodes to determine the labels (community memberships) of unlabeled nodes. A certain number of nodes are first given user-defined labels, and a sparse linear system is then solved to compute the labels of the remaining nodes, each node taking the label with the maximum potential value. Comparison with existing algorithms shows that the proposed algorithm has strong detection ability on real-world and artificial benchmark networks, and remains accurate even for fuzzy large-scale community structure.
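One plausible concretization of "solve a sparse linear system for potentials, then take the arg-max label": treat labeled nodes as fixed unit potentials and solve the grounded graph-Laplacian system for the rest, once per label. This is a sketch of that generic construction (the toy graph and seed choice are invented), not the paper's exact formulation.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.linalg import spsolve

    def potential_labels(A, seeds, n_labels):
        # seeds: dict {node index: label}; labeled nodes act as unit charges
        n = A.shape[0]
        L = csr_matrix(np.diag(A.sum(axis=1)) - A)      # graph Laplacian
        labeled = np.array(sorted(seeds))
        free = np.setdiff1d(np.arange(n), labeled)
        phi = np.zeros((n, n_labels))
        for c in range(n_labels):
            b = np.zeros(n)
            for i in labeled:
                b[i] = 1.0 if seeds[i] == c else 0.0
            # solve L_ff * phi_f = -L_fl * phi_l for the unlabeled block
            rhs = -L[free][:, labeled] @ b[labeled]
            phi[free, c] = spsolve(L[free][:, free].tocsc(), rhs)
            phi[labeled, c] = b[labeled]
        return phi.argmax(axis=1)

    # toy graph: two triangles joined by one edge, one seed per community
    A = np.array([[0,1,1,0,0,0],[1,0,1,0,0,0],[1,1,0,1,0,0],
                  [0,0,1,0,1,1],[0,0,0,1,0,1],[0,0,0,1,1,0]], float)
    labels = potential_labels(A, {0: 0, 5: 1}, 2)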
Position Prediction Algorithm Based on IRWQS and Fuzzy Features
CHEN Bo,ZHANG Yun-he, QIU Shao-ming, WANG Yun-ming
Computer Science. 2018, 45 (6A): 318-322. 
Abstract PDF(1563KB) ( 498 )   
References | RelatedCitation | Metrics
Since existing two-dimensional position prediction algorithms can hardly reflect the influence of terrain on prediction accuracy, this paper proposed a position prediction algorithm based on IRWQS (incremental repetition weighing queue strategy) and fuzzy features. Firstly, three-dimensional position coordinates obtained from the BeiDou satellite navigation system are extracted and loaded into a database, and an online incremental weighting queue scan is performed using chained database operations. Secondly, the optimal position coordinates are obtained through a fuzzy feature matching algorithm to accurately determine the coordinates and movement trend of the next position. Experimental results show that, compared with the MMTS and UCMBS algorithms, the prediction accuracy of this algorithm increases by about 9% and 25% on average, respectively.
NAT Device Detection Method Based on C5.0 Decision Tree
SHI Zhi-kai,ZHU Guo-sheng,LEI Long-fei,CHEN Sheng,ZHEN Jia,WU Shan-chao,WU Meng-yu
Computer Science. 2018, 45 (6A): 323-327. 
Abstract PDF(1539KB) ( 838 )   
References | RelatedCitation | Metrics
NAT hides the internal network structure from the external network. On the one hand, it facilitates access by illicit terminals, posing potential threats to the network; on the other hand, users can privately share network access through NAT, which directly harms the interests of network operators. Effectively detecting NAT devices therefore plays an important role in network security and control as well as network operation and management. This article analyzed and compared current NAT detection technologies, describing the advantages, disadvantages and applicable conditions of each. A C5.0 decision-tree-based NAT device detection method using features of upper-level applications and training data was proposed. Experiments with real network traffic data show that the model can identify NAT devices effectively.
Dynamic Frame Time Slot ALOHA Algorithm Based on Dynamic Factor Mean
ZHOU Shao-ke, ZHANG Zhen-ping, CUI Lin
Computer Science. 2018, 45 (6A): 328-331. 
Abstract PDF(1593KB) ( 746 )   
References | RelatedCitation | Metrics
The dynamic frame slot ALOHA algorithm is an improvement on the probabilistic ALOHA algorithm: within a certain range, the number of frame time slots can be increased dynamically as the number of tags to identify increases. However, when a large number of tags must be recognized, limitations of the reader hardware greatly reduce resource utilization and system throughput. To address this problem, this paper proposed a dynamic frame slot ALOHA algorithm based on a dynamic-factor mean estimation algorithm. First, the dynamic-factor mean tag estimation method estimates the number of tags accurately. Then, based on this estimate, the improved dynamic frame slot ALOHA algorithm groups the tags and identifies them group by group. Finally, the dynamic-factor mean tag estimation algorithm and the dynamic frame slot ALOHA algorithm incorporating it were simulated. The simulation results show that the proposed method estimates the number of tags accurately, keeping the estimation error within 5%, and that the resulting algorithm maintains a high system utilization rate of about 30%, while the number of frame slots required by the whole recognition process is about 45% lower than that of the plain dynamic frame slot ALOHA algorithm.
Minimal Base Stations Deployment Scheme Satisfying Node Throughput Requirement in Radio Frequency Energy Harvesting Wireless Sensor Networks
CHI Kai-kai, XU Xin-chen, WEI Xin-chen
Computer Science. 2018, 45 (6A): 332-336. 
Abstract PDF(1554KB) ( 577 )   
References | RelatedCitation | Metrics
In radio frequency energy harvesting wireless sensor networks (RFEH-WSNs), base stations (BSs), i.e., sinks, are costly, and their deployment positions largely determine the achievable throughputs of nodes. This paper studied the minimal BS deployment that satisfies a node throughput requirement. The problem was first formulated as an optimization problem to expose its essence; then a low-complexity heuristic deployment algorithm and a genetic-algorithm-based deployment algorithm were proposed. Simulation results show that both algorithms find BS deployments with relatively few BSs. Compared with the heuristic algorithm, the genetic-algorithm-based deployment uses fewer BSs but has slightly higher computational complexity, and is suitable for small and medium scale RFEH-WSNs.
Research on Space Information Network Architecture Based on LEO Satellites for
Backbone Access and Frequency Resolution Strategy
LIU Jun-feng,LI Fei-long,YANG Jie
Computer Science. 2018, 45 (6A): 337-341. 
Abstract PDF(1613KB) ( 887 )   
References | RelatedCitation | Metrics
Space information networks (SIN) utilize various spatial platforms to achieve real-time acquisition, transmission and processing of spatial information, breaking the barrier of multiple independent systems that cannot share resources. Given the current state of the national space infrastructure, with its complex types, fragmentation, lack of unified communication, poor integrated services and small coverage, this paper proposed construction principles that build on the existing systems while taking future development into account. A novel SIN architecture based on a double-layered LEO constellation was designed for the initial stage of SIN: the higher LEO satellites, acting as the backbone core network, carry the backbone transmission task, while the lower LEO satellites, acting as the hotspot access network, provide access for terrestrial and space service nodes. Meanwhile, the system is scalable for future space information services; for example, topological relations with GEO backbone nodes added in the future are considered, which effectively relieves the orbit resource shortage of SIN. Finally, a frequency acquisition strategy is investigated to address the limited frequency resources in SIN.
Content-aware and Group-buying Based Cloud Video Delivery Networks
ZHAO Tian-qi, LU Dian-jie, LIU Yi-liang, ZHANG Gui-juan
Computer Science. 2018, 45 (6A): 342-347. 
Abstract PDF(1570KB) ( 520 )   
References | RelatedCitation | Metrics
Cloud video delivery networks (CVDNs) apply cloud storage technology to video delivery networks (VDNs), providing high-quality video delivery services such as live video and live streaming at lower cost. However, existing cloud video delivery mechanisms pay little attention to video content classification and user collaboration, and how to combine the two to further reduce users' purchase cost is a challenging problem. This paper put forward a content-aware and group-buying (CG) strategy, which prices different video content separately and allows users to purchase content by forming coalitions. A cost formula, a user purchase quantity constraint and a single-user cost constraint were then defined to formulate the CG problem as a linear program, which can be solved with GLPK tools. The experimental results show that the CG strategy can reduce user cost effectively.
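To make the "formulate as a linear program" step concrete, here is a deliberately tiny instance with invented prices, demands and group-buying discounts, solved with SciPy's linprog as a stand-in for GLPK; the real model's cost formula and constraints are richer than this sketch.

    import numpy as np
    from scipy.optimize import linprog

    prices = np.array([4.0, 6.0, 9.0])     # unit prices of 3 content classes
    demand = np.array([10, 5, 2])          # coalition's required quantities
    discount = np.array([1.0, 0.8, 0.7])   # hypothetical group-buying factors

    c = prices * discount                  # minimize total coalition cost
    A_ub = -np.eye(3)                      # x_i >= demand_i  ->  -x_i <= -demand_i
    b_ub = -demand
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
    total_cost = res.fun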
Information Security
Block Chain Based Architecture Model of Electronic Evidence System
HOU Yi-bin, LIANG Xun, ZHAN Xiao-yu
Computer Science. 2018, 45 (6A): 348-351. 
Abstract   
References | RelatedCitation | Metrics
This paper proposed an electronic evidence system architecture based on blockchain technology. The highly credible, tamper-resistant character of blockchain data is used to guarantee the authenticity of electronic evidence and promote its adoption. The framework also describes a methodology for packaging large quantities of data, which reduces cost and improves efficiency when saving data onto the blockchain.
Security Incidents and Solutions of Blockchain Technology Application
WANG Jun-sheng, LI Li-li, YAN Yong, ZHAO Wei, XU Yu
Computer Science. 2018, 45 (6A): 352-355. 
Abstract PDF(1592KB) ( 1529 )   
References | RelatedCitation | Metrics
Blockchain technology has been widely applied. It has many advantages, such as distributed storage, high data redundancy and decentralization, and the security of its applications and the associated regulatory issues have attracted attention. Firstly, this paper collected and analyzed various blockchain security incidents, classified their causes, and put forward corresponding safety precautions. Secondly, the present state of blockchain supervision in China was analyzed, and, with reference to international policies on blockchain supervision, a regulatory model for China's blockchain situation was put forward. Finally, the technological development needs of blockchain under this pattern of supervision were summarized.
Research on Key Technologies of Quantum Channel Management in QKD Network
ZHENG Yi-neng
Computer Science. 2018, 45 (6A): 356-363. 
Abstract PDF(1680KB) ( 731 )   
References | RelatedCitation | Metrics
With the development of the Internet, information dissemination keeps increasing and information security becomes ever more important. As some information requires higher security, research on information encryption methods is of great significance. Quantum key distribution (QKD) is based on the no-cloning theorem, which states that it is impossible to create an identical copy of an arbitrary unknown quantum state; this is why QKD is unconditionally secure and enables safe key distribution. However, current QKD networks are small and cannot meet the needs of large-scale networking, and the routing techniques of classical networks cannot be applied directly to QKD networks, so finding feasible quantum paths becomes a problem to be solved. In view of these issues, a QKD network model that can support large-scale QKD communication was put forward based on optical switches, and its network structure and signaling system were designed. On this basis, a pilot signal protocol and a quantum channel management mechanism were proposed. The results show that the model works well.
Research on Network Attack Detection Based on Self-adaptive Immune Computing
CHEN Jin-yin,XU Xuan-yan,SU Meng-meng
Computer Science. 2018, 45 (6A): 364-370. 
Abstract PDF(1613KB) ( 747 )   
References | RelatedCitation | Metrics
The Internet is inherently open and interactive, allowing attackers to exploit network vulnerabilities to damage the network. Network attacks are generally concealed and highly hazardous, so detecting them effectively is extremely important. To address the facts that most detection algorithms can only detect one kind of network attack and that detection delay is high, this paper proposed a negative selection algorithm based on density-based automatic partition clustering of the self set, referred to as DAPC-NSA. The algorithm uses density clustering to preprocess the self training data, performing cluster analysis, eliminating noise and generating self-detectors; it then generates nonself-detectors from the self-detectors, and uses both to detect anomalies. Simulated intrusion detection experiments show that the algorithm can detect six kinds of attacks simultaneously, with a higher detection rate and a lower false alarm rate; its detection time is short compared with other detection algorithms, so it can achieve real-time detection.
Study on Click Fraud Detection in Online Advertising with Imbalanced Data Processing Methods
LI Xin, GUO Han,ZHANG Xin,HU Fang-qiang,SHUAI Ren-jun
Computer Science. 2018, 45 (6A): 371-374. 
Abstract PDF(1555KB) ( 647 )   
References | RelatedCitation | Metrics
Click fraud detection in online advertising is one of the most important applications of machine learning. The support vector machine (SVM) is a prominent supervised learning algorithm for classification problems with roughly balanced class distributions; when applied to click fraud detection, however, its success is greatly limited by the extremely imbalanced distribution of the FDMA2012 competition dataset. In this paper, three data preprocessing methods, random under-sampling (RUS), the synthetic minority over-sampling technique (SMOTE), and SMOTE combined with the edited nearest neighbor rule (SMOTE+ENN), were investigated in detail, each followed by an SVM classifier. Results show that combining SMOTE+ENN with SVM achieves about 95% accuracy on minority samples, which basically meets the requirements of an online advertising click fraud detection system.
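The resample-then-train pipeline is directly expressible with imbalanced-learn and scikit-learn; below is a compact sketch on a synthetic imbalanced set standing in for FDMA2012. The class ratio, model settings and the recall-style check on minority samples are placeholders, not the paper's experimental setup.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from imblearn.combine import SMOTEENN

    # imbalanced toy data: ~3% positive (fraud) class
    X, y = make_classification(n_samples=3000, weights=[0.97, 0.03],
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # oversample the minority with SMOTE, then clean boundaries with ENN
    X_bal, y_bal = SMOTEENN(random_state=0).fit_resample(X_tr, y_tr)
    clf = SVC(kernel="rbf").fit(X_bal, y_bal)
    minority_accuracy = clf.score(X_te[y_te == 1], y_te[y_te == 1])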
Research of Webshell Detection Method Based on XGBoost Algorithm
CUI Yan-peng,SHI Ke-xing,HU Jian-wei
Computer Science. 2018, 45 (6A): 375-379. 
Abstract PDF(1541KB) ( 1477 )   
References | RelatedCitation | Metrics
To address the problems that encrypted and non-encrypted Webshells share few uniform code characteristics and that their features are difficult to extract, this paper proposed a Webshell detection method based on the XGBoost algorithm. The paper first analyzed Webshell behavior and found that most Webshells involve code execution, file operations, database operations, compression, obfuscated encoding and so on, which together describe Webshell behavior comprehensively. Accordingly, for non-encrypted Webshells, the main features are the occurrence counts of the relevant functions; for encrypted Webshells, four statistical features of the code are used: the index of coincidence, the information entropy, the length of the longest string, and the compression ratio. Finally, the two types of features are combined into a single Webshell feature set, alleviating the lack of feature coverage. The experimental results show that the proposed method achieves high performance and improves the efficiency and accuracy of Webshell detection compared with traditional single-type detection.
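The four statistical features named for the encrypted case have standard definitions; a sketch of how they might be computed from raw file bytes follows. The sample string and the whitespace tokenization used for "longest string" are assumptions.

    import math
    import zlib
    from collections import Counter

    def encrypted_webshell_features(code: bytes):
        n = len(code)
        counts = Counter(code)
        # index of coincidence over byte values
        ic = sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))
        # Shannon entropy in bits per byte
        entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
        # longest whitespace-delimited token (assumed reading of the feature)
        longest = max(len(w) for w in code.split())
        # compressed size relative to original size
        compression_ratio = len(zlib.compress(code)) / n
        return ic, entropy, longest, compression_ratio

    feats = encrypted_webshell_features(
        b"<?php eval(base64_decode($_POST['x'])); ?>")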
New Identity-based Authentication and Key Agreement Scheme in Ad hoc Networks
HUO Shi-wei,ANG Wen-jing,LI Jing-zhi,SHEN Jin-shan
Computer Science. 2018, 45 (6A): 380-382. 
Abstract PDF(1513KB) ( 642 )   
References | RelatedCitation | Metrics
Existing identity-based authentication and key agreement schemes in Ad hoc networks rely on bilinear pairings with high computation cost and also suffer from the key escrow problem. To address this, a new identity-based authentication and key agreement scheme was proposed: identity authentication is realized with an identity-based signature that requires no bilinear pairing, and the session key is established using Diffie-Hellman key exchange. It is shown that the proposed scheme avoids the key escrow problem and achieves higher efficiency.
Malware Classification Based on Texture Fingerprint of Gray-scale Images
ZHANG Chen-bin,ZHANG Yun-chun, ZHENG Yang,ZHANG Peng-cheng, LIN Sen
Computer Science. 2018, 45 (6A): 383-386. 
Abstract PDF(1531KB) ( 2101 )   
References | RelatedCitation | Metrics
With the rapid growth in the number of Android malware samples, traditional malware detection and classification methods suffer from low detection rates, highly complex training models and so on. To solve these problems, a malware classification method based on the texture features of gray-scale images was proposed, combining image texture feature extraction with machine learning classifiers. The method starts by converting malware samples into gray-scale images. Four feature extraction methods were designed, including GIST- and Tamura-based algorithms. Taking the texture features as source data, five classification models were built on the high-performance Caffe architecture, and detection and classification of malware were then performed. The experimental results show that image-texture-based malware classification achieves high accuracy, and that the Caffe architecture can effectively shorten learning time and reduce complexity.
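The first step, turning a binary into a gray-scale image, is typically done by reading the file's bytes as rows of pixels. The helper below sketches that conversion; the file path and fixed row width are hypothetical, and texture features such as GIST or Tamura would then be computed on the returned image.

    import numpy as np
    from PIL import Image

    def binary_to_grayscale(path, width=256):
        # read raw bytes and reshape them into 'width'-pixel rows,
        # truncating the tail that does not fill a complete row
        data = np.fromfile(path, dtype=np.uint8)
        rows = len(data) // width
        return Image.fromarray(data[: rows * width].reshape(rows, width), "L")

    # img = binary_to_grayscale("sample.apk")   # hypothetical sample file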
Identity Based Aggregate Signature Scheme with Forward Security
WEI Xing-jia, ZHANG Jing-hua,LIU Zeng-fang,LU Dian-jun
Computer Science. 2018, 45 (6A): 387-391. 
Abstract PDF(1565KB) ( 800 )   
References | RelatedCitation | Metrics
Using bilinear pairings, the discrete logarithm on elliptic curves and the strong RSA assumption, this paper proposed a new aggregate signature scheme with forward security. It realizes authentication between the private key generation center and the signing user, and provides forward security for signature information, further guaranteeing system security. The scheme was proved secure in the random oracle model under the assumption that the computational Diffie-Hellman (CDH) problem is intractable.
Big Data & Data Mining
Research on Neural Network Clustering Algorithm for Short Text
SUN Zhao-ying,LIU Gong-shen
Computer Science. 2018, 45 (6A): 392-395. 
Abstract PDF(1535KB) ( 1342 )   
References | RelatedCitation | Metrics
Short texts have small vocabularies and weak descriptive information, which leads to high dimensionality, sparse features and noise interference, and existing clustering algorithms achieve low accuracy and efficiency on large-scale short texts. A short text clustering algorithm based on a deep convolutional neural network was proposed to solve this problem. The algorithm uses the word2vec model, trained on a large-scale corpus, to learn the latent semantic associations between words and to represent each word as a multidimensional vector, so that a short text is likewise expressed as a multidimensional raw vector. A convolutional neural network then reduces these sparse, high-dimensional raw vectors to low-dimensional text vectors with more effective features. Finally, a traditional clustering algorithm clusters the short texts. The proposed method is feasible and effective for reducing text vectors and achieves good short text clustering, with an F-measure of over 75%.
Construction Method of Domain Subject Thesaurus Based on Corpus
AN Ya-wei, CAO Xiao-chun, LUO Shun
Computer Science. 2018, 45 (6A): 396-397. 
Abstract PDF(1569KB) ( 1446 )   
References | RelatedCitation | Metrics
To construct a subject thesaurus oriented to a massive domain corpus, a method based on a feature matrix built from word co-occurrence statistics was proposed. By operating on this feature matrix, words are divided into clusters and the central word of each cluster is computed; lexical bundles are finally obtained by re-organizing the clusters around their central words. Experiments indicate that the proposed method achieves good precision and recall.
Collaboration Filtering Recommendation Algorithm Based on Ratings Difference
and Interest Similarity
WEI Hui-juan, DAI Mu-hong
Computer Science. 2018, 45 (6A): 398-401. 
Abstract PDF(1592KB) ( 925 )   
References | RelatedCitation | Metrics
To improve the quality of recommendation systems and address the inaccurate similarity calculation of traditional collaborative filtering, this paper put forward a new method for computing user similarity. Based on users' common ratings, the method first computes the information entropy of rating differences, taking both the rating differences and time features into account. It then evaluates user similarity using this entropy together with the attributes of the rated items. Finally, the nearest neighbors are determined from the user similarity, which helps predict the rating of the target item. The experimental results show that the proposed algorithm finds the target user's nearest neighbors more accurately and effectively improves recommendation accuracy.
Spark Based Condensed Nearest Neighbor Algorithm
ZHANG Su-fang,ZHAI Jun-hai,WANG Ting-ting,HAO Pu,WANG Cong,ZHAO Chun-ling
Computer Science. 2018, 45 (6A): 406-410. 
Abstract PDF(1565KB) ( 670 )   
References | RelatedCitation | Metrics
K-nearest neighbors (K-NN) is a lazy learning algorithm: no classification model needs to be trained when K-NN is used for data classification, and the algorithm is simple and easy to implement. Its disadvantage is the large amount of computation introduced by calculating the distances between each testing instance and every training instance. Condensed nearest neighbors (CNN) can mitigate this drawback, but CNN is an iterative algorithm whose efficiency becomes very low in big data scenarios. To deal with this problem, this paper proposed an algorithm named Spark CNN, which significantly improves the efficiency of CNN in big data environments. Experiments comparing Spark CNN with MapReduce CNN on five big data sets show that Spark CNN is very effective.
Cloud Resource Selection Algorithm by Skyline under MapReduce Frame
QI Yu-dong,HE Cheng,SI Wei-chao
Computer Science. 2018, 45 (6A): 411-414. 
Abstract PDF(1541KB) ( 487 )   
References | RelatedCitation | Metrics
This paper studied a cloud resource selection algorithm under the MapReduce framework. The algorithm uses a probability-filtering method to estimate the probability that a resource node belongs to the Skyline result set, and filters information against a preset threshold, ultimately reducing the heartbeat frequency in the MapReduce framework and optimizing network traffic.
Collaborative Filtering Personalized Recommendation Based on Similarity of Tag Information Feature
HE Ming, YAO Kai-sheng,YANG Peng,ZHANG Jiu-ling
Computer Science. 2018, 45 (6A): 415-422. 
Abstract PDF(1624KB) ( 650 )   
References | RelatedCitation | Metrics
Tag recommendation systems aim to provide personalized recommendations to users based on tag data. Previous tag-based recommendation methods usually neglect the characteristics of users and items, and their similarity measures fail to incorporate user similarity and item similarity fully and effectively, which biases the recommendation results. To address this issue, this paper proposed a collaborative filtering recommendation method that combines tag features with similarity for personalized recommendation. Two-dimensional matrices are used to define the user-tag and tag-item actions, integrating information among users, tags and items. A tag feature representation is constructed, and user similarity and item similarity are computed with a similarity measure based on tag features. Users' preferences for items are predicted from their tagging behavior and a linear combination of user and item similarity, and the recommendation list is generated by ranking these preferences. Experimental results on Last.fm show that the proposed method improves recommendation accuracy and better satisfies users' requirements.
Diversity Recommendation Approach Based on Social Relationship and User Preference
SHI Jin-ping,LI Jin,HE Feng-zhen
Computer Science. 2018, 45 (6A): 423-427. 
Abstract PDF(1571KB) ( 760 )   
References | RelatedCitation | Metrics
Traditional recommendation algorithms, represented by collaborative filtering, can provide users with recommendation lists of high accuracy while ignoring another important measure in recommendation systems: diversity. As social networks grow, carrying large amounts of redundant and duplicated information, information overload makes it harder to discover user interests quickly and effectively; to recommend content that best matches users' hobbies, recommendations need both strong relevance and coverage of different aspects of those interests. Therefore, based on social relations and user preferences, this paper proposed a ranking framework for diversity and relevance. Firstly, a social relation graph model is introduced that considers the relationships between users and items to better model relevance. Then a linear model integrates the two key indexes, diversity and relevance. Finally, the algorithm was implemented on the Spark GraphX parallel graph computation framework, and experiments on real datasets verified its feasibility and scalability.
Study on Active Acquisition of Distributed Web Crawler Cluster
DONG Yu-long,YANG Lian-he,MA Xin
Computer Science. 2018, 45 (6A): 428-432. 
Abstract PDF(1583KB) ( 1090 )   
References | RelatedCitation | Metrics
To solve the processing efficiency, scalability, task allocation and load balancing problems of existing distributed web crawler methods, an active-acquisition-task distributed web crawler method was proposed, in which a sub-control module added to each sub-node evaluates the node's load and operating status and requests task queues from the central control node. Based on this method and a dynamic bidirectional priority task allocation algorithm, a distributed web crawler model was designed with load balancing, hierarchical task allocation, smart identification of abnormal nodes, safe exit and other features. Practical tests show that the active-acquisition-task method can be used to build large-scale distributed crawler clusters effectively.
Efficient Friend Recommendation Scheme for Social Networks
CHENG Hong-bing, WANG Ke, LI Bing, QIAN Man-yun
Computer Science. 2018, 45 (6A): 433-436. 
Abstract PDF(1594KB) ( 579 )   
References | RelatedCitation | Metrics
With the rapid development of modern network technology, human society has entered the information era, and an increasing number of people prefer to talk and make friends through social networks. Besides the people or events that users actively follow, social networks also recommend candidate users, but most of these candidates are promotions by the social network itself. In this paper, for the accuracy and reliability of social network recommendation, a new scheme based on tag matching was proposed. First, the corpus is trained with Word2Vec to obtain a word vector space in which the similarity between words is measured by cosine similarity. Secondly, through similarity comparison experiments, an appropriate similarity value is chosen as the threshold for judging whether two words are similar. Finally, the similarity threshold is applied in the matching algorithm. Simulation experiments show that the recommended users are comparatively reliable and accurate.
Research on Data Mining Algorithm Based on Examination Process and Knowledge Structure
DAI Ming-zhu,GAO Song-feng
Computer Science. 2018, 45 (6A): 437-441. 
Abstract PDF(1535KB) ( 656 )   
References | RelatedCitation | Metrics
To study students' mastery of knowledge points at different stages, knowledge structure was combined with examination results on the basis of data mining theory. Building on educational measurement theory and the decision tree approach of data mining, an improved algorithm was proposed from the original C4.5 algorithm: the difficulty levels of the knowledge points involved in the test papers are applied together with the knowledge structure to refine it, in order to determine how well individual students or groups of students master the knowledge points and how the knowledge points relate to one another. The experimental results show that the improved algorithm is more efficient, and its formula is simpler and more practical than the original one. Using the decision tree model, the remaining data verify the improved formula, and the conclusion that certain knowledge points matter most for programming performance is reached faster; test data validate the decision tree with an accuracy of 90%. Finally, a visual display of the decision tree provides an effective reference for students to arrange their learning and for teachers to develop teaching plans.
Algorithm for Mining Bipartite Network Based on Incremental Modularity
DAI Cai-yan, CHEN Ling, HU Kong-fa
Computer Science. 2018, 45 (6A): 442-446. 
Abstract PDF(1606KB) ( 676 )   
References | RelatedCitation | Metrics
For mining communities from bipartite networks, an algorithm based on incremental modularity was proposed. Initially, each vertex forms a community by itself with its own label. A vertex in one part copies its label and passes it to a vertex in the other part, placing them in the same community; the same operation is then performed on the vertices of the other part, and the iterations repeat until convergence. During label propagation, the algorithm always chooses the edge with the largest incremental modularity, so the overall modularity keeps improving. Experimental results on real datasets show that the proposed algorithm can mine high-quality communities from bipartite networks.
Influence Factors Mining of Traffic Accidents Based on Association Rules
JIA Xi-bin,YE Ying-jie,CHEN Jun-cheng
Computer Science. 2018, 45 (6A): 447-452. 
Abstract PDF(1561KB) ( 1654 )   
References | RelatedCitation | Metrics
Road traffic safety is a public safety issue, and deaths from traffic accidents account for the largest share of all accidents every year. With the development of big data analysis, traffic accident data are extensively used to trace causes, helping to propose specific measures to avoid and prevent accidents. Given the diverse causes of traffic accidents, this paper proposed analyzing accident factors and liability from traffic accident news, exploiting the breadth, authenticity and timeliness of news data. Taking traffic accident news from Sina as the data source, the relevant accident factors are extracted from it. In view of the limitation of the classic Apriori algorithm, which supports only single-dimension association mining and scans the database frequently, an improved multi-valued attribute Apriori algorithm was proposed. Focusing on the traffic accident data of provinces and cities, various combinations of factors leading to accidents were mined, and the patterns of frequent traffic accidents in provinces and cities were summarized as a basis for preventive and regulatory measures.
Scaling-up Algorithm of Multi-scale Classification Based on Fractal Theory
LI Jia-xing, ZHAO Shu-liang,AN Lei,LI Chang-jing
Computer Science. 2018, 45 (6A): 453-459. 
Abstract PDF(1589KB) ( 568 )   
References | RelatedCitation | Metrics
At present, research on multi-scale data mining focuses mainly on spatial image data and has recently produced some results on general data, including multi-scale clustering and multi-scale association rules, but classification mining has not yet been addressed. Combining fractal theory, this paper applied the theory and methods of multi-scale data mining to classification mining and proposed a similarity measure based on the Hausdorff distance. Instead of setting weights from experience, the weights are defined explicitly by the similarity of generalized fractal dimensions, improving the precision of the similarity measure. On this basis, a multi-scale classification scaling-up algorithm named MSCSUA was proposed. Experiments on four UCI benchmark data sets and one real data set (partial population data of province H) show that the idea of multi-scale classification is feasible and effective, and that MSCSUA performs better than the SLAD, KNN, Decision Tree and LIBSVM algorithms on different data sets.
Bisecting K-means Clustering Method Based on Cohesion and Coupling
YU Yong,KANG Qing-yi,CHEN Chang-geng,KAN Shi-lin,LUO Yong-jun
Computer Science. 2018, 45 (6A): 460-464. 
Abstract PDF(1569KB) ( 695 )   
References | RelatedCitation | Metrics
Clustering analysis is one of the most important techniques in data mining, with an important role and wide application in many fields of the social economy. K-means is a simple and widely used clustering method, but it depends on the initial conditions, and the number of clusters is difficult to determine. This paper introduced the cohesion and coupling of clusters and presented measurements for them. Guided by the principle of "high cohesion and low coupling", clusters are repeatedly divided and merged during bisecting K-means clustering, and by judging whether the clustering results meet the requirements, the number of clusters is determined, thereby improving the bisecting K-means algorithm. Experimental results on the Iris data show that the algorithm is not only more stable but also achieves higher clustering accuracy.
TEFRCF:Collaborative Filtering Personalized Recommendation Algorithm Based on Tag
Entropy Feature Representation
HE Ming, YANG Peng, YAO Kai-sheng, ZHANG Jiu-ling
Computer Science. 2018, 45 (6A): 465-470. 
Abstract PDF(1638KB) ( 658 )   
References | RelatedCitation | Metrics
Tags serve as an effective means of information classification and retrieval in the Web 2.0 era, and tag recommendation systems aim to provide personalized recommendations from tag data. Existing tag-based recommendation methods tend to give popular tags and their corresponding items larger weights when predicting users' interest in items, which causes weight deviations, reduces the novelty of the results and fails to fully reflect users' personalized interests. To solve these problems, the concept of tag entropy was defined to measure the uncertainty of tags, and a collaborative filtering recommendation algorithm based on tag entropy feature representation was proposed. Introducing tag entropy resolves the weight deviation problem; tripartite graphs then describe the relationships among users, tags and items, representations of users and items are constructed from tag entropy features, and item similarity is computed with a feature similarity measure. Finally, users' preferences for items are predicted by a linear combination of tagging behavior and item similarity, and the recommendation list is generated by ranking these preferences. Experimental results on Last.fm show that the proposed algorithm improves recommendation accuracy and novelty and better satisfies users' requirements.
Hash Join in MapReduce Distributed Environment Based on Column-store
ZHANG Bin, LE Jia-jin
Computer Science. 2018, 45 (6A): 471-475. 
Abstract PDF(1610KB) ( 785 )   
References | RelatedCitation | Metrics
Big data is characterized by volume, variety, value and velocity, and is typically handled on commodity hardware with open-source software. Aiming at the inefficiency and limited scalability of traditional relational databases in big data analysis, this paper presented a hash join algorithm for a MapReduce distributed environment based on column storage. It first proposed a design for distributed computing models oriented to large data, then introduced partition aggregation and a heuristic optimization strategy to implement the hash join algorithm, and finally evaluated execution time and load capacity experimentally. The results show that the proposed method is effective and provides good scalability for big data analysis.
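The core of such a join is hash-partitioning both inputs on the join key (the map side) and building/probing per-partition hash tables (the reduce side). The plain-Python sketch below mimics that data flow on toy tuples; it is illustrative only and omits the column-store I/O and the paper's optimization strategies.

    from collections import defaultdict

    def partition(rows, key, n):
        # map phase: hash-partition rows on the join key
        parts = [[] for _ in range(n)]
        for row in rows:
            parts[hash(row[key]) % n].append(row)
        return parts

    def hash_join(part_r, part_s):
        # reduce phase: build a hash table on one input, probe with the other
        table = defaultdict(list)
        for r in part_r:
            table[r[0]].append(r)
        return [r + s[1:] for s in part_s for r in table.get(s[0], [])]

    R = [(1, "a"), (2, "b")]           # (key, payload)
    S = [(1, "x"), (1, "y"), (3, "z")]
    n = 2
    joined = [t for pr, ps in zip(partition(R, 0, n), partition(S, 0, n))
              for t in hash_join(pr, ps)]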
Improved XGBoost Model Based on Genetic Algorithm for Hypertension Recipe Recognition
LEI Xue-mei, XIE Yi-tong
Computer Science. 2018, 45 (6A): 476-481. 
Abstract PDF(1559KB) ( 907 )   
References | RelatedCitation | Metrics
A novel improved XGBoost (eXtreme Gradient Boosting) model based on a genetic algorithm was proposed for hypertension recipe recognition. The model consists of three steps. First, data preprocessing handles missing values, removes duplicate records and analyzes the data features. Then a genetic algorithm adaptively optimizes the parameters of the XGBoost model. Finally, the hypertension recipe recognition model is trained with the optimal parameters. The results show that the parameters optimized by the genetic algorithm perform better than those found by grid search; moreover, the proposed model outperforms four other models (random forest, GBDT, Bagging and AdaBoost) on average over four evaluation measures, accuracy, recall, F1 and the area under the curve (AUC), and enhances the interpretability of the model.
Co-location Pattern Mining Algorithm Based on Data Normalization
ZENG Xin,LI Xiao-wei,YANG Jian
Computer Science. 2018, 45 (6A): 482-486. 
Abstract PDF(1548KB) ( 516 )   
References | RelatedCitation | Metrics
In practical applications, spatial features contain not only spatial information but also attribute information, which is important for knowledge discovery and scientific decision making. Existing co-location pattern mining algorithms do not consider the weights of instances' different attributes when calculating the adjacent distance between two instances of different features; as a result, some attributes are weighted too heavily, which distorts the mining results. By standardizing the attribute values and giving all attributes equal weight, a join-based data normalization algorithm, DNRA, was put forward. The difficult problem of choosing the distance threshold was also studied in depth: the range of the distance threshold is derived within DNRA, helping users select an appropriate value. Finally, the performance of DNRA was analyzed and compared through extensive experiments.
Adaptive Stochastic Gradient Descent for Imbalanced Data Classification
TAO Bing-mo, LU Shu-xia
Computer Science. 2018, 45 (6A): 487-492. 
Abstract PDF(1576KB) ( 852 )   
References | RelatedCitation | Metrics
For imbalanced data classification, traditional stochastic gradient descent does not perform well in solving SVM problems. The adaptive stochastic gradient descent algorithm defines a sampling distribution p instead of the uniform distribution to choose examples, and a smoothed hinge loss is used in the optimization problem. Because the training set is imbalanced, uniform sampling makes the algorithm choose majority-class examples in proportion to the imbalance ratio, biasing the classifier toward the majority class. The distribution p largely overcomes this issue. When to stop the program is another important problem, because standard stochastic gradient descent has no stopping criterion, which matters especially for large data sets. The stopping criterion was set according to the classification accuracy on the training set or its subsets; with properly chosen parameters, it can stop the program very early, especially on large data sets. Experiments on imbalanced data sets show that the proposed algorithm is effective.
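A minimal sketch of SGD for a linear SVM with a non-uniform sampling distribution p that picks minority-class examples more often; the smoothing, step sizes and the paper's exact p are simplified assumptions:

import numpy as np

def smoothed_hinge_grad(w, x, y, gamma=1.0):
    # Gradient of a quadratically smoothed hinge loss at margin m = y*w.x.
    m = y * w.dot(x)
    if m >= 1:
        return np.zeros_like(w)
    if m <= 1 - gamma:
        return -y * x
    return -y * x * (1 - m) / gamma

def adaptive_sgd(X, y, lam=0.01, epochs=10, seed=0):
    rng = np.random.default_rng(seed)
    counts = {c: (y == c).sum() for c in np.unique(y)}
    p = np.array([1.0 / counts[c] for c in y])   # inverse class frequency
    p /= p.sum()
    w = np.zeros(X.shape[1])
    for t in range(1, epochs * len(y) + 1):
        i = rng.choice(len(y), p=p)              # sample from p, not uniform
        eta = 1.0 / (lam * t)                    # Pegasos-style step size
        w -= eta * (lam * w + smoothed_hinge_grad(w, X[i], y[i]))
    return w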
Collaborative Filtering Personalized Recommendation Algorithm Considering Average Preference Weight and Popularity Division
HE Ji-xing, CHEN Wen-bin, MOU Bin-hao
Computer Science. 2018, 45 (6A): 493-496. 
Abstract PDF(1537KB) ( 996 )   
References | RelatedCitation | Metrics
This paper presented a new recommendation algorithm that takes the average preference weight into account. The algorithm is divided into three stages: neighborhood computation, data set partitioning and preference prediction. In the neighborhood computation, KNN based on Euclidean distance is used to determine the neighborhood. At the same time, the data set is divided into a popular set and a non-popular set according to a popularity threshold derived from the data itself. When predicting scores, part of the items are selected from the existing neighborhood according to popularity, and the user's average preference weight is predicted based on the preference similarity of the item set. The results show that, in terms of MAE on the MovieLens 100K data set, the new algorithm is superior to the typical cosine-based recommendation algorithm, the Pearson-based recommendation algorithm, the collaborative filtering algorithm based on item preference, and the existing user-attribute-weighted active-neighbor algorithm.
Correlative Factors of Disability and Dementia among the Elderly in Big Data Environments
LI Han, LI Hui-jia, ZHANG Lin-zi, HUANG Yu-ying
Computer Science. 2018, 45 (6A): 497-501. 
Abstract PDF(1583KB) ( 866 )   
References | RelatedCitation | Metrics
Recently, with the deepening of China's population aging problem, the rising burden of family elder care, the pressure on government and the decline of the demographic dividend have gained wide attention. Aging, Status, and Sense of Control (ASOC) is a continuous survey of elderly health in the United States that was conducted every three years from 1995 to 2001. Using three waves of ASOC, influential factors related to disability and dementia were identified by logistic regression built on descriptive statistical analysis. After removing strongly correlated factors, it was found that age, gender, smoking, drinking, exercise, heart disease, arthritis/rheumatism and participation in community service have strong correlations with elderly disability, while age, drinking, hypertension, participation in community service and marital status are significantly related to dementia in the elderly.
Interdiscipline & Application
Process Modeling on Knowledge Graph of Equipment and Standard
YIN Liang, HE Ming-li, XIE Wen-bo, CHEN Duan-bing
Computer Science. 2018, 45 (6A): 502-505. 
Abstract PDF(1558KB) ( 800 )   
References | RelatedCitation | Metrics
In order to clearly describe the complex associations among equipment, standards and standardized elements, an equipment-standard knowledge graph is an important analytical tool. Using the constructed knowledge graph, standardization research can be transformed from model following to system leading, from qualitative analysis to quantitative analysis, and from individual evaluation to system verification. Process modeling is a key step in knowledge graph modeling. The IDEF3 method is applied to model the main structure of the knowledge graph and the sub-processes involved, and a heterogeneous network model of the equipment-standard knowledge graph is obtained through process modeling.
Pulse Condition Recognition Based on Convolutional Neural Network with Dimension Enlarging
ZHANG Ning
Computer Science. 2018, 45 (6A): 506-507. 
Abstract PDF(1548KB) ( 696 )   
References | RelatedCitation | Metrics
A new convolutional neural network model was proposed for pulse condition recognition. The model is suitable for groups containing data sets of different dimensions. For a more effective training process, the sample features and the results of the Hilbert-Huang transform (HHT) were treated as a time series. The results show the expected accuracy and training efficiency. The method can also capture the relations between pulse conditions and several kinds of personal biological data.
Application Research of Improved Parallel FP-growth Algorithm in Fault Diagnosis of Industrial Equipment
ZHANG Bin, TENG Jun-jie, MAN Yi
Computer Science. 2018, 45 (6A): 508-512. 
Abstract PDF(1572KB) ( 599 )   
References | RelatedCitation | Metrics
Industrial equipment is becoming increasingly intelligent and large-scale. As equipment failures grow more complex and diverse, diagnosing faults quickly and accurately has become a challenge. Hence, taking the big data platform Hadoop as the foundation and FP-growth as the mining method, fault diagnosis of industrial equipment is realized. Taking an industrial gearbox as an example, two parts of the data are first selected as training data and test data respectively. In the preprocessing stage, the training data undergoes null-value handling, correlation analysis of dimensions and discretization. Secondly, this paper put forward an improved parallel FP-growth algorithm based on an interest measure to mine association rules between attribute columns and faults from the training data. Finally, the association rules were verified on the test data to prove the feasibility of the improved method. Experimental results show that the proposed interest-based improved parallel FP-growth algorithm can perform fault diagnosis efficiently and accurately.
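A minimal sketch of filtering mined association rules with an interest measure, here lift, in the spirit of the interest-based improvement described above; the paper's exact measure, thresholds and parallelization are not reproduced:

def interest(support_xy, support_x, support_y):
    # Lift: values above 1 mean X and Y co-occur more often than expected
    # under independence.
    return support_xy / (support_x * support_y)

rules = [
    # (antecedent, consequent, support_xy, support_x, support_y)
    ("vibration_high", "gear_fault", 0.12, 0.20, 0.15),
    ("temp_normal", "gear_fault", 0.10, 0.70, 0.15),
]
for x, y, sxy, sx, sy in rules:
    if interest(sxy, sx, sy) > 1.0:      # keep only interesting rules
        print(f"{x} -> {y} (lift = {interest(sxy, sx, sy):.2f})")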
Design of Cache Scheduling Policies Based on MLC STT-RAM
ZHU Yan-na, WANG Dang-hui
Computer Science. 2018, 45 (6A): 513-517. 
Abstract PDF(1566KB) ( 504 )   
References | RelatedCitation | Metrics
Multi-level cell (MLC) STT-RAM, which stores multiple bits per cell, has been considered a promising alternative to SRAM for the last-level cache (LLC). MLC STT-RAM significantly reduces static power consumption and offers smaller cell size and better read performance. However, a major shortcoming of MLC STT-RAM caches is inefficient write operations. Based on a hard/soft-domain partition structure, this paper implemented write-intensity prediction for an energy-efficient MLC STT-RAM LLC. The objective is to dynamically predict whether blocks will be written more than a certain number of times, thereby helping to reduce the write latency and energy of the MLC STT-RAM cache. The key idea is to correlate write intensity with memory-access instruction addresses. On top of that, this paper designed an MLC STT-RAM LLC based on this predictor, in which prediction results determine cache-line placement. Experimental results show that this architecture reduces write energy consumption by 6.3% and improves system performance by 1.9% on average compared with the previous approach.
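A minimal sketch of a PC-indexed write-intensity predictor of the kind described above: a small table of saturating counters keyed by the address of the writing instruction; the table size and threshold are illustrative assumptions:

class WriteIntensityPredictor:
    def __init__(self, entries=256, threshold=2, max_count=3):
        self.entries, self.threshold, self.max_count = entries, threshold, max_count
        self.table = [0] * entries

    def _index(self, pc):
        return pc % self.entries

    def train(self, pc, was_write_intensive):
        # Saturating increment/decrement based on the observed outcome.
        i = self._index(pc)
        if was_write_intensive:
            self.table[i] = min(self.table[i] + 1, self.max_count)
        else:
            self.table[i] = max(self.table[i] - 1, 0)

    def predict(self, pc):
        # True -> place the line in the write-friendly region of the cache.
        return self.table[self._index(pc)] >= self.threshold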
Research on Model Checking Application in Aerospace TT&C Software
LI Yun-chou, YIN Ping
Computer Science. 2018, 45 (6A): 523-526. 
Abstract PDF(1546KB) ( 732 )   
References | RelatedCitation | Metrics
Model checking is an efficient method to ensure software quality. However, complex input data and indistinct verification properties in large-scale aerospace TT&C (tracking, telemetry and command) software greatly hinder the application of model checking. After analyzing the characteristics of aerospace TT&C software and the difficulties of applying model checking, an application framework based on CBMC was proposed, including a construction method for aerospace measurement data and an extraction method for verification properties. The framework was then applied to trajectory measurement data processing software with satisfactory results.
System and Methods of Passenger Demand Prediction on Bus Network
ZHOU Chun-jie, ZHANG Zhi-wang, TANG Wen-jing
Computer Science. 2018, 45 (6A): 527-535. 
Abstract PDF(1737KB) ( 917 )   
References | RelatedCitation | Metrics
Public transport, especially bus transport, can reduce private car usage and fuel consumption and alleviate traffic congestion and environmental pollution. However, when traveling by bus, travelers care not only about the waiting time but also about crowdedness; excessively overcrowded buses may drive travelers away and make them reluctant to take buses. Accurate, real-time and reliable passenger demand prediction therefore becomes necessary, as it helps determine bus headways and reduce passenger waiting time. There are three major challenges in predicting passenger demand on bus services: inhomogeneity, seasonal bursty periods and periodicities. To overcome these challenges, this paper proposed three predictive models and a data-stream ensemble framework to predict the number of passengers. An experiment was conducted over a 22-week period. The evaluation results suggest that the proposed method achieves outstanding prediction accuracy: of 86411 passenger demands on bus services, more than 78% are accurately forecasted.
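A minimal sketch of a data-stream ensemble in the spirit described above: several base predictors are combined with weights that track their recent errors; the paper's three models are replaced here by trivial stand-ins:

import numpy as np

class StreamEnsemble:
    def __init__(self, predictors, decay=0.9):
        self.predictors = predictors
        self.errors = np.ones(len(predictors))   # running error estimates
        self.decay = decay

    def predict(self, history):
        preds = np.array([p(history) for p in self.predictors])
        weights = 1.0 / (self.errors + 1e-9)     # low error -> high weight
        return float(np.average(preds, weights=weights)), preds

    def update(self, preds, actual):
        self.errors = self.decay * self.errors + (1 - self.decay) * np.abs(preds - actual)

# Stand-in base models: last value, mean of last 3, same slot 24 steps ago.
base = [lambda h: h[-1], lambda h: sum(h[-3:]) / 3, lambda h: h[-24]]
ens = StreamEnsemble(base)
demand = list(np.random.default_rng(0).poisson(20, 200))
for t in range(24, len(demand)):
    yhat, preds = ens.predict(demand[:t])
    ens.update(preds, demand[t])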
Verification of G Language System Model Based on SPIN
XUE Yan, WU Shu-hong, WANG Yao-li
Computer Science. 2018, 45 (6A): 536-540. 
Abstract PDF(1618KB) ( 769 )   
References | RelatedCitation | Metrics
For large systems, in order to ensure reliable, stable and efficient operation, it is necessary to verify both the business model and the system model. At present, business model validation can be done through BPMN; for system model validation, the SPIN tool is selected. G language, created by NI, is a graphical block-diagram language that has not yet joined the ANSI standard. Therefore, the first step is to extract the G language's forms, rules, grammar and other language features. SPIN does not provide direct support for G, so the second step is to complete a G2Promela mapping. In this work, following the compiler framework Scanner-Parser-Optimizer-Generator (the SPOG framework) as the main line, and based on the preprocessing of the first step, G2Promela mapping rules are classified and created for functions, pointers, keywords and variables to realize G language system model validation. The proposed method fills a gap in G language system model validation, further ensuring the performance of G language programs.
CCI Noise Equalization Algorithm for MLC Flash Memory
ZHANG Xuan, ZHOU Le, HOU Ai-hua
Computer Science. 2018, 45 (6A): 541-544. 
Abstract PDF(1559KB) ( 629 )   
References | RelatedCitation | Metrics
With increasing MLC (multi-level cell) flash memory density, CCI (cell-to-cell interference) has become the dominant noise source affecting the reliability of NAND flash memory. Based on studies of the MLC flash memory model and the CCI noise model, an equalization algorithm for CCI noise was proposed for MLC flash memory. The method compensates the sensed threshold voltage of MLC flash cells by estimating the CCI, so the information stored in the MLC can be read more accurately. Simulation results show that the CCI noise equalization algorithm reduces the overlap of adjacent threshold-voltage distributions, which helps to reduce the raw bit error rate and enhance flash memory reliability.
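A minimal sketch of CCI compensation in the spirit described above: the sensed threshold voltage of a victim cell is corrected by subtracting the estimated coupling from neighboring cells. The linear coupling model and the coupling ratio are assumptions; real devices calibrate these per position:

import numpy as np

def compensate(v_sensed, neighbor_shifts, coupling_ratio=0.1):
    """Subtract estimated cell-to-cell interference from the sensed voltage.

    neighbor_shifts: threshold-voltage shifts of adjacent cells since the
    victim cell was programmed (the source of CCI).
    """
    cci_estimate = coupling_ratio * np.sum(neighbor_shifts)
    return v_sensed - cci_estimate

# Victim programmed to 2.0 V; two neighbors later shifted by 1.5 V each,
# disturbing the sensed value, which compensation restores to ~2.0 V.
print(compensate(2.0 + 0.1 * 3.0, np.array([1.5, 1.5])))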
Change Propagation Method of Service-oriented Business Process Model with Data Flows Based on Petri Net
HE Lu-lu, FANG Huan
Computer Science. 2018, 45 (6A): 545-548. 
Abstract PDF(1583KB) ( 513 )   
References | RelatedCitation | Metrics
In order to adapt flexibly to changing business requirements, process models must be adjusted. During business integration, business logic may change, so it is critical to analyze business changes and their propagation. Existing methods study change regions and change propagation mainly from the aspect of control-flow structure, neglecting the data information and service structure of the model. In this paper, the change propagation method and the change regions were analyzed for service-oriented business process models with data information, focusing on the propagation problem between the service layer and the process layer. Firstly, for a mutation operation in a service layer (or process layer), the directly influenced region of the change was discussed in detail, and two change propagation algorithms were proposed: the service-layer change propagation algorithm (SLCPA) and the process-layer change propagation algorithm (PLCPA). Finally, a case example illustrates the feasibility and effectiveness of the proposed method.
Research on Ontology Data Storage of Massive Oil Field Based on Neo4j
GONG Fa-ming, LI Xiao-ran
Computer Science. 2018, 45 (6A): 549-554. 
Abstract PDF(1594KB) ( 1000 )   
References | RelatedCitation | Metrics
The development of semantic web technology has promoted integration among multidisciplinary ontologies in the oil field. As the scale of data increases, traditional data storage and information retrieval based on relational databases encounter many problems. In view of this, this paper proposed a domain ontology construction process based on the Neo4j database to improve data storage and information retrieval. Firstly, a solution for large-scale ontology data storage based on the Neo4j graph database was proposed; by designing a distributed storage mechanism on top of the Neo4j storage model, efficient use of storage space was realized. Secondly, based on the Neo4j data model, a two-tier index architecture retrieval algorithm was designed. Experimental evaluation shows that, compared with the relational-database method, the proposed method saves more than 10% of storage space and improves retrieval efficiency by more than 30 times.
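A minimal sketch of loading ontology triples into Neo4j with the official Python driver (5.x API); the URI, credentials and labels are placeholder assumptions, and the paper's distributed storage mechanism and two-tier index are not reproduced:

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def store_triple(tx, subj, pred, obj):
    # MERGE keeps nodes unique; the predicate is stored as a relationship
    # property so arbitrary ontology relations fit one schema.
    tx.run(
        "MERGE (s:Concept {name: $subj}) "
        "MERGE (o:Concept {name: $obj}) "
        "MERGE (s)-[:REL {type: $pred}]->(o)",
        subj=subj, pred=pred, obj=obj,
    )

with driver.session() as session:
    session.execute_write(store_triple, "WellLog", "describes", "Reservoir")
driver.close()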
Analysis and Processing of Speech Signal Based on MATLAB
HUANG Chun-yan, JING Ni-jie, ZHU Hong-mei
Computer Science. 2018, 45 (6A): 555-558. 
Abstract PDF(1609KB) ( 2619 )   
References | RelatedCitation | Metrics
As engineering software with very powerful data analysis and processing functions, MATLAB can conveniently be used for speech signal analysis, processing and visualization. This paper first introduced the principle of the FFT, the relevant MATLAB functions, and the design and usage of filters. Then an actual noisy speech signal was analyzed and processed with MATLAB. The results show that MATLAB performs speech signal analysis and processing simply and conveniently.
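A minimal Python analogue of the MATLAB workflow described above (FFT analysis plus low-pass filtering of a noisy signal); the sample rate, test tone and cutoff frequency are illustrative:

import numpy as np
from scipy import signal

fs = 8000                                # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
clean = np.sin(2 * np.pi * 440 * t)      # stand-in "speech" tone
noisy = clean + 0.5 * np.random.default_rng(0).standard_normal(len(t))

spectrum = np.abs(np.fft.rfft(noisy))    # FFT magnitude spectrum
freqs = np.fft.rfftfreq(len(noisy), 1 / fs)

b, a = signal.butter(4, 1000 / (fs / 2), btype="low")   # 1 kHz low-pass
filtered = signal.filtfilt(b, a, noisy)  # zero-phase filtering
print(freqs[np.argmax(spectrum)])        # dominant frequency, about 440 Hz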
Object Optimization Research of Underway Materials Resupplying for Battle Group
QIN Fu-rong, LUO Zhao-hui, DONG Peng
Computer Science. 2018, 45 (6A): 559-561. 
Abstract PDF(1582KB) ( 762 )   
References | RelatedCitation | Metrics
In order to improve the process of a battle group's underway materials resupply, this research analyzed the whole process of underway replenishment scheduling and the necessity of transportation scheduling. A comparison between ordinary emergency resource scheduling and underway materials transportation scheduling was then made, and their similarities and differences were analyzed. Based on the requirements of transportation scheduling, a multi-objective optimization model was presented, taking minimum task completion time and minimum task cost as objectives. Lastly, an algorithm was designed to solve this scheduling problem, and a typical example is given. A reasonable scheme is obtained using the above model and algorithm, which verifies the effectiveness and feasibility of the algorithm.
Optimization of Register Allocation Strategy for MLC STT-RAM
NI Yuan-hui, CHEN Wei-wen, WANG Lei, QIU Ke-ni
Computer Science. 2018, 45 (6A): 562-567. 
Abstract PDF(1572KB) ( 485 )   
References | RelatedCitation | Metrics
Multi-level cell spin-transfer torque random access memory (MLC STT-RAM) is a promising nonvolatile memory technology. Unlike SRAM, which stores information as charge, MLC STT-RAM uses spin-polarized current to change the magnetization direction of the free layer of a magnetic tunneling junction (MTJ), so it naturally avoids electromagnetic interference. This paper exploited this anti-electromagnetic-radiation characteristic and explored MLC STT-RAM as register storage, given its natural immunity to electromagnetic radiation in rad-hard space environments. MLC STT-RAM exhibits unbalanced write-state transitions because the magnetization directions of the hard and soft domains cannot be flipped independently, which leads to nonuniform write-state costs in terms of latency and energy. However, current SRAM-targeted register allocators have no understanding of the impact of these differing write-state transition costs; as a result, they heuristically select variables to be spilled without considering the spilling priority imposed by MLC STT-RAM. To address this limitation, this paper proposed a state-transition-aware spilling cost minimization (SSCM) policy to save power when MLC STT-RAM is employed in register design. Specifically, the spilling cost model is first constructed as a linear combination of the frequencies of the different state transitions. Directed by the proposed cost model, the compiler picks the spilling candidates with the highest cost to achieve lower power and higher performance.
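A minimal sketch of an SSCM-style cost model as described above: the cost of a variable is a linear combination of how often it would drive each write-state transition, weighted by that transition's energy. The weights and transition classes are illustrative, not the paper's measured values:

TRANSITION_ENERGY = {          # relative energy per write-state transition
    "soft_flip": 1.0,          # soft-domain-only change: cheap
    "hard_flip": 2.5,          # hard-domain change: expensive
    "two_step": 3.5,           # transitions requiring both domains
}

def spilling_cost(transition_freqs):
    """transition_freqs: {transition_name: count for this variable}."""
    return sum(TRANSITION_ENERGY[t] * f for t, f in transition_freqs.items())

variables = {
    "v1": {"soft_flip": 120, "hard_flip": 4, "two_step": 1},
    "v2": {"soft_flip": 10, "hard_flip": 60, "two_step": 30},
}
# The allocator spills the candidate whose register residency costs most.
print(max(variables, key=lambda v: spilling_cost(variables[v])))  # -> v2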
Analysis on Mathematical Models of Maintenance Decision and Efficiency Evaluation of Computer Hardware
ZHAI Yong, LIU Jin, CHEN Jie, LIU Lei, XING Xu-chao, DU Jiang
Computer Science. 2018, 45 (6A): 568-572. 
Abstract PDF(1597KB) ( 686 )   
References | RelatedCitation | Metrics
Combining the actual conditions of computer hardware maintenance, and on the basis of equipment/subsystem reliability and availability theory, a calculation method for maintenance-importance variables, including equipment/subsystem asset salvage value, business importance and unreliability, was analyzed in accordance with the principle of optimizing maintenance funding efficiency. On this basis, a mathematical model for evaluating the maintenance requirements of equipment/subsystems was researched and constructed, and a maintenance decision algorithm was put forward. Finally, with examples, the paper proposed a maintenance efficiency evaluation method using equipment/subsystem reliability theory and Markov-chain-based availability analysis, providing inspiration for the quantitative assessment of computer hardware maintenance.
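A minimal sketch of the Markov-chain availability analysis mentioned above: a two-state (up/down) model with failure rate lambda and repair rate mu has steady-state availability A = mu / (lambda + mu); the rates below are illustrative:

def steady_state_availability(failure_rate, repair_rate):
    # Solves the stationary distribution of the two-state Markov chain.
    return repair_rate / (failure_rate + repair_rate)

# MTBF = 2000 h -> lambda = 1/2000; MTTR = 8 h -> mu = 1/8.
lam, mu = 1 / 2000, 1 / 8
print(f"A = {steady_state_availability(lam, mu):.5f}")   # about 0.99602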
Application of Dual Keeloq Algorithm in Intelligent Access Control System
WU Wei-jian, CHEN Shi-guo, LI Dan
Computer Science. 2018, 45 (6A): 573-575. 
Abstract PDF(1528KB) ( 761 )   
References | RelatedCitation | Metrics
The security of the key code has always been an important part of access control systems, and Keeloq rolling-code technology is widely applied in intelligent access control systems, wireless door locks and other fields. This paper analyzed the encryption and decryption principle of the Keeloq algorithm and its application to the key code of access control systems. Summarizing the problems of the single Keeloq algorithm and the multiple Keeloq algorithm, this paper provided a scheme called the dual Keeloq algorithm to improve security. The dual Keeloq algorithm is not simply a second Keeloq encryption of the whole key code; rather, it applies another Keeloq encryption to the important fields of the key code on top of the single Keeloq encryption, and the required information and code lengths differ between the two encryptions. This method increases the complexity of the key code while, compared with multiple Keeloq encryption, reducing the system's computational overhead during key code encryption and decryption.
Design of Noise Measurement System for Automobile Injector
ZHU Jun-chao, WANG Tan, ZHANG Bao-feng
Computer Science. 2018, 45 (6A): 576-579. 
Abstract PDF(1569KB) ( 600 )   
References | RelatedCitation | Metrics
Aiming at domestic and international technical requirements for injector noise measurement, a noise measurement system for automobile injectors was designed. The system includes an injector drive module, a system control module, an oil supply module, a noise measurement module and a human-machine interaction module. The pressure control of the oil supply module adopts pneumatic control with membrane conduction combined with PID feedback to improve the reliability and precision of pressure control. The system software is developed on the Windows VS2010 platform. Using CAN communication, RS232 serial communication and multi-threaded parallel techniques, full control of the injector noise detection system from the human-machine interface is achieved. Based on the sound pressure method, injector noise is measured and analyzed, and injector performance is evaluated according to the measurement results. Experimental results show that the maximum and average A-weighted sound pressure levels are below the 70 dB(A) specified in the standard, meeting the standard's requirements on injector noise.
Research on Data Processing Method of Wireless Monitoring System
LIAN Le, FU Jie
Computer Science. 2018, 45 (6A): 580-582. 
Abstract PDF(1529KB) ( 489 )   
References | RelatedCitation | Metrics
Traditional databases and online transaction processing (OLTP) can no longer meet users' demands for data query and analysis. This paper put forward a new data processing mode combining ROLAP technology with an improved data mining algorithm. It uses a ROLAP engine to combine a star-schema database into a multidimensional data structure, and uses an improved K-means algorithm to classify and aggregate uncached data in the database. Combined with a linear regression algorithm, the rate of change of the data is computed to realize the early-warning function of the monitoring system. Simulation results show that the new data processing system can mine more data and information, and the early warning time is significantly improved compared with the traditional alarm mode.
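A minimal sketch of the pipeline described above: cluster monitoring records with K-means, then fit a linear trend to recent readings and raise an early warning when the rate of change exceeds a threshold; the data and thresholds are illustrative (requires scikit-learn):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

readings = np.random.default_rng(1).normal(50, 2, (100, 3))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(readings)

recent = np.array([50.1, 50.9, 52.0, 53.2, 54.5])       # one sensor channel
t = np.arange(len(recent)).reshape(-1, 1)
slope = LinearRegression().fit(t, recent).coef_[0]      # rate of change
if slope > 1.0:                                          # warning threshold
    print(f"early warning: trend {slope:.2f} units/step")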
Novel Method of Web Page Segmentation Based on Title Machine Learning
LI Jin-sheng, LE Hui-xiao, TONG Ming-wen
Computer Science. 2018, 45 (6A): 583-587. 
Abstract PDF(1581KB) ( 1373 )   
References | RelatedCitation | Metrics
To address the difficulty of implementing web page segmentation based on the document object model (DOM), a novel method employing a string model was proposed. The features of web page titles are learned by machine learning, and pages are segmented around the found titles. Firstly, titles in web pages are extracted using line-block information and title tags. Secondly, web pages are partitioned into content blocks using the titles. Finally, the content blocks are merged using block depth information. It is proved that the complexity of the algorithms in the method is O(n), and the method is suitable for web pages of university portals, blogs and resource sites. The method is useful for many applications in web page information management and has good prospects.
Improved Difference Algorithm and Its Application in QRS Detection
PENG Yan, WU Zhao-qiang, ZHANG Jing-kuo, CHEN Run-xue
Computer Science. 2018, 45 (6A): 588-590. 
Abstract PDF(1564KB) ( 1064 )   
References | RelatedCitation | Metrics
An improved difference threshold algorithm was used to detect the QRS complex in electrocardiograms. Unlike traditional adaptive algorithms, this algorithm can precisely locate the QRS complex under strong interference, with a detection error rate under 1%, a small amount of calculation and strong real-time performance. Through extensive practice, the implementation is divided into three steps. Firstly, the QRS complex is detected through a combination of the first and second derivatives. Secondly, the Q, R and S peak positions are confirmed through an adaptive threshold. Thirdly, the positions of the P wave and T wave are determined by a morphological method based on the above parameters.
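A minimal sketch of difference-threshold QRS detection in the spirit of the steps above: first and second differences of the ECG are combined into a slope feature, and samples exceeding an adaptive threshold are grouped into beats. The coefficients, threshold rule and refractory period are illustrative:

import numpy as np

def detect_qrs(ecg, fs=360):
    d1 = np.diff(ecg, n=1)                       # first derivative
    d2 = np.diff(ecg, n=2)                       # second derivative
    feat = np.abs(d1[:-1]) + np.abs(d2)          # combined slope feature
    thresh = 0.5 * feat.max()                    # simple adaptive threshold
    above = np.where(feat > thresh)[0]
    # Collapse consecutive detections into single beats (~200 ms refractory).
    beats, last = [], -fs
    for i in above:
        if i - last > 0.2 * fs:
            beats.append(i)
        last = i
    return beats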
Design and Implementation of Network Subscription System Based on Android Platform
HAO Jun-sheng, LI Bing-feng, CHEN Xi, GAO Wen-juan
Computer Science. 2018, 45 (6A): 591-594. 
Abstract PDF(1552KB) ( 708 )   
References | RelatedCitation | Metrics
To alleviate congestion, dining difficulty and long waits during peak dining periods at colleges, this paper designed and developed a campus network ordering system based on the Android platform. The system consists of four parts: user management, online ordering, online payment and order sequencing. To determine whether the number of orders per unit time has reached the maximum capacity of a serving window, this paper used the K-means algorithm to characterize customers' queuing times, extract the similarity between queuing times for different dishes, and establish a maximum-capacity standard for each window, which is taken as the maximum number of user orders allowed per unit time. The proposed method can alleviate congestion through the mode of "users choosing a time freely, meals prepared in advance, and meals picked up on time".
Tableware Sorting System Based on LabVIEW Machine Vision
ZHANG Wen-yong, CHEN Le-zhu
Computer Science. 2018, 45 (6A): 595-597. 
Abstract PDF(1638KB) ( 1241 )   
References | RelatedCitation | Metrics
With the rapid development of the domestic industrial robot industry, intelligent equipment is being applied in engineering practice. Machine vision, as the eyes of the robot, enjoys diverse applications in condition monitoring, inspection and quality control, and has developed remarkably in recent years. Taking the sorting of tableware as the research object, this system adopts NI's LabVIEW software as the development environment, invoking the rich specialized controls and function libraries of the vision development kits IMAQ Vision and Vision Assistant. Taking the special conditions of dish classification into account, this paper designed a machine vision application system that is easy to use and integrates image acquisition with image processing, visual inspection and judgment. The LabVIEW-based machine vision system implements the function of sorting dishes and solves several problems in practical application, laying a good foundation for further research and development and greatly improving the accuracy and efficiency of tableware classification.
Development of Real 3D Display System Based on Light Field Scanning
ZENG Chong, GUO Hua-long, ZENG Zhi-hong, ZHAO Juan
Computer Science. 2018, 45 (6A): 598-600. 
Abstract PDF(1540KB) ( 1125 )   
References | RelatedCitation | Metrics
In recent years, light field 3D display technology, an innovative 3D display technology, has been proposed in the field of computer vision. This work presents a real 3D display system based on light field scanning: by reconstructing the spatial distribution of light intensity, redundant information is reduced. The system employs a high-speed projector, a directional scattering reflector and high-speed spinning motors. The feasibility of the display system is analyzed based on the principles of light field 3D display, the system structure and stereo imaging. Experiments prove that when the output power of the projector and the rotary power of the motor reach a certain level, 3D imaging is achieved: without any goggles or tools, observers can watch 360-degree 3D images with the naked eye.
Design of Storage Platform for Large Scale Data Based on SWIFT System
LI Peng-yuan, ZHANG Zhi-yong
Computer Science. 2018, 45 (6A): 601-605. 
Abstract PDF(1579KB) ( 738 )   
References | RelatedCitation | Metrics
With the rapid development of China's space activities, storing massive data on a large-scale data storage platform becomes increasingly important. This paper presented a cloud storage solution based on the distributed storage system SWIFT and built the infrastructure architecture for this storage platform. The design of SWIFT mainly includes four parts: the hash process of data storage, the Ring, Partitions, and the Replica policy. The validity of SWIFT's key design was verified through data simulation.
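A minimal sketch of the hash/partition step at the heart of SWIFT's Ring: an object's path is hashed with MD5 and the top bits select a partition, which the ring then maps to replica devices. The partition power and device list are illustrative, and real rings also handle zones, weights and rebalancing:

import hashlib

PART_POWER = 8                       # 2**8 = 256 partitions
DEVICES = [f"dev{i}" for i in range(4)]

def partition_for(account, container, obj):
    path = f"/{account}/{container}/{obj}".encode()
    h = int(hashlib.md5(path).hexdigest(), 16)
    return h >> (128 - PART_POWER)   # keep the top PART_POWER bits

def replicas_for(part, n_replicas=3):
    # Toy placement on consecutive devices; Swift's ring builder does this
    # with balance and failure domains in mind.
    return [DEVICES[(part + i) % len(DEVICES)] for i in range(n_replicas)]

part = partition_for("AUTH_demo", "photos", "img001.jpg")
print(part, replicas_for(part))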