Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
Volume 43, Issue 12, 01 December 2016
Survey of Target Tracking Algorithms Based on Machine Learning
CAO Dong, FU Cheng-yu and JIN Gang
Computer Science. 2016, 43 (12): 1-7, 35.  doi:10.11896/j.issn.1002-137X.2016.12.001
Theories and algorithms based on machine learning have become an important direction in the development of video target tracking. On-line learning, which continuously learns from and updates samples to adapt to the background environment and changes of the target, performs better in target tracking. According to the characteristics of the algorithms, on-line learning methods are divided into ensemble learning, discriminant learning and kernel learning methods. Detailed descriptions of the representative methods of each class were presented. Finally, the challenges of applying machine learning to target tracking and some interesting research trends were pointed out.
Review on Methods of Operation Planning
CHENG Kai, CHEN Gang, ZHANG Pin and YIN Cheng-xiang
Computer Science. 2016, 43 (12): 8-12, 23.  doi:10.11896/j.issn.1002-137X.2016.12.002
The quality of an operational plan determines the success or failure of a war. Course-of-action generation is the key step of planning and is widely studied by domestic and foreign researchers. Currently, operation plan generation faces the problem that the space of action states can neither effectively handle the uncertainties affecting the implementation of the plan nor satisfy the nonlinear and uncertain demands of modern war. This paper therefore summarized progress in the related research fields from the perspectives of classic planning and operational planning. For the operational planning problem in particular, traditional, effect-based and uncertain course-of-action generation methods were discussed systematically. Then the main research directions of these fields were pointed out, which is of significance for operational planning.
State-of-the-art on Deep Learning and its Application in Image Object Classification and Detection
LIU Dong, LI Su and CAO Zhi-dong
Computer Science. 2016, 43 (12): 13-23.  doi:10.11896/j.issn.1002-137X.2016.12.003
Traditional algorithms and strategies for image object classification and detection can hardly meet the challenges of efficiency, performance and intelligence posed by the processing of big image and video data. By simulating the hierarchical structure of the human brain, deep learning can establish mappings between low-level signals and high-level semantics to achieve a hierarchical expression of data characteristics. With its powerful ability for visual information processing, deep learning has become the cutting-edge technology and research hotspot for coping with this challenge. This paper first discussed the basic theory of deep learning. Then, centering on image object classification and detection, it summarized the recent development of deep learning in the visual field. Finally, deep learning, its current problems in the visual field and subsequent research directions were discussed.
Review of Concept Drift Data Streams Mining Techniques
DING Jian, HAN Meng and LI Juan
Computer Science. 2016, 43 (12): 24-29, 62.  doi:10.11896/j.issn.1002-137X.2016.12.004
Data stream is a data model proposed in recent years, with characteristics such as being dynamic, infinite, high-dimensional, ordered, high-speed and evolving. In some data stream applications, the information embedded in the data evolves over time; this is known as concept drift or change, and such streams are called evolving data streams or concept drift data streams. Algorithms that mine data streams therefore operate under space and time restrictions and need to adapt to change automatically. This paper surveyed concept drift as well as classification, clustering and pattern mining on concept drift data streams. First, the types of concept drift and its detection methods were introduced. To deal with concept drift, the sliding window model is commonly used to mine the data stream. Data stream classification models include single models and ensemble models, with common methods such as decision trees and classification association rules. Data stream clustering methods can be divided into k-means-based methods and others. Pattern mining can provide useful patterns for classification, clustering, association rules and so on; patterns include frequent patterns, sequential patterns, episodes, sub-trees, sub-graphs, high-utility patterns and so on. Finally, frequent patterns and high-utility patterns were introduced in detail.
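The sliding-window idea the survey highlights can be sketched in a few lines of Python; the window length, threshold and 0/1 error-stream interface below are illustrative choices, not taken from any particular surveyed algorithm:

```python
from collections import deque

def detect_drift(stream_errors, window=30, threshold=0.15):
    """Flag concept drift whenever the error rate inside a sliding
    window exceeds the long-run error rate by `threshold`."""
    recent = deque(maxlen=window)   # most recent 0/1 prediction errors
    total_err, total_n = 0, 0
    drift_points = []
    for i, err in enumerate(stream_errors):
        recent.append(err)
        total_err += err
        total_n += 1
        if len(recent) == window:
            window_rate = sum(recent) / window
            global_rate = total_err / total_n
            if window_rate - global_rate > threshold:
                drift_points.append(i)
    return drift_points

# A stable stream followed by an abrupt concept change:
points = detect_drift([0] * 100 + [1] * 100)
```

Real detectors such as DDM or ADWIN use statistically grounded thresholds; the fixed margin here only conveys the mechanism.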
Chinese Real-word Error Automatic Proofreading Based on Combining of Local Context Features
LIU Liang-liang and CAO Cun-gen
Computer Science. 2016, 43 (12): 30-35.  doi:10.11896/j.issn.1002-137X.2016.12.005
Similar to context-sensitive spelling correction in English, a real-word error in Chinese refers to one Chinese word being misused as another. In this paper, a Chinese real-word error detection and correction method based on confusion sets was proposed. The method extracts local features around the target word, forming the left adjacent bigram, the right adjacent bigram and three trigrams. The bigram and trigram probabilities are computed with the confusion words in the target word's confusion set. A model based on multi-feature fusion was proposed, and rules were used to find the real-word errors. The results are classified into two types: marking the errors and rewriting the errors. In the experiments, 18 groups of confusion sets and a corpus of 20,000 sentences were used to validate the algorithm. The results show that the proposed method can find real-word errors in Chinese texts and give correction lists, combining automatic error detection with automatic error correction.
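The scoring of confusion-set candidates against the local context can be illustrated as follows; the tokens are English for readability, and the add-one smoothing and product-of-bigrams score are assumptions, since the abstract does not spell out the exact probability model:

```python
from collections import Counter

def best_candidate(left, target, right, confusion_set, bigram_counts):
    """Score the target word and each confusion candidate by the
    product of its (add-one smoothed) left and right bigram counts.
    The original word wins ties, so a correction is only suggested
    when a candidate clearly fits the local context better."""
    def score(w):
        return (bigram_counts[(left, w)] + 1) * (bigram_counts[(w, right)] + 1)
    candidates = [target] + [w for w in confusion_set if w != target]
    return max(candidates, key=score)

bigrams = Counter({("power", "plant"): 9, ("plant", "output"): 7})
fixed = best_candidate("power", "plan", "output", {"plan", "plant"}, bigrams)
```

The full method additionally fuses trigram features and rules; this sketch shows only the bigram component.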
Uyghur Keyword Extraction and Text Classification Based on TextRank Algorithm and Mutual Information Similarity
Ghalip ABDUKERIM and LI Xiao
Computer Science. 2016, 43 (12): 36-40.  doi:10.11896/j.issn.1002-137X.2016.12.006
This paper proposed a Uyghur keyword extraction and text classification scheme based on the TextRank algorithm and mutual information similarity, targeting the classification of Uyghur-language text. First, the input document is pre-processed to filter out non-Uyghur characters and stop words. Then the keyword set of the text is extracted with a TextRank algorithm that is weighted by the semantic similarity, position and frequency importance of words. Finally, the similarity between the keyword set of the input text and the keyword sets of the various classes is measured by mutual information similarity, realizing text classification. Experimental results show that this scheme can extract keywords efficiently, and the average classification rate reaches 91.2% when the keyword set size is 1250.
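The weighted TextRank at the core of the scheme can be sketched as below; the damping factor, iteration count and toy word graph are illustrative, and the edge weights stand in for the semantic-similarity/position/frequency weighting described above:

```python
def textrank(neighbors, weights, d=0.85, iters=50):
    """Weighted TextRank: rank(v) = (1-d) + d * sum over in-neighbors u
    of w(u,v)/out_weight(u) * rank(u), iterated to (near) convergence."""
    ranks = {v: 1.0 for v in neighbors}
    out_sum = {u: sum(weights[(u, v)] for v in neighbors[u]) or 1.0
               for u in neighbors}
    for _ in range(iters):
        new = {}
        for v in neighbors:
            s = sum(weights[(u, v)] / out_sum[u] * ranks[u]
                    for u in neighbors if v in neighbors[u])
            new[v] = (1 - d) + d * s
        ranks = new
    return ranks

# Tiny undirected word graph: "b" co-occurs with both "a" and "c".
neighbors = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
weights = {("a", "b"): 2.0, ("b", "a"): 2.0, ("b", "c"): 1.0, ("c", "b"): 1.0}
ranks = textrank(neighbors, weights)
```

The best-connected word ("b") receives the highest rank, which is the behavior keyword extraction relies on.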
Study on Microblog Propagation Model Based on Analysis of User Behavior
ZHENG Zhi-yun, GUO Fang, WANG Zhen-fei and LI Dun
Computer Science. 2016, 43 (12): 41-45, 70.  doi:10.11896/j.issn.1002-137X.2016.12.007
With the rapid rise of microblogging platforms such as Twitter and their continually growing influence, extracting the characteristics of microblog information dissemination and building propagation models have become hot research topics. Focusing on user forwarding behavior, the information transmission mechanism was analyzed first. Then a model was established according to eight factors that affect user behavior, extracted from four aspects: the publishing user, the receiving user, user intimacy and information timeliness. On this basis, the SCIR model was presented from the user behavior analysis and its dynamic equations were given. Finally, the rationality of the model was validated on real forwarding data. The results show that forwarding prediction which considers the influence factors of user behavior, combined with the behavior analysis, can fit the information dissemination process well.
Self-adaptive Genetic Algorithm Based on Intuitionistic Fuzzy Niche for Solving Traveling Salesman Problem
MEI Hai-tao, WANG Yi and HUA Ji-xue
Computer Science. 2016, 43 (12): 46-49, 78.  doi:10.11896/j.issn.1002-137X.2016.12.008
An improved niche algorithm based on the distance measure of intuitionistic fuzzy sets and a self-adaptive fuzzy genetic algorithm was proposed. The distance measure of intuitionistic fuzzy sets and the individual fitness during genetic optimization are used to measure the similarity of individuals, and individuals with low fitness are eliminated by the sharing function and penalty function, which enhances the diversity of the population. Furthermore, a fuzzy control system is established to adjust the crossover and mutation rates adaptively, so that the algorithm balances local and global search capability and avoids premature convergence and poor search efficiency in the later period. Simulation results on a series of TSPLIB instances show that the proposed method has advantages in convergence speed, optimization precision and efficiency.
Keyword Extraction Algorithm Based on Length and Frequency of Words or Phrases for Short Chinese Texts
CHEN Wei-he and LIU Yun
Computer Science. 2016, 43 (12): 50-57.  doi:10.11896/j.issn.1002-137X.2016.12.009
Keyword extraction for Chinese text is an important and difficult part of text processing research, especially in natural language processing. Most existing studies focus on English text or long Chinese text, and keyword extraction algorithms designed for English text are unsuitable for Chinese due to inherent differences between the languages. The focus of this paper is how to accurately extract words or phrases from Chinese text that are meaningful and closely related to the topic of the paragraph. This paper presented a novel keyword extraction algorithm for Chinese texts based on the length and frequency of words or phrases. The algorithm first extracts words or phrases with high frequency in the paragraph, then calculates the weight of each word or phrase according to its frequency and length; finally, keywords are filtered according to their weights. The algorithm can accurately extract the relatively important words or phrases from Chinese text, which helps to identify the theme of a passage efficiently. Experimental results show that, compared with other keyword extraction algorithms, the proposed algorithm processes Chinese text with higher accuracy.
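A frequency-times-length weighting is the simplest instance of the idea described above; the exact weighting formula in the paper may differ, and the candidate terms here are English placeholders for segmented Chinese words or phrases:

```python
from collections import Counter

def extract_keywords(terms, top_n=3):
    """Weight each candidate term by frequency * length, so longer
    phrases need fewer occurrences to surface, and keep the top n."""
    freq = Counter(terms)
    weighted = {t: c * len(t) for t, c in freq.items()}
    return [t for t, _ in sorted(weighted.items(),
                                 key=lambda kv: kv[1], reverse=True)[:top_n]]

terms = ["data", "data", "data", "stream mining", "stream mining", "model"]
keywords = extract_keywords(terms)
```

Here "stream mining" (2 occurrences, length 13, weight 26) outranks the more frequent but shorter "data" (3 occurrences, length 4, weight 12).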
Evidence Combination Rule Based on Vector Conflict Representation Method
LI Jun-wei and LIU Xian-xing
Computer Science. 2016, 43 (12): 58-62.  doi:10.11896/j.issn.1002-137X.2016.12.010
To resolve the counter-intuitive behaviors produced by the Dempster combination rule when combining highly conflicting evidence, a new improved Dempster combination rule based on a vector conflict representation (VCRD) method was proposed. First, the deficiencies of the conflicting belief measure and the Jousselme distance are analyzed through examples. Then the degree of conflict between pieces of evidence is measured using the similarities and differences of the evidence vectors, and the evidence is amended according to weights computed from the conflict degrees. Finally, the modified mass functions are combined by the Dempster combination rule. Theoretical analysis and numerical examples show that, compared with the Dempster combination rule and other improved methods, the VCRD combination rule combines highly conflicting evidence rationally and reduces decision risk.
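For reference, the classical Dempster rule that VCRD builds on can be written compactly; the two-hypothesis frame and the mass values below are a made-up example, not from the paper:

```python
from itertools import product

def dempster(m1, m2):
    """Dempster's rule: combine two mass functions whose focal elements
    are frozensets, renormalizing by 1 - K, where K is the total mass
    assigned to empty intersections (the conflict)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: rule undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

A, B = frozenset("A"), frozenset("B")
m = dempster({A: 0.9, B: 0.1}, {A: 0.6, B: 0.4})
```

VCRD's contribution is to weight and amend the mass functions *before* this combination step, which is where the classical rule misbehaves under high conflict.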
Accelerated Attribute Reduction Algorithm Based on Probabilistic Rough Sets
LIU Fang and LI Tian-rui
Computer Science. 2016, 43 (12): 63-70.  doi:10.11896/j.issn.1002-137X.2016.12.011
A heuristic attribute reduction algorithm based on probabilistic rough sets was introduced. Incremental approaches for computing the probabilistic approximation accuracy and the modified probabilistic approximation accuracy in probabilistic rough sets were presented. The attribute core is obtained by comparing the updated values of the probabilistic approximation accuracy, and the attribute reduction is then gradually obtained by comparing the updated values of the modified probabilistic approximation accuracy. Finally, a fast algorithm for calculating the attribute core and attribute reduction based on probabilistic rough sets was developed, and the effectiveness and feasibility of the proposed accelerated algorithm were validated by illustrative examples.
Incrementally Updating Approximations Approach in Dominance-based Rough Set for Multi-criteria Classification Problems
LI Yan, JIN Yong-fei, WU Ting-ting, GUO Na-na and YU Qun
Computer Science. 2016, 43 (12): 71-78.  doi:10.11896/j.issn.1002-137X.2016.12.012
In the framework of the dominance-based rough set approach (DRSA), dominance relations are used to handle the preference-ordered attributes contained in data, which are also called criteria. DRSA has been widely used in multi-criteria decision-making problems. In real applications, however, due to variations of the attribute set and object set, information systems are often updated over time. In such dynamic environments, the approximation sets in DRSA need to be updated correspondingly for later use in feature reduction, rule extraction and, finally, decision-making. Focusing on multi-criteria classification problems, this paper developed incremental methods to update set approximations when an object is inserted or deleted. The updating principles in different cases were discussed and the related theoretical results were given with detailed proofs. Two incremental algorithms, DRSA1 and DRSA2, were proposed to update the approximation sets when an object is deleted or inserted, respectively. Illustrative examples support the effectiveness of the proposed incremental methods, and experimental results on UCI data sets demonstrate an obvious improvement over the non-incremental method (classic DRSA) in efficiency and scalability.
Effective Algorithm for Computing Attribute Core Based on Binary Representation
HU Shuai-peng, ZHANG Qing-hua and YAO Long-yang
Computer Science. 2016, 43 (12): 79-83, 107.  doi:10.11896/j.issn.1002-137X.2016.12.013
Computing the partition of the universe with respect to the condition attributes (U/C) and searching for the attribute core are the most critical and time-consuming computations in knowledge discovery based on rough sets. Generally, they are performed by comparing every attribute value of each object. In this paper, based on a binary representation, the "sum" of all the condition attributes is computed first. By comparing these "sums" once and judging whether they are repeated, U/C can be obtained with time complexity O(|C||U|). This method for computing U/C was then used to design a new efficient algorithm for quickly computing the attribute core, whose time complexity is O(|C||U|) whether the information system is consistent or not. An example was used to illustrate the detailed steps of the proposed algorithms. Finally, experimental results show that the new algorithms are both exact and efficient.
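The one-pass computation of U/C can be sketched with hashing, which achieves the same O(|C||U|) grouping as comparing the binary "sums" described above (the paper's encoding differs in detail); the toy decision table is illustrative:

```python
def partition_UC(table):
    """Compute U/C in one pass: objects with identical condition
    attribute tuples fall into the same equivalence class."""
    classes = {}
    for idx, row in enumerate(table):
        classes.setdefault(tuple(row), []).append(idx)
    return list(classes.values())

# Four objects described by two condition attributes:
table = [(1, 0), (1, 0), (0, 1), (1, 1)]
blocks = partition_UC(table)
```

Objects 0 and 1 share the same attribute tuple and form one equivalence class; each pass over the table touches every attribute of every object exactly once, hence O(|C||U|).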
Fuzzy Rough Set Model Based on OWA Operator in Fuzzy Information System
YANG Ji-lin and QIN Ke-yun
Computer Science. 2016, 43 (12): 84-87.  doi:10.11896/j.issn.1002-137X.2016.12.014
In a fuzzy information system, an attribute value is not an exact value but a membership function. Therefore, the differences between objects over all attributes are aggregated by the ordered weighted averaging (OWA) operator, which characterizes the similarity of objects. The similarity degree of objects was then defined and its related properties were discussed. According to the similarity degree, the membership degrees of an object in the lower and upper approximations were given via logic relations and function operations. Finally, experimental results show that the similarity of objects can be accurately characterized by the similarity degree, and that the membership degrees describe each object's belonging to the lower and upper approximation sets more intuitively and reasonably, making the description of the rough set more reasonable.
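The OWA operator itself is simple: the inputs are reordered before the weighted sum, so the weights attach to positions (largest, second largest, ...) rather than to particular attributes. A minimal sketch, with made-up example values:

```python
def owa(values, weights):
    """Ordered weighted averaging: sort the values in descending
    order, then take the weighted sum with position-based weights
    (weights must sum to 1)."""
    assert len(values) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Aggregate per-attribute similarity scores of two objects:
sim = owa([0.2, 0.9, 0.5], [0.5, 0.3, 0.2])
```

With weights concentrated on the first positions, the aggregation emphasizes the attributes on which the objects are most similar; 0.5*0.9 + 0.3*0.5 + 0.2*0.2 = 0.64.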
Group Decision-making Method Research Based on Time-series Fuzzy Soft Sets of Attribute Weights
ZHANG Qi-wen and XIE Yan-zhao
Computer Science. 2016, 43 (12): 88-90, 96.  doi:10.11896/j.issn.1002-137X.2016.12.015
Aiming at the problems that the attribute weights of fuzzy soft sets are often ignored, or determined by subjective experience, in group decision-making, we proposed a method for determining attribute weights based on the attribute dominance degree and discussed its related properties and operations. Since decision-making information varies with time during group decision-making, the concept of time-series fuzzy soft sets was defined and a logarithmic-growth time-weight formula based on the decision-making time difference was established. Finally, comparative analysis with other decision-making methods verified the feasibility and rationality of the approach.
Sparse Feature Learning for Restricted Boltzmann Machine
KANG Li-ping, XU Guang-luan and SUN Xian
Computer Science. 2016, 43 (12): 91-96.  doi:10.11896/j.issn.1002-137X.2016.12.016
As a basic model for deep learning algorithms, the restricted Boltzmann machine (RBM) is widely applied in machine learning. However, the traditional RBM algorithm does not take full account of sparse feature learning, so its performance is strongly influenced by the sparsity of the dataset. In this study, a sparse feature learning method for the restricted Boltzmann machine (sRBM) was proposed. First, the sparsity coefficient of the dataset is determined by the mean of the normalized input data. Dense datasets whose sparsity coefficient is greater than a threshold are then converted to sparse datasets automatically. As a result, sRBM makes the input data sparse without information loss. Experiments on the MNIST dataset and an attribute discovery dataset show that sRBM effectively improves the sparse feature learning performance of the RBM.
Imbalanced Data Classification Method Based on Support Vector Over-sampling
Computer Science. 2016, 43 (12): 97-100.  doi:10.11896/j.issn.1002-137X.2016.12.017
The traditional support vector machine has drawbacks in dealing with imbalanced data. To improve the recognition accuracy of the minority class, an over-sampling method based on support vectors was proposed. First, the K-nearest-neighbor technique is used to remove noise from the original data set, and support vector machine learning is then used to obtain the support vectors. Noise obeying a certain rule is added to each support vector of the minority class to increase the number of minority-class samples and obtain a relatively balanced data set. Finally, the support vector machine is trained on the new data set. Experimental results show that the proposed method is effective on both artificial data sets and UCI standard data sets.
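The synthesis step, generating new minority samples by adding controlled noise around the minority-class support vectors, can be sketched as below. The Gaussian noise scale and the plain-list feature representation are assumptions, and the KNN filtering and SVM training steps are omitted:

```python
import random

def oversample_support_vectors(support_vecs, n_new, sigma=0.05, seed=0):
    """Synthesize minority-class samples by jittering each
    minority-class support vector with small Gaussian noise,
    cycling through the support vectors until n_new are produced."""
    rng = random.Random(seed)
    synthetic = []
    for i in range(n_new):
        base = support_vecs[i % len(support_vecs)]
        synthetic.append([x + rng.gauss(0.0, sigma) for x in base])
    return synthetic

# Two minority-class support vectors in a 2-D feature space:
svs = [[1.0, 2.0], [1.5, 1.8]]
new = oversample_support_vectors(svs, 6)
```

Jittering support vectors (rather than arbitrary minority points) concentrates the new samples near the decision boundary, which is where the minority class is under-represented.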
Sliding-window Based Topic Modeling
CHANG Dong-ya, YAN Jian-feng, YANG Lu and LIU Xiao-sheng
Computer Science. 2016, 43 (12): 101-107.  doi:10.11896/j.issn.1002-137X.2016.12.018
LDA (Latent Dirichlet Allocation) is an important hierarchical Bayesian model for probabilistic topic modeling and underlies many important applications of text mining. The model takes neither the order of documents nor the order of words within a document into account, which simplifies the problem but also leaves room for improvement. To this end, a sliding-window based topic model was proposed. Its fundamental idea is that the topic of a word in a specific document is strongly related to, and mainly affected by, nearby words. By adjusting the window size and sliding step, a document is cut into smaller pieces. For big datasets and data streams, an online sliding-window topic model was also proposed. Experiments show that the sliding-window based topic model achieves better generalization performance and accuracy on four common datasets.
Utilizing Tri-training Algorithm to Solve Cold Start Problem in Recommender System
ZHANG Xu-chen
Computer Science. 2016, 43 (12): 108-114.  doi:10.11896/j.issn.1002-137X.2016.12.019
With the development of social networks, recommender systems are becoming more and more important, and cold start is one of their most important problems. A context-based semi-supervised learning framework, TSEL, was designed. We extended the matrix factorization model SVD to support more kinds of context information, and used the Tri-training framework to train the individual models. Compared with other methods for the cold start problem (e.g., Co-training), the algorithm performs better. The Tri-training framework can incorporate more recommender models and has good extensibility. We further extended Tri-training with a user-activeness-based algorithm for generating unlabeled teaching sets, and proposed additional models that extend matrix factorization. The algorithm was evaluated on a real-world dataset, MovieLens, and achieved better performance.
Research on Problem Classification Method Based on Deep Learning
LI Chao, CHAI Yu-mei, NAN Xiao-fei and GAO Ming-lei
Computer Science. 2016, 43 (12): 115-119.  doi:10.11896/j.issn.1002-137X.2016.12.020
Question classification is an important part of a question answering system, but at present it requires hand-crafted feature extraction strategies and continuous optimization of feature rules. Deep learning is feasible for question classification because it learns question features by itself to represent and understand the question, avoiding hand-designed features and reducing labor costs. For question classification, the long short-term memory (LSTM) model and the convolutional neural network (CNN) model were improved, and their advantages were combined into a new learning framework (LSTM-MFCNN) to strengthen the semantic learning of word order and of deep features. Experimental results show that the proposed method performs well without hand-crafted feature rules, reaching an accuracy of 93.08%.
Online LDA on Dynamic Vocabulary
ZHANG Jian-wei, YAN Jian-feng, LIU Xiao-sheng and YANG Lu
Computer Science. 2016, 43 (12): 120-124, 134.  doi:10.11896/j.issn.1002-137X.2016.12.021
Most current online LDA algorithms are based on a fixed vocabulary. In practice the vocabulary may not match the processed corpus, which harms the precision of LDA. To solve this problem, we let the topic-word distributions follow a Dirichlet process (DP) and re-derived the model under the framework of the BP algorithm, so that the vocabulary can start empty before the algorithm runs and new words can be added to it continually. Experimental results show that the new algorithm makes the vocabulary match the corpus better, and the dynamic vocabulary gives it better perplexity and PMI than state-of-the-art fixed-vocabulary online algorithms.
Experimental Research on Effects of Random Weight Distributions on Performance of Extreme Learning Machine
ZHAI Jun-hai, ZANG Li-guang and ZHANG Su-fang
Computer Science. 2016, 43 (12): 125-129, 145.  doi:10.11896/j.issn.1002-137X.2016.12.022
The extreme learning machine (ELM) is an algorithm for training single-hidden-layer feed-forward neural networks (SLFNs). ELM first generates the input weights and hidden node biases randomly, and then determines the output weights analytically; it has fast learning speed and good generalization ability. Methods published in the literature usually initialize the input-layer weights and hidden node biases with a uniform distribution over the interval [-1,1], but the rationality of this setting has not been studied. This paper investigated the question experimentally, studying the effects of random weights drawn from the uniform, Gaussian and exponential distributions. We found that the random weight distribution does affect the performance of the extreme learning machine: for different problems or data sets, uniform random weights in [-1,1] are not necessarily optimal. These results can serve as a reference for researchers studying ELM.
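The initialization step under comparison can be sketched directly; the helper below draws ELM input weights from each of the three distributions studied (the unit-scale parameters are illustrative, and the downstream analytic solve for the output weights is omitted):

```python
import random

def init_weights(n_in, n_hidden, dist="uniform", seed=0):
    """Initialize an ELM's input-weight matrix (n_hidden x n_in)
    from one of the three random distributions compared in the
    experiments."""
    rng = random.Random(seed)
    draw = {"uniform":     lambda: rng.uniform(-1.0, 1.0),
            "gaussian":    lambda: rng.gauss(0.0, 1.0),
            "exponential": lambda: rng.expovariate(1.0)}[dist]
    return [[draw() for _ in range(n_in)] for _ in range(n_hidden)]

W = init_weights(3, 5, "uniform")
```

Because ELM never trains these weights, swapping the draw function is the entire intervention being evaluated; the rest of the pipeline is unchanged across the three settings.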
Terrorism Prediction Based on Bayes Method and Change Table
XUE An-rong, MAO Wen-yuan, WANG Meng-di and CHEN Quan-zhen
Computer Science. 2016, 43 (12): 130-134.  doi:10.11896/j.issn.1002-137X.2016.12.023
Traditional terrorism behavior prediction algorithms do not consider how a group will change its behaviors. CAPE predicts behavior changes according to context variation of organizations, but only for context changes that already exist in its change table. Considering the high dimensionality and small sample size of terrorism data, this paper proposed a terrorism prediction algorithm that improves the change table with a Bayes method, so as to predict organizational behavior for arbitrary behavior changes. Because the Bayes method classifies high-dimensional, small-sample data quickly and efficiently, both prediction precision and computational efficiency are improved. In addition, considering the continuing effect of changes in a group's context on its behavior, a weighted Bayes method with different time lags is used to predict the organization's behavior. Experiments on the data of multiple organizations from MAROB show that the proposed algorithm outperforms the CAPE algorithm in accuracy and time complexity.
Rough Set One-class Support Vector Machine Based on Within-class Scatter
ZHANG Bin and ZHU Jia-gang
Computer Science. 2016, 43 (12): 135-138, 172.  doi:10.11896/j.issn.1002-137X.2016.12.024
The classical rough one-class support vector machine (ROC-SVM) constructs a rough upper margin and a rough lower margin to deal with over-fitting based on rough set theory. However, when searching for the optimal separating hyper-plane, ROC-SVM ignores the within-class structure of the training data, which is very important prior knowledge. Thus, a rough-set one-class support vector machine based on within-class scatter (WSROC-SVM) was proposed. The algorithm exploits the within-class structure of the training data by minimizing its within-class scatter: it not only makes the margin between the origin and the training data in the high-dimensional space as large as possible, but also makes the training data cluster as tightly as possible around the rough upper margin. Experimental results on a synthetic dataset and UCI datasets indicate that the proposed method improves both the accuracy and the generalization of the results, and is advantageous in solving practical classification problems.
Novel Multi-scale Kernel SVM Method Based on Sample Weighting
SHEN Jian, JIANG Yun, ZHANG Ya-nan and HU Xue-wei
Computer Science. 2016, 43 (12): 139-145.  doi:10.11896/j.issn.1002-137X.2016.12.025
Multi-kernel learning has become a new research focus in kernel machine learning. By mapping data into a high-dimensional space, kernel methods increase the computational power of linear classifiers and are an effective way to analyze and classify nonlinear models. In some complex situations, nevertheless, single-kernel learning cannot fully satisfy the requirements of heterogeneous or irregular data, or of large samples with non-flat distributions, so multiple kernel functions are needed for better results. In this paper, we proposed a new SVM method for multi-scale kernel learning based on sample weighting, where weights are assigned according to the fitting abilities of kernel functions of different scales for the samples. Experimental analysis on several data sets shows that the proposed method attains better classification accuracy on each data set.
Algorithm for Mining Association Rules Based on Application Paths and Frequency Matrix
HU Bo, HUANG Ning and WU Wei-qiang
Computer Science. 2016, 43 (12): 146-152, 162.  doi:10.11896/j.issn.1002-137X.2016.12.026
Association rule mining is an important method for analyzing the associated faults of an airborne network and improving the efficiency of fault diagnosis. This paper analyzed the limitations of the classical Apriori algorithm and proposed an efficient association rule mining algorithm based on knowledge of the airborne network, matrix operations and frequent item sets. Because airborne network faults are associated along application paths, a block-mining strategy was proposed to isolate noise during mining. With the concepts of the frequency matrix and feature vector, five scanning strategies were proposed, reducing the number of cycles and comparison operations. Compared with the classical Apriori algorithm, the new algorithm effectively improves the search efficiency for frequent itemsets.
Overlapping Community Recognition Algorithm of Weighted Networks Based on Gravity Factor
LIU Bing-yu, WANG Cui-rong, WANG Cong and YUAN Ying
Computer Science. 2016, 43 (12): 153-157.  doi:10.11896/j.issn.1002-137X.2016.12.027
Abstract PDF(430KB) ( 99 )   
Recognizing communities in complex social networks by mining big data can support quantitative research on economic, political and demographic problems, and community recognition algorithms have become a hot research topic. This paper focused on overlapping community discovery and proposed GWCR, an overlapping community detection algorithm based on a gravity factor for weighted networks. The GWCR algorithm first selects the node with the largest gravity factor as the center node, then uses the gravity factor between a node and the center node as a membership measure: a node whose gravity factor exceeds a threshold is included in the community. Finally, overlapping communities are discovered by identifying overlapping nodes. Experimental results on three real network datasets show that, compared with conventional overlapping community detection algorithms, GWCR achieves a higher modularity value.
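A minimal sketch of a gravity-style factor, assuming the usual Newtonian analogy (product of node strengths over squared distance); the paper's exact GWCR definition may differ:

```python
def node_strength(adj, u):
    """Strength of node u in a weighted graph: sum of incident edge weights.
    `adj` maps node -> {neighbor: weight}."""
    return sum(adj[u].values())

def gravity_factor(adj, u, v, distance=1.0):
    """Gravity-style attraction between nodes u and v: product of their
    strengths over squared distance, by analogy with Newtonian gravity.
    This is an illustrative form, not the paper's exact formula."""
    return node_strength(adj, u) * node_strength(adj, v) / distance ** 2
```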
Item-based Collaborative Filtering Algorithm Integrating User Activity and Item Popularity
WANG Jin-kun, JIANG Yuan-chun, SUN Jian-shan and SUN Chun-hua
Computer Science. 2016, 43 (12): 158-162.  doi:10.11896/j.issn.1002-137X.2016.12.028
Abstract PDF(425KB) ( 146 )   
Item correlation computation is the most critical component of item-based collaborative filtering. The traditional correlation computation scheme is challenged by sparse data sets and by the need to recommend unpopular products. In this paper, a novel item-based collaborative filtering algorithm that incorporates user activity and item popularity was proposed. The proposed scheme decreases the correlation between items by using user activity and item popularity in those rating records where only one of the items is rated. In this way, unpopular products can be recommended to users even on sparse data. Experimental evaluation shows that the diversity and novelty of the recommendation list can be improved while maintaining prediction accuracy.
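One plausible reading of the activity/popularity damping, sketched with the common 1/log(1+|I_u|) activity weight; both the weight and the popularity normalization are assumptions, not the paper's exact formula:

```python
import math

def item_similarity(ratings):
    """Item-item similarity damped by user activity and item popularity.
    `ratings` maps user -> set of rated items."""
    pop = {}                 # item popularity: number of raters
    co = {}                  # activity-damped co-rating counts
    for user, items in ratings.items():
        w = 1.0 / math.log(1.0 + len(items))   # active users count less
        for i in items:
            pop[i] = pop.get(i, 0) + 1
            for j in items:
                if i != j:
                    co[(i, j)] = co.get((i, j), 0.0) + w
    # normalize by popularity so hot items do not dominate
    return {p: c / math.sqrt(pop[p[0]] * pop[p[1]]) for p, c in co.items()}
```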
Personalized Location Recommendation Algorithm Research Based on User Check-ins and Geographical Properties
CAI Hai-ni, CHEN Cheng, WEN Jun-hao, WANG Xi-bin and ZENG Jun
Computer Science. 2016, 43 (12): 163-167, 178.  doi:10.11896/j.issn.1002-137X.2016.12.029
Abstract PDF(498KB) ( 51 )   
Location recommendation algorithms based on LBSNs (Location-Based Social Networks) usually consider only a single factor and cannot effectively recommend locations for users across different cities. Synthesizing potential social influence, content-match influence and geographical-property influence, the personalized location recommendation algorithm SCL (Social-Content-Location) based on user check-ins and geographical properties was proposed. The SCL algorithm introduces a comparison of users' interest features into collaborative filtering, improving the user similarity measure. When the content information of a location is analyzed, users' comments on the location are integrated, which alleviates the effect of short location labels on LDA (Latent Dirichlet Allocation) topic extraction and improves the accuracy of the extracted user-interest and city-preference topics. The experimental results show that, in recall on the residence city, SCL outperforms the collaborative filtering algorithm U by nearly 65% and the LCA-LDA algorithm by nearly 30%; in recall on a new city, SCL outperforms LCA-LDA by nearly 26%, which shows that SCL is feasible for location recommendation across different cities.
Social Tagging Recommendation Model Based on Improved Artificial Fish Swarm Algorithm and Tensor Decomposition
ZHANG Hao, HE Jie and LI Hui-zong
Computer Science. 2016, 43 (12): 168-172.  doi:10.11896/j.issn.1002-137X.2016.12.030
Abstract PDF(454KB) ( 82 )   
Folksonomy (social tagging) has gradually become an important way of organizing internet content, but the massive growth of data has produced an information overload problem. Meanwhile, traditional personalized recommendation algorithms based on the "user-item" relationship struggle with the three elements of "user-item-tag". Based on an improved basic artificial fish swarm algorithm, a clustering analysis method was proposed for the initial data set of a tag recommendation system (TRS), reducing the scale of the data the TRS must analyze. On this basis, considering both the element weights of the tag recommendation system and the user preference scores, and taking the weighted element weights and scores as tensor entries, a new weighted tensor model was established; the model is solved by a dynamically and incrementally updated tensor decomposition algorithm, completing the personalized recommendation. Finally, on two real experimental data sets, the proposed algorithm (FTA) was compared with two classic tag recommendation algorithms. The experimental results show that the FTA algorithm performs better in recall and precision.
Data Stream Classification Algorithm Based on Kappa Coefficient
XU Shu-liang and WANG Jun-hong
Computer Science. 2016, 43 (12): 173-178.  doi:10.11896/j.issn.1002-137X.2016.12.031
Abstract PDF(503KB) ( 163 )   
Data stream mining has become one of the hot topics in data mining. Because of concept drift, conventional classification algorithms cannot be directly applied in data stream environments. To deal with concept changes in data streams, an algorithm based on the Kappa coefficient was proposed. The approach uses ensemble classification and a weighted voting strategy to decide the labels of test sets, and employs the Kappa coefficient to measure the performance of the classification system. When the performance of the classifiers decreases significantly, a concept-drift alarm is raised and the algorithm applies prior knowledge to delete inaccurate classifiers so as to adapt to the new concept. The experimental results show that, compared with the baseline algorithms BWE, AE and AWE, the new approach not only achieves better classification performance but also efficiently decreases time cost.
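The Kappa coefficient used to monitor the ensemble is the standard chance-corrected agreement measure; a minimal implementation:

```python
def kappa(y_true, y_pred):
    """Cohen's kappa: agreement between predicted and true labels,
    corrected for chance: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(y_true)
    labels = set(y_true) | set(y_pred)
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n      # observed agreement
    p_e = sum((y_true.count(c) / n) * (y_pred.count(c) / n)    # chance agreement
              for c in labels)
    return (p_o - p_e) / (1 - p_e)
```

Kappa near 1 means the classifier agrees with the truth far beyond chance; a sharp drop in kappa is what triggers the drift alarm in the paper's scheme.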
Self-adaptation Classification for Incomplete Labeled Text Data Stream
ZHANG Yu-hong, CHEN Wei and HU Xue-gang
Computer Science. 2016, 43 (12): 179-182, 194.  doi:10.11896/j.issn.1002-137X.2016.12.032
Abstract PDF(426KB) ( 60 )   
In real-world applications, large volumes of text data streams are emerging, such as network monitoring data, network comments and microblogs. However, these data have incomplete labels and frequent concept drifts, which challenge existing data stream classification methods. Thus we proposed a self-adaptive classification algorithm for incompletely labeled text data streams. The proposed algorithm uses a labeled data chunk as the starting chunk and extracts features shared between the labeled chunk and the unlabeled chunks. For unlabeled data chunks, it uses the similarity of features between two chunks to detect concept drift. Finally, the polarity of the features of the unlabeled chunks is calculated to predict the instances. The experimental results show that our algorithm improves classification accuracy, especially on data with little label information and many concept drifts.
Pairwise Constrained Semi-supervised Text Clustering Algorithm
WANG Zong-hu and LIU Su
Computer Science. 2016, 43 (12): 183-188.  doi:10.11896/j.issn.1002-137X.2016.12.033
Abstract PDF(554KB) ( 90 )   
Semi-supervised clustering can use a small amount of labeled data to improve clustering performance, but most text clustering algorithms cannot directly use prior information such as pairwise constraints. Since text data are high-dimensional and sparse, we proposed a semi-supervised document clustering algorithm. First, pairwise constraints are expanded and embedded in the document similarity matrix. Then K dense regions with small similarity to the already partitioned text collection are gradually searched in the remaining unpartitioned texts and used as initial centroids. The remaining unpartitioned texts, which are relatively difficult to distinguish, are assigned to the K initial centroids according to the constraints. Finally, the clustering result is optimized through a convergence criterion function that penalizes violations of pairwise constraints. The algorithm determines the initial centroids automatically, avoiding the sensitivity of K-means to its initial centroids. Experimental results on Chinese and English text datasets show that the proposed algorithm can effectively use a small number of pairwise constraints to improve clustering performance.
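Embedding must-link/cannot-link pairs into the similarity matrix can be sketched as follows; the paper additionally expands constraints (e.g. transitively) before embedding, which is omitted here:

```python
def apply_constraints(sim, must_link, cannot_link):
    """Embed pairwise constraints into a document similarity matrix:
    must-link pairs get similarity 1, cannot-link pairs get 0.
    `sim` is a square list-of-lists; constraint pairs are (i, j) indices."""
    sim = [row[:] for row in sim]          # do not mutate the caller's matrix
    for i, j in must_link:
        sim[i][j] = sim[j][i] = 1.0
    for i, j in cannot_link:
        sim[i][j] = sim[j][i] = 0.0
    return sim
```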
Research on Evolution and Updating among Multi-source Data Based on Big Data
YU Fang, CHEN Sheng-shuang, LI Shi-jun and YU Wei
Computer Science. 2016, 43 (12): 189-194.  doi:10.11896/j.issn.1002-137X.2016.12.034
Abstract PDF(495KB) ( 60 )   
Multi-source data in big data environments are characterized by large volume, great variety and rapid change, which poses a new challenge to data updating. By analyzing the characteristics of multi-source big data, the concept of evolutionary data was defined, and a dynamic frequency-conversion traversal model for data updating was created. Firstly, by abstracting the ways data evolve and defining the evolutionary potential and stability of data, a more general evolutionary computing tool in the algebraic sense was derived. Secondly, a probability-based frequency-conversion traversal and dynamic weighting model was deduced from this tool. Finally, by applying the tool to practical data updating, the dynamic frequency traversal model for multi-source data was experimentally verified to achieve high updating efficiency on big data.
Sequential Pattern Mining Based on Privacy Preserving
FANG Wei-wei, XIE Wei, HUANG Hong-bo and XIA Hong-ke
Computer Science. 2016, 43 (12): 195-199.  doi:10.11896/j.issn.1002-137X.2016.12.035
Abstract PDF(1098KB) ( 65 )   
Privacy preservation is one of the most important topics in data mining; its main aim is to carry out the mining task without revealing the original data. In this paper, to solve the privacy-preserving sequential pattern mining problem, we proposed new concepts about Boolean set relationships among items, and designed a data perturbation method based on random sets and random functions, from which the support counts of the original sequential database can be recovered. Theoretical analysis and experimental results demonstrate that this method achieves good performance in terms of privacy preservation, mining quality and efficiency.
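A hedged sketch of support-recoverable perturbation, using a MASK-style bit-flipping scheme rather than the paper's random-set construction: each item-occurrence bit is kept with probability p, and the true support is recovered from the perturbed data analytically.

```python
import random

def perturb(bits, p, rng=random.Random(0)):
    """Keep each bit with probability p, flip it otherwise.
    (Illustrative MASK-style scheme, not the paper's exact method.)"""
    return [b if rng.random() < p else 1 - b for b in bits]

def estimate_support(perturbed, p):
    """Unbiased estimate of the true support from perturbed bits:
    observed s' = s*p + (1-s)*(1-p), hence s = (s' - (1-p)) / (2p - 1)."""
    s_obs = sum(perturbed) / len(perturbed)
    return (s_obs - (1 - p)) / (2 * p - 1)
```

The miner sees only noisy bits, yet the aggregate support estimate converges to the true value, which is the property the paper relies on.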
Collaborative Filtering Recommendation Algorithm Based on Jaccard Similarity and Locational Behaviors
LI Bin, ZHANG Bo, LIU Xue-jun and ZHANG Wei
Computer Science. 2016, 43 (12): 200-205.  doi:10.11896/j.issn.1002-137X.2016.12.036
Abstract PDF(488KB) ( 63 )   
Collaborative filtering is one of the most widely used and successful recommendation technologies in recommender systems, and probabilistic matrix factorization is an important collaborative filtering method that makes recommendations by learning a low-dimensional approximation matrix. However, traditional collaborative filtering algorithms use only the ratings between users and items, ignoring the potential influence among users (items), which limits recommendation precision. To solve this problem, we first used the Jaccard similarity to preprocess the users (items), then mined the potential influence from the users' (items') location information to find the set of nearest neighbors, and finally incorporated those nearest neighbors into the recommendation process based on probabilistic matrix factorization. Experimental results show that, compared with traditional collaborative filtering algorithms, the proposed algorithm achieves more accurate rating predictions and improves recommendation quality.
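The Jaccard preprocessing step rests on the standard set-overlap measure between two users' rated-item sets:

```python
def jaccard(a, b):
    """Jaccard similarity of two rated-item sets: |A ∩ B| / |A ∪ B|.
    Returns 0 for two empty sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0
```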
Research on Collaborative Filtering Algorithm with Improved Similarity
LI Rong, LI Ming-qi and GUO Wen-qiang
Computer Science. 2016, 43 (12): 206-208, 240.  doi:10.11896/j.issn.1002-137X.2016.12.037
Abstract PDF(331KB) ( 59 )   
Collaborative filtering recommends and predicts a target user's preferences by using the preferences of the user's neighbors, so the calculation of similarity is the key. Traditional similarity calculations ignore the effect of the number of items co-rated by two users and of the closeness of their average ratings, which leads to poor similarity estimates among users when data are sparse. In this paper, we proposed two factors to improve the traditional similarity calculation, incorporated the improved similarity into the collaborative filtering algorithm, and applied it to film recommendation. Simulation results show that collaborative filtering based on the improved similarity obtains a lower MAE than the traditional method, helping to improve the quality of movie recommendation.
Density Self-adaption Semi-supervised Spectral Clustering Algorithm
ZHOU Hai-song and HUANG De-cai
Computer Science. 2016, 43 (12): 209-212.  doi:10.11896/j.issn.1002-137X.2016.12.038
Abstract PDF(330KB) ( 48 )   
In spectral clustering, the definition of similarity between data points plays an important role in the clustering result. Traditional spectral clustering algorithms typically use the Gaussian kernel function as the similarity function, but it does not perform well on multidimensional data. On the basis of a newly defined, density-sensitive similarity function, a density self-adaptive semi-supervised spectral clustering algorithm was put forward. Combining pairwise-constraint theory from semi-supervised clustering, the algorithm adapts the similarity between sample points using prior information, thus improving clustering accuracy. The algorithm achieves good results on both synthetic and real-world datasets.
Clustering Algorithm Based on Relative Density and k-nearest Neighbors over Manifolds
GU Ling-lan and PENG Li-min
Computer Science. 2016, 43 (12): 213-217.  doi:10.11896/j.issn.1002-137X.2016.12.039
Abstract PDF(422KB) ( 46 )   
To address the problem that the traditional Euclidean distance similarity measure cannot fully reflect the distribution characteristics of complicated data, a clustering algorithm based on relative density and k-nearest neighbors over manifolds was proposed. The manifold distance, which captures global consistency, and the k-nearest-neighbor concept, which captures local similarity and affinity, are introduced. On this basis, the similarity between two objects is first measured by k-nearest-neighbor similarity over manifolds; then clusters of different densities are found by adapting to the relative uniformity of the k nearest neighbors; finally, a k-nearest-neighbor pair constraint rule is designed to search the nearest-neighbor chain composed of the k nearest data points, in order to classify data objects and identify outliers. Experimental results show that, compared with the traditional k-means algorithm and a k-means variant improved by manifold distance, the proposed algorithm effectively handles the clustering of complicated data and achieves better results on artificial and UCI public data sets.
Random Forests Based Method for Inferring Social Ties of LBS Users
MA Chun-lai, SHAN Hong, MA Tao and GU Zheng-hai
Computer Science. 2016, 43 (12): 218-222.  doi:10.11896/j.issn.1002-137X.2016.12.040
Abstract PDF(966KB) ( 46 )   
Inferring social ties from the location information of LBS users, which can provide more information for group discovery and community detection, is becoming a new problem in intelligence mining from location big data. Based on the theory of co-occurrence, the features of co-occurrence regions were divided into four categories, and a new random-forest-based method for inferring social ties was proposed. The method consists of a feature selection phase and a classification phase. Firstly, since uncorrelated and redundant features affect the accuracy of the result, an algorithm based on the Fisher criterion and the χ² test was proposed to remove them. Secondly, random forests were applied in the classification phase to overcome the slow training and easy over-fitting of existing methods. Check-in data of LBSN users were chosen as test data, and the results indicate the feasibility and effectiveness of the method.
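The Fisher criterion used in the feature-selection phase scores each feature by between-class over within-class scatter; a minimal sketch (the χ² test part is omitted):

```python
def fisher_score(feature, labels):
    """Fisher criterion for one feature: between-class scatter divided by
    within-class scatter.  Higher means better class separation."""
    classes = set(labels)
    overall = sum(feature) / len(feature)
    between = within = 0.0
    for c in classes:
        vals = [x for x, y in zip(feature, labels) if y == c]
        mu = sum(vals) / len(vals)
        between += len(vals) * (mu - overall) ** 2
        within += sum((x - mu) ** 2 for x in vals)
    return between / within if within else float("inf")
```

A feature that separates the two classes scores far higher than one whose values are identical across classes, which is the ranking used to drop uncorrelated features.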
Recommending Commodities Based on User-browsing Tracks
GUO Jun-xia, XU Wen-sheng and LU Gang
Computer Science. 2016, 43 (12): 223-228.  doi:10.11896/j.issn.1002-137X.2016.12.041
Abstract PDF(519KB) ( 55 )   
With the rapid development of E-commerce, recommendation systems have been widely used on websites. Collaborative filtering is currently the most widely used recommendation algorithm, but it suffers from sparse-matrix and cold-start problems. To address these problems, methods based on users' browsing records were proposed. These methods extract each user's browsing path sequence, called the user's browsing track, from the access log, and then recommend preferred commodities based on an analysis of the tracks. To date, most methods that recommend commodities by analyzing browsing paths rely on sequence pattern matching or on the relationship between a commodity and the next commodity browsed. We instead considered the relationship between browsed commodities and eventually purchased commodities, established a user browsing-track preference model on this basis, mined users' preferences, and recommended products for new users. Experiments show that our method helps relieve the cold-start problem for new users and improves the precision and recall of E-commerce recommendation systems.
Term and Semantic Difference Metric Based Document Clustering Algorithm
WEI Lin-jing, LIAN Zhi-chao, WANG Lian-guo and HOU Zhen-xing
Computer Science. 2016, 43 (12): 229-233, 259.  doi:10.11896/j.issn.1002-137X.2016.12.042
Abstract PDF(473KB) ( 59 )   
Existing document clustering algorithms are based on common similarity measures but ignore semantics. A document clustering algorithm that maximizes the total discrimination information provided by documents was therefore proposed. Firstly, the discrimination information of each term for its own cluster and for the other clusters is analyzed separately, transforming the data set from the input space to a difference-score matrix space. Then a greedy algorithm is designed to filter low-scoring terms from each row of the matrix. Lastly, maximum likelihood estimation is used to smooth the document difference information. Simulation results show that the proposed method achieves better cluster quality than flat and hierarchical clustering algorithms, with good interpretability and convergence.
Model and Algorithm for Heterogeneous Fixed Fleet School Bus Routing Problem
HOU Yan-e, KONG Yun-feng, DANG Lan-xue and XIE Yi
Computer Science. 2016, 43 (12): 234-240.  doi:10.11896/j.issn.1002-137X.2016.12.043
Abstract PDF(605KB) ( 83 )   
In school bus route planning practice, the bus fleet usually consists of a limited number of buses with different capacities, purchase costs and operating costs. However, the heterogeneous fixed fleet school bus routing problem (HFSBRP) has not been well investigated. In this paper, we introduced a mathematical model for HFSBRP and proposed an iterated local search (ILS) algorithm to optimize the total cost. The ILS is combined with a variable neighborhood descent (VND) procedure with random neighborhood selection. During local search, the bus type of one or more routes is adjusted to reduce costs. Two acceptance rules are used, and some worse solutions within a cost-deviation bound are accepted to keep the search diversified. Moreover, a perturbation mechanism based on multi-point swap or shift moves is used to escape local optima. The experimental results demonstrate the correctness and effectiveness of the proposed model.
Bunchy Memory Method for Dynamic Evolutionary Multi-objective Optimization
LIU Min, ZENG Wen-hua and LIU Yu-zhen
Computer Science. 2016, 43 (12): 241-247.  doi:10.11896/j.issn.1002-137X.2016.12.044
Abstract PDF(594KB) ( 64 )   
One challenge in dynamic evolutionary multi-objective optimization (DEMO) is how to exploit past optimal solutions to help a DEMO algorithm track and adapt to a changing environment quickly. To alleviate this difficulty, this paper proposed a bunchy memory (BM) method for DEMO. In the BM method, a sampling procedure based on a minimized utility function first samples a bunch of memory points from the non-dominated set so as to maintain good memory diversity. The memory is organized as a bunchy queue, so that bunches of memory points sampled across past environment changes can easily be stored. Past optimal solutions in the memory are then reused to respond rapidly to a new change via a retrieving procedure based on binary tournament selection. The BM method has a good memory effect and significantly improves the convergence and diversity of a DEMO algorithm. Experimental results on four benchmark problems indicate that the proposed BM method has better memory performance than three other methods; accordingly, the convergence and diversity of the DEMO algorithm incorporating the BM method are also clearly better than those of the other three DEMO algorithms.
Multi-objective Particle Swarm Optimization Algorithm with Balancing Each Speed Coefficient
GENG Huan-tong, ZHAO Ya-guang, CHEN Zhe and LI Hui-jian
Computer Science. 2016, 43 (12): 248-254.  doi:10.11896/j.issn.1002-137X.2016.12.045
Abstract PDF(537KB) ( 62 )   
PSO has become an effective method for solving multi-objective optimization problems, and the key is a proper setting of the inertial, local and global velocity coefficients. Existing algorithms set each coefficient separately, ignoring their potential interdependence; to address this, an improved multi-objective particle swarm optimization algorithm that balances the velocity terms was proposed. To guide the swarm toward a potential global optimum, the algorithm dynamically adjusts the coefficients of each particle so as to balance the inertial, local and global effects of the three velocity terms during the search, improving search capability and accuracy. Meanwhile, the algorithm balances exploitation and exploration and improves efficiency on complex multi-objective optimization problems. The experimental results on 7 multi-objective benchmark functions indicate that the new algorithm outperforms 5 classical evolutionary algorithms in convergence speed and distribution.
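The three coefficients being balanced appear in the canonical PSO update; a sketch with fixed w, c1, c2 (the paper's dynamic coupling of the coefficients is not reproduced):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random.Random(1)):
    """One canonical PSO update per dimension:
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x = x + v."""
    new_x, new_v = [], []
    for xi, vi, pi, gi in zip(x, v, pbest, gbest):
        vi = (w * vi
              + c1 * rng.random() * (pi - xi)    # pull toward personal best
              + c2 * rng.random() * (gi - xi))   # pull toward global best
        new_v.append(vi)
        new_x.append(xi + vi)
    return new_x, new_v
```

The inertial term preserves momentum while the two attraction terms pull each particle toward its personal and global bests; the paper's point is that tuning w, c1 and c2 independently ignores how these pulls trade off against each other.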
Improved Particle Swarm Optimization Algorithm for Solving Weapon-target Assignment Problem Based on Intuitionistic Fuzzy Entropy
SU Ding-wei, WANG Yi and ZHOU Chuang-ming
Computer Science. 2016, 43 (12): 255-259.  doi:10.11896/j.issn.1002-137X.2016.12.046
Abstract PDF(416KB) ( 61 )   
An improved particle swarm optimization algorithm based on intuitionistic fuzzy entropy (IFEIPSO) was proposed to solve the weapon-target assignment (WTA) problem more efficiently. Firstly, the algorithm uses an integer encoding scheme that handles the various WTA constraints, decreasing problem complexity. Then, it updates the personal best solutions of PSO using an exchange operation and a simulated annealing mechanism, obtaining better personal and global best solutions and increasing local search ability. Finally, it measures population diversity with an intuitionistic-fuzzy-entropy metric and designs an entropy-based mutation operation to improve diversity and global search performance. Simulation results indicate that the algorithm improves the search ability of PSO and is effective for the WTA problem.
Multigroup ITO Algorithm for Solving EVRP
YIN Zhi-yang and YU Shi-ming
Computer Science. 2016, 43 (12): 260-263, 268.  doi:10.11896/j.issn.1002-137X.2016.12.047
Abstract PDF(393KB) ( 85 )   
To overcome the slow convergence and tendency toward local optima of the traditional ITO algorithm, the environmental temperature adjustment function was redesigned and the path-weight update rules for drifting and fluctuating particles were improved, so that the particles better match the characteristics of Brownian motion. A multi-group concept is introduced to accelerate convergence and improve the ability to find optimal solutions by fully exploiting population information. The five best solutions are further improved by 2-opt local optimization and reverse optimization. Finally, vehicle load is incorporated into the calculation of the fuel consumption rate, the least-carbon-emission environmental vehicle routing problem (EVRP) model is improved, and the improved algorithm is used to solve it. Experimental results show that the improved ITO algorithm effectively improves the ability to find the global optimum and the convergence rate, and effectively prevents stagnation.
Novel Fruit Fly Optimization Algorithm Based on Dimension Partition
WANG You-wei, FENG Li-zhou and ZHU Jian-ming
Computer Science. 2016, 43 (12): 264-268.  doi:10.11896/j.issn.1002-137X.2016.12.048
Abstract PDF(377KB) ( 51 )   
To improve the convergence stability of the fruit fly optimization algorithm, a novel fruit fly optimization algorithm based on dimension partition was proposed. The fruit fly population is divided into two groups: following fruit flies and searching fruit flies. A following fruit fly performs accurate local search near the globally best fruit fly, while a searching fruit fly divides each dimension of the position vector into several partitions and updates its position by comparing the performance of the partitions. To improve convergence speed, if a searching fruit fly performs worst over several iterations, its new position is generated near the globally best fruit fly. Experimental results on 8 typical functions show that the proposed method needs fewer parameters and has obvious advantages in convergence stability, accuracy and speed compared with traditional methods.
Event Detection in Sensor Network Based on Description Logic
ZHANG Yu-li, CHANG Liang, MENG Yu and GU Tian-long
Computer Science. 2016, 43 (12): 269-272, 286.  doi:10.11896/j.issn.1002-137X.2016.12.049
Abstract PDF(420KB) ( 60 )   
Event detection based on heterogeneous data sources is a typical application in the internet of things. Existing technology can collect, filter and present data from heterogeneous sources, and supports some low-level data fusion and analysis; however, acquiring domain-specific knowledge and automatically drawing hidden conclusions still require human intervention. This paper therefore presented an intelligent method based on the lightweight description logic EL++ for automatic event detection in sensor networks. Firstly, we introduced a specific sensor network scenario and described its domain knowledge in EL++. Secondly, we defined the events to be judged according to their complexity. Finally, we used an instance to check the proposed method. The method takes full advantage of description logic's clear semantics and good reasoning ability, and can give correct results from domain-specific knowledge and concrete data.
Researches on Wireless Sensor Network Localization Based on Improved Gbest-guided Artificial Bee Colony Algorithm
XING Rong-hua and HUANG Hai-yan
Computer Science. 2016, 43 (12): 273-276.  doi:10.11896/j.issn.1002-137X.2016.12.050
Abstract PDF(331KB) ( 67 )   
The overall performance of a wireless sensor network (WSN) depends heavily on the accurate geographic location of each sensor node. Based on the artificial bee colony algorithm, the gbest-guided artificial bee colony algorithm adds the iteration-best solution to the update formula of the neighborhood search, improving the exploitation ability of the algorithm; but when applied to WSN node localization, it still suffers from long computation time and unstable convergence. An improved gbest-guided artificial bee colony algorithm was therefore proposed: the new solution produced by the neighborhood search is evaluated, and if it is merely acceptable, it is crossed over with the iteration-best solution; if it is good, no crossover is executed; if it is bad, it is discarded. This better balances the exploration and exploitation abilities of the algorithm, and it is shown to have a faster convergence rate and better convergence when applied to WSN node localization.
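The gbest-guided neighborhood search being improved follows Zhu and Kwong's GABC update; the crossover/acceptance refinement proposed in this paper is not shown, and the constant C = 1.5 is the commonly cited default, not necessarily the paper's setting:

```python
import random

def gbest_guided_update(x, k_neighbor, gbest, j, rng=random.Random(2)):
    """Gbest-guided ABC neighborhood search on dimension j:
    v_j = x_j + phi*(x_j - x_k,j) + psi*(gbest_j - x_j),
    with phi ~ U(-1, 1) and psi ~ U(0, C), C = 1.5."""
    phi = rng.uniform(-1.0, 1.0)
    psi = rng.uniform(0.0, 1.5)
    v = list(x)                      # only dimension j is changed
    v[j] = x[j] + phi * (x[j] - k_neighbor[j]) + psi * (gbest[j] - x[j])
    return v
```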
Sentiment Analysis on Food Safety News Using Joint Deep Neural Network Model
LIU Jin-shuo and ZHANG Zhi
Computer Science. 2016, 43 (12): 277-280.  doi:10.11896/j.issn.1002-137X.2016.12.051
Abstract PDF(383KB) ( 60 )   
To address the difficulty of feature representation for Chinese food-safety texts and the loss of semantic information that lowers classification accuracy, a sentiment classification model based on a joint deep neural network was presented. The model uses a corpus of food-safety documents crawled from the internet; word vectors produced by a word embedding method are fed to the neural network to obtain pre-trained word vectors. These are then trained dynamically to obtain word features and sentence-level sentiment, which better express the phrase-level sentiment relations within each sentence and the real semantics of the food-safety domain. The word features of each sentence are then input to a recurrent neural network (RNN) to further capture the semantic information of the sentence structure, realizing sentiment classification of the text. Experiments show that the joint deep neural network model achieves better results in sentiment analysis of food-safety information than a bag-of-words-based SVM model, with classification accuracy of 86.7% and an F1 value of 85.9%.
Fault Analysis of High Speed Train Based on EDBN-SVM
GUO Chao, YANG Yan and JIN Wei-dong
Computer Science. 2016, 43 (12): 281-286.  doi:10.11896/j.issn.1002-137X.2016.12.052
Abstract PDF(525KB) ( 61 )   
References | Related Articles | Metrics
As a new hot spot in the field of machine learning,deep learning has opened up new ideas for fault-diagnosis research.Given the significance of fault analysis for high-speed trains,a new fault-diagnosis model combining deep learning and ensemble learning,EDBN-SVM (Ensemble Deep Belief Network-Support Vector Machine),was proposed.Firstly,the vibration signals of the high-speed train are preprocessed by the fast Fourier transform (FFT).Secondly,after analyzing the parameters of the EDBN-SVM model,the FFT coefficients are fed into the visible layer of the model,which learns high-level features layer by layer.Finally,multiple SVM classifiers recognize the faults and their recognition results are combined.To evaluate the validity of the method,experiments were conducted on both laboratory data and simulation data,and the method was compared with traditional fault-analysis methods.The results show that both the fault-recognition performance and the stability of this method are better than those of the traditional methods.
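Two steps of the pipeline, the FFT preprocessing and the combination of the SVM outputs, can be sketched as below. The deep belief network and the SVM training themselves are omitted, and the majority-vote combiner is an assumption about how the recognition results are merged.

```python
import numpy as np

def fft_features(signal, n_coeff=8):
    # step 1 of the pipeline: magnitudes of the leading FFT coefficients
    # of the vibration signal, used as the visible-layer input
    return np.abs(np.fft.rfft(signal))[:n_coeff]

def majority_vote(predictions):
    # step 3: combine the fault labels predicted by the individual SVM
    # classifiers into one ensemble decision (simple plurality vote)
    values, counts = np.unique(predictions, return_counts=True)
    return values[np.argmax(counts)]
```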
Packing Method Based on Wang-Landau Sampling for Simplified Satellite Module with Static Non-equilibrium Constraints
LIU Jing-fa, HUANG Juan, JIANG Yu-cong, LIU Wen-jie and HAO Liang
Computer Science. 2016, 43 (12): 287-292.  doi:10.11896/j.issn.1002-137X.2016.12.053
Abstract PDF(1628KB) ( 71 )   
References | Related Articles | Metrics
Against the background of the three-dimensional layout optimization problem on the bearing plates of a simplified satellite module,we studied the mixed layout problem of cylinders and cuboids with static non-equilibrium constraints.To address this problem,the Wang-Landau sampling algorithm,which has been successfully applied in statistical physics and to the protein structure prediction problem,is introduced to the satellite-module packing problem for the first time.The Wang-Landau sampling algorithm produces a flat histogram of energy by sampling the whole energy space effectively,so as to accurately estimate the density of states of all possible energies in the range.By incorporating the steepest descent method with an accelerating strategy and a translation of the center of mass into the Wang-Landau sampling procedure,an improved Wang-Landau sampling algorithm was proposed.Computational results on two classic instances from the literature show that the improved Wang-Landau sampling algorithm outperforms other algorithms in the literature in both convergence rate and solution quality.
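The flat-histogram mechanism at the core of Wang-Landau sampling can be sketched on a toy discrete state space (a stand-in for the layout energy landscape; the energy function, flatness threshold and modification schedule below are illustrative, and the paper's steepest-descent and center-of-mass refinements are not shown).

```python
import math
import random

def wang_landau(energy, states, f_final=1e-4, flat=0.8, seed=1):
    # random walk estimating ln g(E), the log density of states: accept a
    # move with probability min(1, g(E_old)/g(E_new)), bump ln g at the
    # visited energy, and halve the modification factor ln f whenever the
    # visit histogram becomes approximately flat
    random.seed(seed)
    lng = {energy(s): 0.0 for s in states}
    hist = dict.fromkeys(lng, 0)
    ln_f = 1.0
    x = states[0]
    while ln_f > f_final:
        y = random.choice(states)                     # propose a move
        if math.log(random.random() + 1e-300) < lng[energy(x)] - lng[energy(y)]:
            x = y                                     # accept
        lng[energy(x)] += ln_f                        # update density of states
        hist[energy(x)] += 1
        if min(hist.values()) > flat * (sum(hist.values()) / len(hist)):
            hist = dict.fromkeys(hist, 0)             # flat: shrink f
            ln_f /= 2.0
    return lng
```

The walk penalizes energies it has already visited often, which is what forces the histogram flat and lets every region of the energy space be sampled.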
Research on Robot Obstacle Avoidance and Path Planning Based on Improved Artificial Potential Field Method
XU Fei
Computer Science. 2016, 43 (12): 293-296.  doi:10.11896/j.issn.1002-137X.2016.12.054
Abstract PDF(326KB) ( 104 )   
References | Related Articles | Metrics
In uncertain and complicated mobile environments,robot obstacle avoidance using the traditional artificial potential field method struggles to adapt dynamically to the environment.An improved artificial potential field method based on relative speed was proposed.To solve the local-minimum problem of traditional path planning,the improved method introduces intermediate target points:an external force is applied to the robot to keep it from stopping or wandering at a local minimum point,ensuring that the robot can escape the minimum trap and reach the target location smoothly.Finally,simulation experiments were carried out on the MATLAB platform to verify the effectiveness of the method.The experimental results show that the improved artificial potential field method can realize path planning for a mobile robot in a dynamic environment.
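The standard potential field step, plus the intermediate-target escape idea from the abstract, can be sketched as follows. The gain values and the waypoint choice are illustrative assumptions, not the paper's parameters.

```python
import math

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    # attractive force toward the goal plus repulsive forces from any
    # obstacle inside influence distance d0 (classic potential field)
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:
            mag = k_rep * (1.0 / d - 1.0 / d0) / d ** 3
            fx += mag * dx
            fy += mag * dy
    return fx, fy

def step(pos, goal, obstacles, lr=0.05, eps=1e-3):
    # one gradient step; when the net force vanishes away from the goal
    # (a local minimum), retarget toward an intermediate waypoint -- the
    # escape mechanism described in the abstract (waypoint is illustrative)
    fx, fy = apf_force(pos, goal, obstacles)
    if (math.hypot(fx, fy) < eps
            and math.hypot(goal[0] - pos[0], goal[1] - pos[1]) > eps):
        mid = ((pos[0] + goal[0]) / 2 + 1.0, (pos[1] + goal[1]) / 2 + 1.0)
        fx, fy = apf_force(pos, mid, [])
    return (pos[0] + lr * fx, pos[1] + lr * fy)
```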
Human Motion Activity Recognition Model Based on Multi-classifier Fusion
WANG Zhong-min, WANG Ke and HE Yan
Computer Science. 2016, 43 (12): 297-301.  doi:10.11896/j.issn.1002-137X.2016.12.055
Abstract PDF(402KB) ( 53 )   
References | Related Articles | Metrics
To improve the accuracy of human activity recognition based on triaxial acceleration data from mobile sensors,an activity recognition model based on multiple classifier fusion (MCF) was proposed.Features highly correlated with each daily activity (staying,walking,running,going upstairs and going downstairs) are extracted from the raw acceleration data to generate five feature data sets,which are used to train five base classifiers.The outputs of the five base classifiers are then combined by a multi-classifier fusion algorithm to produce the final activity recognition result.The experimental results show that the average activity recognition accuracy and reliability of MCF are 96.84% and 97.41% respectively,and that it can effectively identify human activities.
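The feature extraction and fusion stages can be sketched as below. The per-axis mean and standard deviation are typical time-domain accelerometer features, and the weighted vote is one plausible fusion rule; neither is claimed to be the paper's exact choice.

```python
import statistics

ACTIVITIES = ["staying", "walking", "running", "upstairs", "downstairs"]

def extract_features(ax, ay, az):
    # per-axis mean and standard deviation of one window of triaxial
    # acceleration samples (six features per window)
    feats = []
    for axis in (ax, ay, az):
        feats += [statistics.fmean(axis), statistics.pstdev(axis)]
    return feats

def fuse(votes, weights=None):
    # weighted-vote fusion of the base classifiers' predicted labels;
    # with unit weights this reduces to a plain majority vote
    weights = weights or [1.0] * len(votes)
    score = {a: 0.0 for a in ACTIVITIES}
    for label, w in zip(votes, weights):
        score[label] += w
    return max(score, key=score.get)
```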
Investigation on Fault Classification Method of K-PSO Sparse Representation
FU Meng-meng and WANG Pei-liang
Computer Science. 2016, 43 (12): 302-306.  doi:10.11896/j.issn.1002-137X.2016.12.056
Abstract PDF(428KB) ( 105 )   
References | Related Articles | Metrics
To identify and classify the multiple faults that cannot be recognized accurately in modern complex production processes,an improved sparse-representation fault classification method was proposed,which determines fault categories from the sparse representation of the signal.First,the K-means Singular Value Decomposition (K-SVD) algorithm is used to construct an overcomplete dictionary containing the main features of the original signal.Then the particle swarm optimization (PSO) algorithm searches the overcomplete dictionary for the atom that best matches the signal during sparse decomposition.Finally,the sparse-representation results are used to classify and identify the multiple faults.The validity and practicability of the proposed method are verified by numerical simulation.The method was also compared with BP neural network and SVM classification methods on fault classification for a diesel-engine fuel system,and the experiments show that the algorithm achieves good fault-classification results.
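The final classification step, choosing the fault class whose dictionary reconstructs the signal best, can be sketched as below. For brevity, the K-SVD dictionary learning and the PSO atom search are both replaced here by pre-built dictionaries and plain least squares; only the residual-based decision rule is illustrated.

```python
import numpy as np

def classify_by_residual(signal, class_dicts):
    # sparse-representation classification: code the signal over each
    # class's dictionary and return the class whose reconstruction
    # residual is smallest (the paper builds the dictionaries with K-SVD
    # and matches atoms with PSO; least squares stands in for both here)
    best, best_res = None, float("inf")
    for label, D in class_dicts.items():
        coef, *_ = np.linalg.lstsq(D, signal, rcond=None)
        res = np.linalg.norm(signal - D @ coef)
        if res < best_res:
            best, best_res = label, res
    return best
```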
Video Tracking Scheme Based on Opportunity Fuzzy Control with Distortion Incentive
Computer Science. 2016, 43 (12): 307-310.  doi:10.11896/j.issn.1002-137X.2016.12.057
Abstract PDF(848KB) ( 61 )   
References | Related Articles | Metrics
To improve target tracking accuracy and tracked-video quality,a video tracking scheme based on fuzzy control with a distortion incentive mechanism was studied.Firstly,a fuzzy control system is constructed from the spatial domain and fuzzy clustering,using fuzzy sets defined over the target's moving speed.Then,based on an analysis of the target and its moving speed,the distortion-driven video tracking system and its architecture are proposed.Finally,the experimental results show that the proposed algorithm has obvious advantages in system execution efficiency,video transmission delay and tracked-video quality.
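The fuzzy-control component can be sketched as a minimal Mamdani-style inference over the target's moving speed. The rule base, membership parameters and output gains below are purely illustrative assumptions; the paper's actual fuzzy sets and the distortion incentive are not reproduced.

```python
def tri(x, a, b, c):
    # triangular membership function with support [a, c] and peak at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def tracking_gain(speed):
    # illustrative rule base mapping target speed to a tracking gain:
    # slow -> low gain, medium -> mid gain, fast -> high gain
    mu = {"slow": tri(speed, -1, 0, 5),
          "medium": tri(speed, 2, 6, 10),
          "fast": tri(speed, 8, 15, 22)}
    gains = {"slow": 0.2, "medium": 0.5, "fast": 0.9}
    num = sum(mu[k] * gains[k] for k in mu)
    den = sum(mu.values()) or 1.0
    return num / den          # centroid-style defuzzification
```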