Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 41 Issue 2, 14 November 2018
  
Survey of Software Fault Localization
CAO He-ling,JIANG Shu-juan and JU Xiao-lin
Computer Science. 2014, 41 (2): 1-6. 
Fault localization, which aims at detecting software faults rapidly and efficiently, is a hot research topic in software debugging. First, the existing fault localization techniques were classified into two categories according to their research methods: light-weight and heavy-weight fault localization, and similar techniques were compared. The former does not involve analysis of program dependencies; instead, it identifies the set of suspicious faulty code using statistical or data-mining methods based on the coverage information of program executions. The latter analyzes program dependencies, mainly applying data dependency, control dependency, or program slicing to identify suspicious code. Then, the commonly used evaluation data sets and evaluation criteria were summarized. Finally, trends for future study were discussed.
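As a concrete illustration of the light-weight, coverage-based family of techniques surveyed here, the classic Tarantula suspiciousness score can be sketched as follows (a minimal example of spectrum-based fault localization, not a method from this paper):

```python
def tarantula(passed_cov, failed_cov, total_passed, total_failed):
    """Suspiciousness of one statement from coverage counts.

    passed_cov / failed_cov: number of passing / failing tests
    that execute the statement.
    """
    if total_failed == 0 or (passed_cov == 0 and failed_cov == 0):
        return 0.0
    fail_ratio = failed_cov / total_failed
    pass_ratio = passed_cov / total_passed if total_passed else 0.0
    return fail_ratio / (fail_ratio + pass_ratio)

# A statement executed by all failing tests but few passing ones is highly suspicious.
print(tarantula(passed_cov=1, failed_cov=4, total_passed=10, total_failed=4))
```

Statements are then ranked by this score, and the developer inspects the most suspicious ones first.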
Survey of Automated Whitebox Fuzz Testing
ZHANG Ya-jun,LI Zhou-jun,LIAO Xiang-ke,JIANG Rui-cheng and LI Hai-feng
Computer Science. 2014, 41 (2): 7-10. 
Software security analysis and vulnerability testing are a research focus and a difficulty in software engineering, and software security testing based on program analysis is highly valued. This paper began with an overview of the concepts of software security testing, then detailed the popular program-analysis methods used in software security testing (fuzz testing, symbolic execution, and automated whitebox fuzz testing) and compared them with each other. Finally, it gave an overview of distributed systems for automated whitebox fuzz testing.
My Views of Computational Thinking
SHI Wen-chong
Computer Science. 2014, 41 (2): 11-14. 
The author researched the essence of computational thinking and the problems that deserve attention in cultivating it. After analyzing the views of the international academic community, the author argued that computational thinking is both a kind of academic thinking and a kind of computing philosophy. He pointed out that grasping the computing disciplines' characteristics of thinking and the thinking properties of computation is critical to understanding computational thinking. He presented principles for selecting the banner concepts of computational thinking, gave some concepts based on these principles, and held that computational thinking is contained in the commonalities of those concepts. The typical thoughts of computational thinking are structuralization, formalization, co-movement, and optimization. Training in computational thinking must be based on the banner concepts and the typical thoughts.
Formalization of Real Binomial Coefficient in HOL4
SHI Li-kun,ZHAO Chun-na,GUAN Yong,SHI Zhi-ping,LI Xiao-juan and YE Shi-wei
Computer Science. 2014, 41 (2): 15-18. 
Theorem proving is a formal method and plays an important role in the verification of safety-critical systems. Fractional calculus is the basis of complex-system analysis, and the real binomial coefficient is an important part of the GL definition of the fractional calculus. Currently, there is no formalization of the real binomial coefficient in higher-order-logic theorem libraries. This paper presented a formalization of the real binomial coefficient: the factorial power was first formalized in HOL4, and the real binomial coefficient was then formalized using it. The paper also presented a formal verification of the fractional calculus, illustrating the practical effectiveness and utility of the approach.
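For readers unfamiliar with the construction, the real binomial coefficient the paper formalizes can be computed from the factorial power (falling factorial); a minimal numeric sketch in ordinary Python, not HOL4:

```python
from math import factorial

def falling_factorial(alpha, k):
    """alpha * (alpha-1) * ... * (alpha-k+1) for real alpha and integer k >= 0."""
    result = 1.0
    for i in range(k):
        result *= alpha - i
    return result

def real_binomial(alpha, k):
    """Generalized binomial coefficient C(alpha, k) for real alpha."""
    return falling_factorial(alpha, k) / factorial(k)

print(real_binomial(0.5, 2))  # C(1/2, 2) = (0.5 * -0.5) / 2 = -0.125
```

For integer alpha this reduces to the ordinary binomial coefficient, e.g. `real_binomial(5, 2)` gives 10.0.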
Research on Hybrid Parallel Programming Technique Based on CMP Multi-core Cluster
WANG Wen-yi,WANG Chun-xia and WANG Jie
Computer Science. 2014, 41 (2): 19-22. 
High-performance scientific computing is an effective experimental means of validating theories and testing the processing capacity of computer systems. As CMP multi-core clusters become increasingly common, this article carried out experimental studies on the hybrid programming model that combines two different parallel programming techniques, MPI and OpenMP. Analysis of the measured program execution times and speedups shows that, on multi-core multi-node clusters, the fine-grained hybrid parallel programming method is more rational and efficient than using MPI alone, and it better exploits the features and advantages of the system hardware and software.
Review of Word Sense Induction
SUN Yu-xia,QU Wei-guang,DI Ying and ZHOU Jun-sheng
Computer Science. 2014, 41 (2): 23-32. 
For many natural language processing tasks, such as machine translation and information retrieval, using word senses rather than the words themselves as features can perform much better. However, word sense disambiguation requires a large amount of annotated corpora, and some problems, such as the absence of certain word senses, hinder its application. Therefore, word sense induction (WSI) has attracted increasing attention. This paper introduced the related work on and development of WSI from three aspects: an introduction to WSI, WSI methods, and evaluations. Finally, we summarized these works and gave an outlook on future research.
Coupled Object Similarity Based Item Recommendation Algorithm
YU Yong-hong,CHEN Xing-guo and GAO Yang
Computer Science. 2014, 41 (2): 33-35. 
Recommender systems are very useful given the huge volume of information available on the Web. They help users alleviate the information overload problem by recommending personalized information, products, or services. For content-based recommendation, there are few suitable measures for computing the similarity between items. This paper proposed an item recommendation algorithm based on coupled object similarity. Our method first extracts item features, then constructs an item similarity model using the coupled object similarity measure. Collaborative filtering is then used to produce recommendations for active users. Experimental results show that the proposed algorithm effectively solves the problem of measuring similarity between items and improves the quality of traditional content-based recommendation when most item features are lacking.
Study of BoF Model Based Image Representation
LIANG Ye,YU Jian and LIU Hong-zhe
Computer Science. 2014, 41 (2): 36-44. 
Designing a suitable image representation is one of the most fundamental issues in computer vision. The BoF model is very popular and used extensively in image classification, video search, robot localization, and texture recognition. A BoF feature is an orderless collection of quantized local image descriptors. While this representation discards structural and spatial information, the BoF model is conceptually and computationally simple, and can even perform as well as state-of-the-art methods. The three steps of the popular BoF pipeline were studied in detail: feature extraction, feature coding, and feature pooling. In the end, the main problems and challenges were highlighted based on an analysis of current research techniques.
Multigranulation View Based Fusing Strategy of D-S Evidence
LIN Guo-ping,LIANG Ji-ye and QIAN Yu-hua
Computer Science. 2014, 41 (2): 45-48. 
D-S evidence theory and multigranulation rough set theory are different information fusion methods. The relationship between these theories was addressed and a completeness property was established. A new fusion strategy, the combined fusion of D-S theory and multigranulation rough set theory, was presented, and an example was employed to illustrate the effectiveness of the proposed fusion method.
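For context, the classic Dempster rule for combining two bodies of evidence, on which such fusion strategies build, can be sketched as follows (an illustrative implementation of the standard rule, not the paper's combined strategy):

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset hypotheses to mass)."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass assigned to the empty set
    norm = 1.0 - conflict                    # renormalize away the conflict
    return {h: v / norm for h, v in combined.items()}

m1 = {frozenset({'x'}): 0.6, frozenset({'x', 'y'}): 0.4}
m2 = {frozenset({'y'}): 0.3, frozenset({'x', 'y'}): 0.7}
print(dempster_combine(m1, m2))
```

The combined masses again sum to one; here the singleton {'x'} receives mass 0.42/0.82.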
Comparison between Two Approaches of Embedding Spatial Information into Linear Discriminant Analysis
NIU Lu-lu,CHEN Song-can and YU Lu
Computer Science. 2014, 41 (2): 49-54. 
The No Free Lunch theorem says that only a learning machine that takes full advantage of prior knowledge about the problem under consideration can achieve good learning performance. However, the vectorization of images used in conventional linear discriminant analysis (LDA) damages the spatial structure of the original images and restricts improvement of LDA's learning performance. Spatially smooth linear discriminant analysis (SLDA) tries to overcome this problem by introducing spatial regularization into the LDA objective, whereas IMage Euclidean Distance Discriminant Analysis (IMEDA) substitutes the IMage Euclidean Distance (IMED) for the original Euclidean metric in the LDA objective to utilize the spatial structure information. This paper explored the intrinsic link between SLDA and IMEDA: it theoretically proved that SLDA is a special case of IMEDA when the sample mean of the data set is zero, and analyzed the time and space complexity of both algorithms. Experiments comparing SLDA with IMEDA were conducted on the Yale, AR, and FERET face datasets, and the influence of the parameters on the performance of the algorithms was analyzed.
Detecting Community Structure in Bipartite Networks Based on Matrix Factorization
CHEN Bo-lun,CHEN Ling,ZOU Sheng-rong and XU Xiu-lian
Computer Science. 2014, 41 (2): 55-58. 
Community detection in bipartite networks is very important in research on the theory and applications of complex network analysis. An algorithm for detecting community structure in bipartite networks based on matrix factorization was presented. The algorithm first partitions the network into two parts, each of which preserves as much community information as possible; the two parts are then recursively partitioned until they can be partitioned no further. When partitioning the network, matrix decomposition is used so that the row space of the network's association matrix is approximated as closely as possible and the community information is preserved as much as possible. Experimental results show that the algorithm can not only accurately identify the number of communities in a network, but also obtain a higher-quality community partitioning without any parameters known in advance.
Maximum Constrained Density One-class Classifier
ZHAO Jia-min,FENG Ai-min,CHEN Song-can and PAN Zhi-song
Computer Science. 2014, 41 (2): 59-63. 
A novel one-class classifier (OCC) within the framework of probability density estimation, called the Maximum Constrained Density based OCC (MCDOCC), was proposed. By constraining the upper bound of the kernel density estimator with an introduced parameter, MCDOCC is more sensitive in the low-density regions located on the boundary while alleviating the computational cost. Then, by maximizing the average constrained density of the target data, MCDOCC optimizes its objective function with linear programming, and a sparse solution can finally be reached. To further improve the generalization ability, two versions of MCDOCC with negative data (NMCDOCC) were developed to fully utilize the prior knowledge existing in outliers. Experimental results on UCI data sets show that the generalization ability of MCDOCC is comparable with that of one-class support vector machines, and NMCDOCC outperforms them.
Multi-kernel Projective Nonnegative Matrix Factorization Algorithm
LI Qian,JING Li-ping and YU Jian
Computer Science. 2014, 41 (2): 64-67. 
Nonnegative matrix factorization (NMF) decomposes data into a non-negative base matrix and a coefficient matrix that are related to each other; thus, some researchers rebuild the base matrix from a projection of the coefficient matrix. However, these two NMF-type methods cannot satisfy the requirements of non-linear data analysis. With the development of kernel learning, kernel functions have been introduced into the traditional NMF model for non-linear data analysis, which raises another problem: kernel parameter selection. We presented a Multi-Kernel Projective Nonnegative Matrix Factorization (MKPNMF) method, which avoids the problem of kernel parameter selection and improves the final learning performance. A series of experiments on real-world face data was conducted, and the results show that MKPNMF outperforms existing NMF-type methods.
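For reference, the basic linear, single-kernel NMF that such methods generalize can be sketched with the standard Lee-Seung multiplicative updates (an illustration of plain NMF, not the proposed multi-kernel method):

```python
import numpy as np

def nmf(X, r, iters=200, eps=1e-9, seed=0):
    """Basic NMF X ~ W @ H via Lee-Seung multiplicative updates (Frobenius loss)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update coefficients
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update base matrix
    return W, H

X = np.random.default_rng(1).random((6, 5))    # non-negative toy data
W, H = nmf(X, r=2)
print(np.linalg.norm(X - W @ H))               # reconstruction error
```

Both factors stay non-negative throughout because the updates only multiply by non-negative ratios.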
Optimized Implementation of Hybrid Recommendation Algorithm
LI Peng-fei and WU Wei-min
Computer Science. 2014, 41 (2): 68-71. 
The ever-increasing number of users and items in modern electronic commerce systems has made the user-item matrix more and more sparse. This situation, combined with the somewhat inappropriate similarity calculation methods currently used, makes the recommendation quality of recommender systems gradually decline. To address this, we presented an optimized recommendation algorithm based on a hybrid model. In our algorithm, the similarity function is a linear combination of the item property similarity and a modified correlation cosine similarity. The weighting factor, which is generated automatically, is related to the number of users who rated both items. The modification to the correlation cosine similarity measure considers both the rating tendency and the activity of users. To deal with the cold-start problem, we also derived user similarity from user property information, with weighting factors computed by SVDFeature. The experimental results demonstrate that our algorithm effectively improves recommendation quality and alleviates the cold-start problem arising from both users and items.
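The abstract does not specify the modified correlation cosine similarity; as background, the standard adjusted cosine similarity from which such modifications typically start can be sketched as follows (the user and item names are made up for illustration):

```python
import math

def adjusted_cosine(ratings, i, j):
    """Adjusted cosine similarity between items i and j.

    ratings: dict user -> dict item -> rating. Each rating is centered by
    the user's mean rating to discount individual rating tendencies.
    """
    num = den_i = den_j = 0.0
    for user_ratings in ratings.values():
        if i in user_ratings and j in user_ratings:
            mean = sum(user_ratings.values()) / len(user_ratings)
            di, dj = user_ratings[i] - mean, user_ratings[j] - mean
            num += di * dj
            den_i += di * di
            den_j += dj * dj
    if den_i == 0 or den_j == 0:
        return 0.0
    return num / math.sqrt(den_i * den_j)

ratings = {
    'u1': {'a': 5, 'b': 4, 'c': 1},
    'u2': {'a': 4, 'b': 5, 'c': 2},
    'u3': {'a': 1, 'b': 2, 'c': 5},
}
print(adjusted_cosine(ratings, 'a', 'b'))
```

Items 'a' and 'b' are rated similarly by every user, so their similarity is positive, while 'a' and 'c' come out negative.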
Distance-based Kernel Evaluation Measure
WANG Pei-yan and CAI Dong-feng
Computer Science. 2014, 41 (2): 72-75. 
The success of kernel methods depends on the kernel; thus the choice of a kernel and the proper setting of its parameters are of crucial importance. Learning a kernel from data requires evaluation measures to assess the kernel's quality. Recently, kernel target alignment (KTA), which measures the degree of agreement between a kernel and a learning task, has been widely used for kernel selection because of its effectiveness and efficiency. However, it has been reported that KTA is only a sufficient condition for selecting a good kernel, not a necessary one; the reason is that KTA is not invariant under data translation in the feature space. This paper proposed a new measure for kernel selection, named kernel distance target alignment (KDTA), which not only overcomes this limitation of KTA but also retains its simplicity and efficiency. Comparative experiments indicate that the new measure is a good indicator of the superiority of a kernel.
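For context, the baseline KTA measure that KDTA improves on is simply the normalized Frobenius inner product between the kernel matrix and the label outer product; a minimal sketch of standard KTA, not the proposed KDTA:

```python
import numpy as np

def kernel_target_alignment(K, y):
    """KTA between kernel matrix K and labels y in {-1, +1}."""
    Y = np.outer(y, y)  # the "ideal" target kernel y y^T
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

y = np.array([1, 1, -1, -1])
K_ideal = np.outer(y, y).astype(float)       # perfectly aligned kernel
print(kernel_target_alignment(K_ideal, y))   # 1.0
```

A kernel identical to the target matrix attains the maximal alignment of 1; real kernels are compared by how close they get.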
Time Prediction for Reyes Rendering Architecture Based on AdaBoost.MH Algorithm
MENG Qing-li,LV Lin,JIN Ying,MENG Xiang-xu and MENG Lei
Computer Science. 2014, 41 (2): 76-81. 
A high-performance computer system (e.g., a computer cluster) built for large-scale photorealistic rendering, i.e., a render farm, is a basic infrastructure for producing CG animations and movie special effects. In a render farm, one of the key issues is the strategy for scheduling and dispatching rendering jobs, which greatly affects computing efficiency. Time prediction for a render job plays an essential role in the job scheduling and dispatching stage; however, there is no feasible algorithm and little research work on this problem. We focused on the Reyes rendering architecture. We first analyzed the factors that affect the rendering time and, based on this analysis, extracted seven key features as the feature vector. Then we proposed a time prediction framework based on the AdaBoost.MH algorithm, in which we discretized the rendering time into intervals and combined them with the feature vector to obtain the samples. Experimental results show the effectiveness of the algorithm, with accuracies of 79% on the training set and 78% on the test set.
Protein-protein Interaction Prediction Combining Active Learning with SVM
SHI Wen-li,GUO Mao-zu,LI Jin and LIU Xiao-yan
Computer Science. 2014, 41 (2): 82-86. 
An active learning method using SVM was introduced in this paper to solve the protein-protein interaction prediction task. Biological processes in cells are carried out through protein-protein interactions. Since determining whether a pair of genes interacts by wet-lab experiments is resource-intensive, we proposed a support vector machine active learning algorithm for interaction prediction. Active learning can guide the selection of pairs of genes for future experimental characterization in order to accelerate accurate prediction of the human gene interactome. As a method of constructing an effective training set, the goal of active learning is to find informative samples that enhance the classification results of the model during each iteration, thereby reducing the size of the training set and improving the efficiency of the model within limited time and resources. The experiments show that, compared with a standard SVM, active learning with SVM can effectively reduce the number of labeled examples required while maintaining the correctness of the classifier.
Face Recognition of Dual-tree Complex Wavelet Multi-frequency Within-class and Inter-class Uncertainty Fusion
WANG Shi-min,YE Ji-hua,WANG Ming-wen and CHENG Bai-liang
Computer Science. 2014, 41 (2): 87-90. 
In order to better extract face texture features and to solve the problem of weighting the multiple frequency bands of a face, this paper proposed a face recognition method that fuses the within-class and inter-class uncertainties of dual-tree complex wavelet multi-frequency features. Dual-tree complex wavelet multi-frequency features are first used to represent face texture, and the within-class and inter-class uncertainties of these features are calculated to obtain multi-frequency uncertainty weights. At the same time, two-dimensional principal component analysis is used to construct a linear subspace for the face multi-frequency features, and the final face features obtained by weighted fusion in this subspace ensure that the projected samples have minimum within-class distance and maximum inter-class distance in the new space. Experimental results on the ORL database and comparative analysis indicate that, compared with classical two-dimensional principal component analysis, traditional wavelet, Gabor wavelet, and dual-tree complex wavelet feature extraction methods, the proposed method obtains a better recognition rate.
Method of Face Recognition Based on Principal Component Analysis and Maximum a Posteriori Probability Classification
YUAN Shao-feng and WANG Shi-tong
Computer Science. 2014, 41 (2): 91-94. 
When performing face recognition with the PCA algorithm, the images may follow some probability density distribution and contain different levels of noise pollution, so simple distance-based classification is no longer effective. Maximum a posteriori classification combines parameter estimation, kernel functions, and Bayesian theory, and can take the probability distribution into account. Under a multivariate Gaussian distribution, using it in place of distance classification yields a better recognition rate for images containing Gaussian noise with different parameter values. The standard ORL face library was used to verify this approach, and the results show its feasibility.
Optimality Conditions on Riemannian Manifold of Nonlinear Convex Programming
ZOU Li,WEN Xin and LIN Bin
Computer Science. 2014, 41 (2): 95-98. 
This paper gave a characterization of convex functions on Riemannian manifolds by use of the Penot generalized directional derivative and the Clarke generalized gradient, and gave a sufficient condition for the minimum point of a convex program on a Riemannian manifold. It also gave the Lagrange theorem and Lagrange sufficient condition, as well as the Kuhn-Tucker theorem and sufficient conditions for the minimum point of equality-constrained, inequality-constrained, and mixed equality- and inequality-constrained optimization problems.
Effects of Statistical Machine Translation with Language Model of Dependency Syntax Relationship
DONG Ren-song,WANG Hua,ZHANG Xiao-zhong,YU Zheng-tao and ZHANG Tao
Computer Science. 2014, 41 (2): 99-101. 
In order to improve the results of Chinese-English statistical machine translation, this paper proposed a language model based on dependency syntax relationships. On top of the mature features of phrase-based statistical translation, the new model further constrains the N-best list produced by the decoder, recalculates the N-best scores, and reorders the N-best list to obtain a better translation. Experiments on a test set of 500 English sentences, with Pharaoh as the baseline, show that the proposed language model with dependency syntax relationships can improve the accuracy of the best Chinese-English statistical translation to some extent.
Artificial Bee Colony Algorithm Based on Hybrid Rank Mapping Probability and Chaotic Search
ZHANG Xin-ming,WEI Feng,NIU Li-ping and WANG Xian-fang
Computer Science. 2014, 41 (2): 102-106. 
In view of the shortcomings of artificial bee colony (ABC) algorithms, such as a low convergence rate and being trapped in local optima owing to choosing food sources by a directly mapped probability, an artificial bee colony optimization algorithm based on hybrid rank-mapped probability and chaotic search (ABC-HC) was proposed in this paper. First, two methods of computing the food-source selection probability were created based on rank mapping. Then an ABC algorithm combining the two probability methods in the onlooker bee phase was proposed in order to maintain the diversity of the solutions and avoid being trapped in local optima. Finally, in the scout bee phase, random search was replaced with chaotic search to effectively obtain a higher convergence rate and a global solution. Simulation results on 10 standard complicated test functions indicate that the proposed optimization algorithm is rapid and effective and outperforms the standard ABC algorithm and evolutionary ones.
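Chaotic search in the scout phase is typically driven by a simple chaotic map; a minimal sketch using the logistic map (the choice of map is an assumption for illustration; the paper's exact scheme may differ):

```python
def chaotic_search(x0, lo, hi, steps=5, mu=4.0):
    """Generate candidate points in [lo, hi] by iterating the logistic map.

    x <- mu * x * (1 - x) is chaotic for mu = 4 and x0 in (0, 1),
    so successive iterates explore [0, 1] irregularly; each iterate
    is rescaled into the search interval [lo, hi].
    """
    x = x0
    candidates = []
    for _ in range(steps):
        x = mu * x * (1.0 - x)
        candidates.append(lo + x * (hi - lo))
    return candidates

print(chaotic_search(0.3, -5.0, 5.0))
```

In a scout bee step, each candidate would be evaluated and the best one replaces the abandoned food source.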
Sample-specific Multiple Features Weighting-based High-resolution Remote Sensing Image Classification
CHANG Chun,LI Shi-jin,WAN Ding-sheng and FENG Jun
Computer Science. 2014, 41 (2): 107-110. 
High-resolution remote sensing images provide rich feature details. However, the various kinds of terrain have complex spatial distributions, and spectral heterogeneity among similar land covers appears widely, which brings great challenges to traditional pattern recognition classifiers. For this purpose, this paper put forward a novel multi-classifier combination method for remote sensing image classification that adaptively adjusts the weights for different query samples. Previous multiple-feature combination classifiers fail to make full use of the local correlations among features, using a single unified weight for all samples. This paper explored assigning each feature a different weight for each test sample according to its local distribution. Experimental results on a large remote sensing image database show that different features have different effects in classifying different samples, and the sample-specific multiple-feature weighting method presented in this paper raises the average classification accuracy from 78.3% to 90%.
Correlated Rules Based Associative Classification for Imbalanced Datasets
HUANG Zai-xiang,ZHOU Zhong-mei and HE Tian-zhong
Computer Science. 2014, 41 (2): 111-113. 
Many studies have shown that associative classification is a promising classification method. However, most associative classification algorithms may not achieve high performance on imbalanced datasets because they generate rules based on the support-confidence framework; confidence and support tend to be biased toward the majority class in imbalanced datasets, so instances of the minority class may be misclassified. We proposed a new associative classification approach called CRAC (Correlated Rules based Associative Classification for imbalanced datasets). First, we mine frequent and mutually associated itemsets for classification, thereby generating a small set of high-quality rules. Second, among all rules whose condition is a given frequent associated itemset, CRAC selects only the rule with the largest lift as a class association rule (CAR); as a result, the antecedent and consequent of the rules CRAC generates are positively correlated. Finally, we rank rules according to a new metric that integrates lift, support, and complement class support (CCS), so rules positively correlated with the minority class are more likely to be used for prediction. Our experiments on fifteen UCI data sets show that our approach is an effective classification technique for both balanced and imbalanced datasets, and has better average classification accuracy than CBA.
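The lift on which CRAC's rule selection relies is a standard association-rule measure, lift(A -> C) = confidence(A -> C) / support(C); values above 1 indicate positive correlation between antecedent and consequent. A minimal sketch (the toy transactions are made up for illustration):

```python
def lift(transactions, antecedent, consequent):
    """Lift of the rule antecedent -> consequent over a list of itemsets."""
    n = len(transactions)
    supp_a = sum(antecedent <= t for t in transactions) / n       # support of A
    supp_c = sum(consequent <= t for t in transactions) / n       # support of C
    supp_ac = sum((antecedent | consequent) <= t for t in transactions) / n
    confidence = supp_ac / supp_a
    return confidence / supp_c

data = [{'a', 'b'}, {'a', 'b'}, {'a'}, {'b'}, {'c'}]
print(lift(data, {'a'}, {'b'}))   # > 1: 'a' and 'b' are positively correlated
```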
Enhanced Multi-objective Evolutionary Algorithm Based on Decomposition
HOU Wei,DONG Hong-bin and YIN Gui-sheng
Computer Science. 2014, 41 (2): 114-118. 
A novel algorithm, called the multi-objective mixed-strategy evolutionary algorithm with local search (LMS-MOEA/D), was presented within the framework of MOEA/D (multi-objective evolutionary algorithm based on decomposition), which solves a set of scalar optimization sub-problems. The uniform design method was applied to generate the aggregation coefficient vectors. The mixed strategy makes full use of the advantages of each crossover operator, and the algorithm combines a local search strategy to approximate the Pareto-optimal set. Experimental results indicate that the proposed algorithm is efficient and effective in terms of diversity and convergence.
Moving Object Detection Based on Adaptive Image Blocking and SSIM
TIAN Hong-jin and ZHAN Yin-wei
Computer Science. 2014, 41 (2): 119-122. 
This paper focused on moving object detection. Motivated by the drawbacks of existing background update algorithms, which are noise-sensitive and slow, an improved moving object detection method was proposed based on adaptive image blocking and the block-wise structural similarity between frames. An initial background model is obtained from a few beginning frames, and every successive frame is divided into blocks. A similarity defined over corresponding blocks of two neighboring frames is used to update the background model. The moving objects are then obtained by subtracting the background model from the current frame. Experimental results demonstrate that the improved method performs better than traditional methods.
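The block-wise structural similarity referred to above is commonly the SSIM index; its per-block form can be sketched as follows (textbook SSIM with the usual constants for 8-bit images, not the paper's full pipeline):

```python
import numpy as np

def ssim(x, y, c1=6.5025, c2=58.5225):
    """Structural similarity between two image blocks.

    c1 = (0.01 * 255)^2 and c2 = (0.03 * 255)^2 are the standard
    stabilizing constants for 8-bit intensity images.
    """
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

block = np.arange(16.0).reshape(4, 4)
print(ssim(block, block))   # identical blocks -> 1.0
```

In background maintenance, a block whose SSIM to the background block is high is treated as background and used to refresh the model.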
Multiple Label Approach Based on Local Correlation of Neighbors
ZHENG Xi-yuan and ZHANG Hua-xiang
Computer Science. 2014, 41 (2): 123-126. 
Determining the classification of a test sample by using its neighbors' labels achieves good results in multi-label classification. These algorithms learn, from the training set, a mapping between the labels of training examples and the counts of different labels among their k-nearest neighbors; the label of a test sample can then easily be predicted by applying this mapping. Their disadvantage is that they consider only this mapping and ignore the local correlation between the labels of test examples and their k-nearest neighbors. This paper proposed an algorithm called ML-WKNN, which classifies test examples through the mapping between the labels of training examples and their k-nearest neighbors while taking this local correlation into account. The experimental results show that ML-WKNN achieves better results than other algorithms on multi-label classification problems and automatic image annotation.
Parallel Primal Estimated Sub-GrAdient Solver for Structural SVM
GUO Li-na,YANG Ming and TU Jin-jin
Computer Science. 2014, 41 (2): 127-130. 
The Primal Estimated sub-GrAdient solver for SVM (Pegasos) is a simple and effective iterative algorithm for solving the optimization problem of support vector machines. The method alternates between stochastic gradient descent steps and projection steps to find a hyperplane that separates two classes of samples with the maximal margin, but it neglects the data distribution, which is also vital for an optimal classifier. We developed a novel algorithm, termed the Parallel Primal Estimated sub-GrAdient Solver for Structural SVM (PSPegasos), by embedding structural information into the SVM and using the MapReduce parallel computing framework. This algorithm can take full advantage of the computing and storage capacity of a computer cluster and is applicable to optimization problems over massive data. The algorithm was applied to two NASA software module datasets, CM1 and PC1, and the experimental results show that it accelerates convergence, improves classification performance, and is an effective solution to the optimization problem on massive data.
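The basic serial, non-structural Pegasos update that such parallel variants build on can be sketched as follows (standard Pegasos on toy data, not the proposed parallel structural algorithm):

```python
import numpy as np

def pegasos(X, y, lam=0.01, iters=1000, seed=0):
    """Pegasos: stochastic sub-gradient solver for a linear SVM (no bias term)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for t in range(1, iters + 1):
        i = rng.integers(len(X))          # pick one random sample
        eta = 1.0 / (lam * t)             # decreasing step size
        if y[i] * (w @ X[i]) < 1:         # sample violates the margin
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:
            w = (1 - eta * lam) * w       # only shrink (regularization)
    return w

# linearly separable toy data
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -2.0], [-2.5, -1.5]])
y = np.array([1, 1, -1, -1])
w = pegasos(X, y)
print(np.sign(X @ w))
```

On this separable toy set the learned hyperplane classifies all four points correctly.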
Interactive Path-planning Method Based on Artificial Potential Field in Game Scenarios
YU Shuai,LI Yan,WANG Xi-zhao and ZHAO He-ling
Computer Science. 2014, 41 (2): 131-135. 
In real-time strategy (RTS) games, path planning is one of the typical and important tasks for game players. To meet the requirement of real-time response, game players need to find an offensive path quickly. Besides, there are often interactions among game units which greatly influence the quality of path planning. The Dijkstra algorithm is a traditional and widely used algorithm that can find an optimal path; however, it cannot meet the strict time limit of RTS games and does not consider unit interactions. This paper selected a typical RTS attack-defense scenario and presented a fast, dynamic path-planning method based on an artificial potential field. We also introduced the concept of a fuzzy measure to describe the interaction of units. The experimental results show that the proposed method is more efficient and makes the selected game scenario closer to real games.
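The attractive/repulsive force construction of a standard artificial potential field can be sketched as follows (textbook APF with hypothetical gain constants, not the paper's game-specific method):

```python
import math

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Resultant 2D force: attraction toward the goal plus repulsion
    from obstacles within the influence radius d0."""
    fx = k_att * (goal[0] - pos[0])          # attractive component
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:                       # repulsion only near obstacles
            mag = k_rep * (1.0 / d - 1.0 / d0) / d**2
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy

# A unit at the origin is pulled toward (5, 5) but pushed away
# from an obstacle at (1, 0).
print(apf_force((0.0, 0.0), (5.0, 5.0), [(1.0, 0.0)]))
```

The unit moves a small step along the resultant force each frame, which is cheap enough for real-time replanning.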
EM Algorithm for Latent Regression Model
HAN Zhong-ming,LV Tao,ZHANG Hui and JIANG Tong-qiang
Computer Science. 2014, 41 (2): 136-140. 
Latent variable regression models have a very wide range of applications. Estimating the parameters of such models depends on assumptions about the distribution of the independent variables. Assuming a Beta distribution for the independent variables, an EM algorithm for parameter estimation of the latent regression model was proposed in this paper. The detailed solution process of the model was derived, and a Newton method for solving the Beta distribution parameters was given. Furthermore, an initial value selection algorithm was proposed. Comprehensive experiments were conducted on simulated datasets and a real dataset. The experimental results show that the EM algorithm can efficiently estimate the parameters of latent regression models with different distribution shapes.
Analysis and Reasoning of Race Condition in Embedded System Synchronization Process
ZHANG Jing and PAN You-shun
Computer Science. 2014, 41 (2): 141-144. 
Because race conditions in embedded system synchronization processes may cause conflicts, this paper proposed a race condition analysis and reasoning model comprising a race dependency set, a race cooperation graph and a race condition array. The race condition model analyzes synchronization processes with race relations in embedded systems, reasons about race conditions among processes, and generates the race dependency set. The race cooperation graph describes synchronization processes and their race condition reasoning relationships. The race condition array is designed to store these reasoning relationships for further study. The proposed method improves analysis efficiency and has practical value.
Hypergraph Spectral Clustering with Sparse Representation
WANG Can-tian,SUN Yu-bao and LIU Qing-shan
Computer Science. 2014, 41 (2): 145-148. 
Hypergraph spectral clustering attracts much attention because it can effectively describe high-order information among the data. Different from the traditional graph model, a hyperedge in a hypergraph is not a pairwise link between two data points but a subset of data points sharing some attribute. In practice, hyperedges are usually built by simple K-NN clustering, which does not consider the inherent relationships among the data. We proposed a new hypergraph spectral clustering algorithm based on sparse representation. For each data point, sparse representation is used to seek its related neighbors to form a hyperedge, so the data points in a hyperedge have strong dependency. Finally, spectral decomposition is performed on the Laplacian matrix of the hypergraph to obtain the clustering result. Extensive experiments on face and handwriting databases demonstrate the effectiveness of the proposed method.
Novel Moving Object Detection Method Based on ViBe
HU Xiao-ran and SUN Han
Computer Science. 2014, 41 (2): 149-152. 
Because of interferences such as ghosts and shadows, which cannot be overcome by the ViBe algorithm in practical moving object detection, this paper put forward an improved ViBe algorithm that combines the inter-frame difference algorithm with edge detection technology. By using the inter-frame difference algorithm in the preprocessing stage, the true background can be obtained and ghosts can be removed. Then, using prior knowledge and edge detection in the moving object detection stage, the true moving object can be extracted and shadows eliminated. In addition, with a pixel-labeled segmentation method, the moving object can be described and tracked. These methods were applied to real-time traffic surveillance video, and the experimental results show that they perform well in removing interferences such as ghosts and shadows when detecting and tracking moving cars.
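The inter-frame difference step used in the preprocessing stage amounts to thresholding the absolute change of each pixel between consecutive grayscale frames (a minimal sketch with frames as nested lists; the threshold value is illustrative):

```python
def frame_difference(prev, curr, threshold=25):
    """Inter-frame difference on two grayscale frames (lists of rows):
    a pixel is marked foreground (1) when its absolute change between
    frames exceeds the threshold, otherwise background (0)."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]
```

Pixels that stay stable across frames are treated as true background, which is what lets the improved algorithm discard ghosts left over from model initialisation.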
Architecture Selection for Single-hidden Layer Feed-forward Neural Networks Based on Sensitivity of Node
ZHAI Jun-hai,HA Ming-guang,SHAO Qing-yan and WANG Xi-zhao
Computer Science. 2014, 41 (2): 153-156. 
Based on the sensitivity of nodes, an architecture selection algorithm for single-hidden-layer feed-forward neural networks (SLFNNs) was proposed. Beginning with an initially large number of hidden nodes, the proposed algorithm first employs sensitivity to measure the significance of the hidden nodes, then sorts the hidden nodes in descending order of significance, and finally prunes all unimportant nodes. The algorithm terminates when a predefined stopping condition holds. The main features of the proposed algorithm are that retraining the SLFNN is unnecessary, the resulting architecture is compact, and the generalization capacity is high. We evaluated the proposed approach on real-world and UCI datasets, and the experimental results show that it is effective and efficient.
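The sort-and-prune step can be sketched in a few lines (a simplified illustration: the sensitivity scores are assumed to be already computed, and the fixed keep-ratio stands in for the paper's stopping condition):

```python
def prune_hidden_nodes(sensitivities, keep_ratio=0.5):
    """Rank hidden nodes by a precomputed sensitivity score and keep
    only the most significant fraction.  Returns the sorted indices
    of the surviving nodes."""
    order = sorted(range(len(sensitivities)),
                   key=lambda i: sensitivities[i], reverse=True)
    keep = max(1, int(len(order) * keep_ratio))
    return sorted(order[:keep])
```

Because the surviving weights are untouched, no retraining pass is required after pruning.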
Direct Triangulation Algorithm for Three-dimensional Scattered Points
QIU Chun-li and XU Hong-li
Computer Science. 2014, 41 (2): 157-160. 
The triangulation of scattered points plays an important role in surface reconstruction. Based on a deep analysis of basic triangulation methods, this paper provided an efficient triangulation algorithm for such points. The algorithm applies a dynamic-ball strategy to surface reconstruction: drawing on incremental computation theory, a constraint method and a vertex measure function, the mesh is extended from a seed triangle until it covers the entire surface. The experimental results and analysis show that the algorithm not only reconstructs surfaces efficiently but also largely preserves the characteristics of the original surface. Both theoretical analysis and simulation results justify the feasibility of the algorithm.
Film Affective Classification Based on Improved Fuzzy Comprehensive Evaluation
LIN Xin-qi
Computer Science. 2014, 41 (2): 161-165. 
In order to improve the classification accuracy of film scene emotion, a novel algorithm was proposed based on improved fuzzy comprehensive evaluation from fuzzy mathematics theory, establishing the relationship between low-level features and high-level cognitive emotion. First, scene luminance, shot cut rate and color energy were selected as the low-level features because they can distinguish different types of human emotional reaction well, and extraction methods for them were put forward. Secondly, after introducing and improving the fuzzy comprehensive evaluation model, fuzzy membership functions were formed to measure the fuzzy relationship between low-level features and emotion, and the single-factor evaluation matrix was built. Finally, the analytic hierarchy process (AHP) was used to determine the relative weight matrix of the features, and the affective fuzzy feature vector was computed by the improved fuzzy comprehensive evaluation model. The affective type of the film scene was then obtained from the maximal component of the affective fuzzy feature vector and a threshold. The experimental results show that the proposed algorithm can effectively improve the accuracy of film affective classification.
SMwKnn:Mutual k Nearest Neighbours Algorithm Based on Class Subspace and Distance-weighted
LU Wei-sheng,GUO Gong-de,YAN Xuan-hui and CHEN Li-fei
Computer Science. 2014, 41 (2): 166-169. 
Mknnc is an improved version of the k nearest neighbours (KNN) algorithm which uses mutual k nearest neighbours to eliminate anomalies in the training set and among the k nearest neighbours, and it performs better than KNN. However, because its noise elimination stage does not take class labels into consideration, real and effective data may be eliminated as noise, degrading classification performance. The mutual k nearest neighbours algorithm based on class subspace and distance weighting (SMwKnn) takes distance weighting into account, eliminating the influence of redundant or useless attributes on the similarity measurement of the k nearest neighbours classifier as well as the anomalies among the neighbours. The experimental results on UCI public datasets verify the effectiveness of the proposed algorithm.
Plant Leaf Recognition Method Based on Fractal Dimension Feature of Outline and Venation
ZHAI Chuan-min,WANG Qing-ping and DU Ji-xiang
Computer Science. 2014, 41 (2): 170-173. 
This article discussed a method of describing the characteristics of plant leaves based on the fractal dimension of outline and venation. The method first separates the outline and venation, and obtains multiple venation images by multi-threshold edge detection. Then the two-dimensional fractal dimension of the leaf edge image and of the venation images is calculated and used as the basis for plant leaf classification and recognition.
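A standard way to estimate the two-dimensional fractal dimension of a binary outline or venation image is box counting: count the boxes occupied by edge pixels at several scales and fit log N against log(1/s) (a minimal sketch; the abstract does not name its estimator, so box counting is an assumption, and the box sizes are illustrative):

```python
import math

def box_counting_dimension(points, sizes=(1, 2, 4, 8)):
    """Estimate the box-counting (fractal) dimension of a set of
    (x, y) pixel coordinates by a least-squares fit of log(box count)
    versus log(1/box size)."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(x // s, y // s) for x, y in points}  # occupied boxes at scale s
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
            sum((x - mx) ** 2 for x in xs))            # slope of the fit
```

A filled square of pixels yields a dimension of 2 and a straight line of pixels yields 1; leaf outlines and venation fall in between, which is what makes the value discriminative.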
Tags Know You Better:A New Approach to Enhancing MIR System
ZHOU Li-juan,LIN Hong-fei and YAN Jun
Computer Science. 2014, 41 (2): 174-178. 
Music sharing systems with collaborative tagging have become an important part of the Internet. They make it possible for users to annotate and categorize their own interests and thoughts about resources. In this paper, a novel and straightforward way was proposed to search music collections using the metadata and descriptions (tags) of tracks, jointly considering the lyrics, tags and popularity of songs to enhance a Music Information Retrieval (MIR) system. Furthermore, a Tag Latent Dirichlet Allocation (TLDA) model was proposed to facilitate an adjusted VSM by obtaining more semantically related tags. TLDA can better analyze collaboratively generated tags and understand the intent of user queries in a semantic way, acquiring more information than a keyword-matched track return list. Comparing the performance of the proposed approach with a general tag clustering approach, we found that the proposed music information retrieval model performs better than conventional metadata-based music retrieval techniques and tag clustering, especially when the tags for tracks are extremely sparse and informal.
VTD-XML Node Query Execution Performance Optimization Based on CMP
GUO Xian-yong,CHEN Xing-yuan and DENG Ya-dan
Computer Science. 2014, 41 (2): 179-181. 
For mainstream multi-core processors, the node query execution performance of VTD-XML was optimized based on a preloading method, addressing both the concurrent execution of multiple threads and thread memory access performance. The experimental results show that the multithreaded XML document parsing framework proposed in this paper can take full advantage of the computing resources of multi-core processors, effectively improve thread memory access performance, and greatly improve the performance of XML node queries.
Tracking with Pairwise Uncertainty of RSSI Based on Pairwise Sensing Uncertainty in Wireless Sensor Networks
XIE Yi,HUANG Qi-shan and ZHANG Hui-chuan
Computer Science. 2014, 41 (2): 182-190. 
Focusing on the unreliable sensing phenomenon in wireless sensor networks and its impact on target-tracking accuracy, this paper first analyzed the uncertain area caused by the uncertainty of pairwise sensing results and its boundaries. Then the tracking with pairwise uncertainty of RSSI (TPU-RSSI) strategy was proposed. The tracking problem is thus transformed into a vector matching process, which matches the signature vector of the divided faces with the sampling vector formed by burst grouping samplings, in order to improve tracking flexibility, increase tracking accuracy and reduce the influence of in-the-field factors. In addition, a heuristic matching algorithm was introduced to reduce the computational complexity. The experimental results show that TPU-RSSI is more flexible and has higher tracking accuracy than related methods.
Method of Computing Least No Conflict Routing Groupings in Shuffle-exchange Networks
ZHANG Yi-hao,SHEN Yue-hong and PAN Lin
Computer Science. 2014, 41 (2): 191-196. 
In order to resolve the problem of separating conflicting routings in shuffle-exchange networks, the concepts of the maximal no-conflict routing group, the least no-conflict routing groupings, the eigenfunction and the covering function were defined. Based on these concepts, a theory and method of computing the least no-conflict routing groupings by Boolean algebra were proposed. In addition, an algorithm for approximately computing the least no-conflict routing groupings was put forward to improve the efficiency of batch routing. Results of theoretical analysis and experiments show that the time efficiency and accuracy of the algorithm are excellent. It provides strong support for carrying out batch routing policies in the process of massive information exchange.
Network Intrusion Intelligent Detection Algorithm Based on AdaBoost
TAN Ai-ping,CHEN Hao and WU Bo-qiao
Computer Science. 2014, 41 (2): 197-200. 
On the Internet, computers and equipment are threatened by malicious intrusions, and network safety is seriously affected. Intrusion behaviors evolve quickly, are well concealed and appear random, so traditional methods have difficulty preventing them effectively. In this paper, a network intrusion intelligent detection algorithm based on AdaBoost was presented. SVMs are used to build the learning modules of intrusion detection, and AdaBoost is used to train these learning modules and generate the final intrusion detection model. The simulation results show the effectiveness of the algorithm.
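The boosting loop itself is independent of the choice of weak learner. The sketch below uses one-dimensional threshold stumps instead of the paper's SVM modules purely to stay self-contained; the reweight-and-vote structure is the same (function names and parameters are illustrative):

```python
import math

def train_adaboost(xs, ys, rounds=5):
    """AdaBoost sketch with threshold stumps as weak learners.
    Returns a list of weighted votes (threshold, polarity, alpha)."""
    n = len(xs)
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        best = None
        for thr in sorted(set(xs)):                 # exhaustive stump search
            for pol in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if (pol if x >= thr else -pol) != y)
                if best is None or err < best[0]:
                    best = (err, thr, pol)
        err, thr, pol = best
        err = min(max(err, 1e-10), 1.0 - 1e-10)     # avoid log(0)
        alpha = 0.5 * math.log((1.0 - err) / err)
        model.append((thr, pol, alpha))
        # re-weight: emphasise the samples this stump misclassified
        w = [wi * math.exp(-alpha * y * (pol if x >= thr else -pol))
             for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
    return model

def predict_adaboost(model, x):
    score = sum(alpha * (pol if x >= thr else -pol) for thr, pol, alpha in model)
    return 1 if score >= 0 else -1
```

Swapping the stump search for SVM training on the weighted sample recovers the structure the abstract describes.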
Relationship Analysis of Microblogging User with Link Prediction
FU Ying-bin and CHEN Yu-zhong
Computer Science. 2014, 41 (2): 201-205. 
With the development of online social networking sites represented by microblogs, microblogging users form complex social networks. In order to study the factors that affect the formation of relationships among microblogging users, this paper used link prediction to analyze these relationships. Firstly, this paper studied how network structure features affect the formation of the microblogging network. Microblogging attribute features were also analyzed and introduced to build a link prediction model based on a random forest classifier. The link prediction model was tested on a user data set collected from Sina Weibo. By comparing the prediction performance with and without the microblogging attribute features and analyzing the importance distribution of the features, we found that besides the network structure features, microblogging attribute features have a significant effect on the formation of user relationships and can improve the prediction performance significantly.
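Typical network structure features fed to such a classifier are neighbourhood-overlap scores for each candidate user pair. A minimal sketch (the exact feature set of the paper is not specified, so common neighbours and the Jaccard coefficient are assumed here as representative examples):

```python
def structural_features(graph, u, v):
    """Structural link-prediction features for a candidate pair (u, v)
    in a follow graph given as a dict of node -> set of neighbours:
    common-neighbour count and Jaccard coefficient."""
    nu, nv = graph.get(u, set()), graph.get(v, set())
    common = len(nu & nv)
    union = len(nu | nv)
    return {"common": common,
            "jaccard": common / union if union else 0.0}
```

Each candidate pair then becomes one feature row, to which attribute features (e.g. posting behaviour) can be appended before training the random forest.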
Research and Application of Data Integration in Distributed Enterprise Service Bus Platform
FAN Jing,XIONG Li-rong and XU Cong
Computer Science. 2014, 41 (2): 206-214. 
To integrate large-scale heterogeneous data, solve the problem of data source distribution and satisfy the requirements of information communication and sharing across various systems and applications, this paper presented a solution for data integration on a distributed enterprise service bus (ESB) platform. A data model for integration based on WSDL and XML was proposed. Besides, to improve performance when processing large numbers of messages, a load balancing algorithm based on the ESB process was presented. This algorithm allocates process nodes according to component load and is applied to the system integration model of the distributed ESB platform. An example of hospital information system integration was presented, which verifies the feasibility and effectiveness of the proposed framework and algorithm in solving the integration of large-scale heterogeneous data sources and the load balancing problem.
Real-time Task Scheduling Strategy Based on Load Execution Urgency
XIA Jia-li,CAO Zhong-hua,WANG Wen-le and CHEN Hui
Computer Science. 2014, 41 (2): 215-218. 
For the compensatory-support real-time task model, this paper analyzed the load execution urgency of real-time task systems, then put forward TSCTTL, a real-time compensation task scheduling strategy based on load execution urgency. The simulation results show that scheduling compensation tasks according to the load execution urgency of real-time tasks reduces the task deadline miss ratio and improves the system's returns.
Research on Formal Verification of Web Interaction Model
LI Min,LUO Hui-qiong,TANG Chun-ling and WANG Qiang
Computer Science. 2014, 41 (2): 219-221. 
Formal verification of a Web interaction model is a credible way of evaluating the attributes of Web events. Through systematic modeling, behavior analysis and validation of core properties, defects are exposed in the formal model during the design phase instead of the coding phase or later, so the viability of the system model is stronger. Meanwhile, it costs less than late defect exposure. We investigated the process modeling of interactive application services in Web systems, checking the correctness of the model's relevant properties. Process modeling also achieves deduction of service interaction processes over the system's logic units through mathematical reasoning, and formal verification of the correctness of system services was performed. The advantage of this method lies mainly in the early discovery of defects in the system service model. The formal verification of the Web interaction model is based on the IMWSC model verification mechanism.
Analysis and Modeling of Computer Interlocking Software Based on UML
WU Xiao-chun and GAO Xue-juan
Computer Science. 2014, 41 (2): 222-225. 
Effectively testing, analyzing and validating computer interlocking software is an important way to ensure the safety of trains, passengers' lives and property, and a formal model is the foundation of such testing, analysis and validation. Based on the informal UML model of interlocking software and using the finite state machine as the mathematical tool for the formal model, this paper studied a method for transforming UML sequence diagrams or scenarios into a finite state machine model. Firstly, the UML sequence diagram is transformed into FSP process mathematical models, and then the system finite state machine model is obtained by merging the process models of all objects in the sequence diagram. Finally, the case of entry route control was used to generate the system finite state machine model and validate the feasibility and effectiveness of this method.
Research on Argument Ontology for Computational Argumentation
LIU Bin,YAO Li,HAO Zhi-yong and GONG Yong
Computer Science. 2014, 41 (2): 226-231. 
Argumentation techniques have received wide attention from AI researchers in recent years, but how to construct qualified counter-arguments efficiently during the argumentation process remains a difficult problem. The nature of arguments was analyzed, the concepts relating to arguments and the relations between these concepts were formally defined, and a knowledge representation of arguments was structured. An argument ontology was then implemented, and an instance of an argumentation process was used to test the consistency and usability of the ontology. In a prototype system built on this ontology, the arguments needed in argumentation processes can be acquired by querying or by construction based on the ontology. Previously constructed arguments are reused, thus enhancing the computing efficiency of automatic argumentation.
Moving Target Tracking Based on Improved Particle Filter
LI Zhi and XIE Qiang
Computer Science. 2014, 41 (2): 232-235. 
In target tracking methods based on the traditional particle filter, the importance density function is difficult to select and lacks versatility, and it is difficult to design a resampling method that effectively solves the particle degradation phenomenon. Therefore, a moving target tracking method based on an improved particle filter was proposed, which uses the artificial fish swarm algorithm to improve the importance density function. Particles interact and coordinate their behavior constantly, bringing the particle states close to the posterior distribution and improving the versatility of the importance density function. On this basis, to improve the resampling method and suppress premature convergence, the convergence and diversity of the particle swarm are balanced by the immune operators of the artificial immune algorithm. Experimental results show that, compared with the traditional particle filter algorithm, moving target tracking accuracy and anti-interference ability are improved and the particle degradation phenomenon is suppressed effectively by adjusting the parameters of the proposed algorithm.
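The traditional bootstrap particle filter that the paper improves upon follows a predict-weight-resample loop (a minimal 1-D sketch under an assumed random-walk motion model and Gaussian observation noise; the fish-swarm and immune operators of the paper are omitted):

```python
import math
import random

def particle_filter(observations, n=500, proc_std=1.0, obs_std=1.0, seed=0):
    """Bootstrap particle filter for a 1-D random-walk state.
    Returns the filtered state estimate after each observation."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for z in observations:
        # predict: propagate each particle through the motion model
        particles = [p + rng.gauss(0.0, proc_std) for p in particles]
        # update: weight by the Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # resample: multinomial resampling to fight degeneracy
        particles = rng.choices(particles, weights=weights, k=n)
    return estimates
```

The paper's modification inserts swarm-style particle moves before the weighting step so that the proposal better matches the posterior, and tempers the resampling step to keep particle diversity.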
Dependence Space Based on Two Types of Concept Lattices
BAO Yong-wei,WANG Xia and WU Wei-zhi
Computer Science. 2014, 41 (2): 236-239. 
Object oriented concept lattices and attribute oriented concept lattices are two generalized models of the classical concept lattice. Firstly, two congruence relations were defined on the object power sets of the object oriented concept lattice and the attribute oriented concept lattice using a pair of dual approximation operators. Secondly, an inner operator and a closure operator were constructed based on the two kinds of congruence relations. Then the relationship between the inner operator and the object oriented concept lattice was studied, as well as the relationship between the closure operator and the attribute oriented concept lattice. Finally, relationships between two object oriented concept lattices were transformed into relationships between the corresponding congruence relations.
Multiple Local Adaptive Soft Subspace Clustering Ensemble Based on Multimodal Perturbation
WANG Li-juan,HAO Zhi-feng,CAI Rui-chu and WEN Wen
Computer Science. 2014, 41 (2): 240-244. 
This paper proposed a multiple local adaptive soft subspace clustering (LAC) ensemble based on multimodal perturbation (MLACE). The proposed MLACE has three merits. Firstly, MLACE combines the diverse and complementary decisions generated by random initialization, parameter perturbation and feature subspace projection, so as to improve clustering accuracy. Secondly, the clustering ensemble information is refined: the probability of each instance belonging to each cluster is defined according to the subspace weight matrix from LAC. Thirdly, because the clustering ensemble information is refined from 0/1 binary values into [0,1] real values, the consensus function can adopt the real-valued clustering ensemble method fast global K-means, which further improves the accuracy of the clustering ensemble. Two synthetic datasets and five UCI datasets were chosen to evaluate the accuracy of MLACE. The experimental results show that MLACE is more accurate than K-means, LAC and the multiple LAC clustering ensemble based on parameter perturbation (P-MLACE).
Rough Set Approach to Data Completion Based on Relative Decision Entropy and Weighted Similarity
WANG Sha-sha,JIANG Feng and WANG Wen-peng
Computer Science. 2014, 41 (2): 245-248. 
The current data completion methods based on rough sets do not consider the differences between condition attributes when calculating the similarity between two objects. To solve this problem, this paper introduced the notion of weighted similarity and proposed a rough set data completion algorithm called RDNAWS based on relative decision entropy and weighted similarity. RDNAWS adopts relative decision entropy to measure the significance of each condition attribute. By calculating the significance of each condition attribute and the dependence of the set of decision attributes on it, RDNAWS assigns a weight to each condition attribute, which efficiently distinguishes the various condition attributes. The experimental results on real data sets demonstrate that our algorithm obtains better classification performance than current algorithms.
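The weighted-similarity idea can be sketched as follows (a simplified illustration: the per-attribute weights are assumed to be already derived from relative decision entropy, equality stands in for the attribute-level similarity, and `None` marks a missing value):

```python
def weighted_similarity(obj_a, obj_b, weights):
    """Weighted similarity between two objects with possible missing
    values (None): an attribute contributes its weight when both
    values are present and equal, normalised over the attributes
    where both values are present."""
    matched = sum(w for a, b, w in zip(obj_a, obj_b, weights)
                  if a is not None and b is not None and a == b)
    usable = sum(w for a, b, w in zip(obj_a, obj_b, weights)
                 if a is not None and b is not None)
    return matched / usable if usable else 0.0
```

Unlike an unweighted match count, a disagreement on a highly significant attribute now lowers the similarity more than one on a marginal attribute, which is the behaviour RDNAWS exploits when choosing donors for missing values.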
Parallel Alternating Direction Algorithm with Parameters for Solving Banded Linear Systems
MA Xin-rong,LIU San-yang and DUAN Zhi-jian
Computer Science. 2014, 41 (2): 249-252. 
This paper focused on a parallel iterative method with parameters for solving banded or block-tridiagonal linear systems on a distributed-memory cluster. By splitting the coefficient matrix and introducing parameters, we proposed a new algorithm and gave convergence theories for some special coefficient matrices. Furthermore, we implemented the algorithm on an HP rx2600 cluster and compared it with the multisplitting method, the BSOR method and the PEk inner iterative method on different examples. The numerical experiments indicate that the acceleration rate and efficiency of our algorithm are higher than those of the multisplitting method, and the algorithm saves computational time by allocating memory properly. For Example 1, the acceleration rate and efficiency of our algorithm are slightly better than those of the BSOR method, and for Example 2 the results are significantly better than those of the PEk inner iterative method.
New Evaluation Model for Incomplete Interval-valued Information System Based on Improved Dominance Relations
WANG Bin,SHAO Ming-wen,WANG Jin-he and ZHANG Jun-hu
Computer Science. 2014, 41 (2): 253-256. 
The dominance-based rough set approach is an important method for studying incomplete interval-valued information systems. To solve outstanding problems in such systems, we proposed two new dominance relations: the upper-limit dominance relation and the similarity dominance relation. Based on these two relations, we studied object ranking and uncertainty measurement, and showed the differences and relationships between the two proposed dominance relations. Examples were provided to substantiate the proposed concepts.
Hybrid Inversion Algorithm of Thunder Cloud Equivalent Electric Charge Based on Multi-station Atmospheric Electric Field
XING Hong-yan and HUANG Yu
Computer Science. 2014, 41 (2): 257-260. 
In order to invert the thunder cloud equivalent electric charge from ground electric field data, this paper presented a hybrid inversion algorithm for the thunderstorm cloud equivalent charge. The algorithm combines the particle swarm method and Newton's method through a combined mosaic hybrid structure and controls the hybrid timing by constructing a mixed probability function. Given the parameters of the thunder cloud charge structure, the thunder cloud equivalent charge is inverted based on the forward modeling results. The results show that the particle-swarm Newton method can effectively avoid the initial value selection problem through its strong global search capability and obtain better inversion results, whereas a simple serial hybrid structure has short calculation time but poor inversion results. The mosaic hybrid structure better exploits the advantages of the two algorithms, and building the mixed probability density function improves computational efficiency.
Simulation Algorithm for 454 Pyrosequencing Sequencers
CHEN Wei,CHENG Yong-mei,ZHANG Shao-wu and PAN Quan
Computer Science. 2014, 41 (2): 261-263. 
Recent advances in environmental genomics and deep sequencing technologies have expanded our understanding of the composition and structure of microbial communities based on 16S rRNA gene sequences. However, the complexity of environmental samples, the difficulty of separating them and the lack of ground truth make it difficult to analyze the microbes quantitatively. Simulation datasets are therefore useful in developing novel software, because they not only help us explore the microbial structure quantitatively but also allow us to construct benchmark studies for evaluating existing methods for processing 16S rRNA sequence data. In the present work, based on an error-prone PCR model and a normal distribution model, a simulation algorithm for the 454 sequencer (Tsim) was established to simulate the process of sequencing by synthesis. The simulation results show that the simulator can effectively simulate the 454 sequencing process.
Chain Graphs and their Concept Lattice Representation
LI Li-feng
Computer Science. 2014, 41 (2): 264-266. 
Concept lattices are an ordering of the maximal rectangles defined by a binary relation, and there is a corresponding relationship between concepts and maximal bicliques. This paper applied the reduction theory of concept lattices to chain graphs. Firstly, the representation of a chain graph by a concept lattice was given. Secondly, it was proved that a bipartite graph G=(V1,V2,E) is a chain graph if and only if G′=(V1,V2) is such a graph, where (V1,V2) is a reduced context of the context (V1,V2,E).
Hybrid Algorithm Based on Artificial Glowworm Swarm to Achieve Best Solution to Pharmaceutical Distribution Problem
JIN Yu-qin,ZHOU Jin-hai,ZHANG Xing-de and SI Jun-feng
Computer Science. 2014, 41 (2): 267-269. 
Pharmaceutical distribution planning has become an important research question that needs to be resolved. Firstly, the characteristics of pharmaceutical distribution problems were analyzed in this paper, a mathematical model with constraints was put forward, and the fitness function for finding the best pharmaceutical distribution routing was determined. Then a hybrid algorithm based on the artificial glowworm swarm optimization algorithm was proposed for the model optimization. Simulation results show that the proposed algorithm effectively finds the best solution to pharmaceutical distribution problems. It not only saves costs but also improves operational efficiency, and provides a valuable reference for solving this kind of problem.
Research on Information Extraction Model for Microblog Content
ZHENG Ying and LI Da-hui
Computer Science. 2014, 41 (2): 270-275. 
Social media are platforms or tools that people use to share opinions, insights, ideas and experience, and they have become new media with great influence. Microblogging is an important part of social media, so it plays an important role in information transfer. Information extraction oriented to microblog content extracts valuable structured information from the noisy, loose, unstructured free text of microblogs, facilitating effective access to the information they contain. This paper proposed a factor-graph-based approach for accurately extracting the events reflected in microblogs. Finally, experiments were used to verify the effectiveness of the method, and the results show that its performance and accuracy are higher than those of other methods.
Design and Simulation of Passenger Flow Forecast Algorithm for Urban Rail Transit
LI Shao-wei and CHEN Yong-sheng
Computer Science. 2014, 41 (2): 276-279. 
Abstract PDF(323KB) ( 608 )   
References | RelatedCitation | Metrics
To forecast the passenger flow of urban rail transit accurately, a hierarchical framework based on a neural network and a Kalman-filter model was presented. First, an Elman neural network model is employed to produce an initial prediction of the passenger flow. Then the Kalman filter is used to refine the forecast data so as to improve the accuracy of the predicted results. Finally, to validate the proposed model, the passenger flow of a Shanghai subway transport hub was observed and simulated. Experimental results show that the proposed hierarchical model reduces the error by about 0.8% and performs better than either single algorithm.
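The second, refinement stage of such a hierarchy can be illustrated with a scalar Kalman filter that treats the network's outputs as noisy measurements of the true flow. This is a generic sketch, not the paper's implementation; the noise variances and the random-walk state model are assumptions.

```python
def kalman_refine(raw_forecasts, q=1.0, r=4.0):
    """Refine noisy first-stage forecasts with a scalar Kalman filter.
    q: process-noise variance, r: measurement-noise variance (assumed)."""
    x, p = raw_forecasts[0], 1.0           # initial state estimate / variance
    refined = [x]
    for z in raw_forecasts[1:]:
        p = p + q                          # predict: flow follows a random walk
        k = p / (p + r)                    # Kalman gain
        x = x + k * (z - x)                # correct with the new measurement
        p = (1 - k) * p
        refined.append(x)
    return refined

noisy = [100, 130, 90, 120, 110, 140, 95]  # e.g. raw per-interval forecasts
smooth = kalman_refine(noisy)
```

Each refined value is a convex combination of the previous estimate and the new measurement, so the output stays within the range of the input while damping its fluctuations.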
Image Feature Detection and Registration Algorithm Based on Mexican Hat Function
JIN Feng and FENG Da-zheng
Computer Science. 2014, 41 (2): 280-284. 
Abstract PDF(1174KB) ( 465 )   
References | RelatedCitation | Metrics
An operator based on the Mexican hat function was used for detecting local areas and feature points in images, and an image registration algorithm using these two kinds of features was proposed. The Mexican hat operator combined with zero-crossing detection is used to find local areas, and feature points are detected by applying the operator at different scales of the scale space. The image is partitioned into several regions by the local areas and the regions are matched; the feature points are then grouped by region and matched within each group. Finally the image transformation function is obtained by grouped random sample consensus. Experimental results show that the proposed algorithm, built on detecting and matching these two kinds of Mexican-hat-based features, achieves high alignment accuracy at a low computational cost.
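The Mexican hat function itself is standard (the shape of the negated Laplacian of a Gaussian); a minimal sketch of building such a kernel and applying it to an image follows. The kernel size and sigma are illustrative, and the valid-mode convolution is a plain reference implementation, not the paper's optimized detector.

```python
import math

def mexican_hat(x, y, sigma=1.0):
    """2-D Mexican hat (Laplacian-of-Gaussian-shaped) function."""
    r2 = (x * x + y * y) / (2 * sigma * sigma)
    return (1.0 / (math.pi * sigma ** 4)) * (1 - r2) * math.exp(-r2)

def kernel(size=7, sigma=1.0):
    """Sampled Mexican hat kernel: positive centre, negative surround."""
    c = size // 2
    return [[mexican_hat(i - c, j - c, sigma) for j in range(size)]
            for i in range(size)]

def convolve(img, k):
    """Valid-mode 2-D convolution; zero crossings of the response mark
    candidate local areas, and extrema across sigmas give feature points."""
    kh, kw = len(k), len(k[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + u][j + v] * k[u][v]
                           for u in range(kh) for v in range(kw)))
        out.append(row)
    return out
```

Running `convolve` with kernels of several sigmas yields the scale-space stack from which feature points are taken.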
New Method for Road Extraction Based on Modified Path Opening Algorithm
WANG Shuang and CAO Guo
Computer Science. 2014, 41 (2): 285-289. 
Abstract PDF(920KB) ( 477 )   
References | RelatedCitation | Metrics
A novel method for extracting roads from very-high-resolution remote sensing images based on a modified path opening algorithm was proposed. We first construct a new adjacency graph to detect roads with large curvature. Then we combine the length of the longest path in the path opening algorithm with the geometrical features of the extracted area in order to effectively detect partly damaged or occluded roads. Finally, a decision rule is established that automatically sets the parameter values for road extraction. Experimental results show that the proposed method greatly improves the extraction of incomplete roads.
GPU-based Fast Search of Similar Patches in Images
TANG Ying,XIAO Ting-zhe and FAN Jing
Computer Science. 2014, 41 (2): 290-296. 
Abstract PDF(1108KB) ( 603 )   
References | RelatedCitation | Metrics
Many image processing and computer graphics applications involve searching for similar patches in images, which is a computationally expensive operation. Traditional acceleration methods such as ANN (Approximate Nearest Neighbor) do not support exact search in non-metric spaces, and their search time grows long in high-dimensional spaces. In this paper a general GPU-based framework for finding similar patches was proposed which can be extended to incorporate any distance function. Specifically, both the Euclidean distance and the non-metric Chamfer distance are adopted to compute the similarity between two patches, and a GPU-based algorithm for fast search of similar patches under these two distances was proposed. In addition, the distance computation is optimized for a more efficient CUDA implementation. The algorithm supports exact search because an exhaustive search strategy is adopted, with the distances between patches computed in parallel on the GPU to greatly improve efficiency. Experimental results show that our method achieves a speedup of one to two orders of magnitude over traditional acceleration methods, and applications in texture synthesis show that it can synthesize high-quality textures quickly.
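The exhaustive-search logic that the GPU parallelizes can be sketched on the CPU in a few lines. This is a reference sketch, not the CUDA implementation: patches are flat vectors for the Euclidean case and point sets for the Chamfer-style case, and the exact Chamfer variant used in the paper is an assumption.

```python
def euclidean(p, q):
    """Squared Euclidean distance between two flattened patches."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def chamfer(p, q):
    """Symmetric Chamfer-style distance between two point sets (sketch):
    non-metric, since it does not satisfy the triangle inequality."""
    return (sum(min(euclidean(a, b) for b in q) for a in p) +
            sum(min(euclidean(b, a) for a in p) for b in q))

def exhaustive_search(query, patches, dist):
    """Exact nearest-patch search by exhaustive comparison; a GPU version
    evaluates the candidates in parallel threads, the logic is identical."""
    return min(range(len(patches)), key=lambda i: dist(query, patches[i]))

patches = [[0, 0, 0], [5, 5, 5], [1, 1, 0]]
best_idx = exhaustive_search([1, 1, 1], patches, euclidean)
```

Because every candidate is examined, the result is exact under any distance function, metric or not, which is the property approximate structures like ANN trees give up.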
Face Recognition Using Simplified Pulse Coupled Neural Network
NIE Ren-can,YAO Shao-wen and ZHOU Dong-ming
Computer Science. 2014, 41 (2): 297-301. 
Abstract PDF(712KB) ( 371 )   
References | RelatedCitation | Metrics
A novel face recognition method using a simplified pulse coupled neural network was proposed. First, through an analysis of the oscillation characteristics of the neurons, the neuronal oscillation time sequences (OTS) were decomposed into captured OTS (C-OTS) and self OTS (S-OTS). Then the identification characteristics of X-OTS (OTS, C-OTS and S-OTS) were analyzed by means of image geometric transformations and oscillation frequency maps. Finally, a face recognition system structure combining C-OTS+S-OTS with the cosine distance was given. Experimental results on face databases verify the effectiveness of the proposed method and show that it achieves better recognition performance than other traditional methods.
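The self-oscillation behind an OTS can be illustrated with a single simplified-PCNN neuron. The sketch below omits the linking input entirely (so it shows only the S-OTS-like dynamics, not the paper's full decomposition), and all constants are illustrative assumptions.

```python
import math

def oscillation_time_sequence(s, iters=60, alpha=0.2, v=400.0, theta0=255.0):
    """One simplified-PCNN neuron with no linking input (a sketch): the
    dynamic threshold decays exponentially and jumps by v on each firing,
    so the firing pattern (the OTS) encodes the stimulus intensity s."""
    theta, ots = theta0, []
    for _ in range(iters):
        y = 1 if s > theta else 0          # fire when stimulus beats threshold
        ots.append(y)
        theta = theta * math.exp(-alpha) + v * y
    return ots

# A brighter pixel fires earlier and more often than a darker one.
bright = oscillation_time_sequence(200.0)
dark = oscillation_time_sequence(50.0)
```

Collecting such binary firing sequences over all pixels gives the oscillation frequency map from which the recognition features are drawn.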
Chinese Sign Language Recognition Research Using SIFT-BoW and Depth Image Information
YANG Quan and PENG Jin-ye
Computer Science. 2014, 41 (2): 302-307. 
Abstract PDF(1756KB) ( 289 )   
References | RelatedCitation | Metrics
Introducing depth image information into sign language recognition research, a Chinese sign language recognition method based on DI_CamShift (Depth Image CamShift) and SIFT-BoW (Scale Invariant Feature Transform-Bag of Words) was presented. It uses Kinect as the video capture device to obtain both the color video and the depth image information of sign language. First, it calculates the spindle direction angle and the mass center position of the depth image, and correctly tracks the gesture by adjusting the search window. Second, an Otsu algorithm based on the depth integral image is used for gesture segmentation, and SIFT features are extracted. Finally, it builds the SIFT-BoW representation as the feature of the sign language and uses an SVM for recognition. Experimental results show that the best recognition rate for a single manual alphabet reaches 99.87%, while the average recognition rate is 96.21%.
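The Bag-of-Words step, quantizing local descriptors against a learned codebook into a histogram, is standard and can be sketched independently of the Kinect pipeline. The codebook here is a hypothetical two-word example; real SIFT descriptors are 128-dimensional and the codebook would come from clustering training descriptors.

```python
def bow_histogram(descriptors, codebook):
    """Bag-of-Words feature (sketch): assign each local descriptor (e.g. a
    SIFT vector) to its nearest codeword, count, then L1-normalize."""
    hist = [0] * len(codebook)
    for d in descriptors:
        i = min(range(len(codebook)),
                key=lambda k: sum((a - b) ** 2
                                  for a, b in zip(d, codebook[k])))
        hist[i] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]
```

The resulting fixed-length histogram is what gets fed to the SVM, regardless of how many descriptors the gesture image produced.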
Research on Gabor Wavelet Transform Feature Recognition Robustness Based on Vector of Face
PENG Hui
Computer Science. 2014, 41 (2): 308-311. 
Abstract PDF(1180KB) ( 543 )   
References | RelatedCitation | Metrics
Traditional Gabor wavelet transformation in face recognition technology is insufficient for expressing curve singularities, which makes facial expression information hard to identify. This paper proposed a face recognition algorithm combining the Gabor wavelet transform with multiple feature vectors. The algorithm first utilizes the frequency and direction selectivity of the Gabor wavelet transformation to extract multi-scale, multi-direction Gabor features of the face, and forms a joint sparse model in which the common features and the expression characteristics of the Gabor representation are characterized across all directions and scales; at the same time, the feature vector of a test image can be accurately reconstructed from these two feature vectors. Simulation results show that this method effectively increases the correct matching ratio of facial expression images and improves the recognition effect.
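The multi-scale, multi-orientation Gabor filter bank behind such features is standard; a minimal sketch of generating one follows. The scales, orientations, kernel size, and the wavelength-to-sigma ratio are illustrative assumptions, not the paper's parameters.

```python
import math

def gabor(x, y, sigma=2.0, theta=0.0, lam=4.0, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor kernel: a Gaussian envelope modulated by
    a cosine wave at orientation theta and wavelength lam."""
    xr = x * math.cos(theta) + y * math.sin(theta)
    yr = -x * math.sin(theta) + y * math.cos(theta)
    env = math.exp(-(xr * xr + gamma * gamma * yr * yr)
                   / (2 * sigma * sigma))
    return env * math.cos(2 * math.pi * xr / lam + psi)

def gabor_bank(scales=(2.0, 3.0), orientations=4, size=9):
    """A small multi-scale, multi-orientation bank like those used for
    face feature extraction (parameter values are assumptions)."""
    c = size // 2
    bank = []
    for s in scales:
        for k in range(orientations):
            th = k * math.pi / orientations
            bank.append([[gabor(j - c, i - c, sigma=s, theta=th, lam=2 * s)
                          for j in range(size)] for i in range(size)])
    return bank
```

Convolving a face image with every kernel in the bank and concatenating the responses yields the multi-scale, multi-direction Gabor feature vector.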
Thangka Annotation and Retrieval Based on Headdress Features
BI Xue-hui,LIU Hua-ming and WANG Wei-lan
Computer Science. 2014, 41 (2): 312-316. 
Abstract PDF(1414KB) ( 342 )   
References | RelatedCitation | Metrics
Thangka images can be classified automatically by extracting headdress features. The main steps are: 1) select the headdress region through human-computer interaction and obtain an initial segmentation image by preprocessing the result of iterative segmentation or an RGB-based segmentation algorithm; 2) separate the crown using the Euler number extracted from the initial segmentation image; 3) extract Fourier descriptors from the outer contour of the initial segmentation image, and separate the hairpin or monk hat according to the distance between the feature and each cluster center. Automatic headdress classification improves classification efficiency and meets the needs of automatic semantic annotation and semantic retrieval. According to application demands, we designed a retrieval system that supports retrieval based on text, content and semantics, and improves retrieval accuracy.
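The Fourier descriptors of step 3 are a standard shape signature; a minimal sketch follows. The normalization scheme (drop the DC term, divide by the first harmonic, keep magnitudes only) is one common convention and is an assumption about, not a quotation of, the paper's exact recipe.

```python
import cmath
import math

def fourier_descriptors(contour, k=8):
    """Shape signature from a closed contour (sketch): DFT of the complex
    contour, dropping the DC term (removes translation) and normalizing
    magnitudes by the first harmonic (removes scale); keeping magnitudes
    only discards phase, hence rotation and start-point dependence."""
    n = len(contour)
    z = [complex(x, y) for x, y in contour]
    coeffs = [sum(z[t] * cmath.exp(-2j * math.pi * u * t / n)
                  for t in range(n)) / n for u in range(n)]
    mags = [abs(c) for c in coeffs]
    base = mags[1] if mags[1] > 1e-12 else 1.0
    return [m / base for m in mags[2:2 + k]]

# A unit circle and a scaled, shifted copy get (nearly) equal descriptors.
circle = [(math.cos(2 * math.pi * t / 32), math.sin(2 * math.pi * t / 32))
          for t in range(32)]
moved = [(3 * x + 5, 3 * y - 2) for x, y in circle]
d1, d2 = fourier_descriptors(circle), fourier_descriptors(moved)
```

Distances between such descriptor vectors and the per-class cluster centres then drive the hairpin / monk-hat separation.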
Automatic Categorization of Traditional Chinese Paintings Based on Wavelet Transform
SHENG Jia-chuan
Computer Science. 2014, 41 (2): 317-319. 
Abstract PDF(497KB) ( 422 )   
References | RelatedCitation | Metrics
Feature extraction for image processing is widely studied in the pixel domain. To find a better representation of image features in a new signal domain, a number of new artistic features were proposed that exploit the decomposition of the input art works and characterize artistic styles across different sub-bands in the wavelet domain. To achieve automatic categorization, a 3-layer wavelet transform was employed to extract the images' texture features. Moreover, three different classifiers were compared and used to learn the different artistic styles. Experimental results show that the algorithm effectively extracts image texture features and achieves high classification accuracy.
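One level of such a decomposition can be sketched with the Haar wavelet, the simplest choice; three layers means applying it recursively to the LL band. The paper does not say which wavelet it uses, so Haar and the per-band energy feature below are illustrative assumptions.

```python
def haar2d(img):
    """One level of the 2-D Haar wavelet transform: returns the
    approximation band LL and the detail bands LH, HL, HH.
    Image dimensions are assumed even."""
    # Transform rows: pairwise averages (low-pass) and differences (high-pass).
    lo, hi = [], []
    for row in img:
        lo.append([(row[2 * i] + row[2 * i + 1]) / 2
                   for i in range(len(row) // 2)])
        hi.append([(row[2 * i] - row[2 * i + 1]) / 2
                   for i in range(len(row) // 2)])
    # Then transform columns of each half the same way.
    def cols(m):
        a = [[(m[2 * i][j] + m[2 * i + 1][j]) / 2 for j in range(len(m[0]))]
             for i in range(len(m) // 2)]
        d = [[(m[2 * i][j] - m[2 * i + 1][j]) / 2 for j in range(len(m[0]))]
             for i in range(len(m) // 2)]
        return a, d
    ll, lh = cols(lo)
    hl, hh = cols(hi)
    return ll, lh, hl, hh

def energy(band):
    """Mean squared coefficient of a sub-band, a common texture feature
    fed to the downstream classifiers."""
    return sum(v * v for row in band for v in row) / (len(band) * len(band[0]))
```

Collecting `energy` over the nine detail bands of a 3-layer decomposition (plus the final LL band) gives a compact texture feature vector per painting.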