Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 42 Issue 5, 2015
State-of-the-art and Future of Voting Theory
ZHANG Nan, CHEN Rong and GUO Shi-kai
Computer Science. 2015, 42 (5): 1-9.  doi:10.11896/j.issn.1002-137X.2015.05.001
Social choice theory is concerned with the design and analysis of methods for collective decision making. Computational social choice is an interdisciplinary field at the interface of social choice theory and computer science; it has attracted much attention from researchers in artificial intelligence, economics and theoretical computer science, promoting an exchange of ideas in both directions. On the one hand, it is concerned with applying techniques developed in computer science, such as complexity analysis or algorithm design, to the study of social choice mechanisms. On the other hand, it is concerned with importing concepts from social choice theory into computing; in particular, social choice theory has found its place as one of the fundamental tools for the study of multiagent systems. Voting theory is currently one of the most important research topics in computational social choice, so the main topic of this paper is voting theory, which is investigated from several angles. Firstly, some popular voting procedures and the formal framework of voting theory were introduced. Secondly, strategic manipulation and ways of circumventing it were discussed. Then some of the problems associated with voting in combinatorial domains were highlighted, and several approaches that have been proposed to address them were introduced. Finally, a number of additional topics in voting theory were touched on briefly, and some future directions of voting theory were also outlined.
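As a concrete illustration of the kind of voting procedures such a survey covers, here is a minimal sketch (not taken from the paper; the preference profile is invented) of two classic rules, plurality and Borda count:

```python
from collections import Counter

def plurality(profile):
    """Each voter's top-ranked candidate gets one point."""
    return Counter(ranking[0] for ranking in profile).most_common(1)[0][0]

def borda(profile):
    """A candidate ranked i-th (0-based) among m candidates gets m-1-i points."""
    scores = Counter()
    m = len(profile[0])
    for ranking in profile:
        for i, candidate in enumerate(ranking):
            scores[candidate] += m - 1 - i
    return scores.most_common(1)[0][0]

# Three voters ranking candidates a, b, c (hypothetical profile)
profile = [["a", "b", "c"], ["b", "c", "a"], ["b", "a", "c"]]
print(plurality(profile))  # 'b' (two first-place votes)
print(borda(profile))      # 'b' (scores: a=3, b=5, c=1)
```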
Aerospace Embedded Computer Architecture Evaluation Model
HAN Wei, BAI Xiao-ying and XIE Jian-chun
Computer Science. 2015, 42 (5): 10-13.  doi:10.11896/j.issn.1002-137X.2015.05.002
Typical aerospace embedded computers have many characteristics in common, especially in the design of their architecture. Three typical aerospace embedded computers were sampled, namely a flight control computer, a guidance control computer and a satellite-borne computer, and their architectural characteristics and common technologies were analyzed. Seven architectural characteristics were identified: modularity, network centricity, integration, intensivism, longevity, application of commercial off-the-shelf (COTS) components, and system openness. On this basis, three evaluation dimensions were proposed: common architecture, cost control and development orientation, which contribute directly to the final score of the system under test. All the architectural characteristics and evaluation dimensions mentioned above form the evaluation model for aerospace embedded computers. Each architectural characteristic has a basic guideline and a practical reference value or design. Based on the relationship between the classification of the final evaluation result, the evaluation dimensions and the architectural characteristics, any aerospace embedded computer can be scored according to the evaluation model for various assessment requirements. The evaluation method and process were illustrated through evaluation cases of two advanced aerospace embedded computers; the cases indicate that the evaluation model is clear, effective and practical.
Imperialist Competitive Algorithm Based on Import and Export Trade
WANG Shuai-qun, Ao-ri-ge-le, GAO Shang-ce, TANG Zheng and MA Hai-ying
Computer Science. 2015, 42 (5): 14-18.  doi:10.11896/j.issn.1002-137X.2015.05.003
Imperialist competitive algorithm (ICA), inspired by a social phenomenon, is a novel swarm intelligence algorithm. Like other evolutionary algorithms, on multi-modal functions ICA suffers from slow convergence and easily falls into local optima. International trade is the exchange of goods and services between different countries and regions and is beneficial to the development of national economies. Economic trade, however, produces trade surpluses and trade deficits, and a country cannot sustain either one indefinitely; both must be adjusted in a timely manner, which is conducive to the healthy development of the national economy. Inspired by this phenomenon, an improved ICA based on import and export trade (IICA) was put forward, and benchmark functions were selected as test functions to verify the performance of the algorithm. The results show that both the solution quality and the convergence rate are obviously improved. The Lennard-Jones potential problem is a potential energy minimization problem with a huge number of local minima, growing exponentially with the number of atoms, so this paper applied IICA to the L-J potential problem to exhibit its applicability to real-world problems. Compared with the original ICA and an immune algorithm, experimental results demonstrate the effectiveness of IICA in terms of convergence speed and solution quality.
Formalizing Modeling Process of Intelligence World
SUN Shan-wu, WANG Nan and OUYANG Dan-tong
Computer Science. 2015, 42 (5): 19-23.  doi:10.11896/j.issn.1002-137X.2015.05.004
Various networked or smart objects are embedded in the physical world to turn it into an intelligent world, which increases the reasoning complexity under a unified abstraction model framework. This paper presented a new method to represent such an intelligent world based on the knowledge reformulation and abstraction model proposed by Saitta and Zucker. Three distinguishable and interrelated sub-models, i.e., models of the physical world, the networked world and the virtual world, are constructed automatically according to the communication modes of the constituent entities by a perception reformulation process. The relationships between the three sub-models are then constructed so that they form an integrated model of the intelligent world. The paper focused on the formalized representation of the perception reformulation and the reasoning mechanism. Practical intelligent worlds were built and diagnosis reasoning experiments were set up, showing that, compared to the knowledge reformulation and abstraction model, the reasoning process of the proposed modeling method can confine malfunctions to one of the sub-models and thus reduce the diagnosis space.
Performance Testing and Analysis among Different MapReduce Runtime Systems
YI Xiu-wen, LI Tian-rui, ZHANG Jun-bo and TENG Fei
Computer Science. 2015, 42 (5): 24-27.  doi:10.11896/j.issn.1002-137X.2015.05.005
With the development of cloud computing technology, several implementations adopting the MapReduce model, e.g., Hadoop, Phoenix and Twister, have been developed. Hadoop has high scalability and stability and is thus suitable for handling large-scale off-line applications. The primary advantage of Phoenix, which is especially appropriate for data-intensive tasks, is its processing speed. Twister, a lightweight iterative runtime system, is designed for iterative applications. Different applications achieve different levels of performance on different MapReduce runtime systems. By testing various applications on the aforementioned runtime systems, an experimental comparison and performance analysis were presented, providing a basis for selecting parallel programming models for big data processing.
Research Progress on Deep Learning
GUO Li-li and DING Shi-fei
Computer Science. 2015, 42 (5): 28-33.  doi:10.11896/j.issn.1002-137X.2015.05.006
Deep learning plays an important role in machine learning. If shallow learning was one wave of machine learning, then deep learning, as a new field of machine learning, is setting off another. Deep learning establishes and simulates the hierarchical structure of the human brain to extract features of external input data from lower to higher levels, which allows it to interpret the external data. Firstly, this paper discussed the origin of deep learning. Secondly, it described the common methods of deep learning, illustrated by examples of supervised learning and unsupervised learning. Then it summarized recent research and applications of deep learning. Finally, it concluded with the open problems facing its development.
Incremental User Interest Mining Based on Artificial Immune Algorithm
ZUO Wan-li, HAN Jia-yu, LIU Lu, WANG Ying and PENG Tao
Computer Science. 2015, 42 (5): 34-41.  doi:10.11896/j.issn.1002-137X.2015.05.007
Understanding user interests is the key to personalized service. User interests can be categorized into short-term and long-term interests, and may evolve over time. Inspired by the artificial immune system (AIS), we employed the immune response process in user interest mining. Combining probability with time, we first introduced the concept of temporal dynamics to describe the degree to which a user pays attention to a particular interest during a specific time interval. Then, based on AIS, we built a classifier to extract user interest tags. Finally, aiming at incremental learning of user interests, we built the temporal dynamics of the interest tags, which characterize the attention an interest receives since it first appears, and on this basis judged whether interests have migrated or been forgotten and assigned a weight to each interest tag. The main contribution of this paper is that we creatively applied AIS to mining users' short-term and long-term interests in an incremental fashion, which can gracefully reflect the migration of user interests; it is a natural and complete model for learning user interests. Experimental results on a practical dataset indicate that the proposed learning model can effectively discover the topics a user focuses on, with an average precision of 79.5% and an average recall of 74.4%, making it the most suitable user interest mining model among those compared.
Research on Influence of Micro Blogging Based on Field Division
LIU Jin-long, WU Bin, CHEN Zhen and SHEN Chong-wei
Computer Science. 2015, 42 (5): 42-46.  doi:10.11896/j.issn.1002-137X.2015.05.008
In recent years, as an emerging social network, Microblog has been used by a large number of users. The microblogging platform contains a huge amount of information that is updated so quickly that users often cannot find the information they need, so it is important to help users find information published by influential users. Because content on the platform is published and updated very quickly and is not standardized, timeliness and authoritativeness become all the more important. The purpose of the research on microblog user influence in this paper is to identify the influence of different users in different fields. This paper divided users into fields using two features: the content of their microblogs and the follower relationships between users. In this process, we used a parallel new-word recognition algorithm on microblog content and applied semantic expansion to some important text features to improve the accuracy of the division. Then we computed user influence in each field using three models and combined the results with different weights. Finally, we tested the accuracy of the result and compared it with other methods.
Coronal Dimming Detecting and Extracting Algorithm Based on Supervised Learning
TIAN Hong-mei, PENG Bo, LI Tian-rui and XIE Zong-xia
Computer Science. 2015, 42 (5): 47-50.  doi:10.11896/j.issn.1002-137X.2015.05.009
Coronal mass ejections (CMEs), which release huge quantities of matter and electromagnetic radiation into space above the Sun's surface, are considered one of the driving sources of space weather. Coronal dimming is now viewed as an important signature of CMEs: dimming can help understand, predict and locate the occurrence of a CME. Based on observational data from the extreme ultraviolet imaging telescope (EIT) and the atmospheric imaging assembly (AIA), this paper implemented coronal dimming detection and extraction. By analyzing the statistical characteristics of the difference images related to dimming, we applied the AdaBoost classification algorithm to dimming detection and then segmented the coronal dimming region. The experimental results show that the proposed algorithm can effectively detect and extract coronal dimming areas. Our work establishes a basis for the analysis of coronal dimming features.
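A minimal sketch of the supervised detection step described above, using scikit-learn's AdaBoostClassifier on generic feature vectors (the feature extraction from EIT/AIA difference images is not reproduced here; the data below are synthetic stand-ins):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for statistical features of difference-image regions
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                     # e.g. mean, variance, ... per region
y = (X[:, 0] + 0.5 * X[:, 1] < -0.3).astype(int)  # 1 = "dimming", invented rule

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```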
Object-oriented Remote Sensing Image Classification Based on GEPSO Model
WANG Wei-hong, YAN Lu-qin, JIN Dan-dan, XU Wen-tao and LI Qu
Computer Science. 2015, 42 (5): 51-53.  doi:10.11896/j.issn.1002-137X.2015.05.010
Exploiting the optimization capability of evolutionary algorithms, we proposed an object-oriented remote sensing image classification method based on the GEPSO (GEP optimized by PSO) model. Firstly, the remote sensing image is segmented and a feature set is selected; then the GEPSO algorithm is used to construct a class center for each type of image object. The construction of class centers first uses GEP to search for a suboptimal solution, and then uses PSO, starting from that suboptimal solution, to search for the optimal one. Experimental results show that the object-oriented remote sensing image classification method based on the GEPSO model achieves higher classification accuracy.
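The PSO refinement stage can be pictured with a minimal particle swarm sketch (generic objective; the GEP stage and the paper's image features are not reproduced), where the swarm is seeded near a given suboptimal solution:

```python
import numpy as np

def pso_refine(f, x0, iters=200, n=30, w=0.7, c1=1.5, c2=1.5, rng=None):
    """Refine a suboptimal point x0 by particle swarm optimization."""
    rng = rng or np.random.default_rng(0)
    x = x0 + 0.1 * rng.normal(size=(n, x0.size))   # seed the swarm near x0
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()             # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n, x0.size))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x += v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g

sphere = lambda p: float(np.sum(p ** 2))
print(pso_refine(sphere, np.array([1.0, -2.0])))   # converges toward [0, 0]
```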
New Feature Selection Method Based on CHI
HUANG Yuan, LI Mao and LV Jian-cheng
Computer Science. 2015, 42 (5): 54-56.  doi:10.11896/j.issn.1002-137X.2015.05.011
CHI (chi-square) is a widely used feature selection method in text classification. It focuses only on the relevance between features and classes while ignoring the relevance between features themselves, resulting in high redundancy. This paper proposed the concept of residual mutual information and combined it with CHI to optimize the selection results. The experimental results indicate that the method is effective.
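A minimal sketch of plain CHI-based feature selection with scikit-learn (the paper's residual-mutual-information term is not reproduced; the term-frequency data are synthetic):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

# Synthetic term-frequency matrix: 8 documents x 5 terms, two classes
X = np.array([[3, 0, 1, 0, 2], [2, 0, 0, 1, 3], [4, 1, 0, 0, 1], [3, 0, 1, 0, 2],
              [0, 3, 0, 2, 0], [1, 4, 0, 3, 1], [0, 2, 1, 4, 0], [0, 3, 0, 2, 1]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

selector = SelectKBest(chi2, k=2).fit(X, y)
print(selector.scores_)                    # chi-square score per term
print(selector.get_support(indices=True))  # indices of the 2 selected terms
```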
Research on Clustering with Feature Preferences
FANG Ling and CHEN Song-can
Computer Science. 2015, 42 (5): 57-61.  doi:10.11896/j.issn.1002-137X.2015.05.012
Traditional clustering methods, such as k-means and fuzzy c-means, generally do not distinguish the different contributions or importance of features to individual clusters; hence, on high-dimensional data they often achieve lower clustering performance because they hardly account for the high correlation or redundancy between features. To mitigate this adversity, by introducing per-cluster feature weights into the clustering objective, one can automatically obtain not only cluster-dependent weights but also enhanced clustering performance. Even so, the feature weights obtained by an unsupervised clustering algorithm do not necessarily match the relative importance (or preferences) of the features as users expect. This paper therefore attempted to take advantage of actual user preferences to design a clustering method that reflects feature preferences. As a result, the proposed method not only extends existing clustering methods from globally weighted, cluster-independent features to locally weighted, cluster-dependent features, but also improves clustering performance under feature preferences.
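A minimal sketch of the feature-weighting idea in a k-means-style objective (global, user-supplied weights only; the paper's cluster-dependent, preference-constrained weighting is more elaborate):

```python
import numpy as np

def weighted_kmeans(X, k, w, iters=50, rng=None):
    """k-means where the distance weights each feature by user preference w."""
    rng = rng or np.random.default_rng(0)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2 * w).sum(axis=2)  # weighted sq. dist
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])     # keep empty clusters
    return labels, centers

X = np.vstack([np.random.default_rng(1).normal(0, 1, (50, 3)),
               np.random.default_rng(2).normal(3, 1, (50, 3))])
w = np.array([1.0, 1.0, 0.1])          # user prefers the first two features
labels, _ = weighted_kmeans(X, 2, w)
print(np.bincount(labels))             # roughly 50 / 50
```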
Topic Concept Discovery for Web Pages
LIU Qiong-qiong, ZUO Wan-li and WANG Ying
Computer Science. 2015, 42 (5): 62-66.  doi:10.11896/j.issn.1002-137X.2015.05.013
Topic discovery from Web pages has an important impact on natural language processing tasks such as text classification, automatic abstract generation and information fusion, and mining the topics of Web pages can help users better understand their content. Although some papers discuss topic discovery from ordinary texts, few of them consider how the HTML tag a word belongs to and the location in which a word appears affect the weight of the word, and none of them gives calculation methods for these two impact factors. This article extended Web topics from the word level to the concept level based on WordNet, used part-of-speech tagging to determine the POS of words, used word sense disambiguation to determine word meanings in the pages, made full use of the tag impact factor and the location impact factor to modify the weights of concepts, and proposed formulas for calculating these two impact factors. Experimental results on the DMOZ dataset show that, compared with the unadjusted weighting method, the adjusted weighting method can significantly improve topic mining accuracy, which reaches up to 0.95 in the best case.
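A minimal sketch of the word-to-concept step with NLTK's WordNet interface (POS tagging plus synset lookup; the paper's disambiguation and tag/location weighting are not reproduced, and the one-time data downloads are shown as comments):

```python
import nltk
from nltk.corpus import wordnet as wn

# One-time downloads on a fresh installation:
# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger"); nltk.download("wordnet")

text = "The bank raised interest rates for large deposits."
tagged = nltk.pos_tag(nltk.word_tokenize(text))    # e.g. ('bank', 'NN')

for word, tag in tagged:
    pos = {"N": wn.NOUN, "V": wn.VERB, "J": wn.ADJ, "R": wn.ADV}.get(tag[0])
    if pos:
        synsets = wn.synsets(word, pos=pos)        # candidate concepts for the word
        if synsets:
            print(word, "->", synsets[0].name())   # naive: take the first sense
```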
Study of Positioning Tropical Cyclone Center Based on Ellipse Fitting
LIU Nian-qing, ZHANG Wen-sheng and LI Lin
Computer Science. 2015, 42 (5): 67-71.  doi:10.11896/j.issn.1002-137X.2015.05.014
Positioning the tropical cyclone (TC) center is the basis of TC forecasting. A novel ellipse model was presented, instead of the existing spiral ones, to locate TC centers objectively and automatically. The model comprises diffusion of oriented gradients, ellipse segment selection, ellipse clustering and center point decision. Experimental results show that the model, applied to infrared satellite cloud images, achieves a best average error of less than 0.12 in both latitude and longitude in comparison with the best track data from the China Meteorological Administration, providing objective and accurate information for locating TC centers.
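As an illustration of the ellipse-fitting building block, here is OpenCV's general-purpose fitEllipse applied to synthetic points (a stand-in for cloud-edge segments; the paper's gradient diffusion, segment selection and clustering pipeline is not reproduced):

```python
import cv2
import numpy as np

# Synthetic noisy points sampled from an ellipse centered at (200, 150)
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 100)
pts = np.stack([200 + 80 * np.cos(t), 150 + 40 * np.sin(t)], axis=1)
pts += rng.normal(scale=2.0, size=pts.shape)

(cx, cy), (major, minor), angle = cv2.fitEllipse(pts.astype(np.float32))
print(f"fitted center = ({cx:.1f}, {cy:.1f})")   # close to (200, 150)
```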
Multi-layer Adaptive Morphology Filtering Algorithm
WANG Jia-liang and CHENG Chun-ling
Computer Science. 2015, 42 (5): 72-77.  doi:10.11896/j.issn.1002-137X.2015.05.015
In view of the deficiencies of existing morphological filtering algorithms, such as a fixed structure and preset structuring elements and bias correction coefficients, an adaptive filtering algorithm with a multi-layer structure was presented. The multi-layer structure consists of three layers: an input layer, an intermediate layer for calculation, and a layer for bias correction. Facing complicated, changing interference signals, the proposed algorithm can flexibly choose among the results calculated with different structuring elements. In addition, to counter the bias caused by the opening and closing operations, the proposed algorithm weakens the negative impact of the bias on the filtering effect by optimizing the bias correction coefficient vector. Simulation results show that the proposed algorithm improves morphological filtering performance and has the advantages of simple design and strong practicability.
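A minimal sketch of the opening/closing primitives such a filter builds on, using SciPy's grayscale morphology on a noisy 1-D signal (the structuring-element size is arbitrary; the paper's adaptive selection and bias correction are not reproduced):

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 5 * t) + 0.3 * rng.normal(size=t.size)  # noisy input

# Opening suppresses positive impulses, closing suppresses negative ones;
# averaging the two is a common way to reduce the bias of either alone.
opened = grey_opening(signal, size=7)
closed = grey_closing(signal, size=7)
filtered = 0.5 * (opened + closed)
print(np.abs(filtered - np.sin(2 * np.pi * 5 * t)).mean())  # residual error
```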
Gray Extreme Learning Machine Prediction Method
DONG Hong-bin, PANG Jin-wei and HAN Qi-long
Computer Science. 2015, 42 (5): 78-81.  doi:10.11896/j.issn.1002-137X.2015.05.016
Prediction describes future development based on regularities identified from past data over a certain period of time. In recent years, prediction has been used in many domains, such as electricity price forecasting, stock prices and weather forecasting. However, traditional forecasting methods cannot meet today's demands because their accuracy is not high enough or their speed is not fast enough. Based on the idea of combination forecasting, this study proposed a combination forecasting algorithm that uses a gray prediction model and an extreme learning machine, with the combination weights determined by a cumulative function in the manner of reinforcement learning. Three datasets, including Microsoft stock prices, Mackey-Glass time-series data and Taiwanese color filter manufacturing data, were evaluated in the experiments. The results show that the proposed method is effective.
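A minimal sketch of the gray prediction component, the classic GM(1,1) model fitted by least squares (the ELM component and the cumulative weighting are not reproduced; the series below is invented):

```python
import numpy as np

def gm11_predict(x, steps=1):
    """Classic GM(1,1): fit dx1/dt + a*x1 = b on the accumulated series x1."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                                # accumulated generating series
    z = 0.5 * (x1[1:] + x1[:-1])                     # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]  # least squares for a, b
    n = len(x)
    k = np.arange(n + steps)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
    x_hat = np.diff(x1_hat, prepend=x1_hat[0])       # back to the original series
    x_hat[0] = x[0]
    return x_hat[n:]

series = [52.3, 54.1, 56.8, 58.2, 61.0, 63.5]        # hypothetical data
print(gm11_predict(series, steps=2))                 # next two forecasts
```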
Mining Frequent and High Utility Itemsets
LI Hui, LIU Gui-quan and QU Chun-yan
Computer Science. 2015, 42 (5): 82-87.  doi:10.11896/j.issn.1002-137X.2015.05.017
Mining interesting itemsets from transaction databases has attracted a great deal of research for more than a decade. However, most of these studies use either frequency/support (e.g., frequent itemset mining) or utility/profit (e.g., high utility itemset mining) as the key interestingness measure. In other words, the two measures are considered individually, which has the shortcoming that frequent itemsets may have low profit and high-profit itemsets may have very low support, making it meaningless to recommend such itemsets to users. To this end, we considered the two measures from a unified perspective; specifically, we proposed to identify the qualified itemsets that are both frequent and of high utility. The key challenge is that utility does not change monotonically as more items are added to an itemset. Thus, we proposed an efficient algorithm named FHIMA (Frequent and High utility Itemset Mining Algorithm), in which an effective upper bound based on frequency and utility is designed to prune the search space further. Moreover, FHIMA incorporates the idea of PrefixSpan to avoid generating candidates, leading to high efficiency. Finally, experimental results demonstrate the efficiency of FHIMA on real and synthetic datasets.
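A brute-force sketch of the problem statement (enumerate all itemsets and keep those that meet both a support and a utility threshold; FHIMA's upper-bound pruning and PrefixSpan-style pattern growth are not reproduced):

```python
from itertools import combinations

# Hypothetical transactions: item -> quantity, plus per-item unit profits
transactions = [{"a": 2, "b": 1}, {"a": 1, "c": 3}, {"a": 2, "b": 2, "c": 1}, {"b": 4}]
profit = {"a": 3, "b": 1, "c": 2}

def support(itemset):
    return sum(all(i in t for i in itemset) for t in transactions)

def utility(itemset):
    return sum(sum(t[i] * profit[i] for i in itemset)
               for t in transactions if all(i in t for i in itemset))

items = sorted(profit)
for r in range(1, len(items) + 1):
    for itemset in combinations(items, r):
        s, u = support(itemset), utility(itemset)
        if s >= 2 and u >= 10:                 # min support = 2, min utility = 10
            print(itemset, "support:", s, "utility:", u)
```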
Differential Evolutionary Algorithm for Parameter Training of Belief Rule Base under Expert Intervention
WANG Han-jie, YANG Long-hao, FU Yang-geng, WU Ying-jie and GONG Xiao-ting
Computer Science. 2015, 42 (5): 88-93.  doi:10.11896/j.issn.1002-137X.2015.05.018
Traditionally, the FMINCON function and swarm intelligence algorithms are adopted to solve the training model for the parameters of belief rule base (BRB) systems. However, not all parameters of the BRB system are involved in these training processes, and the algorithms lack the necessary intervention from experts. In view of this, the parameters of the BRB system were first broadened on the basis of the existing models; then the logical constraints under expert intervention were designed; finally, a new training method for BRB systems with better convergence accuracy, built on the differential evolution algorithm, was proposed. In the experimental analysis, the effectiveness of the new approach was validated on a multi-extremum function, and its rationality under expert intervention was examined on a pipeline leak detection example by comparison with the existing parameter training approaches. The results show that the proposed method is effective and feasible.
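A minimal sketch of parameter training by differential evolution with SciPy (a generic multi-extremum objective under box constraints only; the BRB model and the expert-intervention logic constraints are not reproduced):

```python
import numpy as np
from scipy.optimize import differential_evolution

def rastrigin(x):
    """A standard multi-extremum test function (global minimum 0 at the origin)."""
    x = np.asarray(x)
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 4                 # box constraints on 4 parameters
result = differential_evolution(rastrigin, bounds, seed=0)
print(result.x, result.fun)                  # should land near the origin, near 0
```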
Label Information-based Neighborhood Preserving Embedding
BAO Xing, ZHANG Li, ZHAO Meng-meng and YANG Ji-wen
Computer Science. 2015, 42 (5): 94-97.  doi:10.11896/j.issn.1002-137X.2015.05.019
Neighborhood preserving embedding (NPE) is widely used for finding the intrinsic dimensionality of high-dimensional data. In order to make full use of the classification information of samples to obtain optimal features, we constructed an adjacency matrix that separates different sub-manifolds as far as possible without destroying the local geometric structure of the original data. By introducing this adjacency matrix, the paper proposed label information-based neighborhood preserving embedding (LINPE). Experiments on UCI data and the ORL face database were performed to test and evaluate LINPE, and the results demonstrate its effectiveness.
Research on Algorithm of Perspective Sentence Identification Based on Knowledge Map
CHENG Xian-yi and LIU Ying
Computer Science. 2015, 42 (5): 98-105.  doi:10.11896/j.issn.1002-137X.2015.05.020
Aiming at the characteristics of perspective sentences, knowledge rules suitable for perspective sentence identification were proposed. On this basis, we combined the minimum-cut principle of graph theory with machine learning classification methods, introduced the concept of a knowledge map, and proposed a sentence recognition algorithm based on the knowledge map. Experiments on a public evaluation corpus, a self-built corpus and an open corpus show that the classification performance and stability of the knowledge map-based recognition algorithm for perspective sentences have obvious advantages.
Pre-processing Method of Multi-label Classification Based on kNN
XU Xiao-dan, YAO Ming-hai, LIU Hua-wen and ZHENG Zhong-long
Computer Science. 2015, 42 (5): 106-108.  doi:10.11896/j.issn.1002-137X.2015.05.021
Multi-label learning is a new field in machine learning. In order to improve multi-label classification precision, a new kNN-based method was used to remove noisy labels. First, a normal distribution is identified by analyzing the characteristics of multi-label datasets; then high-quality datasets are generated by changing the values of noisy labels according to their k-nearest neighbors. In the experiments, six multi-label classification methods were tested in MULAN on the new datasets. Compared to the original datasets, classification precision on the new datasets is better. The results show that this method is suitable for datasets with a regular distribution.
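A minimal sketch of the preprocessing idea: relabel an instance when its k nearest neighbors strongly disagree with its label (a single binary label here; the paper works with full multi-label matrices and MULAN):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def denoise_labels(X, y, k=5, threshold=0.8):
    """Flip a binary label when at least `threshold` of the k neighbors disagree."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)      # +1: the point itself
    _, idx = nn.kneighbors(X)
    y_clean = y.copy()
    for i, neighbors in enumerate(idx[:, 1:]):           # drop the point itself
        vote = y[neighbors].mean()
        if y[i] == 1 and vote <= 1 - threshold:
            y_clean[i] = 0
        elif y[i] == 0 and vote >= threshold:
            y_clean[i] = 1
    return y_clean

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
y[3] = 1                                                 # inject one noisy label
print(denoise_labels(X, y)[3])                           # restored to 0
```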
Research of Improved Tree Path Model in Web Page Clustering
WANG Ya-pu, WANG Zhi-jian and YE Feng
Computer Science. 2015, 42 (5): 109-113.  doi:10.11896/j.issn.1002-137X.2015.05.022
Computing similarity is the basis of text mining and a crucial step in information extraction. When tackling Web pages with complex structure, the accuracy of similarity computation based on the traditional tree path model is not satisfactory: the traditional model does not take the order of paths into consideration and compares paths by exact matching, so it cannot describe the similarity between paths accurately when the match is not exact. Therefore, the paper first introduced Web structural similarity and then proposed an improved tree path model. This model takes full account of the relationships between siblings, path locations and path weights, and makes up for the defect that the traditional tree path model cannot express both document structure and hierarchical information. The experimental results show that the model improves the ability to recognize structural similarity between Web pages: it not only better distinguishes Web pages with large structural differences, but also effectively reflects the differences between Web pages generated from the same template, and performs well in Web page clustering.
Hierarchical Clustering of Categorical Sequences by Similarity Normalization
ZHANG Hao, CHEN Li-fei and GUO Gong-de
Computer Science. 2015, 42 (5): 114-118.  doi:10.11896/j.issn.1002-137X.2015.05.023
A categorical sequence is composed of finite symbols arranged in a certain order. Categorical sequences, such as gene sequences, protein sequences and speech sequences, widely exist in many application domains of data mining. As a major method for sequence data mining, sequence clustering is of great value in identifying the intrinsic structure of sequence data, yet it remains an open problem due to the difficulty of measuring similarity between sequences. This paper proposed a new similarity measure for categorical sequences and introduced a length-normalization factor to address the sensitivity of existing methods to sequence length, improving the effectiveness of similarity measurement. Based on the new measure, a new clustering method was proposed, in which directed acyclic graphs are constructed according to the similarity between samples and hierarchical clustering of the categorical sequences is performed by graph partitioning. Experimental results on real-world datasets show that the new methods based on the normalized similarity measure improve clustering accuracy significantly.
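A minimal sketch of the length-normalization idea, using a shared-k-gram similarity divided by the total k-gram count so that longer sequences are not favored (an illustrative normalization, not the paper's exact measure):

```python
from collections import Counter

def kgram_counts(seq, k=2):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def normalized_similarity(s, t, k=2):
    """Shared k-grams, normalized by sequence length via total k-gram counts."""
    cs, ct = kgram_counts(s, k), kgram_counts(t, k)
    shared = sum((cs & ct).values())           # multiset intersection size
    return 2 * shared / (sum(cs.values()) + sum(ct.values()))

print(normalized_similarity("ACGTACGT", "ACGTTT"))       # DNA-like symbols
print(normalized_similarity("ACGTACGT", "ACGTACGTACGT"))
```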
Project Topic Model Construction Based on Semi-supervised Graph Clustering
SHI Lin-bin, YU Zheng-tao, YAN Xin, SONG Hai-xia and HONG Xu-dong
Computer Science. 2015, 42 (5): 119-123.  doi:10.11896/j.issn.1002-137X.2015.05.024
The quality of a project topic model directly affects how well evaluation experts are subsequently recommended. In order to effectively exploit the association relationships among project document fragments for project topic analysis, we proposed a project topic model construction method based on semi-supervised graph clustering. We first analyzed the structural characteristics of project documents to extract project names, project keywords and other structural information that reflects project topics. Combined with expert evidence documents, expert topic relationship networks and other external resources that indicate expert topics, we defined and extracted the association relationship features among project document fragments. Then, we used the different association relationships to calculate the correlations among project document fragments and built an undirected graph model over the fragments. Finally, using the labeled association relationship features as supervised information, we applied a semi-supervised graph clustering algorithm to cluster the project document fragments and thus construct the project topic model. Comparative experiments on project topic extraction verify the effectiveness of the proposed method: the structural features of project documents, expert evidence documents and expert topic relationship networks all provide useful guidance for constructing the project topic model.
Medium Access Control Protocol of Wireless Sensor Networks Based on Gradient Information
YANG Bing, ZHANG Ke-ke, LI Guo-hui and HE Wei-kang
Computer Science. 2015, 42 (5): 124-131.  doi:10.11896/j.issn.1002-137X.2015.05.025
In wireless sensor networks (WSN), medium access control (MAC) protocols determine the usage mode of the wireless channel and allocate the shared radio resource among sensor nodes. In this paper, we introduced a new channel competition mechanism to allocate channel resources to nodes. The protocol utilizes the signal strength collected by nodes to partition gradients; based on that, the nodes in the event region are assigned to different time periods to contend for the channel. The simulation results prove that this method lightens the burden on the channel. We also proposed a self-adaptive gradient partition algorithm that makes the gradient partition more flexible and adapts well to changes in the network topology, and we designed a gradient jumping mechanism to reduce the latency caused by gradient partitioning. Finally, the protocol also provides a strategy to control the reconstruction distortion caused by the spatial correlation of nodes.
Probing-based Opportunistic Routing Algorithm for Multi-channel Wireless Mesh Networks
SHA Hai-jin, BAI Guang-wei, SHEN Hang and ZHANG Peng
Computer Science. 2015, 42 (5): 132-135.  doi:10.11896/j.issn.1002-137X.2015.05.026
The performance of multi-channel wireless mesh networks mainly depends on channel assignment and routing. Most existing routing schemes for multi-channel wireless mesh networks do not consider interference between channels, resulting in degraded communication performance. To address this problem, this paper proposed probing-based opportunistic routing (POR) for multi-channel wireless mesh networks. First, the set of best communication channels is selected, reducing interference. On this basis, a probing method is used to calculate the expected end-to-end delay and choose the set of candidate links. Finally, an opportunistic routing mechanism is used for transmission to minimize end-to-end delay. Our simulation results demonstrate that POR can reduce end-to-end delay significantly, improve the delivery ratio, and provide real-time and reliability guarantees for data transmission.
Data Stream and Network Coding-based Data Aggregation Algorithm in Wireless Sensor Networks
FENG Hui-ying, ZHOU Liang and DING Qiu-lin
Computer Science. 2015, 42 (5): 136-141.  doi:10.11896/j.issn.1002-137X.2015.05.027
An energy-efficient adaptive data aggregation algorithm was developed to reduce the number of packets transmitted in clustered wireless sensor networks (WSN) and to maximize the energy efficiency of the sensor network. With their storage and computing abilities, the source nodes apply data stream technology when sensing data, which reduces data transmission. When data are transmitted from the source nodes to the cluster head, a set of nodes is selected as network coders by the cluster head according to the control information. If the data correlation value is lower than a specified threshold, these nodes perform network coding between packets; if the data correlation is higher than the threshold, the network coder nodes act as aggregation points instead. Network coding and data aggregation both reduce the additional energy consumption at the cluster head. Experimental results show that after the algorithm is applied, the packet delivery rate increases and the energy consumption decreases significantly.
Multihoming Scheme for Mobile Networks with Multiple Interfaces and Multiple Routers
YIN Xing and ZHANG San-feng
Computer Science. 2015, 42 (5): 142-148.  doi:10.11896/j.issn.1002-137X.2015.05.028
To solve the common problems of single points of failure and performance bottlenecks in the mobile routers of a mobile network, this paper analyzed the inability of most existing multihoming schemes to achieve optimal route selection in the multi-interface hybrid multihoming scenario and proposed MM-NEMO, a multihoming method applicable to that scenario. By defining the major common attributes of the external interfaces on a mobile router and their maintenance methods, and by dynamically exchanging interface attributes between mobile routers using multicast, MM-NEMO enables the mobile routers to maintain the attributes of all external interfaces in the mobile network. Communication initiated by internal nodes then uses an entropy-based multiple attribute decision making (MADM) algorithm to select, assign and redirect the optimal interface. Characteristic analysis indicates that MM-NEMO provides internal nodes in the mobile network with a transparent mechanism for optimal route selection in the hybrid multihoming scenario. Simulation results show that MM-NEMO achieves low end-to-end delay and high overall throughput.
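A minimal sketch of entropy-weighted multiple attribute decision making for interface selection (the attribute matrix is invented; benefit attributes only, with simple min-max normalization):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weights for benefit attributes; rows = alternatives, cols = attributes."""
    P = X / X.sum(axis=0)                                 # column-wise proportions
    n = X.shape[0]
    E = -np.where(P > 0, P * np.log(P), 0).sum(axis=0) / np.log(n)
    return (1 - E) / (1 - E).sum()                        # more dispersion, more weight

# Hypothetical interfaces x attributes: bandwidth (Mbps), signal quality, stability
X = np.array([[54.0, 0.7, 0.9],
              [11.0, 0.9, 0.95],
              [100.0, 0.4, 0.6]])
norm = (X - X.min(0)) / (X.max(0) - X.min(0))             # min-max to [0, 1]
w = entropy_weights(X)
scores = norm @ w
print("weights:", w)
print("best interface:", scores.argmax())
```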
Novel Blind Source Separation Algorithm Based on Givens Matrix and Joint Non-linear Uncorrelatedness
ZHAO Li-xiang and LIU Guo-qing
Computer Science. 2015, 42 (5): 149-152.  doi:10.11896/j.issn.1002-137X.2015.05.029
A novel algorithm based on Givens matrices and joint non-linear uncorrelatedness was proposed for blind source separation (BSS) of statistically independent sources. Firstly, independence was measured by the joint non-linear uncorrelatedness of the independent sources, since this measurement is the key to the effectiveness of a BSS algorithm. Secondly, exploiting the property that Givens matrices impose the orthogonality constraint on the separation matrix while approximately halving the number of parameters to be estimated, we converted the BSS problem into an unconstrained optimization problem, which is then solved by the BFGS quasi-Newton algorithm. Finally, the separation of simulated mixed signals and real mixed voice signals shows the effectiveness of our algorithm.
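A minimal two-source sketch of the parametrization: after whitening, a 2x2 orthogonal separation matrix is a single Givens rotation with one angle, which can be found by unconstrained minimization of a non-linear decorrelation measure (BFGS via SciPy; the contrast below, squared correlations of tanh(y) with y, is an illustrative choice, not necessarily the paper's):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 5000
S = np.vstack([np.sign(rng.normal(size=n)) * rng.random(n) ** 2,   # non-Gaussian
               rng.uniform(-1, 1, n)])                              # sources
X = np.array([[0.8, 0.3], [0.4, 0.9]]) @ S                          # mixed signals

# Whitening: afterwards the remaining unmixing matrix is orthogonal
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

def contrast(theta):
    c, s = np.cos(theta[0]), np.sin(theta[0])
    y = np.array([[c, -s], [s, c]]) @ Z                  # Givens rotation
    g = np.tanh(y)                                       # non-linear transform
    return np.mean(g[0] * y[1]) ** 2 + np.mean(g[1] * y[0]) ** 2

theta = minimize(contrast, x0=[0.3], method="BFGS").x
print("estimated rotation angle:", theta)
```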
Improved Optimization Algorithm for ZigBee Routing
DU Li-kai, ZHANG Ling and CHEN Yun-hua
Computer Science. 2015, 42 (5): 153-156.  doi:10.11896/j.issn.1002-137X.2015.05.030
As the network operates, ZigBee can cause non-uniform energy consumption among nodes, leading to network segmentation and node death. Therefore, a routing optimization algorithm that balances the network's energy consumption was presented in this paper. The algorithm first reduces broadcast storms by sending directed RREQ messages; then, based on the residual energy of nodes, the forwarding energy cost of paths and the number of neighbor nodes, it constructs network routes dynamically, while also avoiding pressure on any single link. NS2 simulation experiments show that the improved routing algorithm outperforms the classical ZigBee routing algorithm in terms of the number of dead nodes, energy consumption and network lifetime.
Improved Interference Cancellation Method over X Channel
TIAN Xin-ji and LU Jing
Computer Science. 2015, 42 (5): 157-159.  doi:10.11896/j.issn.1002-137X.2015.05.031
An interference cancellation method based on space-time coding was proposed for the X channel with four antennas at each user. Each user employs a rate-2 space-time block code. The unwanted codewords are eliminated by introducing zero elements into each user's transmitted signals, and the wanted codewords from the two users are kept orthogonal by pre-coding each codeword, so the interference between wanted codewords is cancelled. Compared with the existing scheme for the same scenario, the proposed scheme greatly reduces the feedback amount while keeping the same transmission efficiency. Simulation results demonstrate the validity of the proposed scheme.
Research on Clustering Routing Protocol in Wireless Sensor Networks
HOU Yan-jun and TAN Guo-zhen
Computer Science. 2015, 42 (5): 160-164.  doi:10.11896/j.issn.1002-137X.2015.05.032
Wireless sensor networks (WSN) are composed of a large number of tiny sensors deployed in a monitored area through self-organization and self-adaptation; the sensors have certain abilities to collect, process and communicate data, so WSN are widely used in infrastructure health monitoring. We analyzed the challenges in WSN routing protocol design and summarized typical WSN routing protocols along with their advantages and shortcomings. Then, building on an analysis of the LEACH protocol, a clustering routing protocol for infrastructure health monitoring was proposed to address LEACH's shortcomings in cluster head selection and inter-cluster routing. The clustering algorithm and the inter-cluster routing algorithm are combined to form the clustering routing protocol for infrastructure health monitoring in WSN. Simulation results show that the routing protocol postpones the death of major nodes, balances the energy consumption of the network and prolongs the effective lifetime of the network.
Identity-based Signcryption Cross Autonomous Domains
ZHANG Xue, JI Hui-fang, LI Guang-song and HAN Wen-bao
Computer Science. 2015, 42 (5): 165-168.  doi:10.11896/j.issn.1002-137X.2015.05.033
Real networks, especially heterogeneous networks, consist of several cooperating sub-networks that belong to different, independent and autonomous trust domains maintained by different PKGs. A novel ID-based cross-domain signcryption scheme was proposed that places no restrictions on PKG system parameters, so the public system parameters, system master keys and system public keys can be totally different. Based on this signcryption scheme, a cross-domain session key generation scheme was presented. The cross-domain signcryption protocol is proved secure in the random oracle model under the assumption that the bilinear Diffie-Hellman problem is hard, and it satisfies the basic security requirements of confidentiality, unforgeability, non-repudiation and public verifiability. The scheme achieves cross-domain signcryption while imposing no restrictions on PKG system parameters, at only a small increase in computation overhead.
Modified Function Projective Quasisynchronization of a Class of Delayed Chaotic System
CHAI Xiu-li, GAN Zhi-hua and WANG Jun
Computer Science. 2015, 42 (5): 169-172.  doi:10.11896/j.issn.1002-137X.2015.05.034
Parameter mismatch is an unavoidable problem in chaotic secure communication based on chaos synchronization. Taking the typical time-delayed Lur'e chaotic system as the research object and using an impulsive control method, we achieved modified function projective quasisynchronization of the drive-response chaotic system under a mismatch between the parameters of the drive system and the response system. Sufficient conditions for quasisynchronization were given, and the quasisynchronization error bound was estimated. Finally, simulation results verify the validity and effectiveness of the theoretical analysis.
Research on Network Security Situational Elements Knowledge Base Model Based on Ontology
SI Cheng, ZHANG Hong-qi, WANG Yong-wei and YANG Ying-jie
Computer Science. 2015, 42 (5): 173-177.  doi:10.11896/j.issn.1002-137X.2015.05.035
As existing methods cannot express, share and reuse network security situational information in a unified manner, a network security situational elements knowledge base model based on ontology was presented. Firstly, in view of the multi-source heterogeneous character of network security situational knowledge, the elements are classified and acquired. Secondly, according to the principles of ontology construction, a network security situational elements knowledge base model comprising domain ontology, application ontology and atomic ontology is established. Finally, situation scenario analysis shows that the model can effectively acquire network security situation knowledge.
Chaotic-based Opaque Predicate Control Flow Flatten Algorithm
WU Wei-min, LIN Shui-ming and LIN Zhi-yi
Computer Science. 2015, 42 (5): 178-182.  doi:10.11896/j.issn.1002-137X.2015.05.036
A control flow flattening algorithm based on chaotic opaque predicates was proposed. The algorithm applies a new construction method for N-state opaque predicates based on the Arnold cat planar chaotic map to improve the global index variable of the control flow flattening obfuscation algorithm. A JavaScript obfuscation system was developed based on this algorithm. Static and dynamic analysis of code before and after obfuscation proves that the algorithm is correct and effective and improves the security of the obfuscated code.
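A minimal sketch of plain control flow flattening, the transformation the algorithm hardens (a dispatch loop driven by a state variable; the chaotic N-state opaque predicate that disguises the next-state computation is not reproduced):

```python
def gcd_flattened(a, b):
    """Euclid's algorithm with its control flow flattened into a dispatch loop."""
    state = 0
    while state != 3:
        if state == 0:                 # entry: check the loop condition
            state = 1 if b != 0 else 3
        elif state == 1:               # loop body
            a, b = b, a % b
            state = 2
        elif state == 2:               # back edge; a real obfuscator disguises
            state = 0                  # this assignment with an opaque predicate
    return a

print(gcd_flattened(48, 18))           # 6
```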
Method for Software Vulnerability Discovery Based on Soft Set and Multi-attribute Comprehensiveness
TANG Cheng-hua, TIAN Ji-long, WANG Lu, WANG Li-na and QIANG Bao-hua
Computer Science. 2015, 42 (5): 183-187.  doi:10.11896/j.issn.1002-137X.2015.05.037
Aiming at the problems of vulnerability coverage and manual defect review in software vulnerability detection, a method for software vulnerability discovery based on soft sets and multi-attribute synthesis was proposed. Firstly, based on trusted integrated detection tools, an evaluation model of software vulnerability factors was established. Secondly, soft sets were introduced to measure the vulnerability factors, the serious impacts on software security were determined through multi-attribute comprehensive integration of the tools, and the software vulnerability discovery process was completed. Experimental results show that the method has good detection capability for vulnerabilities at different levels, providing a feasible way to improve the false positive and false negative rates of software vulnerability detection.
Intrusion Detection Based on Scenario and PN Machine
ZHANG Wei, LUO Hui-yun, TENG Shao-hua, LIU Dong-ning and LIANG Lu
Computer Science. 2015, 42 (5): 188-193.  doi:10.11896/j.issn.1002-137X.2015.05.038
To evade detection by rule-based or other misuse detection methods, an attacker can create a large number of variant attack sequences from one attack sequence. Therefore, targeting serializable intrusions, we studied the attack mechanism, extracted the key operation sequences of attacks, constructed intrusion behavior expressions, topologically sorted the attack sequences, and applied isomorphic transformations to attack operations, so that one attack can be expanded into one intrusion scenario, i.e., one class of attacks. A new intrusion detection method, scenario-oriented intrusion detection, was proposed, and a PN machine for scenarios was designed and implemented. The scenario-based PN machine can detect a whole class of attacks, achieving the goal of detecting a known attack together with its unknown variants; thus newly derived attacks can also be detected by the method in this paper.
Effective Vectorization Technique for Interleaved Data with Constant Strides
LI Peng-yuan, ZHAO Rong-cai, GAO Wei and ZHANG Qing-hua
Computer Science. 2015, 42 (5): 194-199.  doi:10.11896/j.issn.1002-137X.2015.05.039
Due to the development of SIMD extensions in general-purpose processors, auto-vectorizing compilers are widely used in various fields, especially in scientific and engineering computing. Conventional vectorizing compilers can successfully parallelize applications with contiguous accesses, but most irregular multimedia applications that access interleaved data cannot be vectorized correctly. To address this issue, this paper presented an effective vectorization technique for interleaved data with constant strides. Any form of data regrouping is achieved with the help of the data reorganization instructions provided by the target platform; as a result, programs with interleaved data accesses are vectorized and vector code is generated. The experimental results show that the proposed method can translate irregular applications with interleaved data accesses into high-performance vectorized code for the target, adequately improving execution efficiency.
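The regrouping the paper targets can be pictured with NumPy slicing: a stride-2 interleaved stream is split into two contiguous lanes, processed vector-wise, and re-interleaved (the actual technique emits platform shuffle/permute instructions, which NumPy only emulates):

```python
import numpy as np

# Interleaved stream with constant stride 2, e.g. [re0, im0, re1, im1, ...]
data = np.arange(12, dtype=np.float32)
re, im = data[0::2], data[1::2]          # deinterleave into contiguous lanes

re2, im2 = re * 2.0, im + 1.0            # vector operations on each lane

out = np.empty_like(data)                # re-interleave the results
out[0::2], out[1::2] = re2, im2
print(out)
```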
Multi-constraints Service Discovery Based on Ontology
PU Guo-lin and QIU Yu-hui
Computer Science. 2015, 42 (5): 200-203.  doi:10.11896/j.issn.1002-137X.2015.05.040
With the development of Web services, service discovery has become an important link between service requesters and service providers, and the quality of service applications is directly affected by the performance of the service discovery mechanism. In order to provide high-quality Web services to the requester, a multi-constraint service discovery mechanism based on ontology was presented, which selects, from a semantic perspective and under multiple constraints, the best service to meet the requester's needs, thus providing more accurate services and improving the performance of service applications. The experimental results demonstrate that the proposed multi-constraint service mechanism based on ontology effectively improves the accuracy and quality of the service system.
Algorithm for k-Nearest Neighbor Join Based on Data Stream
WANG Fei, QIN Xiao-lin, LIU Liang and SHEN Yao
Computer Science. 2015, 42 (5): 204-210.  doi:10.11896/j.issn.1002-137X.2015.05.041
kNN join is a frequently used operation in spatial databases, involving both a join and NN search. As data scale explodes, traditional centralized algorithms cannot meet the requirements, and designing distributed kNN join algorithms is an urgent problem. Existing distributed algorithms consist of several rounds of serial MapReduce jobs, each of which reads and writes data through the distributed file system; this expresses dependencies between jobs inefficiently, making the algorithms inefficient. We first proposed a data stream-based framework on MapReduce that models the data handling process as a data flow diagram, and then proposed an efficient kNN join algorithm on this framework. The algorithm maps multi-dimensional datasets into one dimension using space-filling curves (z-values) and transforms kNN joins into a sequence of one-dimensional range searches. Experimental results demonstrate that the algorithm can efficiently answer large-scale kNN join spatial queries.
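A minimal sketch of the z-value mapping (Morton code) that linearizes 2-D points by bit interleaving, so a kNN search can be approximated by a 1-D scan around a query's z-value (small integer grid; the candidate refinement against true distances that a full implementation requires is only hinted at):

```python
def z_value(x, y, bits=16):
    """Interleave the bits of x and y into a Morton code."""
    z = 0
    for i in range(bits):
        z |= (x >> i & 1) << (2 * i) | (y >> i & 1) << (2 * i + 1)
    return z

points = [(3, 5), (4, 4), (10, 2), (3, 6), (9, 9)]
order = sorted(points, key=lambda p: z_value(*p))   # 1-D z-order of the points
print(order)

# Approximate 2-NN of (3, 5): take nearby points in z-order, refine by distance
q = z_value(3, 5)
cand = sorted(order, key=lambda p: abs(z_value(*p) - q))[:3]
print(sorted(cand, key=lambda p: (p[0] - 3) ** 2 + (p[1] - 5) ** 2)[:2])
```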
Index Structure for Moving Objects Based on Restricted Network
YI Xian-tian, XU Zhan, ZHANG Ke and GUO Cheng-jun
Computer Science. 2015, 42 (5): 211-214.  doi:10.11896/j.issn.1002-137X.2015.05.042
Aiming at improving index efficiency and providing nearest neighbor query functionality, we proposed an index structure called RNR (restricted network R-tree), which is based on the FNR index structure and the Geohash coding algorithm. This index structure meets the demand of nearest neighbor queries with comparatively high efficiency. We improved the efficiency of the RNR index by adding auxiliary index structures such as hash tables and lists, and we integrated the index structure, the Geohash coding algorithm and related algorithms to support neighbor queries. We also partitioned the designated area according to certain rules so that the index can handle irregular range queries. We tested the efficiency of the RNR index using San Francisco geographic data and moving object trajectory data. The experimental results show that the RNR index has high indexing efficiency and provides efficient window query and neighbor query functionality.
Dynamic Data Intelligent Mining with Attributes Disjunctive Reduction and Expansion Characteristics
XU Feng-sheng, YAN Li-mei and SHI Kai-quan
Computer Science. 2015, 42 (5): 215-220.  doi:10.11896/j.issn.1002-137X.2015.05.043
S-rough sets (singular rough sets) are generated by introducing dynamic characteristics into Z. Pawlak rough sets, and thus themselves have dynamic characteristics. In general, S-rough sets have three forms: one-direction S-rough sets, the dual of one-direction S-rough sets, and two-direction S-rough sets. Under certain conditions, one-direction S-rough sets and their dual reduce to Z. Pawlak rough sets. This paper considered dynamic data intelligent mining with attribute disjunction characteristics; attribute disjunction is one of the logical characteristics of data. The main results of this paper include the generation of dynamic data and the relations between dynamic data and its attribute disjunctive reduction and expansion, data reasoning and a reasoning model, theorems of dynamic data intelligent mining, and applications of the obtained results.
Traffic Surveillance Video Storage in HDFS Based on Event Density
ZANG Ji-kun and YU Jian
Computer Science. 2015, 42 (5): 221-224.  doi:10.11896/j.issn.1002-137X.2015.05.044
Utilizing HDFS to store and process large-scale traffic surveillance video is a reliable, highly efficient and scalable solution. However, the default rack-aware data placement strategy in HDFS may create hotspots when placing data. To address this problem, this paper presented a traffic surveillance video data placement strategy based on traffic event density. The characteristics of traffic surveillance videos allow them to be classified according to traffic event types. When placing new data, the proposed strategy predicts the load of each datanode as influenced by the various traffic events it stores, combines the instantaneous load and disk capacity to evaluate each datanode, and chooses the most suitable datanode to store the new data. Experiments show that, compared with the default strategy, the proposed strategy alleviates the hotspot problem and effectively improves load balancing and throughput.
Personalized Medicine Recommendation Based on Tensor Decomposition
WANG Long, WANG Jia-lun, CHENG Zhuan-li, LI Ran and ZHANG Yin
Computer Science. 2015, 42 (5): 225-229.  doi:10.11896/j.issn.1002-137X.2015.05.045
As online shopping becomes more and more popular, buying medicine online has brought great convenience to many patients. But when ordinary people buy drugs online, they often purchase blindly, because they have no access to medication guidance. To solve this problem, we first clustered the drugs into several groups according to their functional description information and proposed personalized medicine recommendation based on user collaborative filtering. Then, considering the shortcomings of the collaborative filtering algorithm, we used tensor decomposition to model the relationships among users, symptoms and medicines, and recommended the top-N related medicines to users according to their symptoms. We crawled real data from the Internet and compared the results with the collaborative filtering method; the results show good performance.
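A minimal sketch of the tensor model with TensorLy's CP decomposition: a (user x symptom x medicine) tensor is factorized, and the reconstruction scores medicines for a given user-symptom pair (random stand-in data; the paper's crawled dataset and clustering step are not reproduced):

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
T = tl.tensor((rng.random((20, 8, 30)) > 0.9).astype(float))  # user x symptom x medicine

cp = parafac(T, rank=5, init="random", random_state=0)        # CP factorization
T_hat = tl.cp_to_tensor(cp)                                   # dense score tensor

user, symptom = 3, 2
top_n = np.argsort(T_hat[user, symptom])[::-1][:5]            # top-5 medicines
print(top_n)
```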
Density-based Outlier Detection on Uncertain Data
HONG Sha, LIN Jia-li and ZHANG Yue-liang
Computer Science. 2015, 42 (5): 230-233.  doi:10.11896/j.issn.1002-137X.2015.05.046
Based on local information, a new outlier detection algorithm was designed to calculate a density-based uncertain local outlier factor (ULOF) for each point in an uncertain dataset. Firstly, by establishing the possible world model, we calculated the probabilities of possible worlds for the uncertain data. Then we combined the traditional LOF algorithm to derive the ULOF formulas and judged the outlier degree of each point according to its ULOF value. We also analyzed the efficiency and accuracy of the ULOF algorithm in detail, and proposed a grid-based pruning strategy and a k-nearest neighborhood query optimization to reduce the candidate set. Finally, several experiments on synthetic data demonstrate the feasibility and effectiveness of the proposed approach: the optimized ULOF algorithm improves outlier detection accuracy, reduces time complexity and improves the performance of outlier detection on uncertain data.
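A minimal sketch of the deterministic LOF baseline the paper extends, using scikit-learn (the uncertainty model, possible worlds and pruning are not reproduced):

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (100, 2)),           # dense cluster
               np.array([[4.0, 4.0], [5.0, -3.0]])])    # two outliers

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)                              # -1 marks outliers
print(np.where(labels == -1)[0])                         # indices 100, 101 expected
print(-lof.negative_outlier_factor_[-2:])                # their LOF scores (> 1)
```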
Classification of Power Quality Disturbances Based on Wavelet Transform and FRVM
MA Ping-ping and HUANG Wen-qing
Computer Science. 2015, 42 (5): 234-236.  doi:10.11896/j.issn.1002-137X.2015.05.047
Abstract PDF(298KB) ( 457 )   
References | Related Articles | Metrics
To reduce the computational complexity and long training time of the relevance vector machine (RVM), this paper proposed an optimized algorithm based on the fast relevance vector machine (FRVM), which not only greatly reduces the training time of the relevance vector machine, but also improves its classification accuracy. The method is applied to the classification of power quality disturbances. Firstly, the wavelet transform is applied to analyze the time-frequency features of power quality disturbances, and the difference between the wavelet-domain energy of the disturbance signal in each layer and the energy of the standard signal is used as the feature vector. Secondly, FRVM is used to classify the feature vectors, realizing power quality disturbance classification based on the wavelet transform and FRVM. Simulations verify that this method can classify all kinds of power quality disturbances, and has higher classification efficiency and accuracy than the classical RVM.
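A sketch of the feature extraction step only: per-layer wavelet energies of a disturbance signal are compared with those of a clean reference signal, and the differences form the feature vector handed to the classifier. The sampling rate, the 'db4' wavelet and the 5-level decomposition are illustrative assumptions:

```python
# Wavelet-energy feature vector for a power quality disturbance (here a
# synthetic voltage sag); the resulting vector would be fed to FRVM or any
# other classifier.
import numpy as np
import pywt

def layer_energies(signal, wavelet="db4", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

fs = 3200
t = np.arange(0, 0.2, 1 / fs)
clean = np.sin(2 * np.pi * 50 * t)                         # ideal 50 Hz signal
sag = np.where((t > 0.05) & (t < 0.15), 0.5, 1.0) * clean  # voltage sag

feature = layer_energies(sag) - layer_energies(clean)      # feature vector
print(feature)
```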
Dynamic Path Planning for Mobile Robot Based on Vector Field
XU Teng-fei, LUO Qi and WANG Hai
Computer Science. 2015, 42 (5): 237-244.  doi:10.11896/j.issn.1002-137X.2015.05.048
Abstract PDF(886KB) ( 681 )   
References | Related Articles | Metrics
Due to its simplicity and high efficiency, the artificial potential field method has attracted wide attention and been widely used for autonomous robots. Potential field methods currently handle path planning well in static or uniformly dynamic environments, but in a non-uniform environment an excessive relative velocity of the mobile robot with respect to an obstacle can cause unnecessary obstacle avoidance or even collision. Thus, this paper defined a new potential function based on relative displacement. The relative velocity and relative acceleration factors are incorporated into the attractive potential function. An obstacle-avoidance prediction and a velocity-decreasing collision-avoidance strategy are brought into the repulsive potential function, which enables the robot not only to track a target moving with variable motion, but also to keep the same movement trend as the target, largely removing unnecessary obstacle avoidance. When the relative velocity of the mobile robot with respect to an obstacle is large, the robot can also take obstacle-avoidance measures in advance. Simulation results verify the effectiveness of the proposed method.
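A minimal sketch of the modified potential field idea: the attractive force tracks the target's relative position, velocity and acceleration, and the repulsive force adds a braking term when the robot closes on an obstacle too fast. Gains, thresholds and the exact force shapes are illustrative assumptions, not the paper's functions:

```python
import numpy as np

def attractive_force(p_rel, v_rel, a_rel, kp=1.0, kv=0.5, ka=0.2):
    """p_rel, v_rel, a_rel: target state relative to the robot."""
    return kp * p_rel + kv * v_rel + ka * a_rel

def repulsive_force(p_obs, v_obs, d0=2.0, eta=1.0, kv=0.8):
    """p_obs: vector from obstacle to robot; v_obs: robot velocity relative
    to the obstacle. A braking term is added when approaching fast."""
    d = np.linalg.norm(p_obs)
    if d >= d0:
        return np.zeros_like(p_obs)
    unit = p_obs / d
    f = eta * (1.0 / d - 1.0 / d0) / d**2 * unit
    closing_speed = -np.dot(v_obs, unit)          # > 0 when approaching
    if closing_speed > 0:                         # decelerate in advance
        f += kv * closing_speed * unit
    return f

p_rel = np.array([5.0, 2.0]); v_rel = np.array([0.5, 0.0])
a_rel = np.array([0.0, 0.1]); p_obs = np.array([1.0, 0.2])
v_obs = np.array([-1.5, 0.0])
print(attractive_force(p_rel, v_rel, a_rel) + repulsive_force(p_obs, v_obs))
```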
Distribution Location-routing Problem of Heterotypic Vehicles and its Algorithms
SHI Zhao and FU Zhuo
Computer Science. 2015, 42 (5): 245-250.  doi:10.11896/j.issn.1002-137X.2015.05.049
Abstract PDF(477KB) ( 1364 )   
References | Related Articles | Metrics
The distribution location-routing problem of heterotypic vehicles was considered, which involves restrictions on vehicle type, vehicle capacity and time windows. A mathematical model of the problem was established and analyzed using a decomposition method. First, the locations of distribution centers and the grouping of customers were determined using an improved model based on clustering analysis, and then a genetic algorithm was used to solve the routing problem. Comparisons among algorithms and a test example show that this method can solve the location-routing problem of heterotypic vehicles effectively.
Research on Service Dynamic Selection Algorithm in Cloud Computing
ZHANG Heng-wei, HAN Ji-hong, KOU Guang and WEI Bo
Computer Science. 2015, 42 (5): 251-254.  doi:10.11896/j.issn.1002-137X.2015.05.050
Abstract PDF(414KB) ( 421 )   
References | Related Articles | Metrics
To solve the service dynamic selection problem in cloud computing environments, a fitness function considering both response time and cost was designed, and an estimation-of-distribution shuffled frog leaping algorithm was proposed. On the basis of the shuffled frog leaping algorithm, its evolutionary operators were redefined by drawing on the crossover operation of genetic algorithms, and an estimation-of-distribution evolutionary strategy was introduced to improve the frog update mode, so that the improved algorithm has a more comprehensive learning ability and can effectively avoid local optima. Simulation results demonstrate the feasibility and effectiveness of the proposed algorithm: compared with the shuffled frog leaping algorithm and estimation-of-distribution algorithms, its convergence performance and optimization capability are improved, and it better solves the service dynamic selection problem in cloud computing environments.
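A sketch of the hybrid update step: classic shuffled frog leaping moves the worst frog of a memeplex toward the memeplex best, while the estimation-of-distribution fallback samples a new frog from a Gaussian fitted to the elite frogs. The objective here is a placeholder for a fitness combining response time and cost, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):                       # placeholder: minimize a simple bowl
    return np.sum((x - 0.3) ** 2)

def eda_sfla_step(memeplex):
    """memeplex: (m, d) frogs sorted by fitness (best first)."""
    best, worst = memeplex[0], memeplex[-1]
    # Classic SFLA leap of the worst frog toward the best.
    candidate = worst + rng.random() * (best - worst)
    if fitness(candidate) >= fitness(worst):
        # EDA fallback: sample from the distribution of the elite half.
        elite = memeplex[: len(memeplex) // 2]
        candidate = rng.normal(elite.mean(axis=0), elite.std(axis=0) + 1e-9)
    memeplex[-1] = candidate
    return memeplex[np.argsort([fitness(f) for f in memeplex])]

frogs = rng.random((10, 4))
frogs = frogs[np.argsort([fitness(f) for f in frogs])]
for _ in range(50):
    frogs = eda_sfla_step(frogs)
print(fitness(frogs[0]))              # best fitness decreases over steps
```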
Fault Diagnosis Method Based on Immunodominance Clone Cultural Algorithm and its Application in TE Process
WANG Xue-jie and HUANG Hai-yan
Computer Science. 2015, 42 (5): 255-259.  doi:10.11896/j.issn.1002-137X.2015.05.051
Abstract PDF(398KB) ( 404 )   
References | Related Articles | Metrics
The immunodominance clonal selection algorithm is a new kind of immune algorithm with strong local and global search ability. Combining it with the cultural algorithm, we put forward a new immunodominance clone cultural algorithm, which can make use of a priori knowledge to guide the evolution of the population. We designed a new dynamic acceptance function to promote knowledge updating in the cultural algorithm and increase the search capability of the algorithm. The immunodominance clone cultural algorithm was used for kernel parameter optimization of support vector machines (SVM) to construct a classifier with good performance. The results of Wine dataset classification and fault diagnosis of the TE process show that the immunodominance clone cultural algorithm can find optimal SVM kernel parameters accurately. It raises the accuracy of fault diagnosis and has good application value.
Logic-based Frequent Sequential Pattern Mining Algorithm
LIU Duan-yang, FENG Jian and LI Xiao-fen
Computer Science. 2015, 42 (5): 260-264.  doi:10.11896/j.issn.1002-137X.2015.05.052
Abstract PDF(400KB) ( 517 )   
References | Related Articles | Metrics
Traditional Apriori-like sequential pattern mining algorithms are based on the theoretical framework of support and need a pre-set support threshold, but setting it often requires in-depth domain knowledge or much practice, and there is still no good way to choose it. Meanwhile, the resulting sets of sequential patterns are too large to understand and apply. To solve these problems, this paper presented a logic-based frequent sequential pattern mining algorithm, LFSPM, introducing logical reasoning into the frequent pattern mining process for the first time. By using logical rules for filtering, it greatly optimizes the result sets. Experiments show the good performance of the proposed approach in solving these problems.
Research on Knowledge Reduction Algorithm Based on Variable Precision Tolerance Rough Set Theory
JIAO Na
Computer Science. 2015, 42 (5): 265-269.  doi:10.11896/j.issn.1002-137X.2015.05.053
Abstract PDF(374KB) ( 433 )   
References | Related Articles | Metrics
Knowledge reduction is an important research issue in rough set theory. Rough set theory is an efficient mathematical tool for reducing redundancy. The main limitation of traditional rough set theory is the lack of effective methods for dealing with real-valued data, yet practical data sets are usually continuous. This has been addressed by employing discretization methods, which may result in information loss. This paper investigated an approach combining the tolerance relation with rough set theory. In order to enhance adaptability to noisy data, this paper explored a knowledge reduction algorithm based on variable precision tolerance rough set theory. The characteristics of the precision parameter were analyzed, the relationship between classification quality and the parameter interval was described, and the parameter value was extended to an interval range. The experimental results demonstrate that the proposed algorithm and the related theory are effective.
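A sketch of the two ingredients the abstract combines: a tolerance relation on real-valued attributes (similarity within a threshold tau instead of strict equality) and a variable-precision lower approximation that tolerates a fraction of mislabeled neighbors. The threshold tau and precision beta are illustrative; the paper extends the precision value to an interval range:

```python
import numpy as np

def tolerance_classes(X, tau=0.1):
    """x and y are tolerant if every attribute differs by at most tau."""
    diff = np.abs(X[:, None, :] - X[None, :, :]).max(axis=2)
    return [np.flatnonzero(row <= tau) for row in diff]

def vp_lower_approx(X, labels, target, tau=0.1, beta=0.8):
    """Objects whose tolerance class carries label `target` with proportion
    at least beta (classical rough sets would require proportion 1)."""
    return [i for i, cls in enumerate(tolerance_classes(X, tau))
            if np.mean(labels[cls] == target) >= beta]

X = np.array([[0.10], [0.15], [0.30], [0.50], [0.55]])
labels = np.array([1, 1, 0, 0, 0])
print(vp_lower_approx(X, labels, target=1))   # [0, 1]
```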
Research on Algorithm of Satisfiability Ranking Generation for CP-nets
SUN Xue-jiao and LIU Jing-lei
Computer Science. 2015, 42 (5): 270-273.  doi:10.11896/j.issn.1002-137X.2015.05.054
Abstract PDF(394KB) ( 415 )   
References | Related Articles | Metrics
CP-nets (conditional preference networks) is a simple and intuitive graphical tool for representing conditional ceteris paribus preference statements over the values of a set of variables, and has recently become a research hotspot in artificial intelligence. The satisfiability ranking of CP-nets was studied. By constructing the induced graph of a CP-net and solving strong dominance testing for binary-valued CP-nets, the number of satisfiability rankings was derived, and an algorithm generating all satisfiability rankings was designed by analyzing the reachability matrix acquired from strong dominance testing. In addition, the syntax, semantics and some related definitions were collated and introduced.
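A sketch of the reachability-matrix step: given the induced preference graph over outcomes of a binary-valued CP-net, Warshall's transitive closure yields which outcomes strongly dominate which, and any linear order consistent with that closure is a satisfiability ranking. The tiny four-outcome graph below is illustrative only:

```python
import numpy as np

def reachability(adj):
    """Transitive closure of a boolean adjacency matrix (Warshall)."""
    R = adj.astype(bool).copy()
    for k in range(len(R)):
        R |= np.outer(R[:, k], R[k, :])   # R[i,k] and R[k,j] => R[i,j]
    return R

# 4 outcomes of two binary variables; an illustrative induced graph
# (edges point from dominating to dominated outcomes).
adj = np.array([[0, 1, 1, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 1],
                [0, 0, 0, 0]])
R = reachability(adj)
print(R.astype(int))
# Enumerating the topological orders of R gives all satisfiability rankings.
```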
Community Structure Detection Algorithm Based on Community Strength Coefficient
ZHAO Jing-sheng, SUN Yu-hang and HAN Ling-xiao
Computer Science. 2015, 42 (5): 274-276.  doi:10.11896/j.issn.1002-137X.2015.05.055
Abstract PDF(609KB) ( 409 )   
References | Related Articles | Metrics
Community structure is one of the ubiquitous topological characteristics of complex networks. In order to divide the community structure of complex networks effectively, this paper introduced the concept of the community strength coefficient based on the definition of community strength, and put forward a community structure detection algorithm based on it. The algorithm has relatively low time complexity: it selects the network node with the maximum degree together with its neighbor nodes, calculates the community strength coefficient, and uses it to decide how to divide the community. Simulation experiments, mainly on the Zachary karate club network and the dolphin social network, verify the algorithm's feasibility and effectiveness. The algorithm has higher accuracy, better sensitivity and better extensibility for community division.
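A sketch of one strength-driven expansion step. The paper's exact strength-coefficient definition is not reproduced here; as a stand-in we use the ratio of internal to total degree of a candidate community, which captures the same "strong community" intuition:

```python
import networkx as nx

def strength_coefficient(G, community):
    internal = sum(1 for u, v in G.edges(community)
                   if u in community and v in community)
    total = sum(G.degree(n) for n in community)
    return 2.0 * internal / total if total else 0.0

def grow_from_max_degree(G):
    """Seed with the maximum-degree node, then absorb any boundary neighbor
    that improves the strength coefficient, until no absorption improves it."""
    community = {max(G.nodes, key=G.degree)}
    improved = True
    while improved:
        improved = False
        for nbr in set(nx.node_boundary(G, community)):
            if strength_coefficient(G, community | {nbr}) > \
               strength_coefficient(G, community):
                community.add(nbr)
                improved = True
    return community

G = nx.karate_club_graph()        # the Zachary network used in the paper
print(sorted(grow_from_max_degree(G)))
```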
Novel Denoising Model and its Application to Images Corrupted by Different Types of Noise
LUO Zhi-hong and FENG Guo-can
Computer Science. 2015, 42 (5): 277-280.  doi:10.11896/j.issn.1002-137X.2015.05.056
Abstract PDF(920KB) ( 406 )   
References | Related Articles | Metrics
Due to the limited denoising ability of traditional variational models, a new variational denoising model for images corrupted by various types of noise was proposed. Firstly, the fitting terms of three classical models are fused in the presented model by introducing weighting parameters, so that the benefits of the three models are integrated to improve denoising ability. The solution is obtained by applying the additive operator splitting (AOS) numerical algorithm. The experimental results demonstrate that the proposed model is effective for different types of noise. Lastly, a proof of the existence of a solution of the model was given.
Reliable Iris Recognition Using 2D Quadrature Filters
FANG Qiang and YAO Peng
Computer Science. 2015, 42 (5): 281-285.  doi:10.11896/j.issn.1002-137X.2015.05.057
Abstract PDF(655KB) ( 577 )   
References | Related Articles | Metrics
This paper investigated a new approach to iris recognition with gray-level iris images. The approach extracts iris features using a 2D quadrature Log-Gabor filter obtained through the quaternion Fourier transform. The method obtains reliable iris phase information along orthogonal directions and uses the analytic signal of the filtered iris image as the feature, characterizing the iris in multiple directions; a highly complex iris feature encoding is thus obtained, which makes the iris feature more comprehensive. Feature matching adds a template mask and uses a Hamming-distance-like measure to reduce the disturbance of eyelids, eyelashes and light spots. The results with 2D Log-Gabor quadrature filters achieve significant recognition performance.
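A sketch of the masked matching step only: binary iris codes are compared with a normalized Hamming distance, and per-bit masks exclude regions disturbed by eyelids, eyelashes and light spots. The code layout is the usual binary iris-code convention, assumed here:

```python
import numpy as np

def masked_hamming(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits over the jointly valid region."""
    valid = mask_a & mask_b
    if valid.sum() == 0:
        return 1.0                       # nothing comparable: worst score
    return np.count_nonzero((code_a ^ code_b) & valid) / valid.sum()

rng = np.random.default_rng(2)
code1 = rng.integers(0, 2, (16, 256), dtype=np.uint8)
code2 = code1.copy()
code2[:, :40] ^= 1                       # corrupt a band (e.g. eyelid area)
mask = np.ones_like(code1, dtype=bool)
mask[:, :40] = False                     # ...and mask that band out

print(masked_hamming(code1, code2, mask, mask))   # 0.0 after masking
```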
Moving Target Detection Based on Improved Gaussian Mixture Model
FAN Wen-chao, LI Xiao-yu, WEI Kai and CHEN Xing-lin
Computer Science. 2015, 42 (5): 286-288.  doi:10.11896/j.issn.1002-137X.2015.05.058
Abstract PDF(595KB) ( 469 )   
References | Related Articles | Metrics
Moving object detection is the basis of object tracking and video surveillance. Aiming at the deficiencies of moving target detection based on the Gaussian mixture model, an improved Gaussian mixture algorithm based on image blocking and a self-adaptive number of Gaussian components was proposed. Blocking the video images improves the efficiency of target detection and at the same time realizes video filtering. Self-adaptively choosing the number of Gaussian distributions in the mixture model reduces the complexity of the algorithm and improves the speed of moving target detection. Experimental results indicate that the improved algorithm possesses faster detection speed and better detection quality, and reduces detection noise. It can detect moving targets effectively and is suitable for real-time detection.
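A sketch relating the two ideas to an off-the-shelf baseline: OpenCV's MOG2 background subtractor already adapts the number of Gaussian components per pixel, and processing a downscaled frame crudely approximates the block idea (each low-resolution pixel stands for a block). Parameters and the input file name are illustrative; this is not the paper's exact algorithm:

```python
import cv2

BLOCK = 4                                         # block size in pixels
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=200, varThreshold=16, detectShadows=False)

cap = cv2.VideoCapture("traffic.avi")             # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    small = cv2.resize(frame, None, fx=1 / BLOCK, fy=1 / BLOCK,
                       interpolation=cv2.INTER_AREA)   # block averaging
    fg_small = subtractor.apply(small)            # per-block foreground mask
    fg = cv2.resize(fg_small, (frame.shape[1], frame.shape[0]),
                    interpolation=cv2.INTER_NEAREST)
    cv2.imshow("foreground", fg)
    if cv2.waitKey(1) == 27:                      # Esc to quit
        break
cap.release()
```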
Integrating Original Images and their Virtual Samples for Face Recognition
LIU Zi, SONG Xiao-ning and TANG Zhen-min
Computer Science. 2015, 42 (5): 289-294.  doi:10.11896/j.issn.1002-137X.2015.05.059
Abstract PDF(1126KB) ( 441 )   
References | Related Articles | Metrics
As one of the most attractive biometric techniques, face recognition is still a challenging task, mainly owing to varying lighting, facial expression, pose and environment. In this sense, a face image is just an observation and should not be considered the absolutely accurate representation of the face. Moreover, even in a real-world face recognition system, it is difficult to obtain enough samples. The great success of sparse representation in image reconstruction has triggered research on sparse-representation-based pattern classification. Inspired by this, a sparse-representation-based classification method using category elimination and a greedy search strategy was proposed for face recognition. First, we reduced the uncertainty of the face representation by synthesizing virtual training samples. We applied an error-constrained orthogonal matching pursuit algorithm to exploit an optimal representation of the test sample, eliminating categories and specific training samples; the remaining training samples are used to produce the best representation of the test sample and to classify it. Then, we selected useful training samples similar to the test sample from the set of all original and synthesized virtual training samples. Finally, we devised a representation approach based on the selected useful training samples to perform face recognition. Experimental results on five widely used face databases demonstrate that our proposed approach not only obtains higher face recognition accuracy, but also has lower computational complexity than other state-of-the-art approaches.
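A simplified sketch of the underlying sparse-representation classification: the test sample is represented over all original plus virtual training samples via orthogonal matching pursuit, and the class whose atoms yield the lowest reconstruction residual wins. The paper's category-elimination and error-constrained details are omitted; mirroring as the virtual-sample synthesis and all data are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(X_train, y_train, x_test, n_nonzero=10):
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False)
    omp.fit(X_train.T, x_test)                 # columns are training samples
    coef = omp.coef_
    residuals = {}
    for c in np.unique(y_train):
        part = np.zeros_like(coef)
        part[y_train == c] = coef[y_train == c]
        residuals[c] = np.linalg.norm(x_test - X_train.T @ part)
    return min(residuals, key=residuals.get)   # class with smallest residual

rng = np.random.default_rng(3)
faces = rng.random((6, 8, 8))                  # 6 tiny stand-in "face" images
virtual = faces[:, :, ::-1]                    # virtual samples by mirroring
X = np.vstack([faces.reshape(6, -1), virtual.reshape(6, -1)])
y = np.tile(np.array([0, 0, 0, 1, 1, 1]), 2)
test = faces[1].reshape(-1) + 0.05 * rng.standard_normal(64)
print(src_classify(X, y, test))               # expected: class 0
```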
Segmentation of 3D Geometric Models Based on Mesh Laplace
YANG Jun, TIAN Zhen-hua, LI Long-jie and WANG Xiao-peng
Computer Science. 2015, 42 (5): 295-299.  doi:10.11896/j.issn.1002-137X.2015.05.060
Abstract PDF(1196KB) ( 1038 )   
References | Related Articles | Metrics
Segmentation is one of the important methods and means for analyzing shapes. A novel algorithm for the segmentation of 3D geometric models was proposed based on the mesh Laplace operator and k-means clustering, aiming at the problem that existing mesh segmentation algorithms are sensitive to shape pose and time-consuming. Models are converted from the spatial domain to the spectral domain using the mesh Laplace operator in order to obtain normalized forms, which are analyzed in the spectral domain to avoid the influence of pose variation on segmentation results and to greatly reduce computing time. Experimental results show that the proposed algorithm is not only more efficient at generating correct and meaningful segmentations, but also more robust to variation of shape pose than existing algorithms.
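A sketch of the spectral pipeline: build a Laplacian from mesh connectivity, embed the vertices with its low-frequency eigenvectors (a largely pose-insensitive representation), then cluster the embedding with k-means. A real implementation would use a cotangent-weight mesh Laplacian; the uniform graph Laplacian here is a simplification:

```python
import numpy as np
from scipy.sparse import csgraph, coo_matrix
from scipy.sparse.linalg import eigsh
from sklearn.cluster import KMeans

def segment_mesh(faces, n_vertices, n_parts=4, n_evecs=6):
    """faces: (m, 3) integer array of triangle vertex indices.
    Returns a part label per vertex."""
    # Vertex adjacency from triangle edges.
    e = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    A = coo_matrix((np.ones(len(e)), (e[:, 0], e[:, 1])),
                   shape=(n_vertices, n_vertices))
    A = ((A + A.T) > 0).astype(float)
    L = csgraph.laplacian(A, normed=True)
    # Smallest-eigenvalue eigenvectors give the spectral embedding.
    vals, vecs = eigsh(L, k=n_evecs, which="SM")
    embedding = vecs[:, 1:]            # drop the constant eigenvector
    return KMeans(n_clusters=n_parts, n_init=10).fit_predict(embedding)
```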
Extension Model and Strategies Generating Mechanism for Mental Fatigue Recognition
CHEN Yun-hua and CHEN Ping-hua
Computer Science. 2015, 42 (5): 300-304.  doi:10.11896/j.issn.1002-137X.2015.05.061
Abstract PDF(415KB) ( 478 )   
References | Related Articles | Metrics
In facial-feature-based mental fatigue recognition algorithms there are contradictions between the non-intrusive, real-time requirements and the recognition rate, which hinder the practical application of these algorithms. To solve this problem, extension theories and methods were introduced to build an extension model of the contradiction problem. A dependent function measuring the contradiction degree of the problem and a function for strategy evaluation were proposed. Extension analysis and extension transformation were used to generate solving strategies for mental fatigue recognition. Research results show that, based on the models and strategies raised in this paper, we can develop a variety of feature extraction and mental fatigue recognition algorithms that meet the non-intrusive, real-time requirements as well as the recognition rate, and at the same time the value of a strategy can be quantified and compared. The results of this study improve the intelligence of mental fatigue recognition methods, and provide a good example for solving the widespread conflict between computational complexity and accuracy in pattern recognition algorithms.
Motion Deblurring Based on Edge Prior Model
ZHAO Zhi-gang, CHEN Ying-ying, ZHAO Yi, ZHANG Wei-zhong, LV Hui-xian and PAN Zhen-kuan
Computer Science. 2015, 42 (5): 305-308.  doi:10.11896/j.issn.1002-137X.2015.05.062
Abstract PDF(1107KB) ( 439 )   
References | Related Articles | Metrics
How to restore clear images from degraded images is a challenging problem in the field of digital image processing. This paper proposed a restoration algorithm for motion-blurred images based on an edge prior model and wavelet analysis. We applied a preprocessing procedure to remove noise before deblurring, then used a shock filter to enhance edges and Canny edge detection to obtain clear edges for kernel estimation. After that, we maximized the sparsity of the clear image under a tight wavelet frame system, and an adapted split Bregman method was proposed to solve the optimization problem, yielding the clear image. The experimental results show that, compared with traditional blind restoration algorithms, the proposed method can effectively remove motion blur.
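A sketch of the shock-filter edge-enhancement step used before kernel estimation: iterating I ← I − sign(ΔI)·|∇I|·dt sharpens edges of the denoised image, after which Canny can extract clean edges. The step size and iteration count are illustrative:

```python
import numpy as np

def shock_filter(img, iters=10, dt=0.1):
    I = img.astype(float).copy()
    for _ in range(iters):
        gy, gx = np.gradient(I)
        grad_mag = np.hypot(gx, gy)
        lap = (np.roll(I, 1, 0) + np.roll(I, -1, 0) +
               np.roll(I, 1, 1) + np.roll(I, -1, 1) - 4 * I)
        I -= dt * np.sign(lap) * grad_mag     # erode toward the nearer level
    return I

# Blurry step edge: after shock filtering the transition becomes sharper.
x = np.linspace(-3, 3, 64)
blurry = np.tile(1 / (1 + np.exp(-2 * x)), (64, 1))
sharp = shock_filter(blurry)
print(np.abs(np.gradient(sharp, axis=1)).max() >
      np.abs(np.gradient(blurry, axis=1)).max())   # True: steeper edge
```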
Supervised Collaborative Neighborhood Preserving Projection Based Algorithm for Face Recognition
ZHANG Qi-wen and ZHUANG Xin-lei
Computer Science. 2015, 42 (5): 309-314.  doi:10.11896/j.issn.1002-137X.2015.05.063
Abstract PDF(1001KB) ( 471 )   
References | Related Articles | Metrics
The neighborhood preserving embedding (NPE) algorithm, based on manifold learning theory, can discover the intrinsic structure behind a data set. But in face recognition, the algorithm cannot detect the intrinsic structure accurately owing to insufficient data, and consequently the performance of NPE suffers. To solve these problems, a supervised collaborative neighborhood preserving projection (SCNPP) algorithm was presented. The proposed algorithm constructs the neighborhood graph under the guidance of category information so that the geometric relationship between samples of the same class is preserved effectually, utilizes collaborative representation to remedy the representation errors of NPE caused by the lack of data, and calculates the projection matrix with a weight matrix that preserves the neighborhood relationship and accurately discovers the intrinsic manifold structure of the data, improving classification performance. Extensive experiments on popular face databases (FERET, AR and Extended Yale B) verify the effectiveness of the proposed method.
Medical Image Classification Method Based on Discriminative Restricted Boltzmann Machine
CHEN Na, JIANG Yun, ZOU Li, SHEN Jian, HU Xue-wei and LI Zhi-lei
Computer Science. 2015, 42 (5): 315-319.  doi:10.11896/j.issn.1002-137X.2015.05.064
Abstract PDF(418KB) ( 641 )   
References | Related Articles | Metrics
With the development of computer techniques, an increasing number of computer-based analytic techniques for medical images have been developed, and applying data-mining methods to the analysis of medical images is becoming popular. The classification performance of these methods usually depends strongly on statistical features extracted from the medical images in advance, yet the feature extraction process is often influenced by subjective factors such as personal experience. We applied to mammography a method recently developed in machine learning, the discriminative restricted Boltzmann machine (DRBM). The DRBM is an undirected discriminative model that learns features automatically from labeled data and can also act as a classifier. The experimental results show that the DRBM outperforms other methods based on manual feature extraction in terms of classification accuracy.
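scikit-learn has no discriminative RBM, so as an approximation this sketch learns generative RBM features and trains a logistic-regression classifier on top; a true DRBM instead models p(label | image) through the RBM directly. The digits dataset stands in for mammography patches:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

X, y = load_digits(return_X_y=True)
X = X / 16.0                                   # scale pixels to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = Pipeline([
    ("rbm", BernoulliRBM(n_components=128, learning_rate=0.06,
                         n_iter=15, random_state=0)),   # unsupervised features
    ("clf", LogisticRegression(max_iter=1000)),         # supervised head
])
model.fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))
```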
Research of Image Fusion Algorithm Based on Improved Tetrolet Transform
GAO Ji-sen, DONG Ya-nan, SHEN Yu and ZHANG Chun-lan
Computer Science. 2015, 42 (5): 320-323.  doi:10.11896/j.issn.1002-137X.2015.05.065
Abstract PDF(837KB) ( 381 )   
References | Related Articles | Metrics
An improved tetrolet transform for the fusion of infrared and visible images was proposed, exploiting the high energy of target objects in infrared images and the rich detail information of visible images. The images are decomposed at multiple scales and in multiple directions, and in the low-frequency fusion rule, appropriate scaling of regional energy is used to highlight the infrared target while retaining the background information of the visible image. The experimental results show that more high-frequency information is retained by the improved selection of the tetrolet transform template. Compared with traditional fusion algorithms, the proposed algorithm enhances image contrast, improves subjective visual effects, and also improves the objective evaluation criteria.
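A sketch of a regional-energy low-frequency fusion rule of the kind the abstract describes: window-by-window, the regional energy decides how strongly the infrared coefficient should dominate, highlighting hot targets while keeping visible-band background. The 3×3 window and the weighting scheme are illustrative assumptions, applied here to stand-in low-frequency subbands:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_lowfreq(low_ir, low_vis, win=3):
    e_ir = uniform_filter(low_ir ** 2, size=win)    # regional energies
    e_vis = uniform_filter(low_vis ** 2, size=win)
    w = e_ir / (e_ir + e_vis + 1e-12)               # scaled IR weight
    return w * low_ir + (1 - w) * low_vis

rng = np.random.default_rng(4)
low_vis = rng.random((32, 32))                      # stand-in visible subband
low_ir = np.zeros((32, 32))
low_ir[12:20, 12:20] = 2.0                          # hot infrared target
fused = fuse_lowfreq(low_ir, low_vis)
print(fused[14, 14] > fused[2, 2])   # True: target region dominated by IR
```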