Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
Current Issue
Volume 42 Issue 6, 14 November 2018
Review of Dialogue Management Methods in Spoken Dialogue System
WANG Yu, REN Fu-ji and QUAN Chang-qin
Computer Science. 2015, 42 (6): 1-7.  doi:10.11896/j.issn.1002-137X.2015.06.001
The spoken dialogue system is a core technology in the field of human-computer interaction and an important means of realizing harmonious human-computer interaction. Research in this area has great theoretical significance and application value, and advances in the theory and technology of spoken dialogue systems have attracted continuous attention. This paper comprehensively surveys the status and progress of dialogue management and spoken dialogue systems. First, the main research questions of spoken dialogue systems are introduced, including the research content of each module, key technologies, portability and robust design. Then, various dialogue management strategies are systematically analyzed from the perspectives of theoretical models, recent advances and usability. Finally, several possible directions and open problems for further study are discussed.
Category Theoretical Method of Inductive Data Types
MIAO De-cheng, XI Jian-qing and SU Jin-dian
Computer Science. 2015, 42 (6): 8-11.  doi:10.11896/j.issn.1002-137X.2015.06.002
Inductive data types are an important research branch of type theory. Traditional methods, including mathematical logic and algebra, focus on describing the finite syntactic construction of inductive data types, which leads to deficiencies in analyzing and designing their semantic properties and induction rules. This paper gives a formal definition of predicates in the framework of the category of sets using category-theoretic methods, analyzes the construction and properties of the predicate category and the algebra category, studies the lifting of endofunctors from the category of sets to the predicate category, and finally investigates universal induction rules for inductive data types by means of adjoint functors and their adjoint properties.
Social Force Model for Crowd Simulation Using Density Field
JI Qing-ge, HE Hao and WANG Fu-chuan
Computer Science. 2015, 42 (6): 12-17.  doi:10.11896/j.issn.1002-137X.2015.06.003
As an effective tool, the density field provides an intuitive and efficient means to quickly adjust the direction of pedestrians' movement in crowd simulation when pedestrians need to perceive the density information around them. The social force model (SFM) is a popular and classical method in crowd simulation research, and its significance lies in reproducing commonly seen self-organized phenomena. However, the social force model still has many deficiencies: for instance, its time complexity grows exponentially as the number of pedestrians increases, and pedestrians may overlap and oscillate. This paper modifies the social force model using a density field. First, a stress region for pedestrians and a repulsive distance for walls are introduced to reduce the time complexity of the algorithm. Secondly, a grid density field matching the SFM is built, so that pedestrians can bypass high-density regions. Finally, a concept named density guiding threshold (DGT) is proposed: when the grid density exceeds the DGT, a pedestrian chooses a new direction that combines the goal direction with the direction of the low-density region. Extensive experimental results show that the short-range SFM using a density field not only simulates the basic self-organized phenomena of crowds, but also has advantages in time complexity.
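The grid density field and the DGT check described in the abstract can be sketched in a few lines. The function names, the cell-based layout and the simple count-per-cell-area density below are illustrative assumptions, not the paper's implementation:

```python
import math

def build_density_field(positions, width, height, cell):
    """Accumulate pedestrian counts into grid cells and divide by cell area
    (a minimal grid density field over a width x height walking area)."""
    cols, rows = math.ceil(width / cell), math.ceil(height / cell)
    field = [[0.0] * cols for _ in range(rows)]
    for x, y in positions:
        r = min(int(y // cell), rows - 1)
        c = min(int(x // cell), cols - 1)
        field[r][c] += 1.0
    area = cell * cell
    return [[count / area for count in row] for row in field]

def exceeds_dgt(field, x, y, cell, dgt):
    """True if the local grid density at position (x, y) exceeds the
    density guiding threshold, i.e. the pedestrian should re-route."""
    return field[int(y // cell)][int(x // cell)] > dgt
```

A pedestrian would query `exceeds_dgt` each step and, when it returns True, blend the goal direction with the direction of the neighbouring low-density cell.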
Query Expansion Based on Classification Model
LI Wei-yin, SHI Yu-long, CHEN Jie and SHI Chong-yang
Computer Science. 2015, 42 (6): 18-22.  doi:10.11896/j.issn.1002-137X.2015.06.004
As a key component of query optimization, query expansion plays an important role in improving the performance of information retrieval systems. Traditional query expansion methods based on pseudo-relevance feedback improve retrieval performance to some extent. However, the selected expansion terms also include some irrelevant ones, which has an adverse effect. In this paper, a novel query expansion method based on a classification model is proposed. Combining statistical information with various features of the candidate expansion terms, the method employs a Naive Bayes classification model to reselect the candidate expansion terms, further filtering out the irrelevant ones. Experimental results on TREC 2013 datasets show that the proposed method can effectively improve the precision and recall of user queries.
Research of Data Driven Method for Gas Turbine Trip Prediction
XIE Chen, WANG Rui-zhi, LI Yang, MIAO Duo-qian and JIAO Na
Computer Science. 2015, 42 (6): 23-27.  doi:10.11896/j.issn.1002-137X.2015.06.005
The gas turbine is one of the most widely used devices in modern industry. Once a trip occurs, a gas turbine engine can cost its operators millions of dollars, so research on the diagnosis and prediction of trips has significant impact. However, prediction of gas turbine trips is a relatively new subject and research findings are limited; so far, no data-driven solution for predicting gas turbine trips has been reported in the literature. The research work begins with preprocessing the data: normalization, dimensionality reduction, attribute value resampling and granulation. Experiments were conducted intensively on real datasets using the data-driven prediction method of Elman networks. The experimental results on how to set up a better Elman network are valuable to related research.
Scheduler Algorithm Based on Type Specific and Deadline in Hadoop
LI Zhao, TENG Fei, LI Tian-rui and YANG Hao
Computer Science. 2015, 42 (6): 28-31.  doi:10.11896/j.issn.1002-137X.2015.06.006
Hadoop is open-source software for reliable, scalable, distributed computing, and MapReduce is a programming model and associated implementation for processing large data sets. Because the built-in Hadoop schedulers cannot handle jobs with different types and deadlines, we propose a scheduling algorithm based on job type and deadline. Jobs are classified as CPU-bound or I/O-bound and prioritized according to their deadlines. Experimental results show that the proposed algorithm not only makes full use of the cluster's CPU and I/O resources, but also meets the jobs' deadlines. The algorithm performs best when the deadlines within a period of time are similar; when the deadlines of all jobs in one queue are shorter than those in another queue, its efficiency reaches its minimum.
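The combination of type-specific queues and deadline ordering can be illustrated with a minimal earliest-deadline-first sketch; the class and its interface are hypothetical, not the paper's scheduler code:

```python
import heapq

class DeadlineScheduler:
    """Minimal sketch: separate CPU-bound and I/O-bound queues,
    each served earliest-deadline-first via a binary heap."""

    def __init__(self):
        self.queues = {"cpu": [], "io": []}

    def submit(self, name, job_type, deadline):
        # Heap is ordered by deadline, so the tightest deadline pops first.
        heapq.heappush(self.queues[job_type], (deadline, name))

    def next_job(self, job_type):
        """Pop the job of the given type with the earliest deadline."""
        _deadline, name = heapq.heappop(self.queues[job_type])
        return name
```

A real cluster scheduler would interleave the two queues so CPU and I/O resources are both kept busy, which is the point of the type-specific split.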
Age Estimation Based on Facial Image
LIN Shi-miao, MAO Xiao-jiao and YANG Yu-bin
Computer Science. 2015, 42 (6): 32-36.  doi:10.11896/j.issn.1002-137X.2015.06.007
Age is an inherent biometric of humans: as we grow older, our faces change considerably, and age estimation based on facial images has been widely studied in recent years. Age estimation mainly consists of two phases: feature extraction and estimation. A new age estimation method is proposed in this paper. In the feature extraction phase, we combine the histogram of oriented gradients (HOG) with local binary patterns (LBP) to better describe the age progression of facial images, especially for teenagers. In the estimation phase, a soft two-level estimation method based on a coarse-to-fine strategy is proposed. Specifically, facial images are categorized as either adults or teenagers at the first level. At the second level, age estimation models are trained for each category, and an overlap area at the category boundary is adopted to correct the classification errors introduced by the first level. Experimental results show that the fused features achieve better discriminative power for aging. Moreover, the soft two-level model further improves age estimation accuracy.
Gene Microarray Data Classification Based on Intersecting Neighborhood Rough Set
MENG Jun, LI Rui and HAO Han
Computer Science. 2015, 42 (6): 37-40.  doi:10.11896/j.issn.1002-137X.2015.06.008
In research on gene microarray data classification and feature selection, rough set theory is an effective tool because it can eliminate redundant genes. However, a drawback of the traditional rough set is that it cannot handle continuous numeric data well, and discretization may lead to information loss. We propose an attribute reduction algorithm based on an intersecting neighborhood rough set: the distance neighborhood is extended to an intersecting neighborhood, and a set-based definition of approximation is employed to build the rough set model. Experimental results on three cancer data sets show that the rough set model based on set approximation and intersecting neighborhoods is effective and efficient. Meanwhile, GO-term analysis of the selected genes further proves the validity of the model.
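To make the neighborhood rough set idea concrete, here is a plain distance-neighborhood lower approximation over continuous data: a sample belongs to the lower approximation of its class if its whole neighborhood shares its label. This is the standard distance-neighborhood construction, not the paper's intersecting-neighborhood variant:

```python
def neighborhood(i, data, delta):
    """Indices of samples within Euclidean distance delta of sample i."""
    xi = data[i]
    return {j for j, xj in enumerate(data)
            if sum((a - b) ** 2 for a, b in zip(xi, xj)) ** 0.5 <= delta}

def lower_approximation(data, labels, delta):
    """Samples whose whole delta-neighborhood agrees with their class label.
    Attributes whose removal shrinks this set are considered indispensable."""
    return {i for i in range(len(data))
            if all(labels[j] == labels[i] for j in neighborhood(i, data, delta))}
```

An attribute reduction loop would drop candidate gene columns and keep only those whose removal degrades the lower approximation (the dependency degree).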
Semi-supervised Fuzzy Clustering Ensemble Approach with Data Correlation
FENG Chen-fei, YANG Yan, WANG Hong-jun, XU Ying-ge and WANG Tao
Computer Science. 2015, 42 (6): 41-45.  doi:10.11896/j.issn.1002-137X.2015.06.009
Semi-supervised clustering ensembles have emerged as a powerful machine learning paradigm that provides improved precision, robustness and stability by taking advantage of prior information. However, most of them consider only the given pairwise constraints and ignore the neighbors of the constrained data points in the ensemble step. In this paper, a semi-supervised fuzzy clustering ensemble with data correlation (SFCEDC) is proposed to overcome this defect. Firstly, an ensemble information matrix is built from the results of semi-supervised fuzzy clustering, and a similarity matrix is constructed by aggregating the information in the ensemble information matrix. This matrix is then modified using the given constraints and the neighbors of the constrained data points. Finally, a graph partitioning algorithm is employed to obtain the final clustering results. Experimental results on UCI datasets demonstrate that the proposed approach improves clustering performance effectively.
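The similarity matrix built from several base clusterings is usually a co-association matrix: the fraction of base partitions that place two points in the same cluster. A minimal sketch of that aggregation step (the fuzzy memberships and constraint adjustment of SFCEDC are omitted):

```python
def co_association(partitions):
    """Given several hard partitions (each a list of cluster labels over the
    same n points), return the n x n matrix whose (i, j) entry is the
    fraction of partitions that put points i and j in the same cluster."""
    n, m = len(partitions[0]), len(partitions)
    sim = [[0.0] * n for _ in range(n)]
    for part in partitions:
        for i in range(n):
            for j in range(n):
                if part[i] == part[j]:
                    sim[i][j] += 1.0 / m
    return sim
```

A graph partitioning algorithm is then run on this matrix, treating entries as edge weights, to produce the consensus clustering.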
Reduction of Ordered Formal Context Based on Dominance Relation
HE Ming-li and WEI Ling
Computer Science. 2015, 42 (6): 46-49.  doi:10.11896/j.issn.1002-137X.2015.06.010
As an efficient tool for knowledge acquisition, formal concept analysis has been applied in various fields. Based on the ordered formal context, this paper first uses the dominance relation as a standard scale to convert an ordered context into a single-valued formal context. Then, using the discernibility matrix of the resulting single-valued context, a reduction of the single-valued context is given. Furthermore, the reduction of the ordered context based on the dominance relation and a theorem on attribute characteristics are obtained. Finally, the dominance-relation-based reduction of ordered contexts is compared with the dominance-relation-based reduction of ordered information systems.
Decision Table Attribute Reduction Algorithm Based on Correspondence Constraints
CHENG Hong-hong, ZHANG Xiao-qin, LI Fei-jiang and QIAN Yu-hua
Computer Science. 2015, 42 (6): 50-53.  doi:10.11896/j.issn.1002-137X.2015.06.011
Decision table attribute reduction is an important problem in rough set theory. Classical reduction methods choose an optimal condition attribute reduct from the perspective of maintaining the classification ability over the universe. Taking the correlation between decision attributes and condition attributes into account, and combining the idea of attribute reduction with the correspondence analysis method from traditional statistics, this paper proposes a quantitative measure of the dependence between decision attributes and condition attributes, called projection differentiation. Based on this measure, a decision table attribute reduction algorithm is developed. Finally, a simple example is given to illustrate the correctness of the proposed method.
Evidence Acquirement and Combination Method Based on Rough Set in Ordered Information System
FAN Bing-jiao and XU Wei-hua
Computer Science. 2015, 42 (6): 54-56.  doi:10.11896/j.issn.1002-137X.2015.06.012
A novel method of evidence acquisition and combination based on rough sets is proposed by introducing evidence theory into the ordered decision information system. Confidence degrees of evidence are used to calculate approximate conditional probability assignments. Evidence weights are calculated according to attribute significances and the support degrees of evidence. Decisions are obtained by using the combination rule to integrate the approximate conditional probability assignments.
Astronomical Image Registration Combining Information Entropy and SIFT Algorithm
YUE Xin, SHANG Zhen-hong, QIANG Zhen-ping, LIU Hui, FU Xiao-dong and ZHANG Zhi-hua
Computer Science. 2015, 42 (6): 57-60.  doi:10.11896/j.issn.1002-137X.2015.06.013
Astronomical image registration is a key technology in the study of astronomical movement, and there is often slight irregular motion of internal structures in the image. However, in image registration the transformation of the entire image needs to be calculated, and in this case, whether registration is based on statistical characteristics or local features, it is difficult to achieve the desired results. To address this, the image is first divided into several small squares and the entropy of each square is calculated. The square with maximum entropy is then taken as the local sub-image to register. The scale-invariant feature transform and an affine transformation are used to establish correspondences between local sub-images to complete the registration. On the one hand, this method reduces the time needed to build the transform relationship; on the other hand, it ensures registration of the image region with maximum information entropy and effectively improves the registration quality of astronomical images.
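The block-entropy selection step is straightforward to sketch: compute the Shannon entropy of the gray-level histogram of each square and keep the most informative one. The block layout below (non-overlapping squares, row/col return value) is an assumption for illustration:

```python
import math
from collections import Counter

def shannon_entropy(block):
    """Entropy in bits of the gray-level histogram of a flat pixel list."""
    counts = Counter(block)
    total = len(block)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def max_entropy_block(image, size):
    """Split a 2-D image (list of rows) into size x size squares and return
    the (row, col) of the square with maximum entropy: the local sub-image
    that would be handed to SIFT for registration."""
    h, w = len(image), len(image[0])
    best, best_pos = -1.0, (0, 0)
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            block = [image[i][j] for i in range(r, r + size)
                                 for j in range(c, c + size)]
            e = shannon_entropy(block)
            if e > best:
                best, best_pos = e, (r, c)
    return best_pos
```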
Hybrid Algorithm Framework for Sentiment Classification of Chinese Based on Semantic Comprehension and Machine Learning
XU Jian-feng, XU Yuan, XU Yuan-chen, ZHANG Yuan-jian and LIU Qing
Computer Science. 2015, 42 (6): 61-66.  doi:10.11896/j.issn.1002-137X.2015.06.014
In the era of big data, it is a major challenge to determine the sentiment orientation of a large amount of Internet text quickly, accurately and comprehensively. The main sentiment classification methods for text are roughly divided into two categories: semantic comprehension and supervised machine learning. The advantage of the semantic comprehension method is that it can classify text from different fields, but its performance is greatly affected by the variety of word collocations and sentence patterns. The supervised machine learning method can achieve higher classification accuracy, but a classifier that performs well in one field may not be suitable for a new field. This paper proposes a new hybrid framework for Chinese sentiment classification that combines optimized semantic comprehension with machine learning based on features extracted by information gain. Experimental results in two separate fields show that the framework achieves both high classification accuracy and satisfactory portability.
Axiomatic Characterizations of Approximate Concept Lattices in Incomplete Contexts
ZHANG Hui-wen, LIU Wen-qi and LI Jin-hai
Computer Science. 2015, 42 (6): 67-70.  doi:10.11896/j.issn.1002-137X.2015.06.015
This paper proposes an approach to constructing an incomplete context from two complete contexts. Axiomatic characterizations of approximate concept lattices in incomplete contexts are obtained based on those of Wille's concept lattices in formal contexts. A new method of building the approximate concept lattice is then presented, enriching the existing theory of approximate concept lattices.
Dynamic Web Service Composition Based on Discrete Particle Swarm Optimization
ZHANG Yan-ping, JING Zi-hui, ZHANG Yi-wen, QIAN Fu-lan and SHI Lei
Computer Science. 2015, 42 (6): 71-75.  doi:10.11896/j.issn.1002-137X.2015.06.016
With the increasing number of Web services, how to quickly and dynamically choose a service composition that meets the user's QoS requirements from a large number of candidate services is a key issue. To solve this problem, a new DDPSO algorithm based on discrete particle swarm optimization is proposed. First, the cost in time and space is reduced by using Skyline technology to eliminate redundant candidate services. Second, the diversity of particles is maintained and the global search ability is increased by using trimming operators. Finally, a large number of simulation experiments were carried out on real and random data sets, and the results validate the feasibility and efficiency of the algorithm.
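The Skyline pruning step mentioned above keeps only Pareto-optimal candidate services: any service that is dominated on every QoS dimension by another can never appear in an optimal composition. A minimal sketch (assuming lower values are better on every dimension, e.g. latency and cost):

```python
def dominates(a, b):
    """Service a dominates b if a is no worse on every QoS dimension and
    strictly better on at least one (lower is better here)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def skyline(services):
    """Keep only the non-dominated candidate services (the Skyline set),
    shrinking the search space the particle swarm must explore."""
    return [s for s in services
            if not any(dominates(t, s) for t in services)]
```

Benefit-type attributes such as availability would be negated first so that "lower is better" holds uniformly.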
Rapid Logic Function Reduction Algorithm Based on Granular Computing
MA He, ZHANG Yu and CHEN Ze-hua
Computer Science. 2015, 42 (6): 76-78.  doi:10.11896/j.issn.1002-137X.2015.06.017
The multivariable logic function is an important tool in digital circuits for describing the causal relationship between input and output variables. Research on multivariable logic function reduction has important theoretical and practical significance, yet there is no effective means with low complexity. In this work, we first decompose the multivariable logic function into different knowledge spaces and find heuristic information. Then we use the implicit statistical information to reduce the information granules in the different knowledge spaces, and finally construct the results. We thus propose a reduction algorithm for multivariable logic functions (also applicable to functions containing don't-care terms) and implement it in MATLAB. The results are significant.
Knowledge Reduction under View of Lattice
MA Li and MI Ju-sheng
Computer Science. 2015, 42 (6): 79-81.  doi:10.11896/j.issn.1002-137X.2015.06.018
Classic information systems can be viewed as a special case of the lattice structure. We propose new concepts such as knowledge reduction and consistent sets based on rough set theory from the viewpoint of lattices. By defining lower and upper approximation operators, we give two specific reductions. We then present judgement theorems for consistent sets and prove them. These representations reveal the essence of knowledge, and some relevant results on knowledge reduction are obtained.
Three-way Decisions-based Incremental Learning Method for Support Vector Machine
XU Jiu-cheng, LIU Yang-yang, DU Li-na and SUN Lin
Computer Science. 2015, 42 (6): 82-87.  doi:10.11896/j.issn.1002-137X.2015.06.019
The typical incremental learning algorithms for support vector machines (SVM) lose much useful information, and existing incremental learning algorithms for SVM pursue classification accuracy alone. To address these problems, the subjectivity of the loss functions of three-way decisions is introduced into incremental learning for SVM, and a three-way decisions-based incremental learning method for SVM is proposed. Firstly, the conditional probability of three-way decisions is denoted by the ratio of feature distances to center distances. Secondly, the objects in the boundary region of the three-way decisions are regarded as boundary vectors and trained together with the original support vectors and the newly added samples. Finally, simulation experiments were conducted. The results show that the proposed method not only makes full use of the useful information to improve classification accuracy, but also corrects, to some extent, the purely objective character of existing incremental SVM algorithms. Besides, the computation of the conditional probability of three-way decisions is resolved.
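The three-way split underlying this method assigns each object to a positive, boundary or negative region by comparing its conditional probability against two thresholds; boundary-region objects are the ones kept for retraining. A minimal sketch, with the threshold values (0.7, 0.3) as illustrative assumptions:

```python
def three_way_region(p, alpha=0.7, beta=0.3):
    """Assign an object with conditional probability p to a region:
    POS (accept), NEG (reject), or BND (defer). In the SVM setting,
    BND objects are the 'boundary vectors' retrained together with the
    old support vectors and the newly added samples."""
    if p >= alpha:
        return "POS"
    if p <= beta:
        return "NEG"
    return "BND"
```

In decision-theoretic rough sets, alpha and beta are derived from the loss functions rather than fixed by hand, which is the "subjectivity" the abstract refers to.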
Conditional Probability-based Multi-granulation Covering Rough Sets
LIU Cai-hui and TU Xiao-qiang
Computer Science. 2015, 42 (6): 88-92.  doi:10.11896/j.issn.1002-137X.2015.06.020
This paper proposes three kinds of covering-based multi-granulation rough sets by employing the conditional probability between the target concept and the minimal descriptions of elements. Basic properties of the models are investigated and their relationships with some existing covering-based multi-granulation rough sets are disclosed. We find that the proposed models are extensions of existing covering-based multi-granulation rough sets. Finally, the relationships between the three models are explored.
Emotion Analysis of Text Based on Topics and Three-way Decisions
WANG Lei, HUANG He-xiao, WU Bing and ZHENG Ren-er
Computer Science. 2015, 42 (6): 93-96.  doi:10.11896/j.issn.1002-137X.2015.06.021
Affective computing has received much attention and has been a hot research topic in natural language processing and artificial intelligence in recent years. Emotion analysis of text is one of its important parts. A novel method is proposed to perform multi-label emotion classification of text based on topic features and three-way decisions. The multi-label emotions of sentences are judged using a topic emotion model, and then the multi-label emotions of the text are recognized by combining the theory of three-way decisions. Experimental results show that the method is reasonable and effective in recognizing text emotion classes.
Two Similarity Measures between Rough Sets
LIN Juan, MI Ju-sheng and XIE Bin
Computer Science. 2015, 42 (6): 97-100.  doi:10.11896/j.issn.1002-137X.2015.06.022
Rough set theory is emerging as a powerful tool for dealing with the vagueness and uncertainty of knowledge. First, a method of measuring the similarity between two rough sets is proposed in an approximation space, based on their approximation sets. Then an inclusion degree is defined based on the rough membership function and used to define a second similarity measure between rough sets. The properties of the two similarity measures are studied, and finally the relationship between the two measures is discussed.
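To make "similarity based on approximation sets" concrete: a rough set is represented by its lower and upper approximations, and one natural (illustrative, not necessarily the paper's) measure averages the Jaccard indices of those two approximations:

```python
def approximations(blocks, target):
    """Lower/upper approximations of a target set with respect to a
    partition of the universe into equivalence blocks."""
    lower = {x for b in blocks if b <= target for x in b}
    upper = {x for b in blocks if b & target for x in b}
    return lower, upper

def jaccard(x, y):
    """Jaccard index; two empty sets count as identical."""
    return 1.0 if not x and not y else len(x & y) / len(x | y)

def rough_similarity(blocks, a, b):
    """Average Jaccard index of the lower and of the upper approximations:
    1.0 when the two rough sets are indistinguishable in the approximation
    space, 0.0 when both approximation pairs are disjoint."""
    la, ua = approximations(blocks, a)
    lb, ub = approximations(blocks, b)
    return (jaccard(la, lb) + jaccard(ua, ub)) / 2
```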
Adaptive Bacterial Foraging Optimization Algorithm Based on Dynamic Gaussian Mutation and Random One for High Dimensional Functions
ZHANG Xin-ming, YIN Xin-xin and FENG Meng-qing
Computer Science. 2015, 42 (6): 101-106.  doi:10.11896/j.issn.1002-137X.2015.06.023
In view of the shortcomings of bacterial foraging optimization (BFO), such as poor optimization performance and poor generalization when applied to high-dimensional function optimization, an adaptive bacterial foraging optimization algorithm combining dynamic Gaussian mutation and random mutation is proposed in this paper. First, the original elimination-dispersal operator is replaced with a new one that combines random mutation, to add population diversity, with dynamic Gaussian mutation, to raise the convergence rate. Then a chemotactic step mechanism with dynamic and self-adaptive adjustment is adopted. Finally, a new communication mechanism is added to the improved BFO. Simulation results on 14 high-dimensional functions indicate that the proposed algorithm is fast, performs and generalizes well, and outperforms current global optimization algorithms such as SBFO, POLBBO, BFAVP and RABC.
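A "dynamic" Gaussian mutation typically shrinks its noise scale as iterations progress, giving coarse exploration early and fine exploitation late. The linear decay schedule below is a generic sketch; the paper's exact schedule may differ:

```python
import random

def dynamic_gaussian_mutation(x, t, t_max, sigma0=1.0):
    """Mutate a position vector x with zero-mean Gaussian noise whose
    standard deviation decays linearly from sigma0 at t=0 to 0 at t=t_max,
    so mutation steps get finer as the search converges."""
    sigma = sigma0 * (1 - t / t_max)
    return [xi + random.gauss(0, sigma) for xi in x]
```

In the improved elimination-dispersal operator, a bacterium would be either dispersed randomly (diversity) or perturbed with this decaying Gaussian (convergence), chosen probabilistically.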
Analysis of Factors of Effective Teaching Based on Inconsistent Decision Information Systems
CHEN Ya-fei and WANG Xia
Computer Science. 2015, 42 (6): 107-110.  doi:10.11896/j.issn.1002-137X.2015.06.024
Popularizing and applying the idea of effective teaching is one of the important projects of curricular reform; however, many problems, such as unremarkable teaching effects, occur in the practice of effective teaching. Based on the theory of attribute reduction in rough sets, an attribute reduction algorithm is first presented. A table of factors of effective teaching is then constructed from investigations and random sampling, and transformed into a decision information system. Using the attribute reduction algorithm, the factors of effective teaching are analyzed to obtain distribution reducts, maximum distribution reducts, assignment reducts, lower approximation reducts and upper approximation reducts. Finally, the results of attribute reduction and attribute analysis are interpreted to guide the planning of effective teaching and further improve teaching effects.
Novel H.264 Rate Control Method Based on TMN8 Model
LI Na, WANG Zhong-yuan, HE Zheng, FU You-ming and CHANG Jun
Computer Science. 2015, 42 (6): 111-114.  doi:10.11896/j.issn.1002-137X.2015.06.025
To address the drawbacks of the TMN8 rate control model in real-time video communication in practical scenarios, this paper proposes three technical modifications: target bit rate computation, center-oriented perceptual weighting, and layered rate control. Meanwhile, a two-pass motion estimation approach is established to resolve the chicken-and-egg dependency between RDO estimation and rate control, in which the motion information generated in the pre-estimation stage is further used to speed up and optimize the formal encoding process. A high-precision H.264 rate control method employing the TMN8 model is ultimately implemented on the basis of the presented techniques. Simulation results and practical applications in a SIP video conference system demonstrate that the method has a positive effect on the visual experience of real-time video.
Research on Relay Node Placement Considering Load Balancing Based on Greedy Optimization Algorithm in Wireless Sensor Networks
ZHANG Hang, TONG Xiao-jun and WANG Zhu
Computer Science. 2015, 42 (6): 115-119.  doi:10.11896/j.issn.1002-137X.2015.06.026
At present, relay node placement algorithms in WSNs ignore the factor of load balancing, and on this basis we introduce several layout optimization models. We then propose a threshold value method and a mean value method to update each path's load. Finally, we put forward a greedy optimization algorithm that tries to reduce the number of required relay nodes while considering load balancing. Test results show that the greedy optimization algorithm makes the load of the whole network more balanced and is more suitable for practical applications.
Network Selection Algorithm Based on Multi-attribute Decision
ZHANG Yu and LIU Sheng-mei
Computer Science. 2015, 42 (6): 120-124.  doi:10.11896/j.issn.1002-137X.2015.06.027
A network selection algorithm based on multi-attribute decision making for heterogeneous networks is proposed, addressing how to choose and use the right parameters to select the most appropriate network for the characteristics of different services, while considering network load balancing to reduce the number of handoffs and the probability of handoff blocking. The algorithm takes both objective network attributes and user preferences into account, and makes two decisions. The first decision uses the TOPSIS algorithm and considers only the objective network attributes. When the alternative networks score very closely, a second decision is made: AHP is used to calculate the weights, ANP is used to eliminate dependence between the attributes, utility functions are created, and the most appropriate network is selected according to the utility values. Simulation results show that the proposed algorithm takes network load balancing into consideration and effectively reduces the average handoff rate and the average probability of handoff blocking.
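The first-stage TOPSIS ranking is standard and can be sketched directly: normalize the decision matrix, weight it, and score each candidate network by its relative closeness to the ideal solution. This is textbook TOPSIS, not the paper's exact attribute set:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over attributes (columns) by closeness to the
    ideal solution. benefit[j] is True if larger values of attribute j are
    better (e.g. bandwidth) and False if smaller is better (e.g. delay)."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply attribute weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal and anti-ideal points per attribute.
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) if benefit[j]
             else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((v[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))  # closeness in [0, 1]
    return scores
```

When the top closeness scores are nearly tied, the algorithm above falls back to the AHP/ANP utility stage described in the abstract.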
Research on Network Coding to Optimize Performance of TCP in Wireless Networks
GE Wei-min, XU Wen-qing, ZHU Hai-ying, LI Juan and RAN Fang
Computer Science. 2015, 42 (6): 125-130.  doi:10.11896/j.issn.1002-137X.2015.06.028
The emergence of network coding provides a new method for improving TCP performance in wireless networks. J. K. Sundararajan et al. proposed a protocol called TCP/NC that combines network coding with the transmission control protocol and achieves remarkable results in improving TCP performance over wireless networks. However, the problem of synchronizing data transmission with decoding, which may seriously affect performance, is not considered in TCP/NC or its modified protocols. To address this issue, a revised protocol, TCP/NCW, is proposed in this paper. We introduce a decoding window adjustment scheme based on TCP/NC: in TCP/NCW, the decoding window is adjusted according to the decoding time and finally reaches an optimal window size. This scheme ensures the synchronization of data transfer and decoding and can therefore achieve better performance. We use queuing theory to analyze the existence of the optimal decoding window of TCP/NCW. Simulation results with NS2 show that TCP/NCW achieves significant throughput improvement over both TCP/Vegas and TCP/NC in different scenarios, without sacrificing fairness.
Multi-user Video Stream Distributed Scheme with Minimal Distortion Scheduling
JIANG Ying, LI Yan-ping, GUO Shu-xia and LI Wei-ping
Computer Science. 2015, 42 (6): 131-134.  doi:10.11896/j.issn.1002-137X.2015.06.029
To improve the transmission quality of video data streams, reduce their distortion rate, and thereby improve the network utilization efficiency of video streaming, this paper presents a multi-user video stream distribution scheme with minimum-distortion scheduling. The scheme uses a model to capture the total video distortion and establishes a video stream distortion model. By further modeling with an M/G/1 queuing model, a delay distribution correlation function relating video distortion and transmission is obtained. By optimizing against network congestion, the system delay can be constrained, thereby reducing video loss. The optimal solution minimizing routing congestion is obtained by jointly considering routing and rate allocation, minimizing network transmission delays. Comparative analysis and experimental results show that the scheme achieves good results in reducing the video distortion ratio, shortening the latency of video stream transmission and controlling the packet loss rate.
Blind Identification Algorithm of Photorealistic Computer Graphics Based on Local Binary Count
SHEN Xuan-jing, LI Meng-zhen, LV Ying-da and CHEN Hai-peng
Computer Science. 2015, 42 (6): 135-138.  doi:10.11896/j.issn.1002-137X.2015.06.030
Abstract PDF(689KB) ( 188 )   
References | Related Articles | Metrics
Aiming at the problem that the classification features selected by existing blind identification algorithms for photorealistic computer graphics have high dimensionality and poor generalizability, this paper put forward a blind identification algorithm for photorealistic computer graphics based on the local binary count. First, the original image is converted from the RGB color space to the HSV color space. Then, the local binary count matrix is extracted from the HSV image and its down-sampled version, and the normalized histogram of the matrix is calculated. Finally, the histogram is fed as the classification feature into an SVM classifier, implementing blind identification of photorealistic computer graphics. The experimental results show that the algorithm can effectively distinguish photographic images from photorealistic computer graphics. Compared with existing algorithms, it has a higher recognition rate and lower feature dimensionality.
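The local binary count descriptor codes each pixel by the number of its 8 neighbors whose value is not smaller than the center; a minimal sketch of that step and its normalized histogram follows (the HSV conversion, down-sampling and SVM stages are omitted, and the 3x3 neighborhood is an assumption).

```python
import numpy as np

def local_binary_count(channel: np.ndarray) -> np.ndarray:
    """LBC code of each interior pixel: the count (0..8) of the 8
    neighbors whose value is >= the center pixel."""
    c = channel[1:-1, 1:-1]
    counts = np.zeros_like(c, dtype=np.uint8)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            nb = channel[1 + dy:channel.shape[0] - 1 + dy,
                         1 + dx:channel.shape[1] - 1 + dx]
            counts += (nb >= c)          # bool adds as 0/1
    return counts

def lbc_histogram(channel: np.ndarray) -> np.ndarray:
    """Normalized 9-bin histogram of LBC codes, usable as an SVM feature."""
    h = np.bincount(local_binary_count(channel).ravel(), minlength=9)
    return h / h.sum()
```

On a constant image every neighbor equals the center, so all codes are 8 and the histogram concentrates in the last bin.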
Privacy-preserving Data Sharing and Access Control in Participatory Sensing
LIU Shu-bo, WANG Ying and LIU Meng-jun
Computer Science. 2015, 42 (6): 139-144.  doi:10.11896/j.issn.1002-137X.2015.06.031
Abstract PDF(598KB) ( 201 )   
References | Related Articles | Metrics
With the development of mobile devices, participatory sensing has broad application prospects. Because the main users of participatory sensing are people with social attributes, participatory sensing faces many problems not encountered in conventional sensor networks. The security and privacy of users who collect and share data are among the most important issues. A central problem is how a user can obtain all the necessary data in a single transaction while keeping identity privacy and preference privacy when interacting with others; such problems must be solved before participatory sensing can develop further. This paper proposed a privacy-preserving data sharing and access control scheme. The scheme uses bilinear mapping and blind signatures to protect the identity privacy of users, and uses a Bloom filter so that users can obtain all the necessary data in a single transaction while their preferences stay hidden from data providers even when a matching request fails. Finally, performance analysis shows the security and feasibility of the proposed scheme.
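The Bloom filter mechanism can be sketched generically (this is an illustration of the data structure, not the paper's construction): a provider publishes a filter of its item identifiers, and a requester checks all needed identifiers locally, so a failed match reveals nothing about the request.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k SHA-256-derived positions per item."""
    def __init__(self, m: int = 1024, k: int = 3):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _idx(self, item: str):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item: str) -> None:
        for j in self._idx(item):
            self.bits[j] = 1

    def __contains__(self, item: str) -> bool:
        # May yield false positives, never false negatives.
        return all(self.bits[j] for j in self._idx(item))

provider = BloomFilter()
provider.add("temperature/zone-1")    # item IDs are illustrative
provider.add("noise/zone-2")
print("temperature/zone-1" in provider)  # -> True
```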
Risk Analysis and Countermeasure for User Password Authentication in Big Data Environment
FU Yong-gui and ZHU Jian-ming
Computer Science. 2015, 42 (6): 145-150.  doi:10.11896/j.issn.1002-137X.2015.06.032
Abstract PDF(537KB) ( 191 )   
References | Related Articles | Metrics
Authentication is one of the basic services of information security, and password authentication is the most common authentication method, but there is currently much risk in how user passwords are set. On the basis of analyzing current problems in user password setting, we presented an offensive-defensive game model for user password protection in the big data environment, and pointed out that an attacker can improve the ability to crack user passwords with big data analysis technology. To ensure security or reduce risk, users need stronger passwords, identity cross-certification technology, or technology that dynamically tracks user behavior when accessing information systems, as well as measures that raise the attacker's big data analysis cost. A countermeasure was presented, and user data portraits were used to establish an information system user identity cross-certification model in the big data environment. The validity of the model was verified through simulation experiments.
Improved Information Hiding Algorithm Based on Motion Estimation of H.264
WANG Wei, LIN Xi-jie and LI Xiao-qin
Computer Science. 2015, 42 (6): 151-157.  doi:10.11896/j.issn.1002-137X.2015.06.033
Abstract PDF(1622KB) ( 169 )   
References | Related Articles | Metrics
An improved information hiding algorithm based on H.264's quarter-pixel-precision motion estimation was proposed. By modifying the best matching position of each sub-block of a macroblock and using a mapping rule between sub-block positions and binary information, the information is embedded in the sub-block positions. Information extraction, based on the luma pixel interpolation process in the H.264 decoder, does not need the original video and is therefore a blind extraction mechanism. The experimental results demonstrate that the improved information hiding algorithm increases the hiding capacity and decreases the system cost without conspicuously degrading the video quality, and has better overall performance than other algorithms.
Analysis and Improvement of Public Key Cryptosystem Using Random Knapsacks
WANG Qing-long and ZHAO Xiang-mo
Computer Science. 2015, 42 (6): 158-161.  doi:10.11896/j.issn.1002-137X.2015.06.034
Abstract PDF(329KB) ( 247 )   
References | Related Articles | Metrics
A new key recovery attack against Wang et al.'s cryptosystem, which was built using random knapsacks, was proposed in this paper. We found that it is not a truly random knapsack public key cryptosystem: a special super-increasing knapsack is implicitly used in their scheme. By substituting a normal super-increasing knapsack for the special one and hiding it inside a random knapsack, we proposed an improved knapsack public key cryptosystem based on the Chinese remainder theorem. Our scheme remedies the shortcomings of Wang et al.'s scheme and can resist the lattice basis reduction attack and the low-density attack, as well as Shamir's attack.
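For context, the super-increasing knapsack trapdoor that such schemes hide works as sketched below. This is a classic Merkle-Hellman-style modular disguise shown for illustration only; the paper's CRT-based hiding and random-knapsack wrapper differ.

```python
def superincreasing_decrypt(c: int, sk: list) -> list:
    """Greedy decoding against a super-increasing knapsack:
    each weight exceeds the sum of all smaller ones, so subtracting
    from the largest downward recovers the message bits uniquely."""
    bits = []
    for w in reversed(sk):
        if c >= w:
            bits.append(1)
            c -= w
        else:
            bits.append(0)
    return list(reversed(bits))

sk = [2, 3, 7, 14, 30]            # super-increasing private knapsack
q, r = 71, 43                     # modulus > sum(sk), multiplier coprime to q
pk = [(r * w) % q for w in sk]    # disguised public knapsack
msg = [1, 0, 1, 1, 0]
c = sum(b * w for b, w in zip(msg, pk)) % q   # public encryption
r_inv = pow(r, -1, q)                          # trapdoor: undo the disguise
print(superincreasing_decrypt((c * r_inv) % q, sk))  # -> [1, 0, 1, 1, 0]
```

This basic disguise is exactly what lattice basis reduction and low-density attacks break, which is why the improved scheme layers additional randomization on top.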
Integrity Based Security Protection Method for Terminal Computer
LI Qing-bao, ZHANG Ping and ZENG Guang-yu
Computer Science. 2015, 42 (6): 162-166.  doi:10.11896/j.issn.1002-137X.2015.06.035
Abstract PDF(832KB) ( 213 )   
References | Related Articles | Metrics
The terminal computer is the basic unit of network activity and is directly related to the security of the network environment and information systems. An integrity-based security protection method for terminal computers was proposed, which integrates integrity measurement and real-time monitoring technology to ensure the security and credibility of the terminal computer. A protection framework was established that uses the TPM as the hardware trusted base and a virtual machine monitor as the core unit. Integrity measurement is used to establish the basic trusted chain from the hardware platform to the operating system, and integrity-related objects, such as kernel code, data structures, key registers and system status data, are monitored at run time to detect and prevent malicious tampering, ensuring system integrity, security and reliability. A lightweight virtual machine monitor was designed using Intel VT hardware-assisted virtualization, and a prototype system was implemented. Tests show that the method is effective and has little impact on the performance of the terminal computer.
Assessment of Network Security Situation Based on Immune Danger Theory
CHEN Yan-ling, TANG Guang-ming and SUN Yi-feng
Computer Science. 2015, 42 (6): 167-170.  doi:10.11896/j.issn.1002-137X.2015.06.036
Abstract PDF(330KB) ( 203 )   
References | Related Articles | Metrics
In order to assess the network security situation in real time and quantitatively, an assessment method based on immune danger theory was proposed. By studying the immune mechanism, the antigen, antibody and immune cell were defined for the network security problem. On the premise of describing the judgment rules for danger signals, the antigen is recognized accurately. Based on the changes of antibody density under the immune response and immune balance mechanisms, a calculation method for antibody density was given. Finally, by analyzing the relationship between antibody density and danger level, a danger awareness model based on antibody density was built to assess the network security situation in real time and quantitatively. The simulation results show that the antibody density calculated by the proposed method accurately reflects the danger level that the system faces, which can provide effective decision-making support for network management.
Anonymous Identity-based Encryption without Random Oracles
YANG Kun-wei and LI Shun-dong
Computer Science. 2015, 42 (6): 171-174.  doi:10.11896/j.issn.1002-137X.2015.06.037
Abstract PDF(307KB) ( 218 )   
References | Related Articles | Metrics
Most identity-based encryption (IBE) schemes do not provide recipient anonymity. This paper proposed a new anonymous IBE scheme based on the DBDH assumption. The scheme is secure against adaptive chosen plaintext attack. We analyzed the anonymity of the scheme and verified its correctness and security. Our scheme is superior in recipient anonymity and does not use pairing computations in encryption. Compared with Gentry's scheme, ours is based on a more common hardness assumption and fills the gap of anonymous IBE under the DBDH assumption.
Research on Cache Replacement Model Based on Multi-request Mode under Hybrid Architecture Model
CAO Min and LIU Wen-zhong
Computer Science. 2015, 42 (6): 175-180.  doi:10.11896/j.issn.1002-137X.2015.06.038
Abstract PDF(517KB) ( 308 )   
References | Related Articles | Metrics
To meet the needs of multiple access modes and multiple applications, this paper introduced two features, the average access interval and the recent access interval, into the GDSF algorithm to enhance its adaptability. A cache structure model was built with a double-keyword indexing mechanism to index cached objects quickly and reduce system overhead, and the suffix blocks of big files are prefetched to increase the number of data objects in the cache. Against the background of the target application, comparative experiments with the traditional method show that this method comprehensively improves the average request waiting time, the cache object hit rate and the byte hit ratio, and improves the adaptability of the cache replacement algorithm to multi-type, multi-request-mode applications.
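The baseline GDSF priority that the paper extends is K(f) = L + F(f)·C(f)/S(f), where L is an aging clock, F the access frequency, C the miss cost and S the object size. A minimal replacement sketch of that baseline follows; the two interval features added by the paper are not modeled here.

```python
class GDSFCache:
    """Minimal GDSF replacement: priority = clock + freq * cost / size."""
    def __init__(self, capacity: float):
        self.capacity, self.used, self.clock = capacity, 0.0, 0.0
        self.entries = {}  # key -> [freq, cost, size, priority]

    def _prio(self, freq, cost, size):
        return self.clock + freq * cost / size

    def access(self, key, cost: float = 1.0, size: float = 1.0) -> bool:
        """Return True on a hit, False on a miss (object is then cached)."""
        if key in self.entries:
            e = self.entries[key]
            e[0] += 1                            # bump frequency
            e[3] = self._prio(e[0], e[1], e[2])  # refresh priority
            return True
        while self.used + size > self.capacity and self.entries:
            victim = min(self.entries, key=lambda k: self.entries[k][3])
            self.clock = self.entries[victim][3]  # aging: raise the clock
            self.used -= self.entries[victim][2]
            del self.entries[victim]
        self.entries[key] = [1, cost, size, self._prio(1, cost, size)]
        self.used += size
        return False
```

The aging clock ensures that objects that were once popular but are no longer accessed eventually lose to fresh entries.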
Linux System Dual Threshold Scheduling Algorithm Based on Characteristic Scale Equilibrium
CUI Yong-jun and ZHANG Yong-hua
Computer Science. 2015, 42 (6): 181-184.  doi:10.11896/j.issn.1002-137X.2015.06.039
Abstract PDF(372KB) ( 174 )   
References | Related Articles | Metrics
In the design and application of an embedded Linux operating system, the operating system runs on different hardware platforms after porting, and it needs an effective task scheduling algorithm for process management and memory management to improve the operational efficiency of the system. A dual-threshold scheduling algorithm for Linux based on characteristic scale equilibrium was proposed. The kernel structure of embedded Linux was analyzed and a system task scheduling model was constructed. According to classifications of information such as task arrival rate and execution time, the scale features are extracted. In the global task scheduling center, all task data are integrated and input to the system scheduler, the scale optimization objective function is obtained, and balanced processing of the feature scale is performed. The characteristic time axis is divided into adjacent but non-overlapping task-matching smoothing windows, and a dual-threshold trade-off decision is used for Linux task scheduling. The simulation results show that the new algorithm is more efficient in embedded Linux task scheduling, achieves better CPU utilization, and outperforms the traditional algorithm overall.
Research on Architecture Reconfiguration of Dynamic Self-adaptive Software
CHEN Xiang-dong
Computer Science. 2015, 42 (6): 185-188.  doi:10.11896/j.issn.1002-137X.2015.06.040
Abstract PDF(433KB) ( 216 )   
References | Related Articles | Metrics
Current research on adaptive software focuses mainly on environment perception, quality-of-service modeling, programming languages and so on, and thus lacks an in-depth account of the adaptation process and its principles. This paper studied adaptation from the perspective of software architecture and put forward a method for architecture reconfiguration in the dynamic self-adaptation process. The method adjusts the architecture by adding, deleting and updating components and connectors. An experiment on dynamic self-adaptive adjustment of server pool size in a cloud computing setting shows that the dynamic adaptation can improve system credibility and reduce operating costs.
Decoding-directed Dynamic Binary Translation Optimization
DONG Wei-yu, WANG Rui-min, QI Xu-yan and ZENG Yun
Computer Science. 2015, 42 (6): 189-192.  doi:10.11896/j.issn.1002-137X.2015.06.041
Abstract PDF(449KB) ( 283 )   
References | Related Articles | Metrics
The paper introduced a decoding-directed lightweight optimization technique for dynamic binary translation. In the decoding phase, it extracts high-level semantics from source instructions and attaches appropriate annotations to them according to the context; in the translation phase, it emits optimized native instructions directly using the annotation information. The technique can identify most block-level optimization opportunities of a dynamic binary translation system and remove redundancies generated by load/store operations, precise exception support and flags handling. Evaluation demonstrates that, taking QEMU as the reference, the translation overhead of the cross-platform x86 system virtual machine ARCH-BRIDGE using this technique decreases by 53%, the translation block size decreases by 78%, and the numbers of load and store operations decrease by 50% and 21% respectively.
HY-COCA:A Hybrid-data-distribution-aware Way to Detect Correlation over Bi-dimensional Data Space
CAO Wei, WANG Qiu-yue, QIN Xiong-pai and WANG Shan
Computer Science. 2015, 42 (6): 193-203.  doi:10.11896/j.issn.1002-137X.2015.06.042
Abstract PDF(1487KB) ( 175 )   
References | Related Articles | Metrics
Hybrid data distribution between two attributes means that different data sub-regions exhibit different correlated associations. For example, in a distribution of sale amounts over different cities, a semi-independent distribution is observed for lower sale amounts, but for higher sale amounts the two attributes present a soft functional dependency. Previous research on automatic detection of associations focused on deducing a single overall measure of association over a two-dimensional distribution and was therefore unable to address the hybrid data distribution problem. In statistical analysis, such sub-regions with particular data associations deserve attention. This paper proposed a new method, HY-COCA, to detect data associations both globally and locally, finding those sub-regions with special data associations. We conducted experiments on both synthetic and benchmark data, and the results verify the effectiveness of HY-COCA.
Modeling and Optimization for Multi-objective Dynamic Vehicle Routing Problem
ZHOU Hui, ZHOU Liang and DING Qiu-lin
Computer Science. 2015, 42 (6): 204-209.  doi:10.11896/j.issn.1002-137X.2015.06.043
Abstract PDF(488KB) ( 519 )   
References | Related Articles | Metrics
For the dynamic vehicle routing problem in logistics distribution, this paper built a multi-objective dynamic mathematical programming model that synthesizes dynamic demands, road network effects, vehicle sharing, time windows and customer satisfaction, and thus describes modern logistics distribution better. Meanwhile, the paper put forward a two-phase solving strategy. In the first phase, multi-objective hybrid particle swarm optimization is adopted to obtain preliminary Pareto solutions: the algorithm uses a modified particle state updating strategy and a simulated annealing operation to improve the searching performance of the particles, and uses an adaptive grid technique to maintain the dispersion of solutions. In the second phase, greedy insertion and variable neighborhood search are applied to adjust routes according to changes in demand. The experimental results show that the two-phase algorithm has better exploration ability in the solution space, converges rapidly to the global optimum, and satisfies the real-time requirement.
Improved Algorithms for Attribute Reduction Based on Simple Binary Discernibility Matrix
WANG Ya-qi and FAN Nian-bai
Computer Science. 2015, 42 (6): 210-215.  doi:10.11896/j.issn.1002-137X.2015.06.044
Abstract PDF(731KB) ( 158 )   
References | Related Articles | Metrics
In attribute reduction algorithms based on the simple binary discernibility matrix, elimination means that redundant attributes are excluded from the reduct set one by one until what remains is a minimum reduct. Traditional elimination based on the simple binary discernibility matrix has two shortcomings: low efficiency and inability to obtain the optimal solution. In view of these problems, an improved attribute reduction algorithm based on the simple binary discernibility matrix was presented. First, the decision table is simplified. Second, an improved algorithm is used to simplify the binary discernibility matrix. Finally, for attribute reduction, we presented a new measure that can delete multiple redundant attributes at a time and proved its feasibility. Moreover, the experimental results confirm the correctness of the method.
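The binary discernibility matrix underlying such algorithms can be sketched as follows (standard rough-set construction; the paper's simplification and multi-attribute deletion steps are not reproduced): each row is a 0/1 vector over condition attributes for a pair of objects with different decisions, and an attribute that is a row's only 1 must belong to the core.

```python
def binary_discernibility_rows(objects, decisions):
    """Binary discernibility rows of a decision table.
    One row per pair of objects with different decisions:
    bit j is 1 iff condition attribute j discerns the pair."""
    rows = set()
    n = len(objects)
    for i in range(n):
        for j in range(i + 1, n):
            if decisions[i] != decisions[j]:
                row = tuple(int(a != b)
                            for a, b in zip(objects[i], objects[j]))
                if any(row):          # drop all-zero (inconsistent) rows
                    rows.add(row)
    return rows

# Tiny table: two condition attributes, one decision column.
rows = binary_discernibility_rows([(0, 0), (0, 1), (1, 1)], [0, 1, 1])
print(sorted(rows))  # -> [(0, 1), (1, 1)]
```

Here the row `(0, 1)` has a single 1 at attribute 1, so attribute 1 is a core attribute that no reduct can drop.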
Method to Determine Index Weight Based on S Curve
HE Feng, YAN Xue-feng and ZHOU Yong
Computer Science. 2015, 42 (6): 216-219.  doi:10.11896/j.issn.1002-137X.2015.06.045
Abstract PDF(416KB) ( 462 )   
References | Related Articles | Metrics
In the comprehensive evaluation of a dynamic system with multiple decision makers and multiple indexes, current weighting methods seldom consider the credibility of the decision makers or the change of index weights with the variation of the evaluation object. A parameter weighting model based on a confidence-adjustable S-curve index was proposed. Based on the expert scoring method, expert authority and the difference coefficient among experts are introduced to determine expert confidence, which enhances the credibility of expert evaluation. An adjustable parameter is used to calibrate the confidence-based weighting model, which resolves the nonlinear relation between the evaluation index weight and the dynamic system and increases the flexibility of the weighting model. The resulting index weights accord with objective reality, which verifies the feasibility and practicability of this method and provides a new way to determine index weights.
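An adjustable S curve of this kind is typically a logistic function; the sketch below uses illustrative parameter names (`k` for steepness, `x0` for the inflection point) and is not the paper's exact calibration.

```python
import math

def s_curve_weight(score: float, k: float = 1.0, x0: float = 0.0) -> float:
    """Adjustable logistic S curve mapping a raw index score into (0, 1).
    k controls steepness, x0 the inflection point."""
    return 1.0 / (1.0 + math.exp(-k * (score - x0)))

def normalized_weights(scores, k: float = 1.0, x0: float = None):
    """Map raw index scores through the S curve, then normalize to sum 1."""
    x0 = sum(scores) / len(scores) if x0 is None else x0
    raw = [s_curve_weight(s, k, x0) for s in scores]
    total = sum(raw)
    return [w / total for w in raw]

print(normalized_weights([1.0, 2.0, 3.0], k=2.0))
```

Because the curve is monotone, higher scores always receive higher weights, while `k` controls how sharply mid-range differences are amplified.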
Approach of Cross-domain Word Sentiment Orientation Identification on Reviews
WU Fei, ZHANG Yu-hong and HU Xue-gang
Computer Science. 2015, 42 (6): 220-222.  doi:10.11896/j.issn.1002-137X.2015.06.046
Abstract PDF(597KB) ( 190 )   
References | Related Articles | Metrics
Identifying the sentiment orientation of words is important for text sentiment classification. Existing methods identify the sentiment orientation of target words according to the similarity between the target words and paradigm words, under the assumption that suitable paradigm words exist. However, the emotional ambiguity of paradigm words across corpora affects the results of sentiment classification. We proposed a novel method for cross-domain word sentiment orientation identification based on extracting paradigm words and eliminating their ambiguity in the given corpus. More specifically, we first extracted candidate paradigm words automatically from a labeled corpus, and then filtered out the paradigm words with emotional ambiguity based on the co-occurrence matrix of paradigm words and target words in the target domain. Finally, by computing the similarity between paradigm words and target words, we identified the sentiment orientation of the target words. The experimental results demonstrate the effectiveness of our method.
Categorical Incremental Data Labeling Algorithm
LI Yan-hong, LI De-yu and WANG Su-ge
Computer Science. 2015, 42 (6): 223-227.  doi:10.11896/j.issn.1002-137X.2015.06.047
Abstract PDF(400KB) ( 194 )   
References | Related Articles | Metrics
Data labeling has become a simple but efficient way to improve the efficiency of incremental data clustering: each newly arriving data point is assigned to the cluster closest to it. One of the main difficulties in categorical data analysis, however, is the lack of an appropriate way to define the similarity between a data point and a cluster. To overcome this difficulty, we defined the representative of a cluster as the list of all attribute values, with their frequencies, in each attribute domain of the cluster, and then defined a point-cluster dissimilarity measure by means of the change of information entropy. Based on this dissimilarity measure, we designed a categorical incremental data labeling algorithm that allocates each unlabeled data point to the appropriate cluster. Comparative experiments on several public data sets and a text corpus show that the proposed algorithm achieves higher labeling accuracy and less execution time, as well as better scalability.
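The entropy-change idea can be sketched as follows (an illustrative reading of such a measure, not the paper's exact formula): for each attribute, compare the value-frequency entropy of the cluster before and after the point joins; a point that fits the cluster raises the entropy less.

```python
from collections import Counter
import math

def entropy(counter: Counter, total: int) -> float:
    return -sum((c / total) * math.log2(c / total) for c in counter.values())

def dissimilarity(point: tuple, cluster: list) -> float:
    """Change in summed per-attribute entropy when `point` joins `cluster`
    (a list of categorical tuples). Smaller change = better fit."""
    n = len(cluster)
    delta = 0.0
    for j, v in enumerate(point):
        cnt = Counter(row[j] for row in cluster)   # cluster representative
        before = entropy(cnt, n)
        cnt[v] += 1
        after = entropy(cnt, n + 1)
        delta += after - before
    return delta

cluster = [("red", "round"), ("red", "round"), ("red", "oval")]
print(dissimilarity(("red", "round"), cluster) <
      dissimilarity(("blue", "square"), cluster))  # -> True
```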
Active Learning in Chinese Word Segmentation Based on Nearest Neighbor
LIANG Xi-tao and GU Lei
Computer Science. 2015, 42 (6): 228-232.  doi:10.11896/j.issn.1002-137X.2015.06.048
Abstract PDF(510KB) ( 203 )   
References | Related Articles | Metrics
As the basis of Chinese information processing, Chinese word segmentation (CWS) plays a very important role. To address the lack of training samples and the high cost of obtaining large numbers of labeled samples, a new active learning method based on the nearest neighbor was proposed. The method adopts CRFs as the basic framework and uses the proposed active learning sampling strategy to select the most useful instances to annotate from a large number of unlabeled samples. The annotated instances are then added to the labeled set, and the segmenter is trained on it. Finally, the method was tested on the PKU, MSR and Shanxi University corpora and compared with the uncertainty sampling strategy. The experimental results show that the proposed active learning selection strategy can select more valuable samples, reduce the cost of manual annotation effectively, and improve segmentation accuracy.
Fault Diagnosis of High-speed Rail Based on Clustering Ensemble
CHEN Yun-feng, WANG Hong-jun and YANG Yan
Computer Science. 2015, 42 (6): 233-238.  doi:10.11896/j.issn.1002-137X.2015.06.049
Abstract PDF(782KB) ( 233 )   
References | Related Articles | Metrics
Clustering ensemble combines the results of several independent clusterings so as to obtain an optimal clustering of the original data. It can reduce the influence of noise and outliers on the clustering result, and at the same time improve the robustness and stability of the results. This paper described fault diagnosis of high-speed rail based on clustering ensemble in three stages. First, we transformed the original simulation data from the time domain to the frequency domain through the discrete Fourier transform, and used different feature selection algorithms for data preprocessing. Second, we analyzed the data with four different clustering algorithms: AP, FCM, EM-Gaussian and K-means. Finally, we used three different clustering ensemble models, HGPA, MCLA and CSPA, to integrate the results of the clustering algorithms. This paper applied the clustering ensemble algorithm to fault diagnosis of high-speed rail for the first time. The experimental results show that this method performs better than a single clustering algorithm and can diagnose high-speed rail faults more accurately and effectively.
WFCD-based Rough Set One-class Support Vector Machine
TIAN Hao-bing, ZHU Jia-gang and LU Xiao
Computer Science. 2015, 42 (6): 239-242.  doi:10.11896/j.issn.1002-137X.2015.06.050
Abstract PDF(363KB) ( 322 )   
References | Related Articles | Metrics
The rough one-class support vector machine (ROCSVM) is a single-class SVM. It defines upper- and lower-approximation hyperplanes via a kernel function mapping, which makes the training samples influence the decision hyperplane adaptively according to their position within the rough margin. Since ROCSVM has only positive samples, fully exploiting the features of the training samples is important for improving its classification performance. Thus, we presented a weighted feature-contribution-degree (WFCD) based Gaussian kernel (λ-RBF). First, principal component analysis (PCA) is applied to the training set to obtain the eigenvectors sorted by eigenvalue, and then the kernel function is constructed from this vector set so that components with larger eigenvalues have greater effect in the kernel. Experimental results on UCI standard data sets and simulation data show that, compared with the general RBF-based ROCSVM, the λ-RBF based ROCSVM has better generalization and a higher recognition rate.
Nave Parallel LDA
GAO Yang, YAN Jian-feng and LIU Xiao-sheng
Computer Science. 2015, 42 (6): 243-246.  doi:10.11896/j.issn.1002-137X.2015.06.051
Abstract PDF(574KB) ( 176 )   
References | Related Articles | Metrics
Parallel latent Dirichlet allocation (LDA) spends a lot of time in computation and communication, which makes training an LDA model slow and limits its wide application. This paper proposed a naïve parallel LDA algorithm with two methods to solve this problem: one adds an impact factor for each word and sets a threshold to reduce the size of the corpus, and the other reduces the communication frequency to decrease communication time. Experimental results show that the optimized distributed LDA can reduce the total training time by 36% and improve the speedup ratio, while the loss of accuracy is below 1%.
Self-adaptive Differential Evolution with Multi-mutation Strategies
ZHOU Ya-lan and XU Zhi
Computer Science. 2015, 42 (6): 247-250.  doi:10.11896/j.issn.1002-137X.2015.06.052
Abstract PDF(415KB) ( 442 )   
References | Related Articles | Metrics
The performance of the differential evolution (DE) algorithm often depends heavily on the mutation strategy and control parameters. A novel self-adaptive differential evolution with multiple mutation strategies, called SMSDE, was proposed. SMSDE designs a strategy pool consisting of several kinds of mutation strategy and applies self-adaptive schemes to the two main control parameters. To verify its performance, SMSDE was compared with 6 original DEs and 4 advanced DE variants on the CEC2013 benchmark functions. The experimental results show that SMSDE is superior to the original DEs and competitive with the current advanced DE variants.
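Two classic members of such a strategy pool are DE/rand/1 and DE/best/1; a minimal sketch follows. The pool composition, success-based selection and parameter-adaptation rule shown here are illustrative assumptions, not SMSDE's exact design.

```python
import random

def de_rand_1(pop, i, F):
    """DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3), r1..r3 != i."""
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    return [a + F * (b - c) for a, b, c in zip(pop[r1], pop[r2], pop[r3])]

def de_best_1(pop, i, F, best):
    """DE/best/1 mutation: v = x_best + F * (x_r1 - x_r2)."""
    r1, r2 = random.sample([j for j in range(len(pop)) if j != i], 2)
    return [a + F * (b - c) for a, b, c in zip(pop[best], pop[r1], pop[r2])]

def pick_strategy(successes):
    """Roulette selection over past success counts (Laplace-smoothed),
    so strategies that recently produced improvements are favored."""
    weights = [s + 1 for s in successes]
    return random.choices(range(len(weights)), weights=weights, k=1)[0]

pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(6)]
F = 0.1 + 0.9 * random.random()   # simple randomized scale factor
mutant = de_rand_1(pop, 0, F)
```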
Incremental Updating Algorithm for Attribute Reduction Based on Improved Discernibility Matrix
LONG Hao and XU Chao
Computer Science. 2015, 42 (6): 251-255.  doi:10.11896/j.issn.1002-137X.2015.06.053
Abstract PDF(383KB) ( 199 )   
References | Related Articles | Metrics
Attribute reduction algorithms based on the discernibility matrix consume much time and space, updating of the attribute core and attribute reduct in rough sets is slow, and incremental updating algorithms for attribute reduction are lacking. To solve these problems, this paper proposed an incremental updating algorithm for attribute reduction based on an improved discernibility matrix. When the algorithm updates the discernibility matrix, it only needs to insert a row and a column, or delete a row and modify the corresponding column, which effectively improves the updating efficiency of the core and the attribute reduct. We analyzed the relationship between a new object x and the objects of the original decision system, and gave the incremental updating algorithm for the attribute reduct. Theoretical and experimental analysis shows that the proposed algorithm improves the updating efficiency of attribute reduction and significantly reduces time and space complexity.
Research of Post-processing of FN Algorithm Results in Social Networking
NI Han and BAI Qing-yuan
Computer Science. 2015, 42 (6): 256-261.  doi:10.11896/j.issn.1002-137X.2015.06.054
Abstract PDF(991KB) ( 195 )   
References | Related Articles | Metrics
In recent years, lateral comparison of clustering algorithms in the field of complex networks has attracted much attention. Among them, modularity-based algorithms are widely used, with modularity serving as an evaluation measure for a clustering; the fast Newman algorithm based on modularity (Fast-Newman, FN for short) is the most prominent. Many related studies build on the FN algorithm, but most focus on operator improvement, extension to new application fields, and so on, while work on the results of the FN algorithm tends toward evaluation, measurement and summary. This study focuses on post-processing the classification results of the FN algorithm. It identifies a common feature of the misclassified nodes produced by FN, and proposes three different solutions that bring the final results closer to the actual situation and achieve a better clustering. In some cases, the clustering accuracy reaches 100%.
Accelerated Structure Learning for General Multi-dimensional Bayesian Network Classifier
FU Shun-kai, LI Zhi-qiang and Sein Minn
Computer Science. 2015, 42 (6): 262-267.  doi:10.11896/j.issn.1002-137X.2015.06.055
Abstract PDF(510KB) ( 323 )   
References | Related Articles | Metrics
The general multi-dimensional Bayesian network classifier (GMBNC) is a kind of Bayesian network (BN) tailored for multi-dimensional classification; it contains only the features necessary for prediction. To avoid global search, a novel algorithm called DOS-GMBNC was proposed. It inherits the framework of the existing IPC-GMBNC but conducts the search in a dynamic order by making use of the underlying topology information. Experimental studies indicate the effectiveness and efficiency of DOS-GMBNC: it outputs networks of the same quality as the PC and IPC-GMBNC algorithms while bringing a considerable reduction in computational complexity, e.g. about 89% and 45% less than PC and IPC-GMBNC respectively on a 100-node network problem.
Trusted Scheduling of Dependent Tasks Using Genetic-annealing Algorithm under Grid Environment
WANG Hong-feng and ZHU Hai
Computer Science. 2015, 42 (6): 268-275.  doi:10.11896/j.issn.1002-137X.2015.06.056
Abstract PDF(717KB) ( 183 )   
References | Related Articles | Metrics
Given the security challenges of the dependent task-scheduling problem in a heterogeneous grid environment, and considering both the inherent security and the behavioral security of grid nodes, we built a function to measure each node's identity reliability and a strategy to assess its behavioral credibility. Meanwhile, to relate the security requirements of each task to the security attributes of the nodes, an affiliation function of security benefit was defined, yielding a security-trusted task-scheduling model for the grid environment. On this basis, with the task requirement model presented and the topology model of grid resources introduced, a new security-trusted grid task-scheduling model was proposed. To solve this model with a genetic algorithm, we designed several new genetic operators, including an improved crossover operator, a within-individual crossover operator, and a migration operator used as the mutation operator; to increase search precision, simulated annealing was also incorporated, resulting in a new genetic-annealing algorithm. The simulation results show that, compared with similar algorithms under the same conditions, the proposed algorithm has better overall performance in terms of schedule length, security trust value, convergence and other aspects.
Pseudo Relevance Feedback Based on Maximal Marginal Relevance
YAN Rong and GAO Guang-lai
Computer Science. 2015, 42 (6): 276-278.  doi:10.11896/j.issn.1002-137X.2015.06.057
Abstract PDF(334KB) ( 294 )   
The performance of PRF (pseudo-relevance feedback) depends heavily on the quality of the 'pseudo-relevant' documents. To improve PRF robustness, this paper proposed a novel approach named RMMR (Reorder Maximal Marginal Relevance). It reorders the first-pass retrieval results so that the top-k documents are maximally relevant to the query while minimally similar to one another. Finally, query clarity is used to filter the set of expanded queries for the second pass. Evaluation of this proposal shows substantial improvements in PRF robustness.
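The maximal-marginal-relevance reordering at the heart of RMMR can be sketched as follows: each next document maximizes a trade-off between query relevance and redundancy with already-selected documents. The similarity inputs and the lambda parameter here are illustrative assumptions, not the paper's exact formulation.

```python
def mmr_rerank(query_sim, doc_sim, k, lam=0.7):
    """Reorder documents by Maximal Marginal Relevance.

    query_sim: {doc_id: similarity to the query}
    doc_sim:   {(d1, d2): pairwise document similarity (symmetric, sparse)}
    lam:       trade-off; lam=1 is pure relevance ranking.
    """
    candidates = set(query_sim)
    selected = []
    while candidates and len(selected) < k:
        def score(d):
            # redundancy = similarity to the closest already-selected document
            redundancy = max((doc_sim.get((d, s), doc_sim.get((s, d), 0.0))
                              for s in selected), default=0.0)
            return lam * query_sim[d] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With lam below 1, a document that is near-duplicate of an already-selected one is pushed down the ranking even if it is highly relevant, which is exactly the diversification PRF needs to avoid feeding redundant pseudo-relevant documents into query expansion.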
Cloud Computing Resource Scheduling in Mobile Internet Based on Particle Swarm Optimization Algorithm
ZHOU Li-juan and WANG Chun-ying
Computer Science. 2015, 42 (6): 279-281.  doi:10.11896/j.issn.1002-137X.2015.06.058
Abstract PDF(323KB) ( 328 )   
Given the mobility of mobile internet users, the concept of a mobile cloud is used to share computing tasks. The particle swarm algorithm can effectively locate computing resources in the mobile internet, thereby improving the allocation rate of each computing resource and the overall efficiency of cloud computing. This article applied the particle swarm optimization (PSO) algorithm, taking users' quality of service into consideration, to schedule heterogeneous network computing resources efficiently and to complete a cloud computing resource-scheduling scheme for computation-intensive scientific workloads. Simulation results show that the proposed strategy can improve both the speed of resource scheduling and the efficiency of cloud computing.
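A minimal PSO kernel of the kind such schedulers are built on is sketched below; the inertia and acceleration coefficients, search bounds, and the toy objective are illustrative assumptions, not the paper's cost model (which would score quality of service and resource load instead).

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over R^dim with a standard global-best particle swarm."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```

For resource scheduling, `f` would map a particle position to a scheduling cost (e.g. weighted completion time and QoS penalty), with positions decoded to discrete resource assignments.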
Attribute Reduction Based on Asymmetric Variable Neighborhood Rough Set
HUI Jing-li, PAN Wei, WU Kang-kang and ZHOU Xiao-ying
Computer Science. 2015, 42 (6): 282-287.  doi:10.11896/j.issn.1002-137X.2015.06.059
Abstract PDF(501KB) ( 166 )   
After analyzing the disadvantages of the neighborhood rough set model, we proposed an asymmetric variable neighborhood rough set model and a new heuristic attribute reduction algorithm based on it, with global attribute significance as the heuristic condition. Experimental results show that, under the asymmetric variable neighborhood rough set model, the algorithm produces smaller attribute reducts and achieves better classification accuracy.
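The quantity that drives such heuristic reduction is the dependency degree: the fraction of samples whose neighborhood is label-consistent under a chosen attribute subset. A minimal sketch of the standard (symmetric, fixed-delta) version is given below; the asymmetric variable-neighborhood refinement of the paper, and the Euclidean metric and delta used here, are assumptions for illustration.

```python
def dependency(data, labels, attrs, delta):
    """Dependency degree of the decision on attribute subset `attrs`
    under a delta-neighborhood rough set (Euclidean distance)."""
    def dist(i, j):
        return sum((data[i][a] - data[j][a]) ** 2 for a in attrs) ** 0.5
    n = len(data)
    positive = 0
    for i in range(n):
        nbr = [j for j in range(n) if dist(i, j) <= delta]
        # sample i belongs to the positive region when every neighbour
        # shares its decision label
        if all(labels[j] == labels[i] for j in nbr):
            positive += 1
    return positive / n
```

A greedy reduction algorithm would repeatedly add the attribute whose inclusion raises this dependency the most, stopping when the full-attribute dependency is reached.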
Text Detection Method Based on Active Contour Model
XU Xiao and GU Lei
Computer Science. 2015, 42 (6): 288-292.  doi:10.11896/j.issn.1002-137X.2015.06.060
Abstract PDF(1182KB) ( 201 )   
To detect text in images with varying backgrounds, a text detection method based on active contour models was proposed. Before detection, Sobel-Laplacian and Gaussian-Laplacian operators are used to sharpen edges and smooth noise; an iterative algorithm then repeatedly enlarges or shrinks the contour until the final contour is reached, ruling out non-text blocks at the end. The proposed method can eventually box each individual character, which benefits subsequent segmentation and recognition. Experiments show that the method can effectively detect text in images.
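The Gaussian-Laplacian preprocessing step mentioned above can be sketched as a Gaussian blur (noise smoothing) followed by a Laplacian (edge enhancement). The 3x3 kernels and the separable two-pass arrangement are illustrative assumptions, not the paper's exact filters.

```python
def convolve3(img, k):
    """Convolve a 2-D list image with a 3x3 kernel (border left at zero)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(k[i][j] * img[y - 1 + i][x - 1 + j]
                            for i in range(3) for j in range(3))
    return out

GAUSS = [[1/16, 2/16, 1/16], [2/16, 4/16, 2/16], [1/16, 2/16, 1/16]]
LAPLACE = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]

def gaussian_laplacian(img):
    """Smooth with a Gaussian, then enhance edges with a Laplacian;
    flat regions map to ~0, intensity steps to strong responses."""
    return convolve3(convolve3(img, GAUSS), LAPLACE)
```

The Laplacian response is near zero in homogeneous regions and large at intensity steps, which is why its zero crossings and peaks mark candidate text edges for the contour to lock onto.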
New Architecture for Extraction of 3D Model Features Based on Probabilistic Density Estimation of Local Surface Features
SUN Ting, ZHANG Jin-hua and GENG Guo-hua
Computer Science. 2015, 42 (6): 293-295.  doi:10.11896/j.issn.1002-137X.2015.06.061
Abstract PDF(308KB) ( 177 )   
Feature extraction is a key issue for 3D model retrieval. A new architecture for extracting 3D model features using probabilistic density estimation of local surface features was proposed. Given the set of 3D local geometrical features, the local feature density at a chosen target point is evaluated using probabilistic density estimation methods, and the 3D model is described by the feature vector comprising all local feature density values. Both univariate and multivariate descriptors of the 3D mesh model support the implementation of 3D model retrieval. The results show that the retrieval performance of the method is better than that of statistical feature extraction methods.
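The density-estimation step can be sketched with a one-dimensional Gaussian kernel density estimate evaluated on a fixed grid of target points; the kernel choice, bandwidth, and grid are illustrative assumptions (the paper's multivariate descriptor would use multi-dimensional local features).

```python
import math

def kde_density(samples, x, bandwidth):
    """Gaussian kernel density estimate at x from 1-D local-feature samples."""
    n = len(samples)
    coef = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    return coef * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                      for s in samples)

def density_descriptor(local_feats, grid, bandwidth=0.1):
    """Describe a model by the KDE evaluated at fixed target points;
    two models are then compared by comparing these vectors."""
    return [kde_density(local_feats, x, bandwidth) for x in grid]
```

Because every model is reduced to density values on the same grid, descriptors are directly comparable regardless of how many local features each model contributes.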
Image Edge Detection Based on Fractal Dimension
GUAN Qing and ZHANG Wei
Computer Science. 2015, 42 (6): 296-298.  doi:10.11896/j.issn.1002-137X.2015.06.062
Abstract PDF(761KB) ( 175 )   
In medical applications, images of red blood cells require the extraction of relevant features such as cell area, roundness and count. To extract such features, this paper presented an image edge detection method based on fractal dimension. The method builds on the fractional Brownian random field model: it maps gray space into fractal dimension space by computing the fractal dimension of every pixel, and then completes edge detection in the fractal dimension space. Experimental results show that, with the best window size, this method can highlight the features of medical cell images and has strong noise resistance.
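Under the fractional Brownian model, the mean absolute intensity difference at lag r scales as r^H, and the fractal dimension of the surface is D = 3 - H. A per-window estimate can be sketched as below; the lag set, window handling, and log-log regression details are illustrative assumptions.

```python
import math

def fractal_dimension(window):
    """Estimate the fractal dimension of a grayscale window via the
    fractional Brownian model: E|I(p+r) - I(p)| ~ r^H, D = 3 - H."""
    h, w = len(window), len(window[0])
    scales, diffs = [], []
    for r in (1, 2, 3):
        total, count = 0.0, 0
        for y in range(h):                       # horizontal lags
            for x in range(w - r):
                total += abs(window[y][x + r] - window[y][x])
                count += 1
        for y in range(h - r):                   # vertical lags
            for x in range(w):
                total += abs(window[y + r][x] - window[y][x])
                count += 1
        if count and total > 0:
            scales.append(math.log(r))
            diffs.append(math.log(total / count))
    # least-squares slope of log E|dI| against log r gives H
    n = len(scales)
    mx, my = sum(scales) / n, sum(diffs) / n
    H = (sum((sx - mx) * (sy - my) for sx, sy in zip(scales, diffs))
         / sum((sx - mx) ** 2 for sx in scales))
    return 3.0 - H
```

A smooth ramp gives D near 2 (an ordinary surface) while rough, noise-like texture pushes D toward 3, so thresholding the per-pixel dimension map separates smooth cell interiors from textured edges.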
Research on Augmented Reality System Modeling and Registration Error Based on Simple Visual Marker
ZHANG Guo-liang, WU Yan-xiang, WANG Zhan-ni and WANG Tian
Computer Science. 2015, 42 (6): 299-302.  doi:10.11896/j.issn.1002-137X.2015.06.063
Abstract PDF(1706KB) ( 154 )   
An image registration strategy based on a simple visual marker was studied and further used to build an augmented reality system. First, color and shape were selected as an integrated feature to ensure reliable marker detection, and redundant features were used to compute the pose of visual markers under the weak perspective model. Secondly, to make image registration suitable for common vision-capture devices, the rendered model was re-distorted according to the calibrated distortion parameters. Finally, based on the space transformations in OpenCV and OpenGL, virtual-real registration was discussed and implemented using the visual marker detection results.
Novel Kind of Image Segmentation Model Introducing Difference Image with Multiple Segmentation Characters
HE Ling-na, CAO Jian-fa and ZHENG He-rong
Computer Science. 2015, 42 (6): 303-307.  doi:10.11896/j.issn.1002-137X.2015.06.064
Abstract PDF(1182KB) ( 208 )   
Most classical active contour models have advantages only in certain respects and cannot handle complex images. This paper therefore proposed a segmentation model with multiple segmentation characteristics. It introduces the difference image and takes the BGFRLS model of the difference image as the global control term of the model. In addition, to avoid re-initialization of the level set function and to shorten the computation time, the penalization function of Li's method is introduced. Furthermore, to reduce the number of regulation parameters, a self-adaptive weight between the global and local control terms is used in place of a constant weight. With these improvements, the method has the following advantages. First, it has the global segmentation property. Second, by introducing the difference image, it can process images with intensity inhomogeneity and detect weak edges. Third, the model is robust to image noise. Our experiments demonstrate that the proposed method can indeed segment images with intensity inhomogeneity and detect weak edges, while exhibiting the global segmentation property and robustness.
Double Level Set Image Segmentation Based on Image Layer
CHEN Jing, ZHU Jia-ming and WU Jie
Computer Science. 2015, 42 (6): 308-312.  doi:10.11896/j.issn.1002-137X.2015.06.065
Abstract PDF(907KB) ( 198 )   
The traditional C-V model can divide an image into object and background, but it cannot achieve multi-object image segmentation, while the multiphase C-V model requires more iterations and more computing time. To solve these problems, this paper proposed a double level set image segmentation algorithm based on image layers. The algorithm introduces a background-filling technique to change the image background, forming a new image layer, and the double level set continues dividing in the new image layer until all objects are segmented. Through the new image layers, the double level set achieves multi-object image segmentation. The experimental results show that the algorithm realizes multi-object segmentation with fewer iterations, and also has strong anti-interference ability and faster convergence.
Camshift Tracking Algorithm Based on N-LBP Texture and Hue Information
XU Yi-ming, LU Guan and GU Ju-ping
Computer Science. 2015, 42 (6): 313-316.  doi:10.11896/j.issn.1002-137X.2015.06.066
Abstract PDF(867KB) ( 239 )   
Color-feature tracking algorithms are easily disturbed by non-homogeneous illumination and shadow, so constructing a target model from multiple features is key to improving tracking performance. A novel feature-fusion target tracking algorithm was proposed in this paper. The illumination-adaptive local standard deviation is introduced as the threshold of the local binary pattern, a joint histogram is constructed from the improved uniform-pattern N-LBP texture descriptor and hue information, and moving-target tracking is conducted within the Camshift framework. Tracking experiments with shadow interference show that, compared with the traditional Camshift algorithm, the proposed algorithm overcomes illumination changes and is more robust and stable, with good real-time performance.
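The thresholded LBP idea can be sketched as follows: a neighbour sets its bit only when it exceeds the centre pixel by more than a threshold, and the N-LBP variant takes that threshold from the local standard deviation so it adapts to illumination. The 3x3 neighbourhood, bit ordering, and window size below are illustrative assumptions, not the paper's exact descriptor.

```python
def local_std(img, y, x):
    """Standard deviation of the 3x3 neighbourhood around (y, x)."""
    vals = [img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    m = sum(vals) / 9.0
    return (sum((v - m) ** 2 for v in vals) / 9.0) ** 0.5

def lbp_code(img, y, x, thresh=0.0):
    """8-neighbour LBP code at (y, x); a neighbour contributes its bit
    only when it exceeds the centre by more than `thresh`."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    c = img[y][x]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] - c > thresh:
            code |= 1 << bit
    return code
```

With `thresh = local_std(img, y, x)`, small illumination-driven fluctuations in flat regions no longer flip bits, which is what makes the joint texture-hue histogram stable under shadow.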
Palmprint Recognition Algorithm of Integrating Horizontal Gradient and Local Information Intensity
ZHAO Zhi-gang, WU Xin, ZHANG Wei-zhong, ZHAO Yi, HONG Dan-feng and PAN Zhen-kuan
Computer Science. 2015, 42 (6): 317-321.  doi:10.11896/j.issn.1002-137X.2015.06.067
Abstract PDF(1703KB) ( 110 )   
The palm ridge is the most effective feature of a palmprint. At acquisition time, the palm exhibits inconsistent scale, rotation, translation and other variations, which make accurately extracting and describing the ridge feature a major difficulty in palmprint recognition. To solve these problems, a novel palmprint recognition algorithm fusing horizontal gradient orientation and local information intensity (referred to as FHOG-LII) was proposed. First, mean filters with different templates are used to remove fine, irregular and unstable characteristics of the palm ridge; a horizontal gradient operator is applied to the processed image, which is then binarized. Secondly, the block-wise local information intensity of the palm ridge is computed and used as the feature vector. Finally, the chi-square distance is used for matching. Experimental results on the PolyU palmprint database show that the proposed method obtains state-of-the-art recognition accuracy (99.89%); compared with some traditional methods, the recognition rate improves significantly, indicating the effectiveness of the proposed algorithm.
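The block-wise feature and chi-square matching steps can be sketched minimally: each block of the binarized ridge image contributes one intensity value (here simply the ridge-pixel count, an illustrative assumption for "information intensity"), and two palmprints are compared by the chi-square distance between their block vectors.

```python
def block_intensity(binary, block=4):
    """Block-wise information intensity of a binarized ridge image,
    approximated here as the ridge-pixel count per block."""
    h, w = len(binary), len(binary[0])
    feats = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            feats.append(sum(binary[y][x]
                             for y in range(by, min(by + block, h))
                             for x in range(bx, min(bx + block, w))))
    return feats

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two block-feature vectors;
    eps guards against division by zero on empty bins."""
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))
```

Matching then reduces to a nearest-neighbour search over chi-square distances between the query vector and the enrolled templates.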