Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Volume 40, Issue 12 (2013)
Survey on Secure Access Technology in Mobile Ad-hoc Network
QIAO Zhen,LIU Guang-jie,LI Ji and DAI Yue-wei
Computer Science. 2013, 40 (12): 1-8. 
In this paper, the main security threats were analyzed based on the characteristics of MANETs, and the concept and performance requirements of secure access were presented. On this basis, the major secure access techniques for MANETs were reviewed, and the merits and demerits of each scheme were analyzed. Finally, several typical secure access schemes were compared, and problems worth further study were pointed out.
Code Generation for Automatic Parallelization of Irregular Loops
DING Rui,ZHAO Rong-cai,XU Jin-long and FU Li-guo
Computer Science. 2013, 40 (12): 9-14. 
Many large-scale scientific applications contain irregular loops, but prior work on automatic parallelization for distributed memory can hardly generate parallel code for irregular loops at compile time. We propose an approach for effective code generation for a common class of irregular loops, which transforms the serial code of irregular loops into equivalent parallel computation and communication code at compile time. The approach derives the local definition set of the loops on each processor through computation decomposition and the access expressions of array references, and satisfies the producer-consumer relations of irregular array references through partial communication redundancy. Experimental results show that the approach is valid and improves the speedup of the test applications.
Design of Quadruple Precision Floating-point Fused Multiply-Add Unit Based on SIMD Device
HE Jun,HUANG Yong-qin and ZHU Ying
Computer Science. 2013, 40 (12): 15-18. 
Reducing the hardware cost and operation latency is an important issue in implementing quadruple-precision floating-point arithmetic. To reduce the hardware cost of a quadruple-precision fused multiply-add (QPFMA) unit, a new QPFMA unit was designed and realized on top of a SIMD device supporting 64bit x 4 double-precision fused multiply-add (DPFMA). The new QPFMA supports FMA, multiplication, addition/subtraction and comparison operations, with an operation latency of 7 cycles. By decomposing the 113bit x 113bit multiplication of the quadruple-precision fractions into four 57bit x 57bit multiplications that share the 53bit x 53bit multipliers of the SIMD DPFMA, the hardware cost of the new QPFMA is greatly reduced. Synthesized with a 65nm cell library, the new QPFMA reaches a frequency of 1.1GHz with an area of 42.71% of a conventional QPFMA unit, roughly equal to the area of a single DPFMA unit. Compared with current QPFMA designs, the operation latency decreases by 3 cycles and the gate count is reduced by 65.96% in equivalent technology at a comparable frequency.
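The fraction-multiplication decomposition described above can be written out as follows (a sketch of the standard split; the exact partition point of the 113-bit fractions is our assumption):

```latex
A = A_H\,2^{56} + A_L,\qquad B = B_H\,2^{56} + B_L
\;\Rightarrow\;
A \times B = A_H B_H\,2^{112} + \left(A_H B_L + A_L B_H\right)2^{56} + A_L B_L
```

With A_H, B_H at most 57 bits and A_L, B_L 56 bits, the four partial products are each at most 57bit x 57bit, so one can be assigned to each lane of the four-way SIMD DPFMA multiplier array.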
Implementing Compiler Backend for Vector Processing Unit of FT Processor Based on GCC
LI Chun-jiang, DU Yun-fei, NI Xiao-qiang, WANG Yong-wen and YANG Can-qun
Computer Science. 2013, 40 (12): 19-22. 
A compiler backend is the part of a compiler implemented for a specific target machine; different instruction set architectures require different backend implementations. Targeting the architecture and instruction set of the Vector Processing Unit (VPU) in the FT processor, we implemented a compiler backend based on GCC and enabled GCC to correctly compile the intrinsic functions oriented to the SIMD instructions of FT-VPU. In this paper, starting from the machine description for the four-way double-precision SIMD instructions, we summarize the backend implementation in GCC for FT-VPU. Our work is a valuable reference for implementing a GCC-based compiler backend for a specific target machine.
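A backend of this kind centers on machine-description patterns. A minimal sketch of what a four-way double-precision vector-add pattern could look like in GCC's machine description language (the mnemonic `vaddd`, the `v` register constraint and the `TARGET_VPU` condition are illustrative assumptions, not FT-VPU's actual definitions):

```
(define_insn "addv4df3"
  [(set (match_operand:V4DF 0 "register_operand" "=v")
        (plus:V4DF (match_operand:V4DF 1 "register_operand" "v")
                   (match_operand:V4DF 2 "register_operand" "v")))]
  "TARGET_VPU"
  "vaddd\t%0,%1,%2")
```

GCC expands the standard pattern name `addv4df3` whenever a V4DF addition is generated, whether by the vectorizer or by an intrinsic lowered to that operation.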
Research on Benchmark-based HPC in Cloud
LI Chun-yan and ZHANG Xue-jie
Computer Science. 2013, 40 (12): 23-30. 
With the development of cloud computing, academia and industry are paying more attention to HPC in the cloud. As virtualization brings performance overhead, HPC in the cloud faces several challenges. On the basis of the "HPC+Cloud" computing paradigm, this paper analyzed the advantages of HPC in the cloud and surveyed the key issues, performance evaluation, performance optimization, power consumption and cost-benefit analysis, in implementing benchmark-based high-performance computing in cloud environments at home and abroad. From this survey, the basic ideas of benchmark-based research on HPC in the cloud were distilled. Finally, the current issues were summarized and the prospects of HPC in the cloud were discussed.
Low Power Optimization Method Oriented to Embedded System’s Bus
GE Hong-mei,XU Chao,CHEN Nian and LIAO Xi-mi
Computer Science. 2013, 40 (12): 31-36. 
This paper put forward a software-based method for optimizing the bus power consumption of embedded systems. During compilation, if the instruction address bus and the instruction data bus are optimized separately to reduce the bus-invert frequency, power consumption is cut down accordingly. Concretely, for the instruction address bus, a modified genetic algorithm optimizes the function call layout and is combined with T0 encoding to reduce bus-invert frequency and hence power consumption; for the instruction data bus, a particle swarm algorithm optimizes instruction scheduling and is combined with 0-1 bus-invert encoding to the same end. To verify the correctness and effectiveness of the method, the HR6P series microcontroller was used as the experimental platform. The experimental results show that the optimization of bus power consumption reaches about 25%, indicating that the method clearly reduces bus-invert frequency and improves overall system performance.
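The 0-1 bus-invert encoding mentioned above can be sketched as follows (a minimal illustration of the coding idea itself, not the paper's compiler-integrated scheme):

```python
def bus_invert_encode(words, width=32):
    """Bus-invert coding: if sending the next word would toggle more than
    half of the bus lines, transmit its complement instead and assert an
    extra invert line, bounding worst-case switching at width/2 + 1 lines."""
    mask = (1 << width) - 1
    encoded, prev = [], 0            # bus assumed to start at all zeros
    for w in words:
        toggles = bin((w ^ prev) & mask).count("1")
        if toggles > width // 2:     # inverting saves transitions
            w = ~w & mask
            encoded.append((w, 1))   # invert line asserted
        else:
            encoded.append((w, 0))
        prev = w                     # the bus now holds the encoded value
    return encoded
```

For example, sending 0xFF then 0x00 on an 8-bit bus would normally toggle all 8 lines twice; with bus-invert, the first word is complemented (invert line set) and no data line ever toggles.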
Affective-oriented Movie Background Music Classification
ZHANG Bao-yin, YU Jun-qing, TANG Jiu-fei, HE Yun-feng and WANG Zeng-kai
Computer Science. 2013, 40 (12): 37-40. 
Movie background music plays an irreplaceable role in strengthening a film's affective impact, heightening drama and rendering atmosphere. If background music can be automatically classified by affect, the accuracy of movie affective content analysis will undoubtedly improve. In view of this, an affective feature vector and a classifier for movie background music were proposed to improve background music annotation. The affective feature vector consists of bar-long rhythm patterns, bar-long bassline patterns, Mel-frequency cepstral coefficients (MFCC) and interval features extracted from the music audio signal. Compared with other features, the rhythm pattern and bassline pattern features are able to capture the rhythmic structure of a background music clip. Probabilistic latent semantic analysis (PLSA) is used to classify the background music into excitement, tension, relaxation and sadness. Experimental results show that the proposed affective feature vector and the PLSA classifier improve classification accuracy over the state of the art.
On Sequential Pattern Mining Algorithm for Web Access
LI Tao-shen,WANG Wei-na and CHEN Qing-feng
Computer Science. 2013, 40 (12): 41-44. 
In view of the problems in existing sequential pattern mining algorithms for Web access and in the PrefixSpan algorithm, a projection-position-based sequential pattern mining algorithm for Web access (PWSPM) was proposed. The algorithm uses sequential pattern analysis to discover users' behavior models and predict their page-access patterns; according to the analysis results, a site's performance and organizational structure can then be improved to raise the quality and efficiency with which users find information. Experimental and application results show that the PWSPM algorithm has better runtime performance and extensibility; it can be applied to Web log mining and used to build intelligent Web sites and support personalized information services.
Stock Market Tracking Prediction Algorithm Based on Stream Feature Model
YAO Hong-liang,DU Ming-chao,LI Jun-zhao and WANG Hao
Computer Science. 2013, 40 (12): 45-51. 
Because stock market volatility is mutable and variable, and the distribution of the time series data does not follow a normal distribution, traditional time series forecasting algorithms have difficulty making accurate predictions. A stock market tracking prediction algorithm based on a stream feature model (SFM-PG) was therefore proposed. The algorithm builds a Bayesian network from the correlations between stocks, selects the Markov blanket of the target stock as its peer group, and gives a windowed tracking prediction model based on proximity within the peer group; by dynamically updating the peer group's weights during tracking, it effectively avoids the influence of the non-normal distribution of the time series data on prediction. Then, a sliding window is used to extract features from the time series to form stream features, and stream feature models are obtained by matching against a knowledge base; the knowledge in the stream feature models is used to adjust the prediction results, so as to reduce the error introduced by mutability. Finally, the practicability and effectiveness of the algorithm are shown by experiments on the plate network of the Shanghai stock market.
Structural Weighted Least Squares Support Vector Machine Classifier
LU Shu-xia and TIAN Ru-na
Computer Science. 2013, 40 (12): 52-54. 
The least squares support vector machine (LSSVM) classifier does not exploit the structure information in data and is sensitive to outliers. Focusing on these issues, this paper proposed a new classifier, the structural weighted least squares support vector machine (SWLSSVM). Structure information is considered by incorporating the covariance matrix into the objective function; to reduce sensitivity to outliers, different weights are assigned to the training samples in the error term of the objective function according to the distances from samples of each class to their class center. Experimental results show that SWLSSVM is superior to LSSVM and SVM in classification and generalization performance.
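One plausible form of the resulting optimization problem, reconstructed from the description above (the symbols and the placement of the structure and weight terms are our assumptions):

```latex
\min_{w,b,e}\; \frac{1}{2}\,w^{\top}w \;+\; \frac{\lambda}{2}\,w^{\top}\Sigma\,w
\;+\; \frac{\gamma}{2}\sum_{i=1}^{N} v_i\,e_i^{2}
\qquad \text{s.t.}\quad y_i\!\left(w^{\top}\varphi(x_i)+b\right) = 1 - e_i,\;\; i=1,\dots,N
```

Here Sigma collects the within-class covariance (the structure term added to the LSSVM objective) and v_i is a weight that decreases with sample i's distance from its class center, down-weighting likely outliers in the error term.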
KNN Text Categorization Algorithm Based on Semantic-Vector-Combination and Multiclass of Feature
LIN Qi-feng,MENG Zu-qiang and CHEN Qiu-lian
Computer Science. 2013, 40 (12): 55-58. 
Feature selection is a key stage in text categorization, and its handling affects both the speed and the accuracy of classification. The χ2 statistic is an important feature selection method in text categorization, since it effectively measures the dependence between a term and a class. Nevertheless, after analyzing the application of the χ2 statistic in text categorization, we found that the features in CHI vectors cannot fully express the meaning of concepts, that the statistic depends on the training text set, and that the CHI vectors are used only in the feature selection phase. This paper therefore proposed an improved kNN text categorization algorithm based on semantic vector combination and multi-class features, in which the features take the meaning of concepts into account and a multi-class feature matrix improves the efficiency of the categorization stage. Experimental results and analysis show that both the efficiency and the accuracy of categorization are improved.
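The χ2 term-class statistic underlying CHI feature selection can be computed from a 2x2 contingency table; a minimal sketch (standard formulation, not specific to this paper's variant):

```python
def chi_square(A, B, C, D):
    """Chi-square statistic for term-class dependence from a 2x2 table:
    A = docs of class c containing term t,  B = other docs containing t,
    C = docs of class c without t,          D = other docs without t.
    Returns 0 when term and class are independent; larger values mean
    stronger dependence, so terms are ranked by this score per class."""
    N = A + B + C + D
    den = (A + C) * (B + D) * (A + B) * (C + D)
    return N * (A * D - C * B) ** 2 / den if den else 0.0
```

A term occurring in every document of a class and nowhere else scores N (perfect dependence), while a term distributed evenly across classes scores 0.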
Application of Fuzzy Set with Three Kinds of Negation FScom in Stock Investment
ZHAO Jie-xin and PAN Zheng-hua
Computer Science. 2013, 40 (12): 59-63. 
For the cognition and processing of different negations of fuzzy knowledge, paper [1] proposed that the negation of fuzzy knowledge includes the contradictory negation, the opposite negation and the medium negation, and defined a new fuzzy set with these three negations (FScom). To show the applicability of FScom, the fuzzy set with three different negations was applied to stock investment decision-making. Based on FScom, this paper introduced an approach to defining the membership functions of a fuzzy set and its different negations in decision rules, and discussed the threshold range of membership degrees and its meaning. Finally, the reasoning and realization of fuzzy decision-making in the example were discussed based on FScom and fuzzy production rules, and the final decision results were given. The decision-making process implies that FScom is effective in dealing with practical problems involving ambiguity and different negations.
Covering-based Multigranulation Rough Set Model Based on Maximal Description of Elements
LIU Cai-hui
Computer Science. 2013, 40 (12): 64-67. 
This paper proposed two kinds of covering-based multigranulation rough sets by employing the maximal description of elements. Firstly, some basic properties of the models were investigated. Then, the conditions under which two distinct covering-based multigranulation rough sets produce identical lower and upper approximations were studied. Finally, the relationships between the two models were explored.
Adaptive Particle Swarm Optimization Algorithm with Disturbance Factors
ZHAO Zhi-gang,ZHANG Zhen-wen and SHI Hui-lei
Computer Science. 2013, 40 (12): 68-69. 
A new particle swarm optimization (PSO) algorithm was presented to overcome the disadvantages that standard PSO shows on complex functions, including slow convergence, low precision and premature convergence. The proposed algorithm improves standard PSO in three ways: (a) chaotic initialization of the swarm; (b) an adaptive inertia weight that balances global and local search; and (c) disturbance factors that help the swarm avoid being trapped in local optima. Experimental results show that the new algorithm has clearly better convergence properties than standard PSO and several other modified algorithms.
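The adaptive inertia weight and disturbance ideas can be sketched as follows (a minimal illustration with assumed parameters and a simple linearly decreasing weight, not the paper's exact scheme, which also uses chaotic initialization):

```python
import random

def pso(f, dim=2, n=20, iters=300, lo=-5.0, hi=5.0, seed=1):
    """PSO sketch: linearly decreasing (adaptive) inertia weight plus a
    small random disturbance of the global best to escape local optima."""
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in x]
    pval = [f(p) for p in pbest]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    vmax = 0.2 * (hi - lo)                       # velocity clamp
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters                # adaptive inertia weight
        for i in range(n):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + 2.0 * rng.random() * (pbest[i][d] - x[i][d])
                           + 2.0 * rng.random() * (gbest[d] - x[i][d]))
                v[i][d] = max(-vmax, min(vmax, v[i][d]))
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < pval[i]:
                pbest[i], pval[i] = x[i][:], fx
                if fx < gval:
                    gbest, gval = x[i][:], fx
        # disturbance factor: try a perturbed global best, keep it if better
        cand = [c + rng.gauss(0.0, 0.1 * (1.0 - t / iters)) for c in gbest]
        fc = f(cand)
        if fc < gval:
            gbest, gval = cand, fc
    return gbest, gval
```

On the 2-D sphere function the sketch converges close to the origin; the accept-only-if-better disturbance keeps the global best monotonically improving.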
Sampling Techniques with CBES for Imbalanced Learning
ZHI Wei-mei,GUO Hua-ping and FAN Ming
Computer Science. 2013, 40 (12): 70-74. 
CBES is a method for the classification of imbalanced datasets, and related experimental results show that it can boost the generalization ability of the base classifier. Reported research also shows that sampling methods can effectively improve performance on rare-class data. In this paper, we incorporated sampling methods into CBES and proposed sampling-based CBES (SCBES) to further improve classification performance on rare-class data. Experimental results demonstrate that SCBES effectively improves classification performance on imbalanced datasets.
Research on Improved Apriori Algorithm Based on Compressed Matrix
LUO Dan and LI Tao-shen
Computer Science. 2013, 40 (12): 75-80. 
Aiming at the deficiencies of existing Apriori algorithms, an improved Apriori algorithm based on a compressed matrix, called NCM_Apriori_1, was proposed. The improvements are as follows: (1) two arrays record the counts of 1s in each row and column, so that the number of matrix scans during compression is reduced; (2) itemsets that cannot be joined, as well as infrequent ones, are deleted while compressing the matrix, shrinking the matrix and improving space utilization; (3) the condition for deleting unnecessary transactions is changed to reduce errors in the mining result, and the stopping condition is changed to decrease the number of iterations. Performance analysis and experimental results prove that the improved algorithm mines frequent itemsets effectively and computes more efficiently than existing compressed-matrix Apriori algorithms.
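The matrix representation at the heart of such algorithms can be sketched as follows (a minimal illustration of bit-matrix support counting only, without the paper's row/column pruning refinements):

```python
from functools import reduce

def frequent_itemsets(transactions, min_sup):
    """Bit-matrix Apriori sketch: each item maps to a column bitmap over
    the transactions, and an itemset's support is the popcount of the
    bitwise AND of its columns."""
    items = sorted({i for t in transactions for i in t})
    col = {i: 0 for i in items}
    for r, t in enumerate(transactions):      # build one bitmap per item
        for i in t:
            col[i] |= 1 << r
    def support(itemset):
        return bin(reduce(lambda a, b: a & b,
                          (col[i] for i in itemset))).count("1")
    result, k = {}, 1
    frontier = [(i,) for i in items]
    while frontier:
        frequent = [s for s in frontier if support(s) >= min_sup]
        result.update({s: support(s) for s in frequent})
        # Apriori join step: unions of frequent k-itemsets of size k+1
        frontier = sorted({tuple(sorted(set(a) | set(b)))
                           for a in frequent for b in frequent
                           if len(set(a) | set(b)) == k + 1})
        k += 1
    return result
```

Counting via bitwise AND is what makes the matrix form attractive: support of any candidate is a few word-level operations rather than a rescan of the transaction database.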
Color Image Retrieval Method Based on Cloud Model and Tolerance Granule
XU Jiu-cheng,REN Jin-yu,SUN Lin and XU Tian-he
Computer Science. 2013, 40 (12): 81-85. 
Extracting grid points is very important in constructing tolerance granular space models. Existing models establish the grid points of each layer by considering only spatial position, ignoring the uncertainties of image texture features such as randomness, fuzziness and relevance. To deal with this, a color image retrieval method based on the cloud model and tolerance granules was proposed in this paper. Firstly, an object set of the tolerance granular space model is constructed in the CIELab color space. Secondly, the cloud model is applied to extract the grid points of each layer, and the tolerance granular space model is built on the extracted grid points. Thirdly, a similarity measure for color image retrieval based on the cloud model and tolerance granules is established. Finally, simulation experiments were conducted on test images chosen from the Corel image base; the results show that the proposed method improves image retrieval efficiency effectively.
Optimization of Intra Mode Decision for AVS
CHEN Yun-shan,SU Wan-xin,WANG Chun-xia and LIU Yu-sheng
Computer Science. 2013, 40 (12): 86-89. 
Rate-distortion optimization (RDO) is employed in AVS (Audio Video Coding Standard) to improve coding efficiency. However, computational complexity increases drastically, since the encoder has to encode each macroblock by measuring the RD costs of all possible modes, which makes it difficult to meet real-time encoding demands. After analyzing the theory of intra prediction and the process of intra mode decision, a fast intra prediction algorithm based on the SATD (Sum of Absolute Transformed Differences) criterion and spatial correlation was proposed to optimize intra mode decision and reduce complexity. First, candidate modes are selected based on each block's SATD, which decreases the number of intra prediction modes to evaluate. Then, the mode correlation of neighboring blocks is used to reduce the complexity of luminance blocks and speed up mode decision. Experimental results indicate that the encoding time of the proposed approach is on average 53.56% lower than that of the AVS reference algorithm with similar coding efficiency. Compared with a classical edge-detection-based algorithm, encoding time is saved by 16.39% on average while keeping better image quality and a lower bit-rate.
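The SATD cost used above can be sketched for a 4x4 residual block (the standard Hadamard-based definition; block size and normalization conventions vary between codecs):

```python
def satd4x4(residual):
    """SATD of a 4x4 residual block: apply the 4x4 Hadamard transform to
    rows and columns, then sum the absolute transform coefficients. SATD
    approximates the post-transform cost of a prediction mode far more
    cheaply than a full rate-distortion evaluation."""
    H = [[1, 1, 1, 1],
         [1, 1, -1, -1],
         [1, -1, -1, 1],
         [1, -1, 1, -1]]                      # symmetric Hadamard matrix
    def mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
                for i in range(4)]
    t = mul(mul(H, residual), H)              # H * R * H^T (H symmetric)
    return sum(abs(c) for row in t for c in row)
```

A flat residual concentrates all energy in the DC coefficient, while a zero residual (a perfect prediction) scores 0, so modes with smaller SATD are kept as candidates for full RDO.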
Defense Mechanism against Misbehavior of MAC Layer in WLAN
YE Jin,LI Tao-shen,WANG Zheng-fei and ZHANG Xiang-li
Computer Science. 2013, 40 (12): 90-93. 
Misbehavior that deliberately modifies the parameters of a wireless NIC (network interface card) driver impairs legitimate users' rights, and may even cause DoS attacks and cripple an entire WLAN. To defend against such misbehavior, this paper proposed a defense mechanism applied at the AP, which controls nodes' sending rates by dropping incoming packets according to a certain rule and makes WLAN users share the wireless channel fairly. We built a WLAN to test the proposed algorithm by applying it in a wireless NIC driver. The experiments show that the algorithm performs well in defending against MAC-layer misbehavior and even DoS attacks.
Feature Extraction and Classification of Halftone Image Based on Statistics Template
WEN Zhi-qiang,HU Yong-xiang and ZHU Wen-qiu
Computer Science. 2013, 40 (12): 94-97. 
A feature extraction and classification method based on statistics templates was presented for classifying halftone images produced by various error diffusion methods. The statistics template, defined over pixel pairs, serves as the descriptor of a halftone image's texture features, and a feature extraction method based on image patches was presented. Class feature matrices were proposed as descriptors of categories; the optimization problem was formulated by establishing an error objective function, and gradient descent was used to seek the optimal class feature matrices, whose characteristics were examined in experiments. In the experiments, the performance of the method was compared with two similar methods, the influence of parameters on classification performance was discussed, and the time complexity of the feature extraction algorithm was analyzed. Experimental results demonstrate that the proposed method is effective.
Chaotic Artificial Bee Colony Algorithm Based on Rank Mapping Probability
ZHANG Xin-ming,LI Xiao-an,HE Wen-tao and WANG Xian-fang
Computer Science. 2013, 40 (12): 98-103. 
In view of the shortcomings of artificial bee colony (ABC) algorithms, slow convergence and entrapment in local optima caused by choosing food sources with a probability mapped directly from fitness, as well as low optimization precision, a chaotic artificial bee colony algorithm based on rank mapping probability (CABC-R) was proposed in this paper. The search process is divided into two phases: in the first, an ABC global optimizer based on rank mapping probability obtains a global solution; in the second, a local chaotic optimization algorithm refines it into a more precise optimum. Simulation results on 10 standard complicated test functions indicate that the proposed algorithm is fast and effective, and that it outperforms current global optimization algorithms such as ABC, JADE, MSEP and RABC.
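The rank mapping idea can be sketched with linear ranking (our illustration; the selection-pressure parameter `sp` and the linear form are assumptions, not necessarily the paper's exact mapping):

```python
def rank_probs(fitness, sp=1.5):
    """Rank mapping probability sketch: a food source's selection
    probability depends on its fitness rank rather than its raw fitness,
    so one extreme fitness value cannot dominate the roulette wheel.
    sp in [1, 2] is the linear-ranking selection pressure."""
    n = len(fitness)
    rank = [0] * n
    for r, i in enumerate(sorted(range(n), key=lambda i: fitness[i])):
        rank[i] = r                       # 0 = worst ... n-1 = best
    return [(2 - sp) / n + 2 * r * (sp - 1) / (n * (n - 1)) for r in rank]
```

The probabilities always sum to 1, and doubling the best source's fitness leaves the probabilities unchanged, which is exactly the robustness the rank mapping is meant to provide.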
Reduction for Large-scale SVM Datasets under Quotient Space
QIN Xi,SU Yi-dan and ZHANG Wen
Computer Science. 2013, 40 (12): 104-107. 
Using granularity analysis theory and computational methods in quotient space, we built a reduction model in quotient space. In this model, redundant data are cut out using variable granularity, so that the reduction becomes more accurate. An example implementation of the method was provided. Experiments indicate that the new method yields significantly improved compression without sacrificing the accuracy of traditional SVM techniques.
Modification of Similarity Computation in Ontology Mapping
ZHENG Xiao-jie and ZHANG Lin
Computer Science. 2013, 40 (12): 108-112. 
Ontology mapping is one of the common solutions to ontology heterogeneity, and similarity computation among ontologies is the critical step of the mapping process. Aiming at current problems, and to further improve similarity computation, this paper put forward a comprehensive approach that considers a concept's semantics, properties, instances and structure. To find a more useful similarity computation method, the paper introduced relativity and attribute theory. Finally, an experiment was used to show that the method can adapt to ontologies of different scales and also improves the accuracy of the similarity.
Complex Background Image Retrieval Based on Foreground Extraction
FENG Zhe,XIA Hu,FU Yan and ZHOU Jun-lin
Computer Science. 2013, 40 (12): 113-115. 
Content-based image retrieval provides users with a more intuitive and more accurate retrieval method, and in such retrieval users tend to be most concerned with the main part of the image. To weaken the influence of background information on retrieval results, this paper presented a complex background image retrieval algorithm based on foreground extraction. Experiments indicate that, with the H-S color histogram, the LBP texture feature and a mixed color-texture characteristic, the algorithm achieves better retrieval performance.
Convergence of Asynchronous Gradient Method with Momentum for Ridge Polynomial Neural Networks
YU Xin,TANG Li-xia and YU Yan
Computer Science. 2013, 40 (12): 116-121. 
A momentum term was introduced into the conventional error function of the asynchronous gradient method to improve the convergence efficiency of ridge polynomial neural networks. This paper studied the convergence of the asynchronous gradient method with momentum for training ridge polynomial neural networks; a monotonicity theorem and two convergence theorems were proved, which are important for choosing an appropriate learning rate and initial weights to perform effective training. A simulation experiment was presented to illustrate the theoretical findings.
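The momentum-augmented weight update studied here has the standard form (a sketch; the network-specific gradient of the ridge polynomial error is abbreviated to the generic gradient of E):

```latex
w^{k+1} = w^{k} - \eta\,\nabla E\!\left(w^{k}\right) + \alpha\left(w^{k} - w^{k-1}\right),
\qquad 0 \le \alpha < 1
```

where eta is the learning rate and alpha the momentum coefficient; the monotonicity and convergence theorems constrain how eta and alpha may be chosen so that the error decreases during training.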
Projection of Semantics and Retrieval in Natural Scenery Images Based on Fuzzy Nerve Network
SHI Yue-xiang,WEN Hua,GONG Ping,MO Hao-lan and JIN Yin-guo
Computer Science. 2013, 40 (12): 122-126. 
With the development of content-based image retrieval (CBIR), bridging the "semantic gap" between low-level features and high-level semantics has become the key problem of semantic image retrieval. To avoid the general approach of mapping an image to a single semantic class, and to reflect the rich, multi-class high-level semantics of natural scenery images, this paper presented a process that repeatedly applies optimal thresholding to roughly extract the largest target region of a color image; such a color target region is comparatively homogeneous in natural scenery images. On the basis of the segmented regions, the color and shape features of each region are extracted; finally, a fuzzy neural network is used to map the low-level features to high-level semantics, realizing effective transfer of image attribute information and automatic acquisition of high-level semantics. Experimental results show that the segmentation method can effectively extract the target object from natural color images and has a certain robustness to noisy images. On some classes of the natural image database, the retrieval precision approaches 90% and the recall reaches 75%, which shows the effectiveness and advancement of the method for natural image retrieval.
DNA Algorithm for Maximum Matching Problem Based on Sticker Computation Model
WU Xue,SONG Chen-yang,ZHANG Nan,ZHU Yu and CHEN Zhi-hua
Computer Science. 2013, 40 (12): 127-132. 
In this paper, a DNA solution of the maximum matching problem (MMP) based on the sticker computation model was presented. It was shown how to use DNA strands to construct the solution space of molecules for the maximum matching problem, and how to apply the biological operations of the sticker model to solve the problem from that solution space; the computational complexity of the parallel DNA algorithm was also analyzed. Finally, a computer program was given to simulate the algorithm; the solutions of the MMP for all examples were found, and the feasibility of the algorithm was validated and summarized.
Description and Combination of CSP Process Based on ASP
ZHAO Ling-zhong,SITU Ling-yun,ZHAI Zhong-yi and QIAN Jun-yan
Computer Science. 2013, 40 (12): 133-140. 
In previous work, an ASP-based framework for verifying models described in CSP was proposed to solve the problem of verifying multiple properties in one run of a model checker. However, some problems still exist, including the inability to describe certain forms of concurrent processes and a limit on the scale of the concurrent systems that can be described. In this paper, a new description system for concurrent systems was constructed, which allows loop structures in prefix processes and can therefore describe various forms of concurrent processes. It allows several processes to be combined automatically into a new process that not only fulfills all behavioral characteristics but also keeps structural consistency with the original processes. In this way, process descriptions within the verification framework follow a uniform style, which is helpful for the abstraction and validation of concurrent processes. The effectiveness of the ASP description system and the combined process generation technique, and the feasibility of verification based on the description system, were illustrated by examples.
Ultralightweight RFID Mutual-authentication Protocol
LIU Ya-li,QIN Xiao-lin and WANG Chao
Computer Science. 2013, 40 (12): 141-146. 
Due to the open wireless communication environment of radio frequency identification (RFID) systems, particularly the reader-tag air interface, security and privacy are increasingly noteworthy issues, and it is imperative to design ultralightweight RFID authentication protocols that resist various malicious attacks and threats. A new ultralightweight RFID mutual-authentication protocol for low-cost tags was proposed, which avoids the security omissions of previous RFID authentication protocols. Security analysis shows that the protocol possesses robust security and privacy properties and defends against possible malicious attacks. In view of the resource-constrained requirements of low-cost RFID tags, the protocol requires only two simple bitwise operations on the tag end, and meanwhile it has performance advantages over other ultralightweight RFID authentication protocols.
Study on Concentric-Ring and Cluster-based Energy Hole Avoiding Method in Wireless Sensor Networks
LIU Zhen and GUO Hang
Computer Science. 2013, 40 (12): 147-151. 
How to avoid the "energy hole" phenomenon has become a critical problem in wireless sensor network applications. This paper analyzed existing domestic and foreign solutions to the energy hole problem and put forward a concentric-ring and cluster-based energy hole avoiding method for wireless sensor networks. The network model and energy consumption model of the method were described. Furthermore, the initial cluster-head placement algorithm, the rotary cluster-head selection algorithm and the inter-cluster multi-hop routing algorithm were designed. Finally, simulations were conducted to test and analyze the algorithms.
Weights Optimization Particle Filter Algorithm in Multi-sensor Measurement
HU Zhen-tao,LIU Yu and YANG Shu-jun
Computer Science. 2013, 40 (12): 152-155. 
Abstract PDF(451KB) ( 413 )   
Aiming at the effective realization of particle filtering for state estimation in multi-sensor measurement systems, a novel particle filter algorithm based on weight optimization was proposed. In the new algorithm, the measurement likelihood function is first constructed on the basis of the concrete form of the proposal distribution, and all measurements in a single filtering period are used to calculate each particle's weight. Secondly, given the differences in precision among sensors, and combined with prior information about sensor precision, a weighted fusion method is used to optimize each particle's weight in the multi-sensor measurement. Finally, the filter precision is improved by decreasing the variance of the particle weights. Theoretical analysis and experimental results show the feasibility and efficiency of the proposed algorithm.
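The precision-weighted fusion step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' exact algorithm: it assumes Gaussian measurement noise and fuses each particle's per-sensor likelihoods with weights derived from the prior sensor precisions; all function and variable names are hypothetical.

```python
import math

def fuse_particle_weights(predicted, readings, sigmas):
    """predicted: one predicted measurement per particle;
    readings: one reading per sensor;
    sigmas: per-sensor noise std (the prior precision information)."""
    prec = [1.0 / s ** 2 for s in sigmas]
    alpha = [p / sum(prec) for p in prec]  # more precise sensors count more
    weights = []
    for x in predicted:
        # weighted geometric fusion of the per-sensor Gaussian likelihoods
        logw = sum(a * (-0.5 * ((x - z) / s) ** 2 - math.log(s))
                   for a, z, s in zip(alpha, readings, sigmas))
        weights.append(math.exp(logw))
    total = sum(weights)
    return [w / total for w in weights]  # normalized particle weights
```

Down-weighting imprecise sensors in `alpha` is one simple way to reduce the variance of the fused weights, which is the effect the abstract targets.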
Multi-priority Hybrid Slot Transmission Method Based on IEEE 802.11e EDCA
WANG Wan-liang,CEN Yue-feng and YAO Xin-wei
Computer Science. 2013, 40 (12): 156-159. 
Abstract PDF(330KB) ( 428 )   
To support different transmission Quality of Service (QoS) needs in local multimedia networks, a multi-priority hybrid slot transmission method based on the IEEE 802.11e Enhanced Distributed Channel Access (EDCA) protocol was proposed. A hybrid slot is defined that comprises several slots, the number of which is at most the number of access categories. The access categories are assigned to slots of different priorities according to their transmission QoS requirements, so the collision probabilities of access categories with different QoS requirements at the data link layer can easily be set. The collision probabilities between stations are significantly reduced, and the network quality of the access categories with high transmission QoS needs, such as throughput, drop rate, and media access delay, is improved compared with the traditional EDCA protocol. The multi-priority hybrid slot transmission method adapts well to changes in the number of stations in the network, and performs especially well as the number of stations increases.
New Two-layer Cooperative Routing Protocol in WSAN
WANG Hao-yun,WANG Zhao-min,REN Shou-gang,FANG He-he and XU Huan-liang
Computer Science. 2013, 40 (12): 160-165. 
Abstract PDF(540KB) ( 380 )   
Wireless sensor and actor networks (WSAN) are derived from WSN by adding actor nodes that make decisions and execute scheduled tasks. A new cooperative routing protocol based on the characteristics of WSAN was proposed: the angle forwarding routing protocol based on dynamic clustering (AFRPDC), which consists of an RSSI-based dynamic clustering algorithm (BRCA) and an angle forwarding routing protocol (AFRP). The cluster-head election is improved and a stable topology is constructed in BRCA. Event data is then transmitted directly from sensor nodes to cluster-head nodes using peer-to-peer communication, and reported to actor nodes by cluster-head nodes with AFRP. Next-hop nodes are selected using angle information, multi-path routes are found from source cluster-head nodes to actor nodes, and the routing path with minimum delay is established when transferring event data. The simulation results show that the proposed cooperative routing protocol outperforms the directed diffusion protocol based on link-state clustering (DDLSC) in reducing latency and power consumption, and prove that the protocol can meet the requirements of real-time operation, reliability, and low energy consumption in WSAN.
Anycast Routing Protocol for Mobile Sinks in Wireless Sensor Networks
GU Yun-li,XU Xin,HOU Rong-tao,DU Jie,QIAN Huan-yan and MEI Yuan
Computer Science. 2013, 40 (12): 166-168. 
Abstract PDF(355KB) ( 348 )   
Traditional routing protocols for mobile sinks in wireless sensor networks require sinks to repeatedly broadcast their current location to all sensor nodes while moving, a process that consumes a large amount of energy. To address this problem, an anycast routing protocol for mobile sinks in wireless sensor networks (ARPMS) based on a predictive strategy was proposed. In ARPMS, sinks disseminate their movement information only when they change direction or speed; from this information, sensor nodes can calculate (predict) the sinks' current and future locations and select the sink with the highest energy efficiency as the anycast target. Because sinks do not disseminate their movement information continuously, ARPMS saves considerable energy. Simulation results show that ARPMS outperforms the ALURP protocol in terms of energy efficiency (223%~462%).
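The predictive idea, dead-reckoning the sink from its last announcement, can be sketched in a few lines. This is a minimal illustration under assumed names and a simplified selection rule (residual energy standing in for "energy efficiency"), not the protocol's actual message format.

```python
def predict_sink_position(announcement, now):
    """announcement: (x, y, vx, vy, t0) broadcast when the sink last
    changed direction or speed; a node extrapolates its position at `now`."""
    x, y, vx, vy, t0 = announcement
    dt = now - t0
    return (x + vx * dt, y + vy * dt)

def choose_sink(announcements, energies, now):
    """Pick the sink with the highest residual energy as the anycast
    target and return its index plus its predicted position."""
    best = max(range(len(announcements)), key=lambda i: energies[i])
    return best, predict_sink_position(announcements[best], now)
```

Because nodes extrapolate locally, no broadcast is needed while the sink keeps a constant heading, which is where the energy saving comes from.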
Localization in Cognitive Radio Systems in the Presence of Spatially Obfuscated Data
DENG Cong,JI Chang-peng and LIU Dian
Computer Science. 2013, 40 (12): 169-173. 
Abstract PDF(1081KB) ( 306 )   
Localizing primary users from the spectrum sensing of secondary users is a key aspect of improved operation in cognitive radio networks. However, malicious secondary users may obfuscate their location reports, causing disruption in the network. This paper mainly solves the problem of primary user localization in the presence of secondary users of varying trust. Using localization reports in support of (or against) hypotheses about user locations, we developed the foundations of an evidential reasoning-based approach that uses subjective logic for information fusion and inference, enabling localization in the presence of incomplete and conflicting knowledge. To do so, we exploited extensions of subjective logic that accommodate the spatial relationships naturally existing between location reports. After highlighting these spatial extensions, we applied them in building an inference algorithm for primary user localization. Through simulations, we analyzed its performance and the effect of various design parameters, showing 90% localization accuracy. Finally, we compared it with other localization techniques via simulations.
Improved BM Algorithm and Its Application in Network Intrusion Detection
SUN Wen-jing and QIAN Hua
Computer Science. 2013, 40 (12): 174-176. 
Abstract PDF(234KB) ( 701 )   
The traditional BM algorithm performs some useless comparisons, which slows string matching and reduces the efficiency of intrusion detection. Therefore, this paper proposed an improved BM algorithm and applied it to the engine of a network intrusion detection system. Experimental results show that, compared with the BM algorithm employed in Snort, a network intrusion detection system built on the improved BM algorithm can effectively reduce the false positive and false negative rates, and improve the intrusion detection rate and time utilization. This network intrusion detection system is thus clearly useful for enhancing overall detection capability.
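For reference, the baseline the paper improves on is the classic Boyer-Moore scheme; a minimal sketch using only the bad-character rule (the full BM algorithm also uses the good-suffix rule, omitted here for brevity) looks like this:

```python
def bm_search(text, pattern):
    """Return the index of the first occurrence of pattern in text,
    or -1, using the bad-character shift rule."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    # rightmost position of each character in the pattern
    last = {c: i for i, c in enumerate(pattern)}
    s = 0                        # current alignment of pattern[0] in text
    while s <= n - m:
        j = m - 1
        while j >= 0 and pattern[j] == text[s + j]:
            j -= 1               # compare right to left
        if j < 0:
            return s             # full match
        # shift so the mismatched text char aligns with its rightmost
        # occurrence in the pattern (or jump past it entirely)
        s += max(1, j - last.get(text[s + j], -1))
    return -1
```

It is the `max(1, ...)` shifts that skip text positions outright; the useless comparisons mentioned in the abstract arise when these shifts are shorter than they could be.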
DCNS:A High Available Data Center Network Topology
LENG Fei,XU Jin-hua and LUAN Shi-xi
Computer Science. 2013, 40 (12): 177-181. 
Abstract PDF(470KB) ( 676 )   
In recent years, with the rapid development of cloud computing technology, the data center network, as the underlying infrastructure of cloud computing, plays an increasingly important role. But failures occur in the network, which requires a strong and effective recovery mechanism to keep the data center available, and the network structure plays an important role in this availability. Data center networks provide various cloud computing services, so data center networking has recently been a hot research topic in both academia and industry. A fundamental challenge in this research is designing a data center network that interconnects a massive number of servers and provides efficient, fault-tolerant routing service to upper-layer applications. In response to this challenge, the research community has begun exploring novel interconnect topologies including Fat-Tree, DCell, and BCube. The proposed solutions scale either too fast (i.e., double-exponentially) or too slow, suffer from performance bottlenecks, or can be quite costly in both routing and construction. This paper proposed a cost-effective, gracefully scalable, and fault-tolerant data center interconnect termed DCNS, which combines the advantages of the DCell and BCube architectures while avoiding their limitations. We then proposed fault-tolerance and routing mechanisms for DCNS. Finally, we proposed a comprehensive benchmarking environment for accurately and practically evaluating and testing the proposed data center architecture. Experimental results show that DCNS can better meet the requirements of data center network architectures and ensure system availability.
Energy-efficient Game Theoretic Model in Multi-hop Networks Having Selfish Nodes
CHEN Song-lin and QIN Yan
Computer Science. 2013, 40 (12): 182-185. 
Abstract PDF(385KB) ( 349 )   
In constructing multi-hop wireless networks with high connectivity, low energy consumption, low interference, and reasonable routes, and facing the conflict between selfish nodes and their need to cooperate, game theory is undoubtedly a good solution tool. This paper employed game theory to build a network topology control solution by rationally designing a novel revenue function. Based on non-cooperative game theory, a novel revenue function model was designed. Theoretical studies show that the network can be controlled into a stable state through such revenue functions. Finally, we evaluated the algorithm's performance via simulation, and the final results show that the game-theoretic topology for wireless multi-hop networks can significantly save energy.
Oblivious Transfer Based on Elliptic Curve Public Key Cryptosystems
XU Yan-jiao,LI Shun-dong,WANG Dao-shun and WU Chun-ying
Computer Science. 2013, 40 (12): 186-191. 
Abstract PDF(532KB) ( 792 )   
Oblivious transfer is a primitive of cryptography, and endowing a public key system with an oblivious transfer function has important practical significance. This paper used elliptic curve cryptosystems to design two k-out-of-n oblivious transfer schemes. These schemes make full use of the properties of public key cryptosystems and do not need to establish an authentication channel in advance. Because the underlying elliptic curve cryptosystems are efficient, the schemes are very efficient. The first scheme is constructed directly from the encryption and decryption properties of elliptic curve cryptosystems, and the second improves on the first: it keeps the advantages of the first while reducing its overhead. The elliptic curve cryptosystem is probabilistic, and oblivious transfer based on it can expand the applications of oblivious transfer. The new protocols protect the privacy of both the receiver and the sender; they also prevent impersonation attacks, replay attacks, and man-in-the-middle attacks, and can be used over an insecure channel.
Online Double Random Forests Intrusion Detection Based on Non-extensive Entropy Features Extraction
YAO Dong,LUO Jun-yong,CHEN Wu-ping and YIN Mei-juan
Computer Science. 2013, 40 (12): 192-196. 
Abstract PDF(584KB) ( 354 )   
This paper proposed an intrusion detection method that can be used on high-speed network backbones. Based on non-extensive entropy with different parameters, the original distribution of attribute values is decomposed into high-dimensional features. Using these detailed features, a detection model based on random forests is constructed. To further increase detection accuracy and recall, a second random forest detection model is constructed from the attack instances only. The experimental results suggest that the proposed intrusion detection method can achieve competitive detection precision with high recall.
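The non-extensive (Tsallis) entropy used for the feature decomposition has the standard form S_q = (1 − Σ p_i^q)/(q − 1), recovering Shannon entropy as q → 1. A minimal sketch of turning one attribute's value distribution into several entropy features (the q values and counts below are illustrative, not the paper's):

```python
import math

def tsallis_entropy(counts, q):
    """Non-extensive entropy S_q = (1 - sum p_i^q) / (q - 1) of an
    empirical distribution given as raw counts."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    if q == 1.0:  # the q -> 1 limit is Shannon entropy
        return -sum(p * math.log(p) for p in probs)
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

# one attribute's value distribution expanded into a small feature vector
features = [tsallis_entropy([40, 30, 20, 10], q) for q in (0.5, 1.0, 2.0)]
```

Different q values emphasize different parts of the distribution (small q stresses rare values, large q stresses dominant ones), which is what makes a vector of S_q values more informative than a single entropy.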
Anonymous Authentication Mechanisms Based on Zero-knowledge Proof
LI Lin and YUE Jian-hua
Computer Science. 2013, 40 (12): 197-199. 
Abstract PDF(342KB) ( 1718 )   
With the rapid development of the Internet, anonymous authentication plays an increasingly important role in protecting users' privacy and information security. Based on an analysis of existing authentication schemes, this paper pointed out their shortcomings and proposed an improved scheme. In addition, the paper gave a zero-knowledge proof of the digital-signature scheme proposed by Wang. The proposed scheme greatly reduces traffic and increases security.
Scheme of Lite and Tolerant Certification Authority for Wireless Mesh Network
GUO Ping,FU De-sheng,ZHU Jie-zhong and YUAN Cheng-sheng
Computer Science. 2013, 40 (12): 200-204. 
Abstract PDF(557KB) ( 382 )   
To solve the problem that complex public key cryptography is difficult to implement in resource-constrained wireless environments, a lite and tolerant CA (LT-CA) infrastructure was proposed, which combines a threshold mechanism with the idea of a lite CA (certification authority) and an elliptic curve cryptography (ECC) public key mechanism. Compared with a traditional certificate-based CA system, analysis shows that LT-CA reduces the complications of producing and verifying public keys by generating public/private keys more flexibly and conveniently, with the added benefit of being certificateless. Moreover, LT-CA's private key possesses intrusion tolerance without obviously increasing system computation or payloads, and LT-CA can effectively defend against attacks known to occur in wireless environments.
Research on Access Control Policy of Meteorological Cloud Data with Attribute-based Encryption
FANG Zhong-jin,XIA Zhi-hua and ZHOU Shu
Computer Science. 2013, 40 (12): 205-207. 
Abstract PDF(343KB) ( 348 )   
The problems of meteorological cloud data storage and sharing have become increasingly serious as the level of meteorological services increases. An access control model with multi-authority attribute-based encryption (ABE) was proposed for the authentication and access control problems of meteorological cloud data storage and sharing. An attribute-based encryption scheme suitable for big-data cloud environments is applied to solve the fine-grained data access control problems posed by the many classes of users in meteorological departments. The introduction of global IDs and a multi-authorization mechanism solves the access problem of different institutional users across different data storage departments. The system has high security and good practical value.
Modeling Research in Complex Network of Traditional Economic System
HE Xiao-qin and LU Yi-nan
Computer Science. 2013, 40 (12): 208-210. 
Abstract PDF(232KB) ( 577 )   
With the rapid development of information technology, how to protect information from theft, tampering, and destruction has become a hot issue. To address this question, this paper mainly researched a deciphering algorithm for the Vigenère cipher. We used Vigenère analysis to determine the key length, the keyword, and the plaintext, eventually obtaining the new Vigenère cipher deciphering algorithm.
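The abstract does not specify how the key length is determined; one standard technique for this first step is the index of coincidence (IC), sketched below. Column IC peaks near English-like values when the candidate length matches the true key period; this is an illustrative assumption, not necessarily the paper's method.

```python
from collections import Counter

def index_of_coincidence(s):
    """Probability that two randomly chosen characters of s are equal."""
    n = len(s)
    if n < 2:
        return 0.0
    counts = Counter(s)
    return sum(v * (v - 1) for v in counts.values()) / (n * (n - 1))

def guess_key_length(ciphertext, max_len=12):
    """For each candidate length L, split the ciphertext into L
    interleaved columns (each enciphered with one key letter) and
    average their ICs; the L with the highest average wins."""
    best_len, best_ic = 1, 0.0
    for L in range(1, max_len + 1):
        cols = [ciphertext[i::L] for i in range(L)]
        ic = sum(index_of_coincidence(col) for col in cols) / L
        if ic > best_ic:
            best_len, best_ic = L, ic
    return best_len
```

Once the key length is fixed, each column reduces to a Caesar cipher, so the keyword can be recovered column by column with frequency analysis.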
Design of Contradiction Structure for Dummy Method Insertion in Java Software Watermarking
LI Kui,CHEN Jian-ping,SHI Quan and LI Gui-sen
Computer Science. 2013, 40 (12): 211-214. 
Abstract PDF(343KB) ( 323 )   
Software watermarking is a software copyright protection technology that has appeared in recent years. It achieves copyright protection by embedding copyright information (a watermark) into a software product. This paper proposed a design method for the contradiction structure used for dummy method insertion in a bytecode-based Java software watermarking algorithm. The Java reflection mechanism is used to dynamically generate a random string of 0s and 1s. The string is then encoded and decoded using positive and inverse coding to obtain a string of all zeros. This string is used as the condition of the contradiction structure, which ensures that the dummy method will never be executed. The presented contradiction structure has good concealment and can resist various watermark attacks.
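The contradiction structure is essentially an opaque predicate: a condition that is provably always false but not obviously so. The paper works on Java bytecode with reflection; the Python sketch below only illustrates the encode-then-decode idea with assumed names.

```python
import random

def contradiction_guard():
    """Generate a random 0/1 string, encode it (invert every bit) and
    decode it (invert again); XOR-ing the original with the round-trip
    result yields all zeros, so the guarded branch never runs."""
    bits = "".join(random.choice("01") for _ in range(16))
    encoded = bits.translate(str.maketrans("01", "10"))   # positive coding
    decoded = encoded.translate(str.maketrans("01", "10"))  # inverse coding
    mask = "".join("1" if a != b else "0" for a, b in zip(bits, decoded))
    executed = False
    if "1" in mask:      # opaque predicate: always false at runtime
        executed = True  # the dummy (watermark-carrying) branch
    return executed
```

Because the string is generated at runtime, a static analyzer cannot trivially prove the branch dead, which is what gives the structure its concealment.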
Determination of Agile Development Iteration Order Based on UML
HU Wen-sheng,ZHAO Ming,YANG Jian-feng and LONG Shi-gong
Computer Science. 2013, 40 (12): 215-218. 
Abstract PDF(348KB) ( 307 )   
Determining the iteration order is a critical problem in agile development and the foundation of the agile development process. Much prior work addresses this area, but in many studies the iteration order is based solely on the value of functional groups, and an order based on a single indicator often has unexpected consequences. Moreover, the value of the functional groups is mostly determined qualitatively and is difficult to quantify. This paper proposed using UML (Unified Modeling Language) use case diagrams and sequence diagrams to calculate the use probability and risk degree of the use cases representing each system function, in order to solve the problem of iteration ordering in the agile development process. Based on the use probability and risk degree of the use cases, the iteration order can then be determined using probability and statistics together with a fuzzy decision-making method for concentrating opinions.
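A toy version of the ranking step: combine each use case's use probability and risk degree into one score and iterate on the highest-scoring cases first. The product combination and the field names are illustrative assumptions; the paper uses probability and statistics plus fuzzy decision-making rather than this simple product.

```python
def iteration_order(use_cases):
    """Rank use cases by use probability x risk degree (both in [0, 1]);
    higher combined score is scheduled in an earlier iteration."""
    return sorted(use_cases, key=lambda uc: uc["prob"] * uc["risk"],
                  reverse=True)

cases = [{"name": "login",  "prob": 0.9, "risk": 0.3},   # score 0.27
         {"name": "report", "prob": 0.4, "risk": 0.9},   # score 0.36
         {"name": "export", "prob": 0.2, "risk": 0.2}]   # score 0.04
```

Combining the two indicators avoids the single-indicator pitfall the abstract criticizes: a rarely used but high-risk function can still be pulled into an early iteration.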
Incremental Collaborative Filtering Algorithm Based on GridGIS
DI Jia-qi and WANG Ni-hong
Computer Science. 2013, 40 (12): 219-222. 
Abstract PDF(348KB) ( 331 )   
Wide application of spatial data requires an efficient framework to manage recommendation in order to increase the availability of spatial data. Grid geographic information system (GridGIS) supports rapid spatial data retrieval, allowing users to transparently access data at any time and in any place. The traditional similarity algorithm is mathematically rigorous but somewhat less useful in practice and lacks data support. Experiments prove that, on spatial data sets, the proposed algorithm has better prediction performance and operating efficiency than the traditional method.
Mining Maximal Frequent Item Sets with Improved Algorithm of FPMAX
NIU Xin-zheng and SHE Kun
Computer Science. 2013, 40 (12): 223-228. 
Abstract PDF(507KB) ( 1070 )   
Finding maximal frequent itemsets is an important issue in data mining research. The FPMAX algorithm, based on the FP-tree structure, has proved to be one of the high-performance algorithms for maximal frequent itemset mining. But for mining tasks on dense datasets, FPMAX constructs a large number of redundant conditional FP-trees. Moreover, if the number of frequent itemsets is large, the MFI-tree structure used for subset testing in FPMAX becomes quite big, decreasing the efficiency of subset testing. Therefore, this paper proposed the FPMAX-reduce algorithm to overcome these drawbacks. The novel algorithm uses a pruning technique based on the common suffix of transactions and greatly reduces the construction of redundant conditional FP-trees. Besides, when a newly constructed conditional FP-tree is small, FPMAX-reduce builds a corresponding conditional MFI-tree, with redundant information deleted, to improve the efficiency of subset testing in the following recursive calls. Experimental results show that FPMAX-reduce effectively improves the efficiency of FPMAX and outperforms many existing algorithms on dense datasets.
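To make the target of FPMAX concrete, here is a brute-force reference definition of maximal frequent itemsets: frequent itemsets with no frequent proper superset. This enumeration is only an illustration of what is computed; FPMAX and FPMAX-reduce obtain the same answer via the FP-tree and scale far better.

```python
from itertools import combinations

def maximal_frequent_itemsets(transactions, min_sup):
    """Enumerate all frequent itemsets, then keep those with no
    frequent proper superset (exponential; for small examples only)."""
    items = sorted({i for t in transactions for i in t})
    frequent = []
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            sup = sum(1 for t in transactions if set(cand) <= set(t))
            if sup >= min_sup:
                frequent.append(frozenset(cand))
    return [s for s in frequent if not any(s < t for t in frequent)]
```

The subset testing that dominates FPMAX's cost corresponds to the final maximality check here, which is why shrinking the MFI-tree pays off on dense data.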
Text Similarity Computing Based on Topic Model LDA
WANG Zhen-zhen,HE Ming and DU Yong-ping
Computer Science. 2013, 40 (12): 229-232. 
Abstract PDF(342KB) ( 1995 )   
Latent Dirichlet Allocation (LDA) is an unsupervised model that has shown superiority in latent topic modeling of text data in recent research. This paper presented a method that improves text similarity calculation by using the LDA model. The method models the corpus and each text with LDA, estimates parameters with the Gibbs sampling variant of MCMC, and represents word probabilities. It can mine the hidden relationships between topics and words in texts, obtain the topic distributions, and compute the similarity between texts. Finally, clustering experiments on the text similarity matrix were carried out to assess the clustering effect. Experimental results show that the method can effectively improve text similarity accuracy and clustering quality.
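Once each document has an LDA topic distribution, a similarity must be defined between the two distributions. A common choice (the paper's exact measure is not stated, so this is an assumption) is one minus the Jensen-Shannon divergence:

```python
import math

def js_similarity(p, q):
    """Similarity between two topic distributions p and q as
    1 - JS(p, q), where JS is the Jensen-Shannon divergence (in nats).
    Identical distributions score 1.0."""
    def kl(a, b):  # Kullback-Leibler divergence, skipping zero terms
        return sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 1.0 - 0.5 * (kl(p, m) + kl(q, m))
```

JS divergence is symmetric and bounded (by ln 2 in nats), which makes the resulting similarity matrix well suited for the clustering step the abstract describes.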
Receptor Editing and Immune Suppression Based Artificial Immune System
LI Gui-yang and GUO Tao
Computer Science. 2013, 40 (12): 233-238. 
Abstract PDF(593KB) ( 416 )   
The detectors in the artificial immune system model (ARTIS) have no active learning ability; the detection radius is difficult to set, and detection is slow in specific applications. Inspired by receptor editing and immune suppression in biological immune theory, a new model called REISAIS (Receptor Editing and Immune Suppression based Artificial Immune System) was proposed. The model gives detectors a certain degree of active learning ability through receptor editing in the tolerance and maturation stages, thereby improving the detection rate. The introduction of the immunosuppressive mechanism effectively controls the model's false alarm rate. In this paper, a formal description of the detector and suppressor was presented and the performance of the model was analyzed; the effectiveness of receptor editing for improving detection performance was also proved. Theoretical analysis and experimental results show that REISAIS achieves better detection performance than the ARTIS model without requiring a detection radius to be set.
Intelligent Fusion of Outer Inverse Packet Information and its Application of Attribute Disjunction
SHI Kai-quan and TANG Ji-hua
Computer Science. 2013, 40 (12): 239-242. 
Abstract PDF(327KB) ( 388 )   
Inverse packet sets are composed of an internal inverse packet set F and an outer inverse packet set; together they form a set pair, and this pair is an inverse packet set with dynamic characteristics. Based on outer inverse packet set theory and outer inverse packet reasoning, several concepts were proposed, such as the intelligent fusion generation of outer inverse packet information, its redundancy generation, and its measure. Then the intelligent fusion theorem and the intelligent fusion dependence and reduction theorem of outer inverse packet information were given. The feature and contraction theorem of attribute disjunction for the intelligent fusion of outer inverse packet information were put forward. The principle of discovering the intelligent fusion of unknown outer inverse packet information by disjunction and contraction of attributes was given, and an application was shown at the end. Inverse packet sets are a new theory and method for studying another type of dynamic information, one that has the feature of attribute disjunction.
Symbolic ADD Algorithms for Arc Consistency and Application in Constraint Satisfaction Problem Solving
WANG Teng-fei,XU Zhou-bo and GU Tian-long
Computer Science. 2013, 40 (12): 243-247. 
Abstract PDF(418KB) ( 1137 )   
The constraint satisfaction problem (CSP) is an important research topic in artificial intelligence, and arc consistency (AC) is an effective technique for improving CSP solving efficiency. A symbolic algebraic decision diagram (ADD) algorithm for arc consistency was proposed here to improve traditional arc consistency technology and was applied to CSP solving. ADD is used to compress the search space; it can process multiple constraints at a time, unlike the traditional algorithm, which deals with only one value pair of a constraint per step. Firstly, the CSP is described as a pseudo-Boolean function by 0-1 encoding and represented by ADDs. Secondly, based on the traditional arc consistency algorithm, the ADD operations of intersection, union, and abstraction are used to achieve constraint propagation and variable domain filtering. Finally, the symbolic ADD arc consistency algorithm is embedded in a backtracking (BT) search algorithm to solve the CSP. Experimental results on benchmark problems and randomly generated test cases show that its efficiency is higher than that of backjumping algorithms maintaining AC, such as MAC3+BJ and MAC2001+BJ, and its performance is better than that of the BT+MPAC and BT+MPAC* algorithms, which are based on AC preprocessing realized with traditional data structures.
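For contrast with the symbolic ADD version, the traditional value-by-value arc consistency baseline (classic AC-3) can be sketched as follows; this is the per-value-pair processing the paper replaces with ADD set operations.

```python
from collections import deque

def ac3(domains, constraints):
    """Classic AC-3. domains: variable -> set of values (mutated in
    place); constraints: (x, y) -> predicate(vx, vy) that is True when
    the pair of values is allowed. Returns False on a domain wipe-out."""
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        # values of x with no support in y's domain
        removed = {vx for vx in domains[x]
                   if not any(pred(vx, vy) for vy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False          # inconsistent CSP
            # domains[x] shrank: re-examine arcs pointing at x
            queue.extend(a for a in constraints if a[1] == x)
    return True
```

The inner `any(...)` loop over value pairs is exactly the step the symbolic algorithm batches: an ADD intersection filters a whole domain against a whole constraint at once.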
Hopf Bifurcation Analysis and Computer Simulation of Cell Calcium Oscillation Model
ZUO Hong-kun,JI Quan-bao and ZHOU Yi
Computer Science. 2013, 40 (12): 248-250. 
Abstract PDF(303KB) ( 339 )   
The bifurcation mechanisms of the Borghans-Dupont model of calcium oscillation were investigated. Applying centre manifold and bifurcation theory, a theoretical analysis of bifurcation in this model was first performed. The results not only exhibit the Hopf bifurcation but also show that the supercritical and subcritical Hopf bifurcations play a great role in the calcium oscillations. Computer simulations, including the bifurcation diagram of fixed points, the bifurcation diagram of the system in two-dimensional parameter space, and time series, were plotted to illustrate the correctness of the theoretical and dynamical analysis.
Optimal Tracking Control of Population Transfer in Open Quantum Systems
HUANG Ze-xia,HUANG De-cai and YU You-hong
Computer Science. 2013, 40 (12): 251-253. 
Abstract PDF(237KB) ( 405 )   
In order to control open quantum systems with dissipation, we simplified the treatment of open quantum system dynamics using the Liouville superoperator form. For this system, we proposed an efficient, monotonically convergent optimal tracking control method based on a special performance indicator designed from optimal control theory and the time-dependent density. This method can drive the time-dependent density along a given time-dependent trajectory in real space and control the time-dependent occupation numbers. We simulated the control process in MATLAB and analyzed the influence of different penalty factors on system performance.
Study and Application of Evaluating Methods of PPI Network Clustering
YOU Meng-li and LEI Xiu-juan
Computer Science. 2013, 40 (12): 254-258. 
Abstract PDF(463KB) ( 601 )   
Research on evaluating clustering results for PPI (protein-protein interaction) networks is key to assessing the function modules detected in PPI networks. Four typical methods for evaluating PPI network clusters were introduced and analyzed in this paper: the p-value, matching statistics, the f-measure based on recall and precision, and the hf-measure based on hierarchical structure. In addition, considering the similarity between the main error classes and the predicted cluster, a new penalty function and a new Sf-measure evaluation method were put forward. The simulation results show the features of the various evaluation methods and the rationality and effectiveness of the Sf-measure method.
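Of the four measures listed, the recall/precision-based f-measure is the simplest to state; a minimal per-cluster sketch (matching each predicted cluster against reference complexes and keeping the best harmonic mean) is shown below with illustrative data.

```python
def f_measure(cluster, complexes):
    """Best F-score of a predicted cluster (a set of proteins) against
    a list of reference complexes (sets of proteins)."""
    best = 0.0
    for ref in complexes:
        inter = len(cluster & ref)
        if inter == 0:
            continue
        precision = inter / len(cluster)  # fraction of cluster that is correct
        recall = inter / len(ref)         # fraction of complex recovered
        best = max(best, 2 * precision * recall / (precision + recall))
    return best
```

The Sf-measure proposed in the paper refines this kind of score with a penalty function for near-miss classifications; the sketch above covers only the plain f-measure baseline.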
Research on Restricted Boltzmann Machines Recommendation Algorithm Based on Cloud Computing
ZHENG Zhi-yun,LI Bu-yuan,LI Lun and LI Dun
Computer Science. 2013, 40 (12): 259-263. 
Abstract PDF(409KB) ( 478 )   
Coupled with the exponential expansion of data, the high computational complexity of Restricted Boltzmann Machines makes their efficient computation an important issue. Based on a detailed analysis, this article introduced the Hadoop platform into Restricted Boltzmann Machines and proposed a Restricted Boltzmann Machine recommendation algorithm on the cloud platform. The algorithm solves the data correlation problem with a replication mechanism and divides the traditional Restricted Boltzmann Machine process into several Hadoop jobs that implement parallel computing. In the experiments, a comparative analysis between the Hadoop platform implementation and the previous implementation draws the conclusion that the Hadoop platform efficiently improves Restricted Boltzmann Machine computation on large data sets.
Extraction Algorithm Based on Semantic Expansion Integrated with Lexical Chain
LIU Duan-yang and WANG Liang-fang
Computer Science. 2013, 40 (12): 264-269. 
Abstract PDF(637KB) ( 542 )   
To address the difficulties that affect the quality of keyword extraction, such as polysemy, synonyms, and the accurate and comprehensive expression of a text's subjects, a semantics-based keyword extraction method named KESELC was proposed. Semantic similarity and semantic relevancy are calculated based on the TongYiCi CiLin thesaurus and statistical information, and the concept of semantic expansion and its calculation method are proposed. By combining semantic expansion with lexical chains, the method handles text preprocessing, polysemy disambiguation, synonym merging, lexical chain construction, feature selection, and improved weight computation. The extracted keywords not only avoid redundant expression but also cover the subjects of the article accurately and comprehensively. The experimental results show that keyword extraction based on KESELC performs better than methods based on TFIDF and lexical chains, and has practical value.
Video Multi-semantic Annotation Algorithm Based on Feedback Fuzzy Graph Theory
ZHU Yu-guang,YAN Ting,ZHANG Jian-ming,YANG Xiong and HU Wei-li
Computer Science. 2013, 40 (12): 270-275. 
Abstract PDF(813KB) ( 346 )   
To bridge the semantic gap between video low-level features and high-level semantic concepts in semantic-based video retrieval systems, a video multi-semantic annotation algorithm based on feedback fuzzy graph theory was proposed. First, a training set that covers most of the temporal and spatial distribution of the whole data is constructed, achieving satisfactory performance even with a training set of limited size. Secondly, fuzzy operators are applied to graph theory to achieve fuzzy reasoning using fuzzy semantics. Last, to provide feedback for video annotation, some samples from the testing set that have been annotated are selected and added to the training set. Experimental results indicate that the feedback fuzzy graph not only establishes the relationships between semantic concepts well, but also improves annotation precision and shows good performance.
Bi-directional Blocking Job Shop Model and Particle Swarm Optimization Algorithm for Train Scheduling Problem on Single-track Lines
ZHANG Qi-liang and CHEN Yong-sheng
Computer Science. 2013, 40 (12): 276-281. 
Abstract PDF(0KB) ( 206 )   
References | RelatedCitation | Metrics
According to the characteristics of the train scheduling problem on single-track lines, a bi-directional blocking job shop scheduling (BDBJSS) model was built to describe its solution space, and a discrete particle swarm optimization (DPSO) algorithm was proposed to solve it with the objective of minimizing makespan. Based on the BDBJSS model, a permutation-based encoding scheme was designed in the DPSO algorithm for arranging train sequences, and a random strategy and a shortest-processing-time-first strategy were put forward for selecting running tracks. Train conflict detection and resolution methods were also incorporated, so that trains are scheduled by the steps "scheduling, detecting conflicts, resolving conflicts". Finally, the DPSO algorithm performs global optimization to obtain the best solution. Experimental results show that the model and algorithm solve the single-track train scheduling problem effectively.
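The "scheduling, detecting conflicts, resolving conflicts" step can be sketched for a single segment as follows. This is a simplified illustration, not the paper's BDBJSS model: the trains, times, and the delay-the-later-train rule are invented, and a real resolver would also consider direction and passing loops.

```python
# Illustrative sketch: detect overlapping occupation intervals on one
# single-track segment and resolve each conflict by delaying the later train.

def overlaps(a, b):
    """True if two (entry, exit) intervals overlap on the segment."""
    return a[0] < b[1] and b[0] < a[1]

def resolve(schedule):
    """schedule: list of (train, entry, exit) requests for one segment.
    Returns a conflict-free schedule, delaying trains until the segment is free."""
    schedule = sorted(schedule, key=lambda t: t[1])  # earliest entry first
    fixed, free_at = [], 0
    for train, entry, exit_ in schedule:
        dur = exit_ - entry
        start = max(entry, free_at)     # wait until the segment clears
        fixed.append((train, start, start + dur))
        free_at = start + dur
    return fixed

plan = [("down-1", 0, 10), ("up-2", 5, 12)]
print(resolve(plan))  # up-2 is delayed until down-1 clears the segment
```

In the paper's setting this check runs inside the DPSO decoding loop, so each particle's permutation is turned into a feasible timetable before its makespan is evaluated.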
Research of Word Sense Disambiguation Based on Combination of Rules and Statistics
MIAO Hai and ZHANG Yang-sen
Computer Science. 2013, 40 (12): 282-286. 
Abstract PDF(0KB) ( 201 )   
References | RelatedCitation | Metrics
In this paper, various structured knowledge dictionaries were analyzed with respect to computability and computational complexity. The Grammatical Knowledge-base of Contemporary Chinese and the Modern Chinese Semantic Dictionary, both from the Institute of Computational Linguistics of Peking University, were chosen as the knowledge sources. A fusion method for heterogeneous sources was considered, an agile rule knowledge base and a lexical collocation library were constructed, and a word sense disambiguation method combining rules and statistics was designed. Among several word sense disambiguation methods, the combination of maximum entropy and rules achieves the highest accuracy. Compared with the best result in SemEval-2007 (task #5), the micro-average accuracy (MicroAve) and macro-average accuracy (MacroAve) are improved by 5.5% and 0.9%, respectively.
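The rules-plus-statistics combination can be illustrated by letting a high-precision collocation rule fire first and falling back to a statistical choice otherwise. The rules, senses, and the `PRIORS` table below are invented toy stand-ins; in particular the table is a placeholder for the trained maximum-entropy model the paper actually uses.

```python
# Illustrative sketch: rule layer first (exact collocations, high precision),
# statistical layer as fallback when no rule matches the context.

RULES = {  # (ambiguous word, collocated neighbour) -> sense
    ("bank", "river"): "shore",
    ("bank", "money"): "institution",
}
PRIORS = {"bank": "institution"}  # toy stand-in for a max-entropy model

def disambiguate(word, context):
    for neighbor in context:
        if (word, neighbor) in RULES:   # rule layer
            return RULES[(word, neighbor)]
    return PRIORS.get(word)             # statistical fallback

print(disambiguate("bank", ["the", "river", "bed"]))  # rule fires: "shore"
print(disambiguate("bank", ["opened", "account"]))    # fallback prior
```

Ordering the layers this way keeps the rules' precision while letting the statistical model supply coverage for contexts no rule anticipates.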
Emotion Classification Algorithm Based on Blink Frequency Detection and Bayesian Network in Affective Learning
TAO Xiao-mei and NIU Qin-zhou
Computer Science. 2013, 40 (12): 287-291. 
Abstract PDF(0KB) ( 183 )   
References | RelatedCitation | Metrics
This paper explored an emotional state classification algorithm that detects the learner's blink frequency during the learning process. The algorithm classifies the learner's emotional state as positive or negative, and the specific negative emotion category is then inferred by a Bayesian network whose parameters cover the student's personal information and the learning-context information. An e-learning system using instructional videos as the core learning material was developed to validate the classification algorithm.
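The two-stage idea, threshold on blink frequency, then Bayesian inference over negative categories, can be sketched as below. The threshold, the two categories, and every probability are invented for illustration; they are not the paper's trained network.

```python
# Illustrative sketch: stage 1 thresholds blink frequency into positive/negative;
# stage 2 picks the most likely negative category by a tiny Bayes computation.

BLINK_THRESHOLD = 20  # blinks/min above which the state is called negative (assumed)

PRIOR = {"boredom": 0.4, "frustration": 0.6}        # invented priors
LIKELIHOOD = {"boredom": 0.2, "frustration": 0.8}   # P(repeated_errors | category), invented

def classify(blinks_per_min, repeated_errors):
    if blinks_per_min <= BLINK_THRESHOLD:
        return "positive"
    # Bayes: posterior is proportional to prior * likelihood of the observed context
    post = {c: PRIOR[c] * (LIKELIHOOD[c] if repeated_errors else 1 - LIKELIHOOD[c])
            for c in PRIOR}
    return max(post, key=post.get)

print(classify(10, False))  # positive
print(classify(30, True))   # frustration (0.6 * 0.8 beats 0.4 * 0.2)
```

A full Bayesian network would condition on several context variables jointly; the single binary feature here just shows the shape of the inference.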
Directional Mining Continuous Domain Ant Colony Algorithm
LIU Wen
Computer Science. 2013, 40 (12): 292-294. 
Abstract PDF(0KB) ( 205 )   
References | RelatedCitation | Metrics
An improved ant colony algorithm for continuous-domain optimization was proposed to address the high complexity and large number of iterations that the ant colony algorithm requires when solving continuous-domain optimization problems. The improved algorithm achieves rapid global search by directionally mining the solution space. This paper presented the simulation steps of the new algorithm and carried out comparative simulation experiments against the continuous-domain ant colony algorithm and other intelligent optimization methods. The test results show that the improved algorithm has excellent global optimization quality and a much improved convergence rate.
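A continuous-domain ant colony step can be sketched in the ACO_R style: each ant samples a new solution from a Gaussian centred on a good stored solution, which biases ("directionally mines") the search toward promising regions. This is a generic sketch of that family of methods, not the paper's improved algorithm; the objective function and all parameters are invented.

```python
# Illustrative sketch of a continuous-domain ant colony iteration:
# ants sample around the best archived solution, then the archive keeps
# only the best solutions, pulling the search toward the optimum.
import random

def objective(x):
    return (x - 3.0) ** 2  # toy function with its minimum at x = 3

def aco_step(archive, sigma, n_ants, rng):
    """archive: list of candidate solutions sorted best-first by objective."""
    for _ in range(n_ants):
        guide = archive[0]                  # bias sampling toward the best solution
        archive.append(rng.gauss(guide, sigma))
    archive.sort(key=objective)
    return archive[: len(archive) - n_ants]  # keep the archive size constant

rng = random.Random(42)
archive = sorted([rng.uniform(-10, 10) for _ in range(5)], key=objective)
for _ in range(50):
    archive = aco_step(archive, sigma=0.5, n_ants=5, rng=rng)
print(round(archive[0], 2))  # close to 3.0
```

The Gaussian width `sigma` plays the role of pheromone concentration: shrinking it over time would trade exploration for convergence speed.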
Parallel Processing Research on SIFT Feature Matching Algorithm Based on GPU
JIANG Chao,GENG Ze-xun,LOU Bo,WEI Xiao-feng and SHEN Chen
Computer Science. 2013, 40 (12): 295-297. 
Abstract PDF(0KB) ( 200 )   
References | RelatedCitation | Metrics
The SIFT algorithm is invariant to rotation, scale, and translation, so it is widely used in image matching and 3D reconstruction. However, the algorithm is computationally complex, which makes processing slow and hard to meet applications with strict real-time requirements. Based on an analysis of the principle of SIFT, and in view of the large number of extracted features and the high dimensionality of the feature vectors, we restructured the algorithm for parallel processing to take advantage of modern graphics hardware, and compared it with CPU implementations of SIFT. Experiments demonstrate that the GPU-based SIFT algorithm significantly increases efficiency, achieving an average speedup of more than ten times.
Fast Scene Matching Algorithm Resistant to Image Blur
FU Yan-jun,ZHANG Xiao-yan and SUN Kai-feng
Computer Science. 2013, 40 (12): 298-300. 
Abstract PDF(0KB) ( 187 )   
References | RelatedCitation | Metrics
Since real images are often degraded, blur-invariant moments are used for scene matching. To address the heavy computation of the matching algorithm, two measures were adopted: simplifying the computation of the matched features and optimizing the search policy. For the computation of the blur-invariant moments, an efficient method, applicable only within the matching process, was proposed on the basis of 21 sum-tables established beforehand. For the search policy, considering that blur-invariant moments are sensitive to image resolution, a genetic algorithm was adopted as the search method, performed over the original reference image. Experiments show that with the real image blurred and noise added, the proposed matching algorithm achieves good matching precision; furthermore, its running time is several orders of magnitude lower than that of traditional moment-based matching methods, meeting the navigation system's real-time requirements.
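A sum-table is a 2-D prefix-sum (summed-area) table: once built, the sum of any rectangular window costs O(1), which is why precomputing 21 of them makes per-window moment evaluation cheap. The sketch below shows the structure itself on an invented toy image; the paper's 21 tables would hold products like x^p y^q I(x, y) rather than raw intensities.

```python
# Illustrative sketch: build a 2-D prefix-sum table, then answer any
# rectangular window sum in O(1) instead of O(window area).

def build_sum_table(img):
    h, w = len(img), len(img[0])
    S = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            # inclusion-exclusion over the three already-filled neighbours
            S[y + 1][x + 1] = img[y][x] + S[y][x + 1] + S[y + 1][x] - S[y][x]
    return S

def region_sum(S, x0, y0, x1, y1):
    """Sum of img[y0:y1][x0:x1], four table lookups."""
    return S[y1][x1] - S[y0][x1] - S[y1][x0] + S[y0][x0]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
S = build_sum_table(img)
print(region_sum(S, 0, 0, 2, 2))  # 1 + 2 + 4 + 5 = 12
```

With tables of x^p y^q-weighted sums, each geometric moment of a candidate window reduces to the same four-lookup formula, which is what makes the genetic search over the reference image fast.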
Research on Physically Based Simulation of Fluid Movement
GUO Li,QIN Pei-yu and CHEN Chuan-bo
Computer Science. 2013, 40 (12): 301-303. 
Abstract PDF(0KB) ( 187 )   
References | RelatedCitation | Metrics
A physically based algorithm for fluid movement simulation was proposed. Compared with traditional simulation techniques, physically based simulation represents realistic movement better. The full Navier-Stokes equations are used as the model, and the operator splitting method is employed to split the solution into an external force term, an advection term, a diffusion term, and a projection term. Every step is stable, so the whole process is stable, and therefore a large time step can be taken in the fluid movement simulation.
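The splitting described above can be written out in a standard form consistent with the abstract (the four terms applied in sequence each time step, in the style of stable-fluids solvers):

```latex
\frac{\partial \mathbf{u}}{\partial t}
  = -(\mathbf{u}\cdot\nabla)\mathbf{u} + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad \nabla\cdot\mathbf{u} = 0,
```

split per time step $\Delta t$ into

```latex
\mathbf{w}_1 = \mathbf{u}^{n} + \Delta t\,\mathbf{f}
  \quad\text{(external force)},\qquad
\mathbf{w}_2(\mathbf{x}) = \mathbf{w}_1\bigl(\mathbf{x} - \Delta t\,\mathbf{w}_1(\mathbf{x})\bigr)
  \quad\text{(advection)},
```
```latex
(\mathbf{I} - \nu\,\Delta t\,\nabla^{2})\,\mathbf{w}_3 = \mathbf{w}_2
  \quad\text{(diffusion)},\qquad
\mathbf{u}^{n+1} = \mathbf{w}_3 - \nabla p,\;\; \nabla^{2} p = \nabla\cdot\mathbf{w}_3
  \quad\text{(projection)}.
```

Because the advection step traces characteristics backward and the diffusion step is implicit, each sub-step is unconditionally stable, which is what permits the large time step mentioned in the abstract.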
Research on Algorithm of Moving Target Detection and Tracking Based on MB-LBP Feature Extraction and Particle Filter
QU Zhong,ZHANG Kang and QIAO Gao-yuan
Computer Science. 2013, 40 (12): 304-307. 
Abstract PDF(0KB) ( 182 )   
References | RelatedCitation | Metrics
In complex environments, the high density and random movement of pedestrians make moving targets difficult to detect and track, which leads to counting errors. An algorithm for detecting and tracking moving targets that combines MB-LBP (Multi-scale Block Local Binary Pattern) feature extraction with a particle filter was proposed in this paper. First, AdaBoost is used to train a classifier on the extracted MB-LBP features, detect head targets, and remove some false detections based on the size range of head targets. Second, the original particle filter algorithm is improved to predict and track the moving targets. Finally, the tracked moving targets are counted. Experiments show that the algorithm effectively detects and tracks multiple moving targets in complex environments, and counts pedestrians in video frames accurately and rapidly.
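The MB-LBP descriptor itself is simple to state: average each block in a 3x3 grid of blocks, then threshold the eight neighbour-block means against the centre-block mean to form an 8-bit code. The sketch below shows that computation on an invented toy image; block size and the bit ordering are illustrative choices, not the paper's exact configuration.

```python
# Illustrative sketch of MB-LBP: compare 8 neighbour-block means against the
# centre-block mean over a 3x3 grid of s-by-s blocks, yielding an 8-bit code.

def block_mean(img, x, y, s):
    return sum(img[y + j][x + i] for j in range(s) for i in range(s)) / (s * s)

def mb_lbp(img, x, y, s):
    """8-bit code for the 3x3 block grid whose top-left corner is (x, y)."""
    center = block_mean(img, x + s, y + s, s)
    # neighbour blocks in clockwise order starting at the top-left block
    offsets = [(0, 0), (s, 0), (2 * s, 0), (2 * s, s),
               (2 * s, 2 * s), (s, 2 * s), (0, 2 * s), (0, s)]
    code = 0
    for bit, (dx, dy) in enumerate(offsets):
        if block_mean(img, x + dx, y + dy, s) >= center:
            code |= 1 << bit
    return code

img = [[0, 0, 9, 9, 0, 0],
       [0, 0, 9, 9, 0, 0],
       [9, 9, 5, 5, 9, 9],
       [9, 9, 5, 5, 9, 9],
       [0, 0, 9, 9, 0, 0],
       [0, 0, 9, 9, 0, 0]]
print(mb_lbp(img, 0, 0, 2))  # cross pattern -> bits 1, 3, 5, 7 set -> 170
```

Averaging over blocks rather than single pixels is what makes MB-LBP more robust to noise than plain LBP, and varying the block size `s` gives the multi-scale family used for the AdaBoost training.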
Close and High Precision Fixed Positioning Communication System for Car
FAN Yong-sheng,SUN Lu-nan and YU Hong-ying
Computer Science. 2013, 40 (12): 308-311. 
Abstract PDF(0KB) ( 185 )   
RelatedCitation | Metrics
Because of the limited precision of GPS-based communication between nearby cars, this paper designed a close-range high-precision positioning auxiliary system and implemented efficient real-time communication among nearby cars. Ultrasonic technology is used to measure the distance between cars and to calculate their relative angle. These data are bound to the device communication address through an effective communication initialization algorithm and a dynamic update algorithm, generating complete node information. The node information is processed to build a real-time 2D plan of the environment around the car, achieving selective communication between nearby cars.
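The ultrasonic ranging and relative-angle calculation can be sketched with textbook formulas: distance from round-trip time of flight, and bearing from the path difference between two receivers a known baseline apart. These are standard relations chosen for illustration; the speed of sound, baseline, and the far-field approximation are assumptions, not the paper's hardware design.

```python
# Illustrative sketch: ultrasonic distance from round-trip time, and relative
# bearing of a neighbouring car from two receivers on a known baseline.
import math

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C (assumed)

def distance(round_trip_s):
    """Range in metres: the pulse travels out and back, hence the factor 2."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

def bearing(d_left, d_right, baseline):
    """Approximate bearing (radians) from the receivers' path difference;
    valid when the target is far relative to the baseline (far field)."""
    return math.asin((d_right - d_left) / baseline)

d = distance(0.01)            # 10 ms round trip -> 1.715 m
ang = bearing(2.0, 2.0, 0.5)  # equal distances -> target straight ahead
print(round(d, 3), ang)
```

With distance and bearing per neighbour, each node's polar coordinates can be placed on the 2D plan the abstract describes, and the communication address bound to that position.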