Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 42 Issue Z6, 14 November 2018
Concurrency Control Algorithm Based on Dynamic Decision
CHEN Yi-rui and ZHUANG Yi
Computer Science. 2015, 42 (Z6): 1-4, 28. 
The concurrency control algorithm guarantees the isolation and consistency of transactions when multiple users access the same data in a database simultaneously. This paper proposed an adaptive dynamic decision concurrency control (DDCC) algorithm to solve the problem of poor adaptability in existing concurrency control algorithms. The algorithm divides its concurrency control process into two phases: the execution authorizing phase and the strategy selecting phase. In the execution authorizing phase, the algorithm compares the effectiveness of transactions to determine the execution order of conflicting transactions. In the strategy selecting phase, it then dynamically selects an optimistic or pessimistic conflict resolution strategy according to the transactions' read/write status and the current conflict rate. The DDCC algorithm is highly efficient whether the system is busy or idle. Performance tests show that the proposed DDCC algorithm is superior to the classical strict two-phase locking algorithm and the hybrid concurrency control algorithm.
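The strategy selecting phase can be pictured with a minimal sketch: pick an optimistic or pessimistic resolution strategy from the observed conflict rate and the transaction's read/write status. The Transaction fields and the threshold value below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of dynamic strategy selection; the threshold is assumed.
from dataclasses import dataclass

@dataclass
class Transaction:
    txn_id: int
    reads: set
    writes: set

def select_strategy(conflict_rate: float, txn: Transaction,
                    threshold: float = 0.3) -> str:
    """Choose a conflict resolution strategy for one transaction."""
    if not txn.writes:                 # read-only transactions rarely conflict
        return "optimistic"
    if conflict_rate < threshold:      # idle system: validate at commit time
        return "optimistic"
    return "pessimistic"               # busy system: lock before writing

t = Transaction(1, reads={"x"}, writes={"y"})
print(select_strategy(conflict_rate=0.5, txn=t))   # -> pessimistic
```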
Approach for Urban Popular Event Detection Using Mobile Crowdsourced Data
ZHANG Jia-fan, GUO Bin, LU Xin-jiang, YU Zhi-wen and ZHOU Xing-she
Computer Science. 2015, 42 (Z6): 5-9, 37. 
This paper proposed an urban popular event detection and classification approach using crowdsourced data from Sina Weibo. The detected events can be categorized into physical events or virtual events, which can serve different applications. Our approach first extracts hot words from crowd posts according to word frequency characteristics. With the context of the hot words, hierarchical clustering is then used to obtain descriptions of popular events. By analyzing three proposed features, namely lexical entropy, temporal dynamics and content originality, we applied various methods to perform event classification. The experimental results indicate that all the classification methods achieve higher precision under our approach.
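As a toy illustration of the clustering step, the sketch below groups hot-word context vectors with hierarchical clustering; the feature dimensionality, linkage method and cut threshold are assumptions for demonstration, not the paper's settings.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
hot_word_vectors = rng.random((8, 16))    # 8 hot words, 16-dim context features
Z = linkage(hot_word_vectors, method="average", metric="cosine")
labels = fcluster(Z, t=0.6, criterion="distance")   # cut the dendrogram
print(labels)   # hot words sharing a label describe one candidate event
```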
Improved Combination Approach for Bodies of Evidence Containing Non-singleton Evidence
LIU Zhe-xi, HONG Chun-zhe, YANG Jian-hong and YANG De-bin
Computer Science. 2015, 42 (Z6): 10-12, 15. 
Among existing combination methods for conflicting evidence, if the bodies of evidence contain non-singleton evidence and there is conflict between them, these methods have limitations when applying Dempster's rule in data fusion. To solve this problem, an improved combination method for conflicting evidence based on a modified pignistic probability distance was proposed. In the improved method, firstly, the conflict level and the degree of similarity between bodies of evidence are represented by the modified pignistic probability distances among them. Secondly, the weight coefficients are determined according to the support degree among bodies of evidence. Thirdly, the basic probability assignments of the bodies of evidence are revised by the weight coefficients, based on the idea of the discount rate. Finally, the ultimate combination results are obtained from the revised bodies of evidence by directly applying Dempster's rule. Analysis of numerical examples shows that the proposed combination method has better applicability and effectiveness than existing methods.
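For reference, here is a compact sketch of the final step only, Dempster's rule for two basic probability assignments (BPAs) whose focal elements may be non-singleton sets; the pignistic-distance weighting described above is not shown.

```python
def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two BPAs given as {frozenset focal element: mass}."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb            # mass falling on the empty set
    k = 1.0 - conflict                         # normalization constant
    return {s: v / k for s, v in combined.items()}

AB = frozenset({"A", "B"})
m1 = {frozenset({"A"}): 0.6, AB: 0.4}          # contains non-singleton evidence
m2 = {frozenset({"B"}): 0.3, AB: 0.7}
print(dempster_combine(m1, m2))
```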
Variable Window Stereo Matching Based on Phase Congruency
GUO Long-yuan, SUN Chang-yin, ZHANG Guo-yun and WU Jian-hui
Computer Science. 2015, 42 (Z6): 13-15. 
The major challenge in area-based stereo matching algorithms is to find an appropriate window size and shape. Phase congruency is robust to noise and can reflect gray-level changes. Exploiting these benefits, this paper first detects the image and determines pixel characteristics according to phase congruency. Then, different windows are used for matching according to the different feature pixels. After applying a cost function that combines a non-parametric measure with gray value, the final disparity map is obtained. The experimental results indicate that the algorithm can generate a more accurate disparity map.
Real-time Visual Inspection Method to Detect Prepreg Edge Straight Based on Line Fitting
NI Jin-hui, XIAO Jun and WEN Li-wei
Computer Science. 2015, 42 (Z6): 16-19, 23. 
Using real-time visual inspection to identify prepreg surface defects is a new quality control method. In order to detect the width variation of prepreg accurately, the prepreg edge needs to be transformed from image space into parameter space. To overcome the large computational cost of the traditional Hough transform, this paper, following the "many-to-one" mapping principle and combining it with prepreg image features, proposed a prepreg edge line detection method based on linear fitting. Through rough fitting, error compensation, noise filtering and precise fitting, the algorithm ensures inspection accuracy and greatly improves detection efficiency. When Gaussian noise and impulse noise are added, the accuracy of this algorithm is almost the same as that of the Hough transform, while the operation speed is increased about tenfold.
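A simplified sketch of the rough-fit / noise-filter / precise-fit pipeline might look like the following, with numpy.polyfit standing in for the paper's fitting procedure and the outlier threshold chosen arbitrarily.

```python
import numpy as np

def fit_edge_line(xs, ys, sigma_factor=2.0):
    k, b = np.polyfit(xs, ys, deg=1)          # rough fit on all edge pixels
    residuals = ys - (k * xs + b)
    keep = np.abs(residuals) < sigma_factor * residuals.std()  # drop outliers
    return np.polyfit(xs[keep], ys[keep], deg=1)               # precise fit

xs = np.arange(100, dtype=float)
ys = 0.5 * xs + 3 + np.random.normal(0, 0.2, 100)
ys[[10, 50]] += 15                            # impulse-like noise on the edge
print(fit_edge_line(xs, ys))                  # close to (0.5, 3)
```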
Research of Live Migration of Virtual Machines Selection Strategy Optimization Problems Based on Modified Particle Swarm Optimization
WU Xing-yu, SUN Lei, HU Cui-yun and SUN Rui-chen
Computer Science. 2015, 42 (Z6): 20-23. 
Owing to its easy implementation, high accuracy and fast convergence, the particle swarm optimization algorithm is considered advantageous for solving multi-objective optimization problems. Virtual machine migration selection policies are defined based on the definition of matching distance and the theory of the particle swarm optimization algorithm. Moreover, the particle swarm optimization algorithm is improved by introducing the idea of an avoid list: servers that do not have enough remaining capacity to meet the demand of the virtual machine are added to the avoid list. This prevents multiple virtual machines belonging to Pareto-optimal solutions from migrating to the same server and pushing its resource usage beyond the node's maximum resource limit. Simulation experiments were done on the CloudSim platform; compared with the basic particle swarm optimization algorithm, our algorithm was shown to converge and make selections faster.
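The avoid-list idea can be sketched as a feasibility filter applied before the swarm evaluates placements; the two-resource (CPU/RAM) server model below is an illustrative assumption.

```python
def build_avoid_list(servers, vm_demand):
    """Return ids of servers that cannot host the migrating VM."""
    return {
        s["id"] for s in servers
        if s["cpu_free"] < vm_demand["cpu"] or s["ram_free"] < vm_demand["ram"]
    }

servers = [
    {"id": "s1", "cpu_free": 4, "ram_free": 8},
    {"id": "s2", "cpu_free": 1, "ram_free": 2},
]
vm = {"cpu": 2, "ram": 4}
avoid = build_avoid_list(servers, vm)
candidates = [s for s in servers if s["id"] not in avoid]  # swarm searches these
print([s["id"] for s in candidates])                       # -> ['s1']
```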
Fuzzy Reasoning Triple I Sustaining Method Based on Family of Implication Operator L-λ-Π
SHUANG Jing-ning, HUI Xiao-jing and HE Jin-rui
Computer Science. 2015, 42 (Z6): 24-28. 
A new family of implication operators, L-λ-Π, was given, and it was shown to be a generalized form of the Łukasiewicz implication operator and the Goguen implication operator. Based on the family of implication operators L-λ-Π, the fuzzy reasoning triple I sustaining methods and the α-triple I sustaining methods for the FMP and FMT models were discussed and proved.
Simulation of Queuing System with Part of Preparation Period
LU Xi and CAO Ju
Computer Science. 2015, 42 (Z6): 29-32. 
In a queuing system with a partial preparation period, a fraction of the customers, with probability p, have a preparation period between arrival and the start of service, while the others start service directly. A program was written to simulate this problem. To simulate different systems, the number of service counters, the parameter values and the distribution obeyed by the preparation period were adjusted. The research results were obtained through analysis of the performance indices and comparison of the queuing graphs. The overall results show that the preparation period affects the system and aggravates the degree of congestion.
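A minimal single-server version of such a simulation could look like the sketch below, where each customer independently has a preparation period with probability p; the exponential arrival, service and preparation distributions are assumptions for illustration.

```python
import random

def simulate(n_customers=10000, lam=1.0, mu=1.2, prep_rate=2.0, p=0.5):
    t_arrival, server_free, total_wait = 0.0, 0.0, 0.0
    for _ in range(n_customers):
        t_arrival += random.expovariate(lam)
        ready = t_arrival
        if random.random() < p:                 # preparation period occurs
            ready += random.expovariate(prep_rate)
        start = max(ready, server_free)         # wait for server if busy
        total_wait += start - t_arrival
        server_free = start + random.expovariate(mu)
    return total_wait / n_customers

print(f"mean delay with p=0.5: {simulate():.3f}")
print(f"mean delay with p=0.0: {simulate(p=0.0):.3f}")  # no preparation
```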
Grammatical Bees Algorithm for Classification Problem
LIU Kun-qi, ZHOU Chong and WU Zhi-jian
Computer Science. 2015, 42 (Z6): 33-37. 
The Bees Algorithm (BA) and Grammatical Evolution (GE) are two well-known evolutionary algorithms. BA for classification problems shows fast convergence, but its individual coding is complicated. The operators of GE for classification problems are simple, comprising only crossover and mutation, but its classification accuracy is not high. In view of the strengths and weaknesses of the two algorithms, a new algorithm named the Grammatical Bees Algorithm (GBA), combining BA and GE, was proposed to solve classification problems. Experiments on several benchmark data sets demonstrate the feasibility and effectiveness of GBA. Compared with gene expression programming (GEP) and improved GEP, GBA achieves better classification accuracy and faster convergence.
Study and Implementation on Cache Algorithms
QIAN Pei-jie, WU Juan and GAO Cheng-ying
Computer Science. 2015, 42 (Z6): 38-44. 
In a video-on-demand system, the speed at which users access videos is key to improving the user experience. This speed depends on server response speed, network transmission and other factors, among which server response speed is the main one. As an important technique for improving user access speed in video-on-demand systems, caching attracts much attention in both industry and academia. This paper focused on the comparison and analysis of LRU, LFU, LRFU, SC and other classic algorithms. Both simulated data and actual data were used in the experiments, and the actual performance of the algorithms was observed in order to help select the right cache algorithm for a video-on-demand system, providing a theoretical basis for improving the cache hit ratio.
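As a reference point for the policies compared above, a minimal LRU cache can be sketched on top of Python's OrderedDict; the capacity value is arbitrary for illustration.

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None                      # cache miss
        self.store.move_to_end(key)          # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used

cache = LRUCache(2)
cache.put("v1", "video-1"); cache.put("v2", "video-2")
cache.get("v1")                              # v1 becomes most recent
cache.put("v3", "video-3")                   # evicts v2
print(cache.get("v2"))                       # -> None (miss)
```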
Modeling of Dispatching Interlocking Process of Underground Transportation System
LIU Dian-jun, YANG Hong-rui, QI Wen-hai and GAO Xian-wen
Computer Science. 2015, 42 (Z6): 45-47, 78. 
At present, scheduling strategies are commonly set manually based on historical experience and working habits, and they lack adaptability and flexibility with respect to the production transportation situation. Considering this, the article adopted colored Petri nets, which are well suited to modeling typical discrete event dynamic systems. After thoroughly studying the basic underground dispatching interlocking rules, the article defined color sets, places and transitions suitable for the system. The theoretical model of the complete working path was considered and divided into six conditions in sequence. Finally, the complete dispatching interlocking model of the system was obtained.
Adaptive Particle Swarm Optimization Algorithm with Shrink and Expansion Operation
ZHAO Zhi-gang, YIN Zhao-yuan and LIN Yu-jiao
Computer Science. 2015, 42 (Z6): 48-51. 
An adaptive particle swarm optimization algorithm with shrink and expansion operations was presented, which adaptively chooses the behavior of the particles by detecting the degree of convergence during the run. The two operations make the particle swarm converge to an extreme point and jump out of it quickly, and the evolutionary status of the population makes the swarm switch between the two operations adaptively. Experimental data show that this algorithm has a strong ability to escape local optima and approach the global optimum, especially when tackling high-dimensional multimodal functions.
Dynamic Adaptive Differential Evolution Algorithm
LI Zhang-wei, ZHOU Xiao-gen and ZHANG Gui-jun
Computer Science. 2015, 42 (Z6): 52-56, 74. 
To address the problems of convergence speed, computational cost and reliability caused by the choice of parameters and strategies, a dynamic adaptive differential evolution algorithm incorporating abstract convexity theory was proposed in this paper. Firstly, an underestimate relaxed model of the objective function is built by constructing supporting hyperplanes for the individuals of the population. Then, the underestimate values of the trial individuals generated by the strategies in the strategy pool are obtained from the underestimate relaxed model, so the parameters and strategies can be adjusted adaptively according to the underestimates and previous evolutionary experience. In addition, the underestimates can also guide the update process. Finally, the underestimate supporting hyperplanes are updated according to the evolutionary results. Numerical experiments on six benchmark problems verify the effectiveness of the proposed algorithm.
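For background, the sketch below shows a bare-bones DE/rand/1/bin step, the kind of strategy such a strategy pool would contain; the F and CR values are illustrative, and the underestimate-based adaptation itself is not shown.

```python
import numpy as np

def de_rand_1_bin(pop, i, F=0.5, CR=0.9, rng=np.random.default_rng(0)):
    n, d = pop.shape
    r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])          # differential mutation
    cross = rng.random(d) < CR
    cross[rng.integers(d)] = True                       # ensure one gene crosses
    return np.where(cross, mutant, pop[i])              # binomial crossover

pop = np.random.default_rng(1).random((10, 5))
print(de_rand_1_bin(pop, 0))                            # one trial individual
```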
Community Detection Algorithm Based on Single Objective PSO
YANG Ling-xing and ZHANG Xi-bin
Computer Science. 2015, 42 (Z6): 57-60. 
The community detection problem is modeled as a single-objective optimization problem, and the PSO algorithm is adopted to solve it. Traditional PSO algorithms handle continuous optimization problems, while the community detection problem in our paper is a graph-based discrete optimization problem; a new coding scheme and particle updating scheme were therefore used to overcome this mismatch. In addition, a neighbor-based strategy was adopted in the updating scheme so that neighborhood information guides the particle updates, consistent with the properties of real-world complex networks. Moreover, the general modularity density function is taken as the objective function to overcome the resolution problem, which ensures that the proposed algorithm can detect community structure at different resolutions. Experimental results indicate that the algorithm is effective and can detect the community structure at different resolutions.
Local Monte Carlo Search Approach to Multimodal Problem in Protein Conformation Space Optimization
CHEN Xian-pao, ZHANG Gui-jun, QIN Chuan-qing and HAO Xiao-hu
Computer Science. 2015, 42 (Z6): 61-66. 
Elucidating the native structure of a protein molecule from its sequence of amino acids, a problem known as de novo structure prediction, is a long-standing challenge in molecular biology. High-dimensional conformational space search is the key issue in protein structure prediction that needs to be solved. Based on the differential evolution algorithm framework, we proposed a multimodal protein conformational space optimization algorithm to address the multiple-minima problem in decoy sampling for de novo structure prediction. The algorithm builds a similarity measure index based on protein feature vectors and uses an exclusion strategy to implement the global search. A local minimum search strategy with fragment assembly avoids premature convergence and balances the convergence rate with the diversity of the population. A greedy search maps a child conformation to its nearest local minimum, while the molecular fragment replacement technique and the differential evolution algorithm help a child conformation escape local minima, giving the algorithm better search ability. Using the Rosetta coarse-grained energy model, results show that the additional minimization and the conformation-space exclusion strategy are key to obtaining a diverse ensemble of decoys. Compared with the methods of the Baker and Shehu research teams, the proposed algorithm achieves better prediction accuracy.
Fatigue Driving Monitoring Based on BP Neural Network
CHEN Zhi-yong, YANG Pei, PENG Li, MO Zi-xing and CAI Gang
Computer Science. 2015, 42 (Z6): 67-69, 93. 
In road safety, fatigue driving is one of the main causes of serious traffic accidents. Based on the characteristics of fatigue driving, we first used statistical data to analyze the relationship between different driver conditions (fatigued, alert) and vehicle data (velocity, acceleration, steering wheel angle, etc.) to choose the index variables of driver fatigue. Then we preprocessed the collected data, fed the variables into a BP neural network, and established a fatigue detection model to realize the monitoring of driver safety. Experiments show that the accuracy rate of the model is 91.67%.
Optimization Selection Method for Uncertain Internet Public Opinion Emergency Decision Plans under Interval-valued Fuzzy Environment
ZHANG Qian-sheng, XIE Bai-lin and ZHANG Xin-meng
Computer Science. 2015, 42 (Z6): 70-74. 
Owing to the insufficient information, significant uncertainty and social harm involved in internet public opinion emergencies, the relevant emergency management departments, acting on their own interests, tend to draw up many alternative emergency plans for coping with a public opinion crisis. To assist the top decision-maker in ultimately choosing the optimal emergency plan, this paper first extracted some important evaluation indicators that can effectively measure the effect of uncertain internet public opinion emergency plans, and assigned reasonable weights to the selected indicators based on an interval fuzzy entropy measure. Then the combined interval-valued effect of each alternative emergency decision plan was obtained by weighted aggregation of all the assessed index values of the plan. Finally, by pairwise comparison of all the combined interval values, their possible priority degrees were obtained and all the alternative emergency decision plans were prioritized, greatly improving the decision efficiency for internet public opinion emergencies.
Research on Deep Neural Network Based Chinese Speech Synthesis
WANG Jian and ZHANG Yuan-yuan
Computer Science. 2015, 42 (Z6): 75-78. 
In order to improve the quality of HMM-based speech synthesis, this paper discussed the effect of different DNN structures and parameters on training, and demonstrated the validity of DNN in discriminating S/U/V. The spectrum parameters produced by the HMM synthesis system were converted by the DNN toward the original speech spectrum parameters. Then, the temporal decomposition (TD) algorithm was studied to obtain the parameters of the conversion scheme, the conversion model was set up for DNN training, and the event vectors were resynthesized without a conversion function. The experiments prove that the DNN-converted spectrum is closer to the original spectrum, and subjective evaluation shows that this method can effectively improve the quality of the synthesized speech.
Minority Language Websites’ Automatic Identification and Collection
LAN Yi-yong, LIU Hai-feng and YANG Yuan-yuan
Computer Science. 2015, 42 (Z6): 79-82. 
This paper presented the features of Chinese minority script collections on websites, analyzed the problems of webpage identification for Chinese minority scripts, and put forward an identification method. Based on this method, we designed software to identify and collect Chinese minority language scripts such as Mongolian, Tibetan, Uyghur, Kazak, Kirgiz, Yi script, Tai Lue script, Korean, Russian, Zhuang script and so on. The average correct identification rate reaches above 95%.
Concept Algebra-based Representation Model and Conceptual Calculation Rules for Event
ZHANG Xu-jie, LIU Zong-tian and LIU Nian-zu
Computer Science. 2015, 42 (Z6): 83-88, 101. 
As a large-grained unit of knowledge, the "event" is an important information medium on the Web, which has attracted increasing attention and high regard from academia. This paper proposed a novel representation model and conceptual calculation rules for events on the basis of Nilsson's concept algebra. The representation model is able to express the relations between events and their elements. Furthermore, it can express the dynamic procedure of an event and the relations between events. Case studies demonstrate that the representation model and conceptual calculation rules are effective and widely applicable.
Multiple Hypothesis Testing and its Application in Feature Dimension Reduction
PAN Shu and QI Yun-song
Computer Science. 2015, 42 (Z6): 89-93. 
Existing feature dimension reduction methods can roughly be categorized into two classes: feature extraction and feature selection. In feature extraction, the original features in the measurement space are first transformed into a new dimension-reduced space via some specified transformation. Although the significant variables determined in the new space are related to the original variables, the physical interpretation in terms of the original variables may be lost, so feature extraction changes the description of the original data. Unlike feature extraction, feature selection seeks optimal or suboptimal subsets of the original features that preserve the main information carried by the complete data, to facilitate future analysis of high-dimensional problems. The selected features are a subset of the original features, and insignificant or redundant features may be discarded. It is worth mentioning that almost all existing dimensionality reduction methods are not high-fidelity methods: their results are suitable only for specific subsequent data analysis tasks, serving merely as preprocessing for a particular task. In this paper, using the technique of multiple hypothesis testing, we studied the high-fidelity dimensionality reduction problem. The processing results preserve all the useful information while eliminating the irrelevant features from the original data.
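One standard way to screen features by multiple hypothesis testing, shown purely to illustrate the idea (the paper's exact testing procedure may differ), is a per-feature t-test followed by Benjamini-Hochberg control of the false discovery rate:

```python
import numpy as np
from scipy import stats

def select_features(X, y, q=0.05):
    pvals = np.array([
        stats.ttest_ind(X[y == 0, j], X[y == 1, j]).pvalue
        for j in range(X.shape[1])
    ])
    order = np.argsort(pvals)
    m = len(pvals)
    thresh = q * np.arange(1, m + 1) / m         # BH step-up thresholds
    below = pvals[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    return np.sort(order[:k])                    # indices of kept features

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = rng.integers(0, 2, 200)
X[y == 1, 0] += 1.5                              # feature 0 is informative
print(select_features(X, y))                     # typically -> [0]
```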
Attribute Reduction Algorithm Based on Relative Refinement Capacity
XU Jie and GUO Ming
Computer Science. 2015, 42 (Z6): 94-97, 114. 
Attribute reduction, which eliminates unnecessary attributes from an information table, is one of the key problems in rough set theory. However, the number of possible subsets is very large when the number of attributes N is large, because N attributes yield 2^N subsets; exhaustively examining all attribute subsets to find the minimal reduction is therefore an NP-hard problem. Many heuristic algorithms have been proposed to deal with this difficulty, but most of them do not consider completeness and data noise at the same time. An efficient algorithm named REDA that can handle data noise was proposed in this paper. Firstly, the observation that the significance of an attribute is related to the property of its refinement was analyzed; the relative refinement capacity was then employed as the heuristic information. The completeness and correctness of REDA were proved through theoretical analysis. Examples show that, in some cases, REDA finds the minimal attribute reduction with a higher probability than other approaches. Finally, experiments indicate that REDA is an efficient and complete attribute reduction algorithm, which can reduce the temporal and spatial complexity of subsequent work.
Method of Semantic Annotation Based on Chinese Framework
LIU Yong and WEI Guang-ze
Computer Science. 2015, 42 (Z6): 98-101. 
The semantic information labeled in sentences from a real corpus needs a complete standard system. Using the Chinese framework and its frame elements as such a system, the paper explained the concept of semantic annotation based on the Chinese framework, and formulated some rules for syntactic functions, phrase types and frame elements. Finally, the paper analyzed the characteristics of this semantic annotation method by comparing it with other methods.
Comparison of Several Classification Approaches to Digit and Letter Recognition:Experimental Results
CHEN Ai-xiang
Computer Science. 2015, 42 (Z6): 102-106, 121. 
Classification is an important problem in machine learning. This paper built several image datasets consisting of the ten digits and the 26 upper-case and 26 lower-case English letters to evaluate the performance of six classifiers: SVM (Support Vector Machine), NB (Naive Bayes), RT (Random Tree), MLP (Multi-Layer Perceptron), BOOST and K-nearest neighbors. Experimental results show that SVM has better generalization ability, while NB and MLP are more sensitive to the datasets. In addition, the recognition accuracy of our SVM-based system reaches 94.2191%, which is better than many publicly reported results in the literature.
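This sort of comparison can be reproduced in outline with scikit-learn; the sketch below uses the library's small digits dataset as a stand-in for the paper's data, with default hyperparameters rather than the paper's settings.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
for name, clf in [("SVM", SVC()), ("NB", GaussianNB()),
                  ("MLP", MLPClassifier(max_iter=500, random_state=0))]:
    clf.fit(Xtr, ytr)
    print(f"{name}: {clf.score(Xte, yte):.4f}")   # test accuracy per classifier
```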
Research of Wireless Sensor Network Routing Algorithm Based on Improved Ant Colony Algorithm
LIU Jian-ming and ZHAO Ri-ji
Computer Science. 2015, 42 (Z6): 107-111. 
In this paper, through the study of the ant colony algorithm, wireless sensor networks and their routing algorithms, and considering that a single sensor node has limited intelligence and that multiple nodes are needed to accomplish complex tasks cooperatively, we put forward a wireless sensor network routing algorithm based on the ant colony algorithm, exploiting its swarm intelligence characteristics. On the basis of the basic ant colony algorithm, ant and energy properties were added, and time delay and bandwidth were taken into account to optimize the ant colony algorithm; a wireless sensor network routing algorithm based on the improved ant colony algorithm was thus proposed. Performance analysis shows that the improved algorithm achieves better time delay and network lifetime.
Odor Source Searching Strategy on Basis of Detected Concentration Polygon
XIE Yan-chun, PAN Xin-yu and WANG Jian
Computer Science. 2015, 42 (Z6): 112-114. 
An improved path planning strategy with four detecting points for searching for an odor source on the ground, based on a concentration polygon consisting of the detected values, was introduced. Using this strategy, both a mobile robot with a single gas sensor performing circulating detection and a robot with multiple gas sensors performing simultaneous detection can produce a searching path whose step size is dynamically adjustable. Computer simulation results show that the strategy performs well in odor plume detection, odor tracing and odor source localization.
Comparison of Circuit Reliability Calculation Methods Based on PTM Model and BN Model
JIANG Yu-fang and DENG Zuo-xiang
Computer Science. 2015, 42 (Z6): 115-117, 133. 
The development of VLSI technology has dramatically improved the performance of integrated circuits, but it also makes them more susceptible to soft errors. To assess the reliability of circuits under soft errors, two typical gate-level estimation methods based on conditional probability, the PTM method and the BN method, were chosen in this paper. The principles of the two methods were introduced, and their function, applicability and complexity were analyzed in combination with experiments. Finally, the results were summarized and future research directions were outlined.
Simulation and Dynamic Identification System Based on Improved Neural Network Model Algorithm
ZUO Jun and ZHOU Ling
Computer Science. 2015, 42 (Z6): 118-121. 
For an identifier, the connection weights of a neural network correspond to model parameters. By adjusting the weights of the neural network, the network outputs are made to approximate the system outputs. When a neural network is taken as the identifier (NNI) and trained, the network weights become estimates of the system parameters. The traditional algorithm is improved by introducing a weighting factor to control the impact of the input factors on the estimated values. The parameter estimates are uniformly asymptotically convergent over a wide range. The steady state of the network is regarded as the minimum point of the objective function of the optimization problem, and the convergence process from the initial state to the steady state is the optimization calculation. Finally, simulation experiments were carried out on some specific cases, and the obtained results are ideal and reasonable.
Vending Machine Sales Forecasting Based on Time Series Analysis
HONG Peng and YU Shi-ming
Computer Science. 2015, 42 (Z6): 122-124. 
As a vending machine takes considerable material resources to restock when a single commodity sells out, we developed a method using an RBF neural network to forecast the sales of each product and reduce the costs. Taking into account that vending machine sales data may be constrained by a poor sales program, we established an ARMA model to correct the actual sales and accelerate the adjustment of the sales program. We compared the forecasts from the observational data before and after correction against the actual sales to verify the validity of the model in vending machine sales forecasting.
Evolutionary Algorithm of Nine Points Calibration Based on Ultrasonic Infrared
LI Wen-jing, ZHAO Guo-bin and LIU Ying-can
Computer Science. 2015, 42 (Z6): 125-127, 142. 
The coordinate calibration of existing interactive electronic whiteboards often lacks portability and flexibility; the parameters vary with time, so the positioning precision is not high enough. Thus, an evolutionary algorithm of nine-point calibration based on ultrasonic and infrared signals was proposed. The algorithm processes the fluctuating data received by the receivers on both sides, works out the coordinate values of the probes on the two receivers, sets up the coordinate system using one of the coordinate values, and makes small-range accuracy adjustments through function fitting so that the localization is more accurate. Experiments show that the evolutionary algorithm of nine-point calibration based on ultrasonic and infrared signals is able to reduce errors efficiently when the locating parameters change with time.
Study of Severe Disease Diagnosis Mechanism and Algorithm of Olfactory Based on Network Transmission
LI Lian-ning
Computer Science. 2015, 42 (Z6): 128-133. 
This paper mainly discussed the use of a MEMS spectrum-absorption photochemical gas sensor on an intelligent mobile phone platform. The characteristic spectrum of exhaled gas is obtained while users use their mobile phones, and is sent over the network to an odor database in cloud storage for query, where the BP algorithm is used to compare the spectral features against disease codes. For patients whose potential disease rate exceeds a certain percentage, an SMS alarm is issued when the disease characteristic code matches, urging them to take a medical check. By detecting severe diseases early, this can improve the effect of medical treatment and reduce patient mortality.
Semi-supervised SVDD-KFCM Algorithm and its Application in Bearing Fault Detection
LI Jun-li and LI Wei-hua
Computer Science. 2015, 42 (Z6): 134-137. 
Machinery always runs under multiple operating regimes, and it is difficult to collect specific fault samples to train a learning machine, which leads to low accuracy in fault detection and limits the generalization of intelligent fault detection methods. Combining support vector data description with kernel-based fuzzy C-means clustering, a semi-supervised SVDD-KFCM algorithm for machine defect detection was proposed. Experimental results demonstrate that the proposed scheme is capable of detecting incipient bearing faults effectively and correctly.
Particle Swarm Optimized Wavelet Neural Network Models for Forecasting Monthly Precipitation
LONG Yun, HE Xin-guang and ZHANG Xin-ping
Computer Science. 2015, 42 (Z6): 138-142. 
To improve the forecasting accuracy of monthly precipitation and deal with the determination of the number of hidden neurons in neural networks, this paper introduced a particle swarm optimized wavelet multiple neural network model, which was applied to the prediction of monthly precipitation in the Dongting Lake Basin. The standardized monthly precipitation and large-scale climate index time series were first decomposed at different temporal scales as predictors. Then the standardized monthly precipitation subseries were forecasted, respectively, under different time scales using cascade-forward (CF) neural networks in which the number of hidden neurons is optimized by particle swarm optimization (PSO). Finally the monthly precipitation was forecasted by combining all predicted subseries and applying the inverse transform of the standardized monthly precipitation. The results show that the PSO-based wavelet multiple neural network model provides more accurate forecasts than the wavelet single neural network model for monthly precipitation in the Dongting Lake Basin and improves the prediction accuracy of extreme monthly rainfall.
Dynamic Synergetic Neural Network Algorithm for Authorship Classification of Texts
ZHANG Ai-hua
Computer Science. 2015, 42 (Z6): 143-145. 
To improve the accuracy of authorship classification of texts, a dynamic synergetic neural network algorithm was proposed in this study. Specifically, the algorithm makes use of the network's fast training and strong noise immunity, and dynamically adjusts the attention parameters. Thus, initially mis-identified patterns are adaptively corrected by measuring the similarity between the prototype pattern and the testing pattern during the evolution process. Compared with the classification results under balanced attention parameters, the experimental results demonstrate that the self-learning ability of the network is significantly improved in the dynamic synergetic neural network algorithm, and thus the classification performance and robustness are improved.
High-dimensional Data Discretization Method Based on Improved LLE
XU Tong-de
Computer Science. 2015, 42 (Z6): 146-150, 157. 
Discretization algorithms for continuous features play a very important role in data mining, machine learning and pattern recognition. Existing methods mainly concentrate on discretizing low-dimensional data; however, high-dimensional nonlinear data exist in the real world. To address this, this paper presented a high-dimensional data discretization method based on improved locally linear embedding (LLE), namely ILLE-HD3. First, LLE is improved by considering the class information of the data to effectively reduce the dimensionality of high-dimensional data, which allows the discretization method to be implemented in a low-dimensional space. Second, after dimensionality reduction, we proposed a discretization algorithm for continuous features based on the difference-similitude set (DSS). It uses class-feature interdependency to determine the selection of cut points in the continuous value domain, and defines a classification error criterion to control the information loss generated by partitioning the continuous domain. Finally, evaluated with the decision tree classification tools C4.5 and C5.0, the proposed ILLE-HD3 algorithm achieves better results on high-dimensional nonlinear data and higher classification accuracy than existing algorithms.
Super-resolution Reconstruction of Medical Images Based on Group Sparse Representation
HUANG Hao-feng and XIAO Nan-feng
Computer Science. 2015, 42 (Z6): 151-153, 189. 
Medical diagnosis requires a great deal of medical image processing. Due to technological and economic limits, clear medical images cannot always be obtained, so it is necessary to reconstruct medical images with super-resolution methods. Building on super-resolution reconstruction of a single image by sparse coding, and considering that medical images contain obviously repetitive image structures, this paper proposed a reconstruction method for super-resolution medical images based on group sparse representation. In addition, this paper presented a dictionary training algorithm that combines the Group Lasso with K-SVD. The experimental results indicate that the proposed algorithms outperform existing methods.
Threshold Based Adaptive Vibe Target Detection Algorithm
WANG Hui and SONG Jian-xin
Computer Science. 2015, 42 (Z6): 154-157. 
The Vibe algorithm is an effective pixel-level background modeling algorithm. During moving object detection, however, the Vibe algorithm can neither eliminate ghosts quickly nor adapt its update speed to how fast the foreground changes. To solve these problems, this paper proposed a threshold-based adaptive Vibe target detection algorithm. When a pixel is judged as foreground by the Vibe model, the Otsu algorithm is used to calculate the image segmentation threshold; the pixel is then judged again according to this threshold to eliminate ghost pixels, and the background model of the pixel is reinitialized. By calculating the change of the centroid of the moving object, the improved algorithm adapts the update rate of the background. The results show that the proposed algorithm can absorb ghosts within fewer frames and detect the foreground object more accurately than the original Vibe algorithm.
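The re-judging step relies on Otsu's threshold; a compact histogram-based sketch of that classical computation (run here on a synthetic bimodal image, for illustration) is:

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                     # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))       # cumulative mean up to t
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b2))       # maximize between-class variance

img = np.concatenate([np.random.normal(60, 10, 5000),
                      np.random.normal(180, 10, 5000)])
img = np.clip(img, 0, 255).astype(np.uint8)
print(otsu_threshold(img))                   # roughly between 60 and 180
```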
Direction Optimization of Sampling Matrix and SAR Image Denoising in ND-GSM Model
CHEN Shuang-ye, ZHOU Er-jiang and WU Qiang
Computer Science. 2015, 42 (Z6): 158-162, 167. 
A new improved algorithm for optimizing the sampling matrix direction, based on the ND-GSM model that combines the nonsubsampled Directionlet transform with the Gaussian scale mixture model, was presented. First, a binary wavelet transform is applied to segment the SAR image into subgraphs in order to determine the direction-optimized sampling matrix of the SAR image. Then, by combining the nonsubsampled Directionlet transform with the direction-optimized sampling matrix and Gaussian scale mixtures within the segmented subgraphs, the marginal distributions of neighboring coefficients in the nonsubsampled Directionlet domain are modeled. Finally, to remove the speckle noise, Bayes least-squares estimation is adopted to evaluate each coefficient, and the denoised segmented subgraphs are composed to obtain the denoised SAR image. The method resolves the poor image approximation that occurs when the direction of the nonsubsampled Directionlet basis function is inconsistent with the direction of the image's anisotropic targets. Simulation results show that the method fully reflects the strong correlation among the amplitudes of neighboring coefficients, has an obvious advantage in preserving image detail, improves the visual effect, and achieves better denoising performance than spatial filtering and wavelet methods.
CUDA-based Acceleration Algorithm of Bilateral Filtering
ZENG Xuan-jie, CHEN Qiang, TAN Hai-peng, NIU Si-jie and SUN Quan-sen
Computer Science. 2015, 42 (Z6): 163-167. 
Bilateral filtering is a good denoising algorithm that preserves image edge details, so it has been widely used in image processing. However, its computational complexity is so high that it cannot meet real-time processing requirements. In this paper, by analyzing the bilateral filtering algorithm, a parallelized bilateral filtering algorithm based on CUDA was proposed and then further optimized according to the characteristics of CUDA. Experimental results and analysis show that our method achieves a speedup of 75 or more while keeping the effect of traditional bilateral filtering.
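For reference, the per-pixel computation that such a CUDA kernel parallelizes is the product of a spatial Gaussian and a range Gaussian; a plain NumPy sketch for a single neighborhood, with illustrative sigma values, is:

```python
import numpy as np

def bilateral_pixel(patch: np.ndarray, sigma_s=2.0, sigma_r=25.0) -> float:
    r = patch.shape[0] // 2
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    w_spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    w_range = np.exp(-((patch - patch[r, r]) ** 2) / (2 * sigma_r**2))
    w = w_spatial * w_range                   # edge-preserving weights
    return float((w * patch).sum() / w.sum())

patch = np.array([[100, 102,  99],
                  [101, 100, 200],            # 200: across an edge, low weight
                  [ 98, 103, 101]], dtype=float)
print(bilateral_pixel(patch))                 # stays near 100
```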
Parallel Implementation of Genetic Neural Network in Face Recognition
LI Hai-peng, LI Jing-jiao, YAN Ai-yun, WANG Ai-xia and WANG Jiao
Computer Science. 2015, 42 (Z6): 168-170, 174. 
A face recognition system based on a parallel genetic neural network was realized in this paper. The BP network is one of the most effective algorithms in the face recognition field. This paper pointed out aspects of the BP network algorithm that do not fit multi-core computer systems, and then put forward a new parallel genetic neural network algorithm, which optimizes the BP neural network in both topology and weight values. The new algorithm improves the recognition rate and speed significantly over the original algorithm. The effectiveness of the parallel genetic neural network algorithm was demonstrated on the ORL experimental data.
Modified One-dimensional Otsu Algorithm Based on Image Complexity
DONG Zhong-yan, JIANG Li-xing, WANG Jun-ya and XIAO Kai
Computer Science. 2015, 42 (Z6): 171-174. 
Self-adaptive binarization is widely used in image segmentation and edge detection, and threshold extraction is one of the key technologies of digital image processing. The classic Otsu algorithm is exhaustive and incurs large computational redundancy. Under the constraints of limited computer RAM and resources, this paper put forward a modified one-dimensional Otsu algorithm based on image complexity: according to the different complexities, it improves the speed of the Otsu algorithm while meeting accuracy requirements. In experiments on a DM3730, the algorithm's complexity is better than the classic one and its speed is improved by about 40%; it meets the real-time requirements of embedded systems while its segmentation effect is almost the same as the classic algorithm's.
Subtle Video Motion Magnification by Spatial-temporal Filtering and Image Warping
ZHANG Jun and DAI Xia
Computer Science. 2015, 42 (Z6): 175-179. 
An image warping based video motion magnification method was introduced to reveal subtle motion in the input video that is difficult or impossible to see with the naked eye. The main advantage of the presented method is that it amplifies the video motion without increasing the frame noise. The proposed method fuses the Eulerian and Lagrangian approaches to calculate the motion: the Eulerian video magnification method is used as a spatial-temporal motion analyzer to obtain a pixel-level motion mapping for each frame of the input video, and each frame is then warped based on this mapping to amplify the motion. Experiments show that the presented method is significantly less sensitive to noise, and its data processing pipeline is highly scalable for introducing advanced image pre-processing filters or mesh post-processing algorithms to further improve the visual quality of the output video.
Face Recognition Based on Matrix Regression with Low-rank and p Sparse Constraints
YANG Guo-liang, LUO Lu, LU Hai-rong, FENG Yi-qin and LIANG Li-ming
Computer Science. 2015, 42 (Z6): 180-183, 198. 
This paper presented a matrix regression model for face recognition that copes with varying illumination as well as occlusion and disguise. To ensure the low-rank and sparse properties of the model, we used low-rankness to constrain the regression error, and used the p-norm to constrain the regression coefficients so as to guarantee the sparsest solution. We applied the generalized iterated shrinkage algorithm for the p-norm and the alternating direction method for the regression coefficients. Experimental results on the AR and Extended Yale B face databases show that the proposed face recognition method has a higher recognition rate than current regression methods, is more powerful in removing the structural noise caused by occlusion, and is more robust in alleviating the effect of illumination.
Research on Key Technologies of 3D Fingerprint Based on Monocular Multi-view Machine Vision
YANG Rui-da, XIA Shao-jie and TANG Yi-ping
Computer Science. 2015, 42 (Z6): 184-189. 
To address the low security of 2D fingerprint images, a monocular multi-view device consisting of one camera and two flat mirrors was designed to obtain three fingerprint images from three different perspectives. The three fingerprint images are then fused, yielding a 3D fingerprint image with a large effective area. Finally, image enhancement and recognition are applied to the 3D fingerprint image. The experimental results show that the monocular multi-view device guarantees one-time acquisition of fingerprint images from multiple perspectives, and the resulting image enhances the security of fingerprint recognition.
Research on Improved Local Fuzzy C-means Clustering Segmentation Algorithm
LIU Meng-jiao and WU Cheng-mao
Computer Science. 2015, 42 (Z6): 190-194, 202. 
In order to improve segmentation precision and noise resistance for complex images, fuzzy clustering segmentation that fully considers pixel neighborhood information has attracted great attention from scholars. For the robust local fuzzy C-means clustering algorithms proposed by Krinidis and by Gong Mao-Guo et al., the iterative formulas of the clustering centers lack a rigorous mathematical derivation. The Lagrange multiplier method was therefore used to turn the objective function and constraints of robust local fuzzy C-means clustering into an unconstrained optimization problem; the partial derivative equations were set equal to zero to obtain the membership degrees, and new expressions for the clustering centers were derived, after which the new algorithm can be applied. The proposed clustering segmentation algorithm was used to segment synthetic images and remote sensing images. The results show that the proposed segmentation algorithm is reasonable, and the new robust local fuzzy C-means clustering segmentation algorithm is more suitable for segmenting complex images.
Improvement of LPG-PCA Method for Image Denoising
LI Xu-guang, CUI Li-hong and HUANG Shou-yong
Computer Science. 2015, 42 (Z6): 195-198. 
Principal Component Analysis (PCA) can remove the correlation between signals, making it easy to distinguish signal from noise. Before processing a target pixel block, training samples similar to the target block must first be found in a local pixel search domain. In this paper, we make a copy of the image and denoise it with the bivariate shrinkage denoising method (Bishrink). After that, we use the Euclidean distance between the corresponding pixel blocks of the denoised copy to replace the similarity between the target pixel block and the local pixel blocks, which reduces the effect of noise and plays an important role in the follow-up PCA transformation. Simulation results show that the improved LPG-PCA method greatly enhances image quality compared with the original method.
3D Shape Dense Correspondence by Combining Heat Kernel and Geodesic Distance
RUAN Yi-zhang, TONG Wei-huai, PAN Xiang and ZHANG Guo-dong
Computer Science. 2015, 42 (Z6): 199-202. 
This paper addressed the problem of dense correspondence between two 3D deformable shapes and proposed a new algorithm combining the heat kernel and geodesic distance, which effectively improves correspondence accuracy by exploiting the pose-insensitive character of the heat kernel signature. The algorithm mainly consists of three steps. Firstly, it detects the external feature points of the input 3D meshes and extracts a multi-scale heat kernel for each feature point. Secondly, it uses the HKS to define the local features of the external points. Finally, it searches for the best three-point matching by combining the local feature descriptors and the geodesic distances of the external points. Experiments show that the proposed algorithm is very robust for sparse matching of 3D deformable shapes.
Research on Error Analysis of Angle Measurement Based on Matlab Image Processing
CHEN Yan-jun and ZHANG Xue-dian
Computer Science. 2015, 42 (Z6): 203-204, 208. 
Angle measurement is widely used in digital image processing, earthquake monitoring, aerospace engineering and missile guidance. We described a new method based on Matlab image processing and analyzed the angle measurement error. We marked and recognized the images to obtain the coordinates, and finally calculated the rotation angle. In addition, the processing error was analyzed in detail.
Investigation of Improved FOD Detection Algorithm
GAO Hong-wei, WANG Hui-ke and LI Zhuo
Computer Science. 2015, 42 (Z6): 205-208. 
An airport runway foreign object debris (FOD) detection system was studied on the basis of research on airport runway foreign object detection algorithms at home and abroad. First, the algorithmic pipeline of the detection system was outlined; then the key algorithm of foreign object detection, the moving target detection algorithm, was studied. We investigated an improved frame difference method and an algorithm combining adaptive Gaussian mixture background subtraction with the improved frame difference method. Experimental results show that the hollow regions appearing in detection results can be effectively reduced, and that the improved algorithm suppresses the effects of background mutation on the detection results. Foreign object detection on runway pavement can thus be achieved effectively and in real time.
Fast Two-dimensional Maximum Entropy Threshold Segmentation Method Based on Sobel Operator
LI Feng and KAN Jian-xia
Computer Science. 2015, 42 (Z6): 209-210, 220. 
The classic two-dimensional maximum entropy threshold segmentation algorithm takes a long time to compute and consumes a large amount of storage. To solve these problems, a fast threshold recursion method based on the standard two-dimensional maximum entropy threshold segmentation algorithm was proposed in this paper; at the same time, the threshold obtained by Sobel operator edge detection was applied to the fast threshold segmentation algorithm in order to avoid the loss of details. Experiments show that, through the recursive formula, the improved algorithm reduces the processing time from O(L^4) to O(L^2). It not only reduces the computational complexity but also preserves details.
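A simplified one-dimensional sketch of the underlying maximum entropy criterion (the paper's 2D version also incorporates neighborhood averages, and its recursion is not shown) picks the threshold that maximizes the sum of background and foreground entropies:

```python
import numpy as np

def max_entropy_threshold(gray: np.ndarray) -> int:
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1          # class-wise distributions
        h = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0])) \
            - np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
        if h > best_h:                           # total entropy to maximize
            best_t, best_h = t, h
    return best_t

img = np.clip(np.concatenate([np.random.normal(70, 12, 4000),
                              np.random.normal(170, 12, 4000)]),
              0, 255).astype(np.uint8)
print(max_entropy_threshold(img))
```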
Fusion of Infrared and Visible Images Based on Visual Saliency
GUO Ling and YANG Bin
Computer Science. 2015, 42 (Z6): 211-214, 235. 
Most existing fusion rules for infrared and visible images are defined only by local image features such as contrast, variance and gradient, without considering the global characteristics of the whole source images. As a result, the globally "interesting" objects in the source images cannot be highlighted in the fused image. To solve this problem, a novel fusion algorithm based on the non-subsampled contourlet transform was proposed, in which the global visual saliency features of the source images guide the coefficient fusion so as to emphasize the globally "interesting" objects in the fused image. The experimental results demonstrate that the proposed algorithm performs better than traditional fusion methods in terms of both subjective and objective quality: the "interesting" objects for the visual system are preserved in the fused image, and the visual background is close to the real scene.
Local Nonlinear Distribution Feature and Extraction Algorithm for SAR Images
GUAN Tao and YU Hao-jie
Computer Science. 2015, 42 (Z6): 215-217. 
Spectral clustering is a current research focus, and most algorithms apply it to image segmentation. Exploiting spectral clustering's capacity to find a low-dimensional space, this paper analyzed the principle of feature representation in spectral clustering and proposed a new local nonlinear distribution feature, extracted from sub-blocks of SAR images and used to describe the properties of the sub-blocks. These feature vectors are rotationally invariant and are obtained in several steps: first, the initial feature vectors are obtained via spectral clustering, and then they are transformed by the discrete Fourier transform. We used the Nyström approach to compute the eigenvalues in spectral clustering. In order to avoid weakening the differences in local characteristics among sub-graphs, we adopted the Minkowski distance to compute the similarity among sub-graphs. The effectiveness of our features is validated by experiments.
Application of LIP Theory in Color Image Enhancement
CHEN Zhi-ang, XU Xiao-gang and XU Guan-lei
Computer Science. 2015, 42 (Z6): 218-220. 
In recent years, with the rapid development of computer technology such as multimedia technology and computer vision research, and especially with the popularity of color image capture devices, color images have been used more and more widely in various fields of society. Based on the nature of color images, this paper proposed a recursive enhancement algorithm based on LIP theory. The core idea of the algorithm is to investigate, in the RGB color space, the proportional relationships between the different components of each pixel; then, under the LIP framework, each pixel is amplified, achieving color image enhancement.
Sobel Edge-detection on Color Image Based on Interoperability between CUDA and OpenGL
LI Chi-xin and LAN Cong-hua
Computer Science. 2015, 42 (Z6): 221-222, 230. 
With the rapid development of general-purpose computing on GPUs, many jobs once implemented on the CPU can now be delivered to the GPU. In this paper, relying on the interoperability between CUDA and OpenGL, both edge detection on color images and display of the results are performed by the GPU; the only job for the CPU is delivering data to the GPU. By doing so, computing speed is increased and GPU efficiency is maximized. The experimental results indicate that CUDA-OpenGL interoperability is an effective method for combining the parallel processing capability and the display capability of the GPU. Compared with two other schemes, one using only the CPU to process the image and one using the GPU only for parallel computation, this scheme achieves an 80-fold speedup when processing high-resolution images.
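As a CPU reference for what each GPU thread would compute per pixel, the Sobel gradient magnitude can be sketched with SciPy convolution (the CUDA version would simply map one thread to each output pixel):

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_magnitude(gray: np.ndarray) -> np.ndarray:
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    gx = convolve(gray.astype(float), kx)     # horizontal gradient
    gy = convolve(gray.astype(float), ky)     # vertical gradient
    return np.hypot(gx, gy)                   # gradient magnitude

img = np.zeros((8, 8)); img[:, 4:] = 255      # vertical step edge
print(sobel_magnitude(img)[4])                # strong response at the edge
```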
Embedded Tree Measurement System Based on BP Neural Network Image Segmentation
SHU Xin-zhan, FANG Kai and HU Jun-guo
Computer Science. 2015, 42 (Z6): 223-225. 
Abstract PDF(717KB) ( 34 )   
References | RelatedCitation | Metrics
Because traditional tree measurement cannot measure a tree's height and breast-height diameter synchronously, and the measuring instruments suffer from their own limitations and human factors, this paper designed an embedded tree measurement system based on BP neural network image segmentation, which can measure the main characteristics of a tree synchronously. Testing results show that the proposed system runs efficiently and is simple to operate. Furthermore, the measurement error fully meets the requirement of being less than 5%. The proposed system therefore has good application value as a supplement to existing methods of tree measurement.
Research on Detection of Image Region-duplication Forgery Affected by Intrinsically Identical Objects
CUI Wen-cheng, LIANG Shuang-shuang, SHAO Hong and WANG Hai-yu
Computer Science. 2015, 42 (Z6): 226-230. 
Abstract PDF(1193KB) ( 51 )   
References | RelatedCitation | Metrics
Intrinsically identical objects can cause false positives in image region-duplication forgery detection. A detection algorithm was proposed based on local outlier estimation and cluster compactness. Firstly, SIFT features are extracted and bidirectional matching is applied. Then the local outlier factor and local reachability density are applied to analyze the distribution of matching points, after estimating the influence on their neighborhoods by a counterfeit-Gaussian influence function, combined with cluster compactness after affinity propagation clustering. Finally, a support vector machine is applied to distinguish the intrinsically identical objects. Experimental results show that the proposed method, which has a high accuracy rate and a low false detection rate, can effectively resist interference from intrinsically identical objects.
Flame Detection Based on SIFT Algorithm and One-class Classifier in Undetermined Environments
LIN Tao, HUANG Ji-feng and GAO Jian-hua
Computer Science. 2015, 42 (Z6): 231-235. 
Abstract PDF(1018KB) ( 37 )   
References | RelatedCitation | Metrics
Early image-based flame detection in undetermined and sophisticated environments remains an unresolved problem. To improve detection efficiency, this paper not only introduced histogram equalization to this field, but also applied the SIFT algorithm to determine multi-scale features of flame pictures, identify flame extreme points and perform feature matching. We also regarded fractal dimension as one of the flame features. Since flames are mostly anomalous values, and a one-class classifier has advantages such as low cost, easily obtained features and high precision, a one-class classifier is used to identify the flame. The experiments show excellent true positive and false positive rates at close quarters and in bright light, as well as a high flame detection rate and a low false alarm rate in dim light.
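As an illustration of the one-class classification step, the sketch below trains scikit-learn's OneClassSVM on flame-only feature vectors; the paper does not name a specific classifier, and the feature layout (SIFT-derived statistics plus a fractal-dimension estimate) and all parameter values are hypothetical stand-ins.

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Hypothetical feature rows: e.g. SIFT-derived statistics plus a
# fractal-dimension estimate per image patch (placeholders here).
flame_train = rng.normal(1.0, 0.2, size=(200, 4))   # flame samples only
candidates = rng.normal(1.0, 0.6, size=(20, 4))

# One-class training: the classifier learns a boundary around flame
# features; nu bounds the fraction of training outliers.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(flame_train)
print(clf.predict(candidates))  # +1 = flame-like, -1 = rejected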
Rapid Visualization Method Based on 3D Delaunay Triangulation
LI Chun-xin and PENG Ren-can
Computer Science. 2015, 42 (Z6): 236-237, 248. 
Abstract PDF(774KB) ( 37 )   
References | RelatedCitation | Metrics
With the rapid development of the digital ocean, there are higher demands for rendering realistic virtual environments and revealing the rules hidden in massive marine data. In order to grasp the variation of seawater information intuitively, a rapid visualization method based on fuzzy clustering and 3D Delaunay triangulation was proposed and applied to visualize marine temperature. In the method, fuzzy clustering is used to obtain isothermal datasets, and an efficient 3D Delaunay algorithm, which improves speed by establishing the relative relationships of nodes and optimizing new tetrahedron construction, is applied to construct 3D surfaces from the large-scale isothermal datasets. Besides, a color model is adopted to visualize the surface. Experimental results illustrate that the method can visualize marine temperature efficiently; vector field visualization will be researched on the basis of the presented method in the near future.
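For readers unfamiliar with the data structure involved, the snippet below builds a 3D Delaunay triangulation of a synthetic isothermal point set with SciPy; the paper implements its own incremental algorithm with node-relationship optimizations, so this is only a baseline illustration.

import numpy as np
from scipy.spatial import Delaunay

# Hypothetical isothermal samples: (x, y, depth) points sharing one
# temperature value after fuzzy clustering.
points = np.random.default_rng(1).uniform(0, 100, size=(500, 3))

tri = Delaunay(points)        # 3D Delaunay: simplices are tetrahedra
print(tri.simplices.shape)    # (n_tetrahedra, 4) vertex indices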
Image Automatic Annotation Based on Correlation of Keywords
XU Gong-wen, LIAO Ming-hai, ZHENG Sen-hong, ZHAO Hong-luan, ZHANG Zhi-jun, ZHAO Qian and XU Chun-xiu
Computer Science. 2015, 42 (Z6): 238-240, 252. 
Abstract PDF(931KB) ( 39 )   
References | RelatedCitation | Metrics
With the growth in the number of images, image retrieval technology has become an active area of research. Image annotation can organize and process large amounts of picture information effectively and retrieve the useful information a user needs. Existing automatic annotation methods tag an image region by region after segmentation, but their accuracy is not high. This paper presented a method that tags images by combining the correlation of keywords. Firstly, it tags images according to the similarity of regions; then it uses the correlation of keywords to improve the results. The experimental results show that this image tagging method is effective.
Low-light Image Enhancement Based on Improved Histogram
HE Wei
Computer Science. 2015, 42 (Z6): 241-242, 262. 
Abstract PDF(697KB) ( 34 )   
References | RelatedCitation | Metrics
In this paper, a novel method is proposed to enhance the contrast of images captured in low-light conditions, which combines improved histogram equalization with a contrast enhancement algorithm. Firstly, partial information is used to enhance the contrast; then the histogram is adjusted with the improved histogram equalization. Extensive experiments show the efficiency of our method in enhancing the overall contrast and sharpening the details of low-light images.
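The classic global histogram equalization that such methods improve upon can be sketched in a few lines; the mapping below sends each gray level through the normalized cumulative histogram (the textbook baseline, not the paper's improved variant).

import numpy as np

def equalize(gray):
    # Map each gray level through the normalized cumulative histogram.
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)][0]   # first nonzero CDF value
    scale = max(gray.size - cdf_min, 1)
    lut = (np.clip(cdf - cdf_min, 0, None) / scale * 255).round().astype(np.uint8)
    return lut[gray]

gray = np.random.randint(30, 80, (32, 32), dtype=np.uint8)  # dim toy image
print(equalize(gray).min(), equalize(gray).max())           # spread to full range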
Survey on Performance of Software Defined Networking
ZENG Shan, CHEN Gang and QI Fa-zhi
Computer Science. 2015, 42 (Z6): 243-248. 
Abstract PDF(1690KB) ( 32 )   
References | RelatedCitation | Metrics
The emergence of server virtualization and cloud computing applications has changed the traditional form of network services. SDN (Software Defined Networking) decouples the control plane from the data plane, which provides a new solution for future Internet technologies. But with the explosion of data, network configuration requirements change frequently, which demands an effective SDN implementation. This paper starts with the origin and development of SDN, and introduces in detail the three-layer SDN architecture proposed by the ONF, which includes the infrastructure layer, the control layer, the application layer, the southbound interface and the northbound interface. Then, combined with research progress in industry, the problems of SDN performance and their solutions are analyzed in detail. In the end, a future SDN performance optimization scheme based on intelligent flow table management and scalable control layer management is proposed.
Research and Analysis of OpenDaylight Controller
XU Ming-guang, LIU Ya-ping and DENG Wen-ping
Computer Science. 2015, 42 (Z6): 249-252. 
Abstract PDF(983KB) ( 46 )   
References | RelatedCitation | Metrics
The OpenDaylight controller (ODL) is a new open-source SDN controller. It is divided into three layers: the southbound protocol layer, the control layer and the northbound interface layer. The southbound protocol layer supports OpenFlow, LISP and other protocols; the control layer provides complete base network services; and the northbound layer provides REST APIs for web applications to access these services. Meanwhile, ODL supports cluster mode and can be applied to large-scale networks. By analyzing the overall framework of ODL, its work processes, module extensions and LISP services, an experimental LISP communication system can be built. We can see that ODL has a clear architecture and reasonable functionality, and is also easy to extend and deploy in practice.
Research of Software Defined Storage Area Network Based on iSCSI
WU Yi-zhi, TIAN Shuang-jie and ZHOU Yu-yan
Computer Science. 2015, 42 (Z6): 253-255, 259. 
Abstract PDF(944KB) ( 30 )   
References | RelatedCitation | Metrics
iSCSI is a network storage protocol based on TCP/IP. It has advantages such as convenient setup and high expansibility over long distances. But in practice, expensive hardware, e.g., HBA network interface cards, has to be deployed to get high performance. Based on high-performance commodity compute, storage and network technology, we designed an iSCSI-oriented software defined storage area network (iSCSI-SDSAN) to improve iSCSI SAN access performance without specific hardware. The iSCSI-SDSAN architecture, components and algorithms are discussed in the paper. In the end, an implementation on Ubuntu using Java shows that I/O performance improves considerably, especially write bandwidth, with a 30% increase.
Research and Analysis on Differences of Domain Routing Protocols OSPF and IS-IS
LI Yan-gang, DENG Wen-ping, WANG Hong and LIU Ya-ping
Computer Science. 2015, 42 (Z6): 256-259. 
Abstract PDF(1030KB) ( 39 )   
References | RelatedCitation | Metrics
Currently, OSPF is the most widely used intra-domain routing protocol in the world, but the advantages of IS-IS are attracting more and more attention. This article compares OSPF and IS-IS in detail with respect to working mechanism, convergence, scalability and security. The practical analysis and experimental results show that both OSPF and IS-IS offer fast convergence and safety features. OSPF has more abundant routing strategies and a greater variety of area types, PDUs and circuit types, and is more suitable for small and medium-sized networks and enterprise use. IS-IS is relatively simpler, more stable and more scalable; under massive numbers of routes, IS-IS is more efficient than OSPF and more suitable for deployment by network operators and ISPs.
Wireless Sensor Network Interface Design Based on JASO
CHEN Song-li, YANG Chun-hui, DAI Qing-yun and LIU Yi-hong
Computer Science. 2015, 42 (Z6): 260-262. 
Abstract PDF(735KB) ( 33 )   
References | RelatedCitation | Metrics
Based on a wireless sensor network (WSN) test control platform, this article designed a general control platform and the interface between the platform and the wireless sensor network. The modules, functions, key algorithms and experimental tests of the wireless sensor network interface based on the JASO protocol were emphasized.
Hybrid Method for Capturing Snapshot of Routing Table of DHT Networks
YU Jie, LI Qiang, LI Sha-sha, MA Jun and LI Zhou-jun
Computer Science. 2015, 42 (Z6): 263-265, 270. 
Abstract PDF(1027KB) ( 48 )   
References | RelatedCitation | Metrics
The DHT network is the most widely used P2P protocol in the world, and the routing table is the key component for its self-organization. It is difficult to measure the global routing table of a DHT network due to its totally distributed architecture. In this paper, we propose a hybrid method to capture a snapshot of the routing table of a DHT network. We first introduce a repetition degree to define the efficiency of crawling the snapshot. Then, we propose a DHT snapshot method that uses breadth-first search first and then changes to depth-first search. Finally, we propose a self-adaptive method to crawl the routing tables based on the uneven distribution of the routing table. Experiments on the Kad network show that the DHT snapshot method is 91.2% better than Blizzard, 64.5% better than breadth-first search and 27.4% better than depth-first search. The routing table snapshot method performs best when g=5, where it is 187.4% better than the random method and 38.9% better than g=7.
iBGP and RCP Routing Protocol Convergence Time Analysis
HU Qiao-lin, ZHAO Guo-lin, LIU Jian-hao and SHI Zi-yan
Computer Science. 2015, 42 (Z6): 266-270. 
Abstract PDF(1274KB) ( 38 )   
References | RelatedCitation | Metrics
The single-best-path propagation mechanism of iBGP may prevent router-level path diversity and correctness, resulting in long convergence times and inter-domain churn. Through a detailed route convergence time analysis of both iBGP and the Route Control Platform (MP-RCP), we derive the theoretical upper bound of iBGP convergence time. The experiments prove the correctness of the theoretical analysis, and also suggest that MP-RCP reduces convergence time and inter-domain churn.
Test Method of Performance of Network Layer Device
LI Zhao-bin, XIA Xiao, LIU Qian and MA Yu
Computer Science. 2015, 42 (Z6): 271-273. 
Abstract PDF(754KB) ( 47 )   
References | RelatedCitation | Metrics
The performance of network layer devices is related to the operational control of the subnet and directly affects the communication quality of the entire network. Therefore, testing the performance of network layer devices is essential. This paper introduced several general indices that can measure the performance of network layer devices, and stated the methodology for these indices in detail. We then applied the test method in a real environment and obtained the testing reports.
Probability Routing Protocol Based on Delay Tolerant in Wireless Body Area Network
LI Yan, WANG Cheng and FENG Xian-ju
Computer Science. 2015, 42 (Z6): 274-275, 289. 
Abstract PDF(780KB) ( 40 )   
References | RelatedCitation | Metrics
To address the data transmission reliability problem in wireless body area networks, this paper presented a probability-based delay tolerant network protocol. The protocol considers the nodes' caches, the encounter probability between nodes and the connection state of the network. The analysis shows that, compared with other routing protocols, this method can indeed improve the reliability of data transmission in the entire network.
Correlated Channel Capacity of Virtual MIMO Based on Distributed Satellites
YANG Huan and HE Yuan-zhi
Computer Science. 2015, 42 (Z6): 276-278. 
Abstract PDF(729KB) ( 49 )   
References | RelatedCitation | Metrics
In order to study the correlated channel capacity of a virtual MIMO system constructed from the multiple antennas of distributed satellites, a MIMO capacity model under correlated fading channels was proposed. We analyzed the correlated fading channel capacity of virtual MIMO by eigenvalue decomposition of the correlation matrix and by modeling the correlation coefficients with an exponential model. The simulation shows that the virtual MIMO constructed from distributed satellites can increase channel capacity, while the correlated channel capacity recedes as the channel correlation grows.
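The capacity analysis described above can be illustrated with a small Monte-Carlo sketch: a Kronecker-correlated Rayleigh channel with the exponential correlation model [R]_ij = r^|i-j|, and C = E[log2 det(I + (SNR/Nt) HH^H)] evaluated through the eigenvalues of HH^H. Antenna counts, SNR and r are arbitrary example values, not the paper's settings.

import numpy as np

def exp_corr(n, r):
    # Exponential correlation model: [R]_ij = r ** |i - j|, 0 <= r < 1.
    idx = np.arange(n)
    return r ** np.abs(idx[:, None] - idx[None, :]).astype(float)

def ergodic_capacity(nt=4, nr=4, snr_db=10.0, r=0.5, trials=1000, seed=0):
    # C = E[ log2 det(I + (snr/nt) H H^H) ] via the eigenvalues of H H^H,
    # with H = Rr^(1/2) Hw Rt^(1/2) (Kronecker-correlated Rayleigh fading).
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    Rt = np.linalg.cholesky(exp_corr(nt, r))
    Rr = np.linalg.cholesky(exp_corr(nr, r))
    cap = 0.0
    for _ in range(trials):
        Hw = (rng.standard_normal((nr, nt)) +
              1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        H = Rr @ Hw @ Rt
        lam = np.clip(np.linalg.eigvalsh(H @ H.conj().T), 0, None)
        cap += np.log2(1 + snr / nt * lam).sum()
    return cap / trials

print(ergodic_capacity(r=0.1), ergodic_capacity(r=0.9))  # capacity recedes as r grows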
Study of Wireless Sensor Network Model Based on Novel Local World Networks
WANG Jia-li, LI Hui-jia and JIA Chuan-liang
Computer Science. 2015, 42 (Z6): 279-284. 
Abstract PDF(1357KB) ( 35 )   
References | RelatedCitation | Metrics
Existing wireless sensor network models seldom consider heterogeneity and load balance, and the evaluation methods for such models are limited. This paper proposed a wireless sensor network model based on a novel local-world network, taking heterogeneity as well as node localization and a local adjustment factor into account. The new model includes a wireless sensor network model based on an energy-aware local-world network and one based on a load-regulation local-world network, which perfects the original model. Using MATLAB simulations, this paper studied the degree distribution of the model and proposed an evaluation method based on comparing the characteristic parameters of complex networks. It introduces centrality from the social network field to evaluate the importance of network nodes more deeply. Both theoretical analysis and experiments show that the new model can describe wireless sensor networks well and can further optimize the network as well as improve communication ability. The complex network ideas used in this paper have a positive effect on the research of wireless networks.
Double Layers Routing Algorithm on Large Road Networks Based on Overlapping Communities Detecting
YANG Xu-hua and ZHOU Shi-jie
Computer Science. 2015, 42 (Z6): 285-289. 
Abstract PDF(1293KB) ( 40 )   
References | RelatedCitation | Metrics
Fast shortest-path search algorithms on large road networks have wide application in navigation, transportation systems, traffic assignment, etc. Although the computational performance of several existing hierarchical algorithms has been improved, problems such as heavy computation and low efficiency remain. By detecting the hierarchical structure based on overlapping communities, we proposed a new hierarchical routing algorithm using overlapping communities. We divided the whole network into communities containing overlapping nodes and constituted a double-layer structure of the road network. The first layer contains the original road network nodes; in the second layer the communities are regarded as nodes, and the overlapping nodes and inter-community edges are regarded as edges. Based on this network architecture, routing is broken into two processes: heuristic overall routing in the second layer and local routing within the communities in the first layer. We denote the overlapping nodes as key routing nodes and use them in the hierarchical algorithm, which reduces the search space and computational complexity effectively. Experimental results on real city road networks show the effectiveness of the algorithm.
Adaptive Duty Cycle Algorithm Based on Traffic Prediction for WSNs
ZHANG Long-mei and LU Wei
Computer Science. 2015, 42 (Z6): 290-293, 306. 
Abstract PDF(1125KB) ( 34 )   
References | RelatedCitation | Metrics
A new adaptive duty cycle algorithm based on traffic prediction, named AdcbTP, was proposed for WSNs. AdcbTP predicts future traffic, estimates the duty cycle on the basis of the predicted values, and then adaptively controls the duty cycle of the SMAC protocol. On the macro scale of work periods, it works out a theoretical duty cycle value using the traffic prediction model and implements rough prediction control of the duty cycle; on the micro scale of sleep/listen periods, it implements subtle incremental adjustment of the duty cycle. NS2 simulations show little variance between the predicted values and the real values. Furthermore, extensive simulations of energy consumption and latency show that AdcbTP saves energy greatly without loss of latency.
Data Management Framework for Internet of Things
SHI Jun-ru, HEI Min-xing and YANG Jun
Computer Science. 2015, 42 (Z6): 294-298. 
Abstract PDF(1388KB) ( 39 )   
References | RelatedCitation | Metrics
The Internet of Things (IoT) is a networking paradigm where interconnected smart objects continuously generate data and transmit it over the Internet. However, solutions to manage and utilize the massive volume of data produced by these objects are yet to mature. Traditional database management solutions fall short of satisfying the sophisticated application needs of an IoT network that has a truly global scale, and current solutions for IoT data management address partial aspects of the IoT environment, with a special focus on sensor networks. We surveyed the data management solutions proposed for the IoT or its subsystems, and finally proposed a data management framework for the IoT that acts as a seed for a comprehensive IoT data management solution. The proposed framework adopts a federated, data- and source-centric approach to link the diverse things, with their abundance of data, to the potential applications and services that are envisioned for the IoT.
Data Collection Strategy Based on Improved LEACH Protocol
LIU Lin-feng, GUO Ping, ZHAO Juan and LI Ning
Computer Science. 2015, 42 (Z6): 299-302. 
Abstract PDF(970KB) ( 38 )   
References | RelatedCitation | Metrics
This paper improved the traditional LEACH protocol by introducing two influence factors, the residual energy and the number of times a node has been elected cluster head, to make the clustering more ideal. Because nodes forward data through multiple hops in traditional data collection methods, and some nodes' energy depletes rapidly due to forwarding data for other nodes, this paper proposed a new strategy, DCST, which introduces a mobile sink into the wireless sensor network and lets it move along a planned optimal track to collect data. After clustering the WSN with the improved LEACH protocol, DCST uses the ant colony algorithm to find the optimal path connecting all the cluster heads and lets the mobile sink move along this path to collect data. By optimizing and comparing the moving speed of the mobile sink, we find the ideal moving speed. Simulation results show that, compared with the traditional LEACH algorithm and other improved algorithms, the improved LEACH protocol and DCST can prolong the network life cycle more effectively and reduce the energy consumption of the whole network.
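One common way to fold residual energy and election history into LEACH's cluster-head election, in the spirit of the improvement described above, is to scale the classic threshold T(n); the weighting below is an illustrative guess, not the paper's exact formula.

import random

P = 0.05  # desired fraction of cluster heads per round

def leach_threshold(round_no, was_head_recently):
    # Classic LEACH threshold T(n).
    if was_head_recently:
        return 0.0
    return P / (1 - P * (round_no % int(1 / P)))

def improved_threshold(round_no, node):
    # Illustrative weighting (not the paper's exact formula): nodes with
    # more residual energy, and fewer past head terms, volunteer more often.
    t = leach_threshold(round_no, node["recent_head"])
    return t * (node["e_res"] / node["e_init"]) / (1 + node["head_count"])

node = {"recent_head": False, "e_res": 0.8, "e_init": 1.0, "head_count": 1}
print(random.random() < improved_threshold(3, node))  # True -> elected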
Key Points in Building Multi-AP Wireless Networks and Extended Applications
ZHOU Tong and SONG Hai-jun
Computer Science. 2015, 42 (Z6): 303-306. 
Abstract PDF(1075KB) ( 51 )   
References | RelatedCitation | Metrics
Building a wireless network is a systematic project. It must be clear which AP interface configurations can only be used between APs and which can be used when connecting to hosts; otherwise, the network will encounter obstacles. We tested and explored the basic configuration of the network, summarized the AP configuration rules in the process, and implemented a cheap wireless mobile printing application in the network construction.
Receiver-initiated Sender-dominated MAC Protocol Design in Wireless Sensor Networks
ZHANG Qi-fei, DAI Zhi-feng and GUI Chao
Computer Science. 2015, 42 (Z6): 307-310, 331. 
Abstract PDF(1299KB) ( 34 )   
References | RelatedCitation | Metrics
In the design of asynchronous receiver-initiated medium access control protocols for wireless sensor networks, conventional protocols adopt a receiver-dominated wakeup manner, which prolongs the upstream sender's period of persistent channel detection, increasing both delivery latency and energy consumption. The impact of different wakeup methods on delivery delay is studied, and a novel receiver-initiated sender-dominated medium access control (RISD-MAC) protocol is proposed that adaptively adjusts the wakeup time of the intended receiver based on the wakeup schedule of its upstream sender, guaranteeing the rendezvous of both sides for data delivery and achieving communication efficiency. The simulation results show that, compared with RI-MAC, a state-of-the-art medium access control protocol for wireless sensor networks, RISD-MAC achieves lower delivery latency and a lower duty cycle while maintaining a comparable packet delivery ratio.
Associated Coverage Using Data Correlation for Sensor Networks
WU Shun, WAN Ying, SUN Ya-juan, XU Da-wei and WANG Huan-zhao
Computer Science. 2015, 42 (Z6): 311-314, 320. 
Abstract PDF(1206KB) ( 39 )   
References | RelatedCitation | Metrics
This paper studied the coverage scheduling problem without the use of node location information. An associated coverage protocol using data correlation (ACPUDC) for WSNs was presented. ACPUDC describes the data redundancy in the network as formalized data correlation, analyzes the problem of network connectivity, and presents a judgment model for redundant nodes based on data correlation. The residual-energy-based backoff mechanism in ACPUDC ensures balanced energy consumption across nodes. The simulation results verify that ACPUDC can significantly reduce the number of working nodes while guaranteeing the QoC and connectivity of the network. Moreover, ACPUDC makes the energy consumption of each node fairer.
Trust Model of Cloud Computing Based on Multi-parameters Evaluation
WANG Jun, LIU Wen-fen and GAO Yan
Computer Science. 2015, 42 (Z6): 315-320. 
Abstract PDF(1458KB) ( 39 )   
References | RelatedCitation | Metrics
A novel trust model was put forward to solve the trust problem of cloud resource providers. In order to help users select the safest resource among a large number of providers, the trust evaluation considers five parameters of the cloud service process: availability, reliability, data integrity, honesty and turnaround efficiency. These parameters give a comprehensive assessment of the resource providers based on both subjective evaluation and objective data. The model achieves dynamic analysis of massive data using methods such as Bayesian statistics and a time attenuation function, so the statistical result can reflect the quality of service in time. Since users rely on each parameter to a different degree, the model distributes the weight of each trust factor by constructing the weight vector with the analytic hierarchy process (AHP), meeting the special needs of individual users. The simulation results verify that the model can resist strategic attacks and has practical value.
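A minimal sketch of the time-attenuated Bayesian aggregation such a model relies on is given below: each interaction outcome is decayed exponentially with age and folded into a Beta-style trust estimate. The decay rate and the (s+1)/(s+f+2) expectation are standard choices assumed for illustration, not the paper's exact formulas.

import math
import time

LAMBDA = 1e-5  # decay rate (hypothetical): older evidence counts less

def trust(events, now=None):
    # Beta-style Bayesian estimate on exponentially decayed evidence:
    # each event is (timestamp, success_flag), weighted by exp(-LAMBDA*age).
    now = time.time() if now is None else now
    s = f = 0.0
    for ts, ok in events:
        w = math.exp(-LAMBDA * (now - ts))
        if ok:
            s += w
        else:
            f += w
    return (s + 1.0) / (s + f + 2.0)   # expected value of Beta(s+1, f+1)

now = time.time()
history = [(now - 86400 * d, d % 4 != 0) for d in range(30)]  # synthetic log
print(round(trust(history, now), 3))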
Research on Application and Optimization of SPICE in Transparent Desktop Service Mechanism
WANG Bin, YU Jia-qi, LI Wei-min, ZHU Chong and SHENG Jin-fang
Computer Science. 2015, 42 (Z6): 321-324, 348. 
Abstract PDF(1376KB) ( 39 )   
References | RelatedCitation | Metrics
With the development of computer networks and virtualization technology, desktop virtualization solutions are becoming more and more mature. Desktop virtualization changes the past decentralized and independent service mode and provides users with a unified and secure desktop service. SPICE (Simple Protocol for Independent Computing Environment), an open-source virtual desktop solution, can provide dynamic, adaptive virtual desktop services with high performance. Based on virtualization technology and the SPICE protocol, this article built a desktop virtualization platform to provide users with desktop virtualization services, named transparent desktop services. However, there are still many deficiencies in this solution: for instance, it cannot be applied in environments requiring high user controllability, and the quality of the graphical interactive experience needs improvement. Therefore, SPICE was optimized in this paper. A transparent file transfer mechanism was proposed, which achieves file transfer between the Guest OS and the client, and the performance of SPICE graphical interaction was improved, which reduces the response time of users' operations. Finally, experimental results show that the optimizations can provide users with more controllable service and better experience quality.
Reliability-based Job Scheduling Algorithm in Cloud Computing
WANG Yong, LIU Mei-lin, LI Kai, REN Xing-tian and XU Rong-qiang
Computer Science. 2015, 42 (Z6): 325-331. 
Abstract PDF(1658KB) ( 40 )   
References | RelatedCitation | Metrics
As an emerging computing model with commercial properties, cloud computing has attracted wide attention, and task scheduling is a key issue in its research. In this paper, we considered the reliability requirement as the main optimization target. Using game theory, we modeled the task scheduling system as a cooperative game in which the computing nodes are the participants, the capability that computing nodes can provide in the steady state is the utility function, and the strategy is the task rate allocation across computing nodes. To make the system provide maximal capability in the steady state, the computing nodes cooperate with each other when choosing their game strategies. We regarded the computing nodes as M/G/1 queues, and according to M/G/1 queuing theory we analyzed the capability that computing nodes can provide in the steady state. We proved that a Nash bargaining solution exists. On this basis, we proposed a reliability-based balancing task scheduling algorithm.
Adaptive Replacement Cache Mechanism for Fault Tolerance in Cloud Storage
WU Qiu-ping, LIU Bo and LIN Wei-wei
Computer Science. 2015, 42 (Z6): 332-336, 340. 
Abstract PDF(1546KB) ( 39 )   
References | RelatedCitation | Metrics
Hadoop adopts replication as its default data fault tolerance, but this mechanism occupies much storage space and has low storage efficiency. To solve this problem, based on an analysis of the ARC cache replacement algorithm, this paper proposed an adaptive replacement cache mechanism for fault tolerance in cloud storage (ARCMFF). When users access the file system, ARCMFF tracks file access frequency by maintaining an LRU queue and an LFU queue, and then adds frequently accessed files to the cache for better system performance. In ARCMFF, the majority of files are stored with erasure codes, while only the few files in the cache are stored as replicas. Erasure coding has higher storage efficiency, so the distributed file system can save a large amount of storage space. A series of experiments verifies that distributed file systems with ARCMFF can greatly save file storage space, improve storage efficiency and achieve higher read and write performance.
Analytical Model for Qemu-based Live Migration Strategy
WANG Sen, ZHU Chang-peng and HAN Bo
Computer Science. 2015, 42 (Z6): 337-340. 
Abstract PDF(911KB) ( 37 )   
References | RelatedCitation | Metrics
Live migration is a powerful management tool in data centers and has been widely applied for virtual machine load balancing, fault tolerance, power management and other applications. Whether the performance of live migration of virtual machines is evaluated precisely directly influences the effects of live migration decisions. Therefore, we proposed an analytical model for QEMU-based live migration of virtual machines. Based on the model, we extracted the key parameters that affect the performance of live migration and analyzed the mathematical relations between these parameters and migration performance. Finally, we built experiments to evaluate and verify the correctness and precision of the analytical model by comparing experimental and analytical results. Our experimental results show that the model yields higher than 95% prediction accuracy for migration time and total transferred data.
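The kind of analytical model involved can be sketched with the widely used iterative pre-copy recurrence, where each round retransmits the pages dirtied during the previous round; the paper's QEMU-specific model refines the parameters, so the version below is only the textbook skeleton with hypothetical numbers.

def precopy_model(mem_mb, bw_mbps, dirty_mbps, stop_mb=8, max_rounds=30):
    # Round i sends V_i at bandwidth B in t_i = V_i/B; the pages dirtied
    # meanwhile form the next round: V_{i+1} = D * t_i. Converges if D < B.
    v, total_time, total_data = float(mem_mb), 0.0, 0.0
    for _ in range(max_rounds):
        t = v / bw_mbps
        total_time, total_data = total_time + t, total_data + v
        v = dirty_mbps * t
        if v <= stop_mb:
            break
    downtime = v / bw_mbps  # final stop-and-copy round
    return total_time + downtime, total_data + v, downtime

# 4 GB guest, 1 GB/s link, 200 MB/s dirty rate (all hypothetical):
print(precopy_model(mem_mb=4096, bw_mbps=1024, dirty_mbps=200))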
Task Scheduling Strategy in Cloud Computing Based on Sequential Game
LIU Mei-lin, WANG Yong, LI Kai, LIU Peng-fei, REN Xing-tian and YANG Jian-hong
Computer Science. 2015, 42 (Z6): 341-344, 358. 
Abstract PDF(1113KB) ( 44 )   
References | RelatedCitation | Metrics
With the popularity of Internet applications, cloud computing has become a hot spot in both industry and academia. Cloud computing is a new computing model following distributed computing, parallel computing and grid computing, and task allocation is a focus of its research. In this paper, after summarizing the research status of cloud computing, we proposed a task scheduling strategy based on sequential game theory, which better optimizes the response time of tasks.
Design and Implementation of Desktop Cloud Management Platform Based on VDI Mode
LI Mei, LUO Nan-lin and CAI Jian-xuan
Computer Science. 2015, 42 (Z6): 345-348. 
Abstract PDF(978KB) ( 45 )   
References | RelatedCitation | Metrics
With the advance of digital campus construction in universities, the desktop system maintenance workload of office and teaching computers keeps increasing, calling for a more advanced, convenient and fast solution. In this paper, a desktop cloud management platform based on the VDI mode was designed for this situation. The experiment proves that this scheme can effectively realize centralized management of users' desktops. It supports personalization, device drivers and network boot. The scheme guarantees data security, enables quick recovery from disasters, and removes the complexity of desktop system maintenance, which improves work efficiency and saves cost.
Maximum Flow Algorithm for Networks with Lower Bounds
ZHAO Xiao-rong
Computer Science. 2015, 42 (Z6): 349-350, 377. 
Abstract PDF(601KB) ( 34 )   
References | RelatedCitation | Metrics
This paper discussed the maximum flow problem on networks whose arc capacities have both upper and lower bounds, and gave an associated algorithm and mathematical model. In other words, for networks with upper and lower limits, the maximum flow problem was transformed into a minimum cost flow problem, and an algorithm to solve it was provided.
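Besides the minimum-cost-flow route taken by the paper, a standard textbook reduction solves the same problem directly: shift each arc to capacity cap-low, route the mandatory low units through a super source and sink to test feasibility, then augment in the residual network. A sketch using networkx (an assumed dependency, not the paper's implementation) follows.

import networkx as nx

def max_flow_with_lower_bounds(arcs, s, t):
    # arcs = {(u, v): (low, cap)}. Shift every arc to capacity cap - low,
    # route the mandatory 'low' units via a super source/sink S, T, and
    # close the circulation with an uncapacitated t->s arc.
    G, excess = nx.DiGraph(), {}
    for (u, v), (low, cap) in arcs.items():
        G.add_edge(u, v, capacity=cap - low)
        excess[v] = excess.get(v, 0) + low
        excess[u] = excess.get(u, 0) - low
    S, T, need = "_S", "_T", 0
    for node, e in excess.items():
        if e > 0:
            G.add_edge(S, node, capacity=e)
            need += e
        elif e < 0:
            G.add_edge(node, T, capacity=-e)
    G.add_edge(t, s)  # no 'capacity' attribute = infinite in networkx
    feas, fd = nx.maximum_flow(G, S, T)
    if feas < need:
        raise ValueError("no feasible flow satisfies the lower bounds")
    base = fd[t][s]  # s->t value already carried by the feasible flow
    # Augment in the residual network of the real arcs only.
    R = nx.DiGraph()
    def add_cap(u, v, c):
        if c > 0:
            cur = R.get_edge_data(u, v, {"capacity": 0})["capacity"]
            R.add_edge(u, v, capacity=cur + c)
    for (u, v), (low, cap) in arcs.items():
        f = fd[u].get(v, 0)
        add_cap(u, v, (cap - low) - f)  # leftover forward capacity
        add_cap(v, u, f)                # undo flow above the lower bound
    extra = nx.maximum_flow_value(R, s, t) if R.has_node(s) and R.has_node(t) else 0
    return base + extra

arcs = {("s", "a"): (2, 4), ("a", "t"): (0, 3), ("s", "t"): (0, 2)}
print(max_flow_with_lower_bounds(arcs, "s", "t"))  # 5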
Communication Research of Communication Identity Management Terminal Based on ZigBee Wireless Sensor Networks
LI Bing-yi, LIANG Ke, HU Yin, DENG Xue-bo and WEN Yong-yi
Computer Science. 2015, 42 (Z6): 351-354, 361. 
Abstract PDF(1294KB) ( 36 )   
References | RelatedCitation | Metrics
This paper first introduced the relevant background on wireless sensor networks and ZigBee technology, and then chose a ZigBee hardware platform offered by Freescale, the MC13224. After reading and analyzing the ZigBee protocol, this paper described the NWK layer and the application layer. We then developed a simple application instance on this platform, and discussed the data structures and important functions used to implement it. Finally, this paper demonstrated the application with third-party software and introduced the method for testing the ZigBee networking mode based on the communication terminal identity management.
Internet of Things RFID Middleware Technology and its Application Research in Electric Power Communication Identity Management
LIANG Ke, LI Bing-yi, HU Yin, DENG Xue-bo and WEN Yong-yi
Computer Science. 2015, 42 (Z6): 355-358. 
Abstract PDF(1012KB) ( 35 )   
References | RelatedCitation | Metrics
RFID technology is one of the core technologies in the construction of the Internet of Things. This paper discussed the system structure of the Internet of Things, then analyzed the demands and problems of RFID middleware platforms for embedded system applications. In consideration of energy consumption and storage efficiency, we designed a middleware platform with three layers, covering equipment management, data management and the application interface layer, and analyzed the main function modules and implementation techniques of each layer in detail. Finally, the article introduced how to use the RFID middleware in an identity management system.
Improved iSCSI Authentication Based on USB-Key
GUO Yan, LI Yong-tang and LI Chun-jie
Computer Science. 2015, 42 (Z6): 359-361. 
Abstract PDF(771KB) ( 34 )   
References | RelatedCitation | Metrics
The iSCSI authentication method CHAP was introduced and analyzed, especially its security risks. Cases such as leakage of the username and password were considered, and an improvement was proposed. Experiments demonstrate that CHAP with a USB-Key can effectively prevent multiple initiators from logging in with the same username/password pair, and the security level and stability of iSCSI are significantly improved.
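For context, the CHAP exchange that the scheme hardens computes the response as MD5(Identifier || secret || Challenge) per RFC 1994, which iSCSI reuses; the sketch below shows that computation, with the idea that the secret would live inside the USB-Key rather than on the host. Names and the secret value are illustrative.

import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # RFC 1994: response = MD5(Identifier || secret || Challenge).
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

ident, secret = 1, b"per-initiator-secret"   # hypothetical shared secret
challenge = os.urandom(16)                   # target's random challenge
answer = chap_response(ident, secret, challenge)  # done inside the USB-Key
assert answer == chap_response(ident, secret, challenge)  # target verifies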
Web Attack Detection Method Based on Support Vector Machines
WU Shao-hua, CHENG Shu-bao and HU Yong
Computer Science. 2015, 42 (Z6): 362-364. 
Abstract PDF(733KB) ( 48 )   
References | RelatedCitation | Metrics
Web attack detection is a kind of dynamic Web security protection technology, but intruders can use different coding schemes, mixed case, alternative statements and other tricks to bypass the defense mechanism. Addressing the particularity of Web security and the shortcomings of existing detection technology, we took SQL injection and cross-site scripting attacks as examples. Firstly, this paper studies feature selection and extraction for SQL injection and cross-site scripting attacks, using manual selection and mathematical statistical methods to convert the original payload into a fixed-dimension feature vector. Secondly, it labels the sample data after feature selection and extraction, and performs support vector machine training and classification. Finally, using Weka, it verifies the feasibility and effectiveness of the approach. The experimental results show that the selected and extracted features can reflect the nature of the original data and that this method has a higher detection rate.
Keyword-based Privacy-preserving Retrieval over Cloud Encrypted Data
YU Zhi-bin and ZHOU Yan-hui
Computer Science. 2015, 42 (Z6): 365-369, 401. 
Abstract PDF(1582KB) ( 41 )   
References | RelatedCitation | Metrics
More and more organizations and individuals outsource their data storage to the cloud and use cloud-based services to manage their data. Query, which gives users the capability of accessing cloud data and obtaining information, is an important component of cloud services. Thus, how to protect both the privacy of user queries and data privacy in the cloud while providing query services, and return query results that meet the user's needs quickly and on time, has become a vital concern. The private information retrieval (PIR) protocol allows a user to perform a query against a database without revealing his private information, while the privacy of the database is also protected. To address the limitations encountered when applying existing PIR protocols in cloud environments and large-capacity data scenarios, a computational PIR protocol based on an additively homomorphic encryption scheme and MapReduce was proposed. The protocol uses batch queries to lower the communication complexity. Further, we used the proposed protocol as a building block, together with the perfect hash function tool, to construct a cloud keyword-based encrypted data retrieval scheme. The scheme combines high query efficiency with the practicality of keyword-based retrieval.
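The additively homomorphic core of such a PIR protocol can be illustrated with a selection-vector query: the client encrypts a one-hot vector, and the server returns sum_i E(b_i)*x_i, an encryption of exactly the wanted record, without learning the index. The sketch below uses the python-phe Paillier package as an assumed stand-in; the paper's batching and MapReduce distribution are not shown.

from phe import paillier  # python-phe package, assumed for this sketch

db = [42, 7, 99, 13]   # server's toy database
want = 2               # index the client wants, kept private

pub, priv = paillier.generate_paillier_keypair(n_length=1024)

# Client: encrypted one-hot selection vector (server sees only ciphertexts).
query = [pub.encrypt(1 if i == want else 0) for i in range(len(db))]

# Server: homomorphically evaluates sum_i E(b_i) * x_i = E(x_want).
reply = query[0] * db[0]
for c, x in zip(query[1:], db[1:]):
    reply = reply + c * x

assert priv.decrypt(reply) == db[want]  # only the client can decrypt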
FAPP:A Float-car-aided Privacy-preserving Authentication Protocol for VANETs
YANG Tao, WANG Ya-kun, GE Yun-feng and LIN Yu
Computer Science. 2015, 42 (Z6): 370-377. 
Abstract PDF(467KB) ( 67 )   
References | RelatedCitation | Metrics
VANETs are one of the most important Internet of Things (IoT) applications in the intelligent transportation field and have attractive development prospects. Research on the security and privacy of VANETs is becoming a hot spot, and a large number of research results have appeared. Using a float-car-aided group forming method, we proposed a float-car-aided privacy-preserving communication protocol for VANETs (FAPP). In FAPP, the float car F forms a group G whose members are the vehicles around it. As the group leader, F takes charge of verifying the member cars through the revocation list from the transportation regulation center (TRC); F also generates the session key and determines the configuration of the group. F can anonymize a message from a group member and then send it to other group members or other group leaders after inserting a corresponding trace entry into the trace log. If required, the trace execution department (TED) can trace the real signer of a disputed message with the cooperation of the TRC. Comparison with other existing schemes in the literature shows the efficiency and applicability of our scheme, and security analysis shows that it meets the conditional privacy-preserving objectives of VANETs well.
Security and Efficiency Negotiation Model
LI Jian-li, DENG Xiao, WANG Yi-mou and XIE Yue
Computer Science. 2015, 42 (Z6): 378-381, 392. 
Abstract PDF(1304KB) ( 36 )   
References | RelatedCitation | Metrics
Automated trust negotiation is a way to establish trust between strange peers in distributed environments. During negotiation, peers have to both conceal sensitive information and reveal information to strengthen mutual trust; this contradiction makes safety and efficiency the main concerns for researchers. We proposed a new negotiation model which adds a trust file repository and a trust evaluation module to the traditional model. The trust file records the historical negotiation information of two peers, and the trust evaluation module evaluates the trust level between them. When a negotiation starts, the model first queries whether an applicable trust file exists; if it does, the file is verified so that the credential exchange process can be omitted. Otherwise, the numbers of successful and failed negotiations are used to evaluate the trust level of the two peers. Since the trust level of the two negotiators increases and their sensitivity to each other's digital credentials decreases, the number of exchanges of access control policies and digital credentials during negotiation decreases, which shortens the time spent and increases efficiency. Experiments in TrustBuilder2 show that the proposed model increases negotiation efficiency, and analysis shows that it can protect the negotiation from denial of service by using the recorded failed negotiation times. Therefore, the proposed model is safe and efficient.
Improved Algorithm for Buffer Overflow Detection Based on Libsafe Library
XIE Wen-bing, JIANG Jun, LI Zhong-sheng and NIU Xia-mu
Computer Science. 2015, 42 (Z6): 382-387, 424. 
Abstract PDF(1851KB) ( 50 )   
References | RelatedCitation | Metrics
Due to the lack of a boundary checking mechanism in C/C++, buffer overflow is one of the most serious attacks, caused by unsafe functions such as strcpy. This paper first discussed the current mechanism of the libsafe library and analyzed the drawbacks of using the stack frame pointer to trace back the stack information. We proposed a method that traces back the stack information by matching the attribute code of an instruction's opcode: by matching each opcode with the candidate opcodes, we can recover the stack information. We also introduced a hash table to store the stack information that has already been computed, with the return address as the key. We analyzed the feasibility and complexity of our improved algorithm. Experiments were done from the perspectives of buffering, integrity and accuracy, and the performance shows the effectiveness of the algorithm.
Trust Evaluation Mechanism for Nodes Based on Adaptive Cloud Model in Wireless Sensor Network
YANG Yong-fei, LIU Guang-jie and DAI Yue-wei
Computer Science. 2015, 42 (Z6): 388-392. 
Abstract PDF(1317KB) ( 44 )   
References | RelatedCitation | Metrics
As a supplementary measure to the traditional encryption-based security system for wireless sensor networks, trust evaluation of nodes can be employed to recognize malicious nodes and counter attacks from inside the network. Exploiting the temporal and spatial correlation of the data gathered by nodes in a clustered WSN for data acquisition, a standard cloud benchmark was implemented through the construction of a standard cloud model; the self trust and adjacent trust were computed respectively, and a two-tier trust evaluation mechanism for nodes was proposed based on a trust-feedback cloud model. Simulation results show that the proposed scheme can detect both single-node attacks and inner-cluster multi-node collusive attacks effectively.
File Encrypting Method on Kernel Level for Specific Application
XU Guo-chun and YIN Hong-wu
Computer Science. 2015, 42 (Z6): 393-394, 398. 
Abstract PDF(686KB) ( 36 )   
References | RelatedCitation | Metrics
Encrypted file systems such as eCryptfs and dm-crypt can prevent information leakage when storage media are lost. But they do not distinguish between the processes accessing a file, so they cannot prevent information leakage through trojan programs. This paper introduced a method which keeps the ciphertext in the kernel page cache so that only the specific application can access the plain text. This method eliminates the way trojan programs access the plain text and improves the security of the information system.
Key Management Scheme of Wireless Sensor Network Based on Trust Evaluation Algorithm between Nodes
CHEN Hao and HUANG Hai-ping
Computer Science. 2015, 42 (Z6): 395-398. 
Abstract PDF(930KB) ( 33 )   
References | RelatedCitation | Metrics
This paper presented a key management scheme based on a trust mechanism. By evaluating the trust value of the originating node, it determines whether the node can become the head node of a cluster for key updating. This scheme, with a two-way trust packet forwarding mechanism, can not only obtain the trust value of a node but also verify the correctness of the trust packet, further improving the security of the key management process. Meanwhile, this paper carried out theoretical analysis of security, communication complexity and memory consumption, as well as experimental simulation.
Research on Virtual Desktop System Authentication Method Based on CPK
JU Lei, CHI Ya-ping, LIU Qiao-yu and FENG Hua-min
Computer Science. 2015, 42 (Z6): 399-401. 
Abstract PDF(751KB) ( 70 )   
References | RelatedCitation | Metrics
Virtual desktop technology separates users from resources, contributing to terminal security and improved resource utilization, and provides convenience for centralized resource management; but the introduction of virtualization technology also brings unique safety risks to the virtual desktop. Identity authentication is the key technology for solving virtual desktop security problems and is also the foundation of more complex and fine-grained security protection measures. This article first described the basic principle of combined public key (CPK) cryptosystems, and then, according to the characteristics of the virtual desktop, proposed CPK-based authentication methods for two scenarios: applying for virtual resources and using virtual resources. Through federated identity, the binding of the user and the virtual machine is achieved. At last, safety and performance analyses of the proposed authentication methods were given.
Attack-resistant and Low-cost AES Implementation for Wireless Sensor Network
LUO Xin-qiang, QI Yue, WAN Ya-dong and WANG Qin
Computer Science. 2015, 42 (Z6): 402-407, 434. 
Abstract PDF(1775KB) ( 102 )   
References | RelatedCitation | Metrics
The Advanced Encryption Standard (AES) is specified as the core cipher algorithm of the data link layer by many wireless sensor network (WSN) standards. But traditional AES implementations are hard to run on resource-constrained WSN nodes due to their high computational complexity. Look-up tables (LUTs) can improve the speed of AES software implementations significantly, but the traditional AES implementation based on 4 LUTs (4-T) consumes much storage and faces the threat of access-driven cache attacks. This paper proposed an AES implementation (1-T) based on a single 512-byte LUT, optimizing the structure of the LUT to decrease its storage consumption while significantly increasing its resistance against access-driven cache attacks. To limit the impact on encryption speed, the round encryption function of 1-T was optimized as well. Experimental results on ARM show that the encryption time of 1-T is 43.5% longer than that of 4-T, but only 38.55% of that of an AES implementation based on a hardware accelerator.
Review of Typical Attacks on SSL/TLS
ZHANG Ming, XU Bo-yi and GUO Yan-lai
Computer Science. 2015, 42 (Z6): 408-412, 419. 
Abstract PDF(1610KB) ( 40 )   
References | RelatedCitation | Metrics
SSL/TLS is a cryptographic protocol widely used on the Internet. It works on top of the underlying transport layer and encrypts the data of network connections at the application layer to provide confidentiality and integrity guarantees. The SSL/TLS protocol standards are constantly being improved, but attacks are also increasing. We first introduced some basic knowledge of SSL/TLS, and then analyzed typical attacks on it. The attacks are divided into three categories: attacks related to mechanisms, attacks related to implementations, and attacks related to trust models. For each category, several specific instances were presented.
Study of Cipher Text Retrieval Based on Homomorphic Encryption
CHENG Shuai and YAO Han-bing
Computer Science. 2015, 42 (Z6): 413-416. 
Abstract PDF(1005KB) ( 54 )   
References | RelatedCitation | Metrics
With the rapid development of the information society, information resources increase day by day. Full-text retrieval provides a highly efficient means of searching information resources, but also raises more information security problems. To address these problems, we provided an improved homomorphic encryption algorithm, called the new homomorphic encryption (NHE) algorithm, and designed a ciphertext retrieval scheme based on it. The scheme uses the properties of the homomorphic encryption algorithm in the retrieval process, with an inverted index structure and binary search used for searching, and it can effectively solve the ciphertext retrieval problem.
Research of Bias Data Encryption and Protection Methods for GIS
CUI A-jun and WANG Xiao-ming
Computer Science. 2015, 42 (Z6): 417-419. 
Abstract PDF(678KB) ( 36 )   
References | RelatedCitation | Metrics
Aiming at the problem that GIS systems contain large, diverse and complex data, a bias data encryption and protection method for GIS was studied, proving that it is necessary and feasible to protect bias data. The data are classified according to key data and bias data protection, and different data are protected in different ways. Meanwhile, a universal encryption method was discussed based on traditional methods. Experiments show that, to some extent, this solution reduces the cost of encryption and decryption.
Enhanced WAI Certificate Authentication Process
XIAO Yue-lei, ZHU Zhi-xiang and ZHANG Yong
Computer Science. 2015, 42 (Z6): 420-424. 
Abstract PDF(1302KB) ( 32 )   
References | RelatedCitation | Metrics
Based on the existing WAI certificate authentication process, an enhanced WAI certificate authentication process was proposed. Besides implementing the functions of the existing process, it can establish a secure channel between the station (STA) and the authentication service unit (ASU), and a secure channel between the access point (AP) and the ASU, supporting the platform authentication of the trusted connect architecture (TCA) well. Moreover, this enhanced process was proved secure using the strand space model (SSM), and we pointed out its backward compatibility with the existing WAI certificate authentication process.
Study on Evaluation Method of Multi-layer Hybrid Intrusion Detection System
LI Yun-ting, XIA Zhong-ping and XIONG Jing
Computer Science. 2015, 42 (Z6): 425-428, 443. 
Abstract PDF(1045KB) ( 34 )   
References | RelatedCitation | Metrics
The increasing sophistication and diversification of network attacks seriously challenges network security. With various technology-based intrusion detection systems continuously emerging, evaluating them thoroughly and objectively has important implications. Most existing evaluation methods involve few evaluation metrics and have weaknesses such as partial and subjective consideration. Aiming at these issues, a general evaluation system for intrusion detection systems was introduced that can be applied to typical intrusion detection systems, with universal and credible indices chosen by specific principles. By introducing AHP and the variation coefficient method, a hybrid AHP comprehensive evaluation model was constructed, and the comprehensive evaluation algorithm combining the hierarchical analysis method and the variation coefficient method was implemented. This comprehensive evaluation method, combining subjective and objective evaluation, can accurately complete the IDS evaluation task.
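The AHP half of the hybrid weighting can be sketched as follows: weights are the normalized principal eigenvector of the pairwise comparison matrix, checked by the consistency ratio. The three-index example matrix is hypothetical, and the objective variation-coefficient half of the method is not shown.

import numpy as np

# Saaty's random consistency index by matrix order.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_weights(A):
    # Weights = normalized principal eigenvector of the pairwise matrix;
    # the consistency ratio CR should stay below 0.1.
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    ci = (vals[k].real - n) / (n - 1) if n > 1 else 0.0
    cr = ci / RI[n] if RI.get(n) else 0.0
    return w, cr

# Hypothetical 3-index comparison: detection rate vs false alarms vs latency.
A = [[1, 3, 5],
     [1 / 3, 1, 2],
     [1 / 5, 1 / 2, 1]]
w, cr = ahp_weights(A)
print(np.round(w, 3), round(cr, 3))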
Design and Implementation of Exact Matching Algorithm Based on Bloom Filter
WANG Peng-chao, DU Hui-min, CAO Guang-jie, DU Qin-qin and DING Jia-long
Computer Science. 2015, 42 (Z6): 429-434. 
Abstract PDF(1480KB) ( 41 )   
References | RelatedCitation | Metrics
Because Bloom filter technology may misjudge an element that does not belong to a set as belonging to it, and element removal is difficult, this paper introduced CAM (Content Addressable Memory) for a second, exact match. We proposed storing the k hash values of the Bloom filter in CAM in order to determine whether an element belongs to the set, which also makes it easy to remove elements. Results of the algorithm on the Snort 2.9 rule database show that, compared with single-stage CAM lookup, at a false positive rate of 0.01 the BF-CAM system reduces resource consumption by more than 5-fold and power consumption by 10-fold; it can reduce the load of the system, improve system performance, and is suitable for string detection in high-speed networks.
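For reference, the Bloom filter side of the design is sketched below; false positives are inherent to the structure, which is exactly why a CAM-based exact match is placed behind it. Sizes, hash construction and the sample patterns are illustrative choices, not the paper's hardware parameters.

import hashlib

class BloomFilter:
    # k hash probes into an m-bit array: false positives possible,
    # false negatives impossible (hence the exact CAM stage behind it).
    def __init__(self, m=8192, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        return all((self.bits[p // 8] >> (p % 8)) & 1 for p in self._positions(item))

bf = BloomFilter()
bf.add(b"/etc/passwd")                    # e.g. a Snort rule pattern
print(bf.might_contain(b"/etc/passwd"))   # True
print(bf.might_contain(b"/bin/sh"))       # almost certainly False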
Research on Real-time Network Security Situation Assessment Based on Rough Set
WU Chao-xiong, WANG Xiao-cheng, WANG Hong-yan and SHI Bo
Computer Science. 2015, 42 (Z6): 435-437, 458. 
Abstract PDF(970KB) ( 54 )   
References | RelatedCitation | Metrics
Aiming at the accuracy and real-time problems of situation assessment, a real-time network security situation assessment method based on rough sets was proposed. It acquires high-quality rule sets from multiple samples through rough set theory and generates multi-level rule trees, then integrates the rules into a real-time attack awareness engine to achieve online analysis and detection of dynamic data streams. The results are used as evidence for situation assessment, and the situation value of the whole network is finally computed according to the situation assessment model. Experiments show that the method sufficiently improves the accuracy, real-time performance and objectivity of the assessment.
Research of Wireless Sensors Network Security Algorithms
TAN Long and PEI Chao
Computer Science. 2015, 42 (Z6): 438-443. 
Abstract PDF(1591KB) ( 37 )   
References | RelatedCitation | Metrics
The influence of Byzantine nodes is important for the reliability and usability of the entire network, and the design of lightweight Byzantine fault-tolerant protocols is of key significance for enhancing the fault tolerance of large-scale wireless sensor networks. We first briefly reviewed the status and development of the Byzantine generals problem and the fault tolerance problem in WSNs. We analyzed the security targets and security threats of WSNs in detail, and on this basis summed up the focus and technical difficulties of Byzantine fault tolerance in WSNs, namely choosing the appropriate network topology and the appropriate cryptography. Combining a fast ECDSA optimal threshold scheme, we proposed a lightweight Byzantine fault-tolerant protocol (ELBFT) and evaluated it through OPNET simulation. ELBFT, based on a two-level hierarchical architecture, operates different BFT procedures to reduce the number of communication messages. ELBFT balances energy consumption and vastly improves the fault tolerance of large-scale wireless sensor networks.
Efficient Attribute Based Key Agreement Protocol
CHEN Jian, WANG Xiao-ying, WANG Yong-tao and LI Yao-sen
Computer Science. 2015, 42 (Z6): 444-446, 469. 
Abstract PDF(938KB) ( 50 )   
References | RelatedCitation | Metrics
We proposed an efficient attribute-based key agreement protocol. Due to its fuzzy identification of protocol participants, attribute-based key agreement has been a hot research topic in recent years. We presented a concrete construction of an efficient two-party attribute-based key agreement protocol, based on the key abstraction of the most efficient attribute-based encryption scheme available at present. The new protocol achieves a more expressive access structure and is efficient. Furthermore, the new protocol was proved secure in the standard model under the decisional q-parallel bilinear Diffie-Hellman exponent (BDHE) assumption.
Three-party Authenticated Key Agreement Protocol Based on One-way Isomorphism
CHEN Hai-hong
Computer Science. 2015, 42 (Z6): 447-450. 
Abstract PDF(871KB) ( 40 )   
References | RelatedCitation | Metrics
Key agreement is an important means of realizing secure communication between participants in an open environment. This paper proposed a novel three-party key agreement protocol with the sender authentication property. We used a one-way isomorphism to generate the session key, which avoids the insecurity of hash functions. We then gave a rigorous security proof for the new protocol and compared its computational cost with that of related work. The results show that our protocol provides known-key security and perfect forward security, can resist the key compromise impersonation attack, and has an acceptable computational cost.
Analysis on Security Risk and Control Strategy of Petroleum Enterprise Internal Network
LU Kun-qiao, WANG Jin-he, DENG Tao and LI Hong
Computer Science. 2015, 42 (Z6): 451-452, 478. 
Abstract PDF(723KB) ( 34 )   
References | RelatedCitation | Metrics
With the development of information technology, the internal network of petroleum enterprises plays an increasingly important supporting role in production activities, placing higher requirements on the load capacity and security of the network. How to set up a qualified security system and how to guarantee the security of the enterprise's core data are issues that need to be solved immediately given the enterprise's pressing requirements. This article takes the internal network security of the Xinjiang oilfield company No.2 production plant as an example, analyzes the existing security risks of the internal network, and puts forward three control strategies, namely optimizing the network structure, deploying a security protection system and building a safety guarantee system, to provide a reference for oilfield enterprise network safety.
Feature Extraction Method Based on Sparse Principal Components for Gene Expression Data
SHEN Ning-min, LI Jing, ZHOU Pei-yun and ZHUANG Yi
Computer Science. 2015, 42 (Z6): 453-458. 
Abstract PDF(1481KB) ( 44 )   
References | RelatedCitation | Metrics
Cluster analysis is a popular method for gene expression data: it can be used to identify cancer cells so that diseases can be diagnosed accurately and rapidly through the gene class label. However, many attributes and few samples produce a mass of redundant or noisy information, so directly clustering high-dimensional data reduces accuracy. Principal Component Analysis (PCA) is a classical dimension-reduction method that transforms high-dimensional data into a low-dimensional space while maintaining maximal variance. The shortcoming of PCA is that its loadings are not sparse and therefore lack interpretability. In this paper, a sparse PCA method based on the truncated power method was applied to feature extraction for gene expression data, and the extracted sparse components were then fed into the K-means clustering process. Finally, experimental results on three typical gene datasets (colon cancer, leukemia and lung cancer) verify that the sparse gene data improve clustering efficiency and accuracy.
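To make the pipeline concrete, the following is a minimal sketch (not the authors' code) of sparse PCA via the truncated power method followed by K-means on the extracted component; the data matrix and the sparsity level cardinality are illustrative placeholders.

import numpy as np
from sklearn.cluster import KMeans

def truncated_power_pc(A, cardinality, n_iter=100):
    # Leading sparse eigenvector of the gene-gene covariance matrix A:
    # power iteration, but keep only the 'cardinality' largest entries.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        y = A @ x
        keep = np.argsort(np.abs(y))[-cardinality:]
        x = np.zeros_like(y)
        x[keep] = y[keep]
        x /= np.linalg.norm(x)
    return x

X = np.random.rand(60, 2000)                  # placeholder expression matrix
Xc = X - X.mean(axis=0)                       # center each gene
loading = truncated_power_pc(Xc.T @ Xc, cardinality=50)
scores = Xc @ loading                         # one sparse feature per sample
labels = KMeans(n_clusters=2, n_init=10).fit_predict(scores.reshape(-1, 1))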
Parallel Fp-growth Algorithm in Search Engines
HUANG Jian, LI Ming-qi and GUO Wen-qiang
Computer Science. 2015, 42 (Z6): 459-461, 483. 
Abstract PDF(905KB) ( 29 )   
References | RelatedCitation | Metrics
Web log files record users' historical retrieval behavior. This paper studied whether query words and clicked links form frequent itemsets, and how efficiently frequent itemsets can be mined under distributed conditions. Based on the Hadoop framework, we designed a parallel FP-growth algorithm to mine search engine web logs. Simulation results show that frequent itemsets of query words and clicked links satisfying the given support are prevalent in web logs. As the number of nodes in the Hadoop cluster increases, the performance of the parallel FP-growth algorithm improves greatly, so the mining efficiency of frequent itemsets is significantly improved. Simulation results also show that the larger the amount of data, the more obvious the improvement.
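The frequent-itemset step can be illustrated on a single node with the mlxtend library (the paper's contribution, the Hadoop parallelization, is not shown); the session contents below are made-up placeholders.

import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth

# One transaction per log record: the query words plus the clicked link.
sessions = [
    ["mp3", "download", "url:music.example.com"],
    ["mp3", "player", "url:music.example.com"],
    ["download", "mp3", "url:music.example.com"],
]
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(sessions).transform(sessions), columns=te.columns_)
print(fpgrowth(onehot, min_support=0.6, use_colnames=True))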
Index Algorithm for Audio Fingerprints Based on Compressed Suffix Array
LIU Xue-zheng, SHI You-qun, LUO Xin and TAO Ran
Computer Science. 2015, 42 (Z6): 462-464, 488. 
Abstract PDF(1012KB) ( 39 )   
References | RelatedCitation | Metrics
In music retrieval methods based on audio fingerprints, a large database is required for fingerprint comparison, leading to low retrieval efficiency. In this paper, we proposed an index compression method using a compressed suffix array, in order to overcome the large memory cost of full-text indexing. The proposed method losslessly compresses data sequences with run-length encoding, mainly exploiting the fact that repeated characters occur frequently in the higher bits of sorted audio fingerprint data. The experimental results show that, compared with the conventional method, the proposed method needs only 30% of the space of the audio fingerprint database for a music database of 12000 songs, and around 80% of the index space for a database of 2000 songs. Moreover, the entire space cost is reduced to around 60% of that of the method based on the plain suffix array.
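The run-length idea behind the compression can be sketched as follows; the fingerprint values are random placeholders, not real audio data. After sorting, neighbouring fingerprints share their high bits, so the high-bit column compresses well.

def run_length_encode(values):
    runs, prev, count = [], None, 0
    for v in values:
        if v == prev:
            count += 1
        else:
            if prev is not None:
                runs.append((prev, count))
            prev, count = v, 1
    if prev is not None:
        runs.append((prev, count))
    return runs

fingerprints = sorted([0x1A2B00, 0x1A2B01, 0x1A2B07, 0x1A2C00])
high_bits = [fp >> 8 for fp in fingerprints]   # shared prefixes after sorting
print(run_length_encode(high_bits))            # [(6699, 3), (6700, 1)], i.e. (0x1A2B, 3), (0x1A2C, 1)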
Comparison Research on Mahout Clustering Algorithms under Hadoop Platform
NIU Yi-han and HAI Mo
Computer Science. 2015, 42 (Z6): 465-469. 
Abstract PDF(1185KB) ( 38 )   
References | RelatedCitation | Metrics
Clustering is an important technique in data mining, used to divide a collection of physical or abstract objects into multiple classes of similar objects. How to apply traditional clustering algorithms to large-scale data is a hot issue in current data mining research. This article analyzes and compares the principles and MapReduce implementations of three clustering algorithms (Canopy, standard K-means and Fuzzy K-means) from the open-source machine learning library Mahout on the cloud computing platform Hadoop, runs the three algorithms on clusters with different numbers of nodes and on datasets of different scales, and then compares them in terms of speedup, scalability and scale-up. The experimental results show that in a parallel environment Canopy runs fastest, K-means second and Fuzzy K-means slowest; all three algorithms achieve good speedup, with Canopy's the best, and the speedup of Fuzzy K-means increases substantially once the amount of data and the number of nodes reach a certain scale; all three algorithms also show good scalability and scale-up, with Canopy's scalability the best and Fuzzy K-means showing the largest growth in scalability and scale-up.
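As a worked example of the speedup metric used in such comparisons: speedup on n nodes is the single-node running time divided by the n-node running time. The timings below are illustrative placeholders, not the paper's measurements.

timings = {1: 1840.0, 2: 980.0, 4: 530.0, 8: 310.0}   # seconds per run
t1 = timings[1]
for nodes, tn in sorted(timings.items()):
    print(f"{nodes} node(s): speedup = {t1 / tn:.2f}")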
Parallel PSO-kmeans Algorithm Implementing Web Log Mining Based on Hadoop
MA Han-da, HAO Xiao-yu and MA Ren-qing
Computer Science. 2015, 42 (Z6): 470-473. 
Abstract PDF(1041KB) ( 40 )   
References | RelatedCitation | Metrics
With the rapid development of Internet technology, Web log mining on a single node has become very difficult. The emergence of the Hadoop cloud platform provides a new solution to this problem. However, k-means, the traditional clustering algorithm for Web log mining, is sensitive to the selection of initial cluster centers, which easily affects clustering accuracy. For this problem, this paper proposed a k-means algorithm based on particle swarm optimization that frees k-means from the influence of the initial cluster centers, and implemented the algorithm on the Hadoop MapReduce programming platform. Experimental results show that, compared with the traditional k-means algorithm, the proposed algorithm achieves higher clustering accuracy, and compared with the stand-alone serial algorithm, its operating efficiency is greatly improved.
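A minimal serial sketch of the idea, assuming standard PSO update rules with inertia w and acceleration coefficients c1, c2 (the MapReduce parallelization is not shown): PSO searches for good initial centers, then ordinary k-means refines them.

import numpy as np
from sklearn.cluster import KMeans

def pso_init_centers(X, k, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    dim = (k, X.shape[1])
    pos = X[rng.integers(0, len(X), size=(n_particles, k))]  # particle = k centers
    vel = np.zeros((n_particles,) + dim)
    def cost(centers):                           # within-cluster sum of squares
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return d.min(axis=1).sum()
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()]
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles) + dim)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()]
    return gbest

X = np.random.rand(500, 4)                       # placeholder log features
km = KMeans(n_clusters=3, init=pso_init_centers(X, 3), n_init=1).fit(X)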
Extracting Sentiment Element from Chinese Micro-blog Based on POS Template Library and Dependency Parsing
ZHANG Ling and FENG Xin
Computer Science. 2015, 42 (Z6): 474-478. 
Abstract PDF(1131KB) ( 40 )   
References | RelatedCitation | Metrics
The paper proposed a method for extracting sentiment elements from Chinese micro-blogs based on a POS template library and dependency parsing. First, candidate sentiment elements are searched using emotional words as the benchmark. Second, templates are trimmed and screened by calculating their prior probabilities to build the POS template library, which is used to fetch the basic sentiment elements. Then, basic sentiment elements are expanded to full sentiment elements according to a set of rules based on dependency trees. Experiments show that the precision and recall are better than the average of the 19 sets of results in the NLP&CC2013 evaluation; the F-measure ranked 9th under the strict indicator and 7th under the lenient indicator, which proves the effectiveness of the method.
Method of Query Expansion Based on LCA Prune Semantic Tree
LI Wei-jiang and WANG Feng
Computer Science. 2015, 42 (Z6): 479-483. 
Abstract PDF(1202KB) ( 37 )   
References | RelatedCitation | Metrics
Searching out useful information from a massive collection in a very short time is becoming a tough task, and users often receive a lot of information that is of no use to them. These problems mainly stem from queries that users state without enough accuracy, from mismatches between queries and the wording of documents, and from query optimization. To deal with these problems, this paper proposed a novel hybrid query expansion method that synthesizes the merits of semantic query expansion and local context analysis (LCA). Firstly, we retrieve documents by the LCA method, then use the retrieved terms to prune the semantic tree, and calculate the weight of each expansion term with the improved algorithm. We compared the effectiveness of these approaches, and the results show that, although local context analysis has some advantages, the LCA-pruned semantic tree yields better performance than simple query expansion techniques.
Survey of Network Data Visualization Technology Based on Association Analysis
SUN Qiu-nian and RAO Yuan
Computer Science. 2015, 42 (Z6): 484-488. 
Abstract PDF(1246KB) ( 56 )   
References | RelatedCitation | Metrics
With the rapid development of the world wide web and social networks, forum administrators and other analysts are confronted with great challenges from massive, high-dimensional forum data. It is difficult for people to manage and analyze network data and the rich information resources behind it. Association rules provide an effective way to find potential relationships and predict trends, and visualization techniques help display data clearly and assist decision-making. Combining association analysis and data visualization, we focused on forums with massive data and complex structure, illustrated the definitions and overall targets of association rules and data visualization, and summarized the related techniques. We proposed an implementation framework for network data visualization based on association analysis, covering association rule mining, topic mining, visualization and other techniques, which can help people quickly understand and analyze massive forum data sets in limited time. Finally, we discussed the main problems and challenges remaining in data visualization.
Construction of One Kind of Full-text Searching & Recommending System Based on Clustering
ZHANG Ke-jun, REN Peng, QIAN Rong, JU Rong-bin, JIANG Chen and ZHANG Guo-liang
Computer Science. 2015, 42 (Z6): 489-490, 512. 
Abstract PDF(757KB) ( 40 )   
References | RelatedCitation | Metrics
In recent years, search engines and their ranking algorithms have been updated rapidly, but recommender systems, which in the past could not provide valuable keywords, have barely evolved. Our project is specialized to solve this problem. The project goes from word to document, from document to cluster, and then from cluster to the keyword to be returned. It implements the K-means clustering algorithm and the TF-IDF weighting algorithm in Java. Users can find not only the web pages containing a specific keyword, but also the most valuable recommended keywords, which help them find related information.
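The TF-IDF weighting the project implements (in Java) follows the classic formula tfidf(w, d) = tf(w, d) * log(N / df(w)); the Python sketch below only illustrates that formula on toy documents, it is not the project's code.

import math
from collections import Counter

docs = [["search", "engine", "rank"],
        ["cluster", "kmeans", "search"],
        ["kmeans", "tfidf", "weight"]]
N = len(docs)
df = Counter(word for doc in docs for word in set(doc))   # document frequency
for i, doc in enumerate(docs):
    tf = Counter(doc)
    weights = {w: tf[w] / len(doc) * math.log(N / df[w]) for w in tf}
    print(i, weights)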
General Overview on Clustering Algorithms
WU Yu-hong
Computer Science. 2015, 42 (Z6): 491-499, 524. 
Abstract PDF(2598KB) ( 169 )   
References | RelatedCitation | Metrics
Data mining techniques can be used to discover potential, useful knowledge in vast amounts of data, and they play a significant new role for the data stored in the information age. With the rapid development of data mining techniques, grid clustering, an important part of data mining, is widely applied in fields such as pattern recognition, data analysis, image processing and market research, and research on grid clustering algorithms has become a highly active topic. This paper presents the theory of data mining and deeply analyzes grid clustering algorithms. Based on an analysis of traditional grid clustering algorithms, we advance some improved grid clustering algorithms that enhance the quality and efficiency of grid clustering compared with the traditional ones. Based on an analysis of traditional multi-density algorithms, we advance a grid-based clustering algorithm for multi-density data (GDD). GDD is a multi-stage clustering method that integrates grid-based clustering, a descending density threshold technique and border point extraction. As shown in the research, the GDD algorithm can not only cluster correctly but also find outliers in the dataset, effectively solving the problem that traditional grid algorithms can only cluster or only find outliers. The precision of GDD is better than that of SNN. GDD works well for even-density datasets and many multi-density datasets, can discover clusters of arbitrary shapes, and is not sensitive to the input order of noise and outlier data, but it remains imperfect on some multi-density datasets.
Topic Related Space-based Microblog User Interests Mining and Visualization Method
ZHAO Hua, JI Xiao-wen, ZENG Qing-tian and HAO Chun-yan
Computer Science. 2015, 42 (Z6): 500-502, 509. 
Abstract PDF(960KB) ( 39 )   
References | RelatedCitation | Metrics
Microblogs are an effective platform for learning user interests. Based on an analysis of the habits and characteristics of microblog publishing, this paper proposed a topic-related-space-based microblog user interest mining method that exploits the location feature. The method first creates the topic related space based on topic detection, then calculates the interest index value of each word by combining TF-IDF with the location feature, and finally visualizes the mining results as 3D tag clouds. The experimental results show that the proposed method is effective.
Research on Web Music Push Model of Mobile Terminal Ubiquitous Context Adaptation
ZHANG Xiu-yu
Computer Science. 2015, 42 (Z6): 503-509. 
Abstract PDF(1830KB) ( 37 )   
References | RelatedCitation | Metrics
Current mainstream information push models are based on user interests or on the terminal situation alone, lacking theoretical support for combining the two elements. To improve the way music is pushed and realize a mobile-terminal-oriented pattern, this paper presented a music push model focused on the terminal situation. Firstly, the paper defines the concept of the terminal situation. Secondly, it puts forward a music push model based on Bayesian networks and collaborative filtering. Next, it defines the model's establishment process. Finally, it validates the effect of the terminal-situation-oriented model through real examples, indicating that the model is more applicable and effective.
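A minimal sketch of the collaborative-filtering half of such a model, using user-based CF with cosine similarity over a placeholder user-by-track rating matrix (the Bayesian network part is not shown):

import numpy as np

ratings = np.array([[5, 4, 0, 1],      # rows: users, cols: music tracks
                    [4, 5, 1, 0],
                    [0, 1, 5, 4]], dtype=float)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

target = 0                              # recommend for user 0
sims = np.array([cosine(ratings[target], r) for r in ratings])
sims[target] = 0                        # ignore self-similarity
pred = sims @ ratings / (sims.sum() + 1e-12)   # similarity-weighted scores
unheard = ratings[target] == 0
print("recommend track:", np.argmax(np.where(unheard, pred, -np.inf)))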
Discussion on Network Clustering Algorithm
WU Yu-hong and WANG Wei-feng
Computer Science. 2015, 42 (Z6): 510-512. 
Abstract PDF(714KB) ( 41 )   
References | RelatedCitation | Metrics
Data mining is a very popular research direction in the IT industry; clustering analysis is a core technology of data mining, and hierarchical clustering is the most widely used clustering algorithm. But the method often encounters the merge or split point selection problem. Once a set of objects is merged or split, the next processing step is performed on the newly created cluster; what has been done cannot be undone, and objects cannot be exchanged between clusters. If a merge or split step is not chosen well, it may lead to low-quality clustering results. The author presents an improved hierarchical clustering algorithm, called the mesh clustering algorithm, and the experimental results show that the algorithm has a higher accuracy rate.
Correctness Test of TI DSP C Compiler
SUN Hai-yan, CHEN Yue-yue, WANG Feng, YANG Can-qun, YANG Liu and WANG Ji
Computer Science. 2015, 42 (Z6): 513-515, 545. 
Abstract PDF(895KB) ( 31 )   
References | RelatedCitation | Metrics
TI DSPs are widely used in industrial control. The reliability of executable code relies not only on user programs but also on the compiler. This paper performed correctness testing on the C compiler of the TI C6701 DSP, which is widely used in many industrial control areas. The test results show that if users use the TI C6701 compiler without any restrictions, they may encounter compiler correctness problems that can affect the correctness of whole applications.
New Algorithm of Simplifying GCC Syntax Tree
TIAN Bing-chuan, SUN Ke and CHAO Han-qing
Computer Science. 2015, 42 (Z6): 516-518, 530. 
Abstract PDF(847KB) ( 37 )   
References | RelatedCitation | Metrics
An abstract syntax tree (AST) is a tree representation of program source code, and it plays a significant role in code analysis and feature extraction. GCC can export the AST of a C source program to a file, but the file contains a large amount of redundant and irrelevant information that hampers these tasks. To address this problem, a simplified GCC AST algorithm was presented that removes the nodes irrelevant to the source program and rebuilds the AST file in linear time, preserving the complete basic structure of the AST.
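The pruning idea can be sketched on a generic tree structure: drop nodes that do not originate from the user's source file and rebuild the tree in one linear pass. The dictionary-based AST below is a simplified stand-in for a real GCC AST dump, not its actual format.

def prune(node, source_file):
    # Visit each node once: keep it if it comes from the user's file
    # or still has relevant descendants after pruning.
    kept_children = [c for child in node["children"]
                     if (c := prune(child, source_file)) is not None]
    if node["file"] != source_file and not kept_children:
        return None
    return {**node, "children": kept_children}

ast = {"kind": "function_decl", "file": "main.c", "children": [
    {"kind": "parm_decl", "file": "main.c", "children": []},
    {"kind": "type_decl", "file": "stdio.h", "children": []},  # header noise
]}
print(prune(ast, "main.c"))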
Structure Verification Method for Software Evolution Process Based on Incidence Matrix
LIU Jin-zhuo, YU Qian, ZHAO Na, XIE Zhong-wen, YU Yong, HANG Fei-Lu and JIN Yun-zhi
Computer Science. 2015, 42 (Z6): 519-524. 
Abstract PDF(1320KB) ( 33 )   
References | RelatedCitation | Metrics
The software evolution process, at the intersection of software process and software evolution, has become a key area in software engineering. In recent years many researchers have paid close attention and devoted great effort to this area and have made great progress. Since the structure of software evolution processes has not been effectively verified, the incidence matrix approach is used here to prove properties such as structural boundedness, repeatability and conservativeness. In this way the structural properties of models that describe software processes with a white-box approach are proved, and consequently the quality of software evolution processes is improved.
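As an illustration of one such check (an assumed formulation, not necessarily the paper's): a model with incidence matrix C is conservative if some strictly positive weight vector y satisfies y^T C = 0, which can be searched for with linear programming (the bound y >= 1 rules out the trivial zero solution).

import numpy as np
from scipy.optimize import linprog

C = np.array([[-1,  1],        # rows: places/activities, cols: transitions
              [ 1, -1]])
res = linprog(c=np.ones(C.shape[0]),            # any feasible y will do
              A_eq=C.T, b_eq=np.zeros(C.shape[1]),
              bounds=[(1, None)] * C.shape[0])
print("conservative" if res.success else "not conservative", res.x)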
Detection of Context-based Inconsistencies Bugs
WANG Wei, LIU Yuan, ZHANG Chun-rui, WEN Ping and XIE Jia-jun
Computer Science. 2015, 42 (Z6): 525-530. 
Abstract PDF(1372KB) ( 39 )   
References | RelatedCitation | Metrics
In order to detect context-based inconsistency bugs introduced by copy-paste during software development, the filter rules for context-based inconsistencies were improved on the basis of a clone code detection model built on a sequential pattern mining algorithm. Both context construct inconsistency bugs and context conditional predicate inconsistency bugs are detected. To recognize semantically equivalent code with different syntactic structure (i.e., different syntax trees), a standardization of expression syntax trees was added. The experimental results on open source code show that the model has a low false-positive rate and a 0% false-negative rate, making it especially suitable for safety-critical software.
Software Reliability Prediction Approach Based on UML Activity Diagram
SU Yue, LI Mi, WANG Wen-xin and ZHANG De-ping
Computer Science. 2015, 42 (Z6): 531-536, 560. 
Abstract PDF(1909KB) ( 36 )   
References | RelatedCitation | Metrics
In the context of component-based software development, this paper proposed an approach that automatically transforms the UML activity diagrams of a software architecture into a Markov chain for quantitative reliability evaluation. Based on the component-based software architecture, it utilizes four types of UML diagrams (use case, sequence, activity and component diagrams), extending and annotating them with reliability-related attributes. The diagrams are then transformed into a Markov-chain-based analysis model by constructing an intermediate model called the Component Transition Graph (CTG). The result of this transformation can be used directly in existing analysis methods to predict software reliability, which facilitates the software designer's analysis task.
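The final reliability computation can be illustrated with Cheung's classic Markov model (assumed here as the analysis step; the paper derives its chain from UML diagrams via the CTG): weight each transition probability by the source component's reliability and read the system reliability off the fundamental matrix. All numbers below are placeholders.

import numpy as np

P = np.array([[0.0, 0.7, 0.3],    # transition probabilities between components
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])   # component 2 is the terminal one
R = np.array([0.99, 0.98, 0.97])  # per-component reliabilities
Q = P * R[:, None]                # leaving component i succeeds with prob R_i
S = np.linalg.inv(np.eye(3) - Q)  # fundamental matrix
system_reliability = S[0, 2] * R[2]
print(round(system_reliability, 4))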
Survey of MapReduce Parallel Programming Model
DU Jiang, ZHANG Zheng, ZHANG Jie-xin and TAI Ming
Computer Science. 2015, 42 (Z6): 537-541, 564. 
Abstract PDF(1572KB) ( 46 )   
References | RelatedCitation | Metrics
The MapReduce parallel programming model simplifies the complexity of parallel programming. Through a convenient interface and runtime support libraries, it lets large-scale parallel computing tasks execute concurrently and automatically without the programmer caring about the underlying implementation details, so it can deliver significant computing power on large clusters of low-performance machines while saving cost. This paper reviewed the research on the MapReduce parallel programming model at home and abroad, analysed the strengths and weaknesses of current research achievements, and looked ahead to future trends for MapReduce.
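The model itself is easy to illustrate with the canonical word-count example: the programmer supplies only a map function and a reduce function, while grouping by key and parallel scheduling are the framework's job. The sequential Python sketch below mimics the three stages (map, shuffle, reduce) on toy input.

from itertools import groupby
from operator import itemgetter

def map_fn(line):
    return [(word, 1) for word in line.split()]

def reduce_fn(word, counts):
    return (word, sum(counts))

lines = ["map reduce map", "reduce map"]
pairs = sorted(kv for line in lines for kv in map_fn(line))   # shuffle stage
result = [reduce_fn(k, [v for _, v in grp])
          for k, grp in groupby(pairs, key=itemgetter(0))]
print(result)    # [('map', 3), ('reduce', 2)]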
Improved Method to Process Inconsistent Answer Set Program
LU Fang-fang and WANG Jie
Computer Science. 2015, 42 (Z6): 542-545. 
Abstract PDF(865KB) ( 34 )   
References | RelatedCitation | Metrics
Answer Set Programming (ASP) is a form of declarative programming and has become an important and active research topic in the field of logic programming. In practical applications, one possible problem in answer set programming is the absence of any solution when the information is inconsistent. To remedy this, Zhu Tao presented a minimal-principle-based method to process inconsistency in ASP. However, that method may delete the information most important to users and cannot find the solution most satisfactory to their own preferences. To solve this problem, we build on weighted logic programs and use a quantitative method that associates a weight with each rule in a program; these weights define the "cost" of deleting a rule, and a solution is preferred if it minimizes the sum of the weights of its deleted rules. Finally, we compared this method with related work.
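The preference can be sketched as a search over deletion sets: among the sets of rules whose removal restores consistency, choose the one with the smallest total weight. In the sketch below is_consistent is a placeholder oracle standing in for a call to an ASP solver on the remaining rules; the rules and weights are made up.

from itertools import combinations

rules = {"r1": 3, "r2": 1, "r3": 2}           # rule name -> deletion cost

def is_consistent(remaining):                 # placeholder oracle
    return "r2" not in remaining              # pretend r2 causes the conflict

best = None
for k in range(len(rules) + 1):
    for deleted in combinations(rules, k):
        if is_consistent(set(rules) - set(deleted)):
            weight = sum(rules[r] for r in deleted)
            if best is None or weight < best[0]:
                best = (weight, deleted)
print(best)    # (1, ('r2',)): deleting the cheapest conflicting rule wins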
Study on Method of Data Warehouse Real-time Data Updating Based on Mechanism of CDC
TAN Guang-wei and WU Tong
Computer Science. 2015, 42 (Z6): 546-548. 
Abstract PDF(832KB) ( 58 )   
References | RelatedCitation | Metrics
This article analysed the real-time decision requirements of the data warehouse of a specific application system and identified the database tables that need to be propagated to the data warehouse in real time. After comparing several overall plans for real-time data loading, we designed the following method: use the CDC mechanism to capture the changed data, and then cyclically load the changed data into the data warehouse via a data loading program. Experimental verification shows that this method meets the requirement of real-time data loading without much influence on database transactions, so it is suitable for the system.
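A minimal sketch of such a loading loop (database, table and column names are hypothetical, with SQLite standing in for the real systems): poll the CDC change table for rows beyond the last synchronized sequence number and apply them to the warehouse table.

import sqlite3

src = sqlite3.connect("oltp.db")        # hypothetical source with CDC table
dwh = sqlite3.connect("warehouse.db")   # hypothetical warehouse

def load_changes(src, dwh, last_seq):
    rows = src.execute(
        "SELECT seq, op, id, amount FROM orders_changes WHERE seq > ? ORDER BY seq",
        (last_seq,)).fetchall()
    for seq, op, id_, amount in rows:
        if op == "D":                   # delete propagated to the warehouse
            dwh.execute("DELETE FROM dw_orders WHERE id = ?", (id_,))
        else:                           # insert or update
            dwh.execute("INSERT OR REPLACE INTO dw_orders VALUES (?, ?)",
                        (id_, amount))
        last_seq = seq
    dwh.commit()
    return last_seq

# Polling loop, e.g. every few seconds:
#   while True:
#       last_seq = load_changes(src, dwh, last_seq)
#       time.sleep(5)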
Parallelized Singular Value Decomposition Method with Collaborative Computing of CPU-GPU
ZHOU Wei, DAI Zong-you, YUAN Guang-ling and CHEN Ping
Computer Science. 2015, 42 (Z6): 549-552. 
Abstract PDF(974KB) ( 40 )   
References | RelatedCitation | Metrics
In tracking applications, singular value decomposition (SVD) is often used as a basic tool for dynamic library construction. However, given the large amount of data arriving per second and high accuracy requirements, the calculation of SVD is too time-consuming to satisfy the applications' real-time constraints. This paper put forward a CPU-GPU collaborative method for parallel singular value decomposition. It exploits the GPU's asynchronous execution and organizes the SVD process as a pipeline, which largely exposes the available parallelism. Experiments show that this method outperforms an ordinary parallelized one-sided Jacobi SVD on the GPU by 20%. Compared with Intel MKL SGESVD on the CPU, our approach achieves a 6.8x performance improvement, which satisfies the requirements of real-time applications.
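For reference, a minimal serial sketch of one-sided Jacobi SVD (the paper's contribution is the CPU-GPU pipelined version, which is not shown): rotate pairs of columns until all pairs are orthogonal, after which the singular values are the column norms.

import numpy as np

def one_sided_jacobi_singular_values(A, sweeps=30, eps=1e-12):
    U = A.astype(float).copy()
    n = U.shape[1]
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                apq = U[:, p] @ U[:, q]
                if abs(apq) < eps:
                    continue            # columns p, q already orthogonal
                app, aqq = U[:, p] @ U[:, p], U[:, q] @ U[:, q]
                theta = 0.5 * np.arctan2(2 * apq, app - aqq)
                c, s = np.cos(theta), np.sin(theta)
                U[:, [p, q]] = U[:, [p, q]] @ np.array([[c, -s], [s, c]])
    return np.linalg.norm(U, axis=0)    # singular values

A = np.random.rand(6, 4)
print(np.sort(one_sided_jacobi_singular_values(A))[::-1])
print(np.linalg.svd(A, compute_uv=False))   # cross-check against LAPACK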
Design and Implementation of Paperless Handbill System
HAO Shi-yuan, DAI Xin-yu, HU Qing-chun and QIAO Yan-hui
Computer Science. 2015, 42 (Z6): 553-556. 
Abstract PDF(938KB) ( 35 )   
References | RelatedCitation | Metrics
A paperless handbill system built in Visual Studio is proposed in this paper, using the C/S and B/S modes together with VB.NET and ASP.NET techniques. The proposed system uses PC clients, the WeChat mobile platform and the Web as information interaction platforms, and provides handbill release, seat selection management, a ticketing business system, and interaction between organizers and participants. To facilitate real-time man-machine interaction and rapid transmission of large underlying documents, VB.NET and SQL Server techniques are adopted to develop an intelligent chat robot and the underlying file transmission module. For the database design, SQL Server is chosen for background database management; the background database is shared by the three information interaction platforms and separated from the user interface. The proposed system is therefore highly portable, scalable and conducive to further development and improvement.
Research of Agile Software Development Process Based on Reusable Techniques
JIA Ning, ZHENG Chun-jun and GAO Zhi-jun
Computer Science. 2015, 42 (Z6): 557-560. 
Abstract PDF(1119KB) ( 42 )   
References | RelatedCitation | Metrics
As software grows in size and complexity, development teams are particularly concerned about the efficiency of software development. As one effective way to improve efficiency, reuse technology is combined with agile development and a cross-platform app development mode, so that many high-quality programs can be developed rapidly. To support such development, unified app standards and regulations need to be worked out. In addition, following the formulated development process, resources may be reused only when the basic conditions and correctness are ensured. With the help of a cross-project development platform, the reliability and efficiency of the development process can be demonstrated through verification.
Survey of Domain Analysis
WU Bao-ming
Computer Science. 2015, 42 (Z6): 561-564. 
Abstract PDF(1071KB) ( 40 )   
References | RelatedCitation | Metrics
Domain analysis is an important activity in modern software analysis, concerned with reusability. Studying the evolution of domain analysis assists the decomposition and anticipation of software analysis and software development patterns, thereby helping to forecast the direction of future software development. This paper retraces the emergence and transition history of domain analysis and tries to find the laws of its evolution. By comparing domain analysis across different periods, we point out the aims and characteristics of domain analysis in each period, and a new conception and perspective of domain analysis are given in the conclusion of this article.