Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
CODEN JKIEBK
Current Issue
Volume 41, Issue 8, 2014
Overview of Physically-based Ocean Waves Simulation
DUAN Xing-feng and REN Hong-xiang
Computer Science. 2014, 41 (8): 1-6.  doi:10.11896/j.issn.1002-137X.2014.08.001
Realistic fluid simulation is one of the most active and challenging topics in computer graphics. Ocean wave simulation is an important part of fluid simulation in computer animation, and realism, real-time performance and interactivity are the keys to simulating ocean wave scenes. This paper surveys the development of physically based ocean wave simulation, introduces in detail such active topics as advection, overturning, breaking waves, splash, foam, fluid-solid interaction and multi-phase fluids, and focuses on acceleration techniques, since system resources often cannot meet the demands of the modeling computation. Future research lies in reasonably optimizing wave modeling with various methods, improving the data structures, and implementing the simulation on the GPU using CUDA, in order to meet the demands of real-time wave simulation.
Time-weighted Uncertain Nearest Neighbor Collaborative Filtering Algorithm
ZHENG Zhi-gao, LIU Jing, WANG Ping and SUN Sheng-li
Computer Science. 2014, 41 (8): 7-12.  doi:10.11896/j.issn.1002-137X.2014.08.002
To overcome the limitations of traditional collaborative filtering recommendation algorithms, this paper proposes a Time-Weighted Uncertain Nearest Neighbor Collaborative Filtering algorithm (TWUNCF). According to the practical requirements of recommender systems, the algorithm first weights product similarity and user similarity over time to ensure data validity and improves the similarity calculation; it then introduces a neighbor factor to adaptively select trusted neighbors of the recommendation target, and on this basis balances the prediction result with a dynamic measure over the uncertain nearest neighbors. Experimental results show that the algorithm improves data validity by exploiting the time attribute, balances the impact of different groups on the recommendation result, and avoids the problems caused by data sparseness. Theoretical analysis and simulation experiments show that the proposed algorithm outperforms existing algorithms in recommendation quality and improves the system's accuracy and recommendation efficiency.
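As a rough illustration of the time-weighting idea (a sketch, not the authors' exact TWUNCF formulation; the exponential decay, the constant `lam` and the data layout are assumptions):

```python
import math

def time_weight(t_rating, t_now, lam=0.005):
    """Exponential decay: older ratings contribute less."""
    return math.exp(-lam * (t_now - t_rating))

def weighted_user_similarity(ratings_u, ratings_v, t_now):
    """ratings_*: dict item -> (score, timestamp). Cosine over co-rated items,
    with each numerator term scaled by the product of the two time weights."""
    common = ratings_u.keys() & ratings_v.keys()
    if not common:
        return 0.0
    num = sum(time_weight(ratings_u[i][1], t_now) * time_weight(ratings_v[i][1], t_now)
              * ratings_u[i][0] * ratings_v[i][0] for i in common)
    den_u = math.sqrt(sum(ratings_u[i][0] ** 2 for i in common))
    den_v = math.sqrt(sum(ratings_v[i][0] ** 2 for i in common))
    return num / (den_u * den_v) if den_u and den_v else 0.0
```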
Non-linear Sparse Representation Theory and its Applications
LUAN Xi-dao, WANG Wei-wei, XIE Yu-xiang, ZHANG Xin and LI Chen
Computer Science. 2014, 41 (8): 13-18.  doi:10.11896/j.issn.1002-137X.2014.08.003
A theoretical framework for non-linear sparse representation is proposed, based on an analysis of the limitations and open problems of existing sparse representation methods. The main contents include: constructing a single-objective non-linear sparse representation model, analyzing the characteristics of its solutions and designing solving algorithms, in order to break through the linearity limitation of existing sparse representation theory and extend its application domains; constructing a multi-objective non-linear sparse representation model and designing solving algorithms, to make up for the shortage of current sparse representation theory, which can handle only a single objective, and to resolve the uncertainty of processing performance in current models; and combining the objective and the solving procedure of the non-linear sparse representation model to study self-adaptive solving methods for models with unknown hyper-parameters and to design a solving procedure that achieves optimal model performance. The application of the method is discussed by taking the optimal configuration problem of TT&C resources as an example.
Research on Constraint Conflicts and Analysis of Shared Resources in CMP Based on WCET
GAN Zhi-hua, GU Zhi-min, AN Li-kui and ZHAO Xin
Computer Science. 2014, 41 (8): 19-24.  doi:10.11896/j.issn.1002-137X.2014.08.004
Chip multi-processors (CMPs) are well suited to fulfilling the increasing performance requirements of real-time embedded systems. However, the use of CMPs introduces new challenges for WCET analysis due to conflicts over shared resources. In this paper, the newest research achievements on shared-resource conflicts are introduced and divided into two main categories, conflict analysis and constraint conflict, according to their research objectives. For the conflict analysis of shared resources, we discuss the different causes of shared-resource conflicts and compare and survey the limitations and strengths of the different types of conflict analysis approaches. For constraint conflict, the major research directions are presented, and several mainstream constraint methods are investigated and compared. Finally, several major open issues and research directions concerning shared-resource conflicts are pointed out.
Research on Low-latency and Low-power CORDIC Algorithm and Architecture
REN Xiao-xi and SHEN Jian-long
Computer Science. 2014, 41 (8): 25-29.  doi:10.11896/j.issn.1002-137X.2014.08.005
The CORDIC algorithm is widely used because it is easy to implement in hardware for computing a variety of transcendental functions. The difficulties of the algorithm are how to reduce the number of iterations while managing the computation and the compensation of the scale factor, and how to extend the range of rotation angles. In this paper, the conventional CORDIC algorithm is divided into two steps to minimize the number of iterations. At the same time, the modified algorithm uses shift operations instead of a lookup table, which reduces the time spent on table lookups and also helps to reduce power consumption. Finally, the two algorithms were implemented on an Altera Cyclone-series chip EP4CGX22CF19C6. The experimental results show that, compared with the conventional algorithm, the proposed algorithm saves 34.84% of resources, its delay is about six clock cycles less across frequencies, and its power consumption declines by at least 5.54%, with the saving growing as the frequency rises.
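For reference, a minimal sketch of conventional rotation-mode CORDIC (the baseline the paper improves on; the iteration count `n` and floating-point arithmetic are illustrative assumptions, as a hardware version would use fixed-point shift-adds):

```python
import math

def cordic_sin_cos(theta, n=32):
    """Conventional rotation-mode CORDIC: rotate (1, 0) toward angle theta
    using n shift-add micro-rotations, then correct by the scale factor K.
    Converges for |theta| <= sum(atan(2^-i)) ~ 1.74 rad."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]   # elementary angles
    k = 1.0
    for i in range(n):                                  # K = prod cos(atan(2^-i))
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0                     # rotation direction
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y * k, x * k                                 # (sin, cos)
```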
Rapid Design and Evaluation for Pervasive Application
TANG Lei, ZHOU Xing-she and YU Zhi-wen
Computer Science. 2014, 41 (8): 30-37.  doi:10.11896/j.issn.1002-137X.2014.08.006
Pervasive computing makes it possible to create human-centered environments. However, developers still spend much time and energy building applications that are both feasible and satisfying. In this paper, the new challenges in the design and evaluation of pervasive applications are analyzed first, and then two active technologies, development methodology and simulation testing, are introduced and compared. Prospects for future development and suggestions for possible extensions are also discussed.
Power-efficient Formal Scheduling Policy of VMs in Virtualized Clusters
ZHANG Lu-fei and CHEN Zuo-ning
Computer Science. 2014, 41 (8): 38-41.  doi:10.11896/j.issn.1002-137X.2014.08.007
Aiming at the power problem of virtualized clusters, and based on formal definitions of cluster, physical machine, operating point, energy, job and virtual machine, a formal scheduling policy for virtual machines (VMs) is proposed to study the deployment scheme, and a common algorithm is extended with dynamic voltage and frequency scaling (DVFS) for energy awareness. Using the First-Fit Decreasing (FFD) algorithm, the virtual machines are ordered by frequency and deployed on the physical machine with the lowest voltage. Using a performance-aware policy, the voltage of each physical machine is set just high enough for the total requirement of its virtual machines, to cut "luxury" energy. A voltage update rule, applied once no further virtual machines can be deployed, avoids being misled by inaccurate heuristic information. The modified algorithm is studied with experiments in an idealized model, and its performance and energy are compared with the original algorithm. The experimental results show that the modified algorithm meets the needs of the jobs and saves a great deal of energy.
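A minimal sketch of the FFD placement step under assumed data structures (one CPU demand per VM, identical host capacity); the DVFS voltage setting and the frequency-ordering details from the paper are omitted:

```python
def ffd_place(vm_demands, host_capacity):
    """First-Fit Decreasing: sort VM demands descending, place each VM on the
    first host with enough residual capacity, opening a new host if needed.
    Assumes every demand fits within host_capacity."""
    hosts = []       # residual capacity per open host
    placement = {}   # vm index -> host index
    for vm, demand in sorted(enumerate(vm_demands), key=lambda p: -p[1]):
        for h, free in enumerate(hosts):
            if free >= demand:
                hosts[h] -= demand
                placement[vm] = h
                break
        else:
            hosts.append(host_capacity - demand)   # open a new physical machine
            placement[vm] = len(hosts) - 1
    return placement, hosts
```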
Modeling and Formal Verification for Software Designs Based on Hierarchical Timed State Transition Matrix
ZHOU Kuan-jiu, REN Long-tao, WANG Xiao-long, YONG Jia-wei and HOU Gang
Computer Science. 2014, 41 (8): 42-46.  doi:10.11896/j.issn.1002-137X.2014.08.008
State Transition Matrix (STM) is a table-based modeling language. The popularization and application of this modeling method are greatly limited by the singleness of its event variable types, the state-space explosion caused by growing numbers of events and states, and its weak expressive power for the time semantics of software systems. We first present the concept of the Hierarchical Timed State Transition Matrix (HTSTM), which is mainly used for the design, modeling and verification of software systems with time constraints. A formalization of HTSTM designs as a state transition system is then provided. Based on the formalization, we propose a symbolic encoding approach that adopts the idea of Bounded Model Checking (BMC). Finally, the LTL properties to be verified are decided by Satisfiability Modulo Theories (SMT) solving, so that the correctness of the software design is proved to a certain extent.
Super-resolution Image Reconstruction Based on Non-local POCS
LUO Guo-zhong, YIN Jian-ping and ZHU En
Computer Science. 2014, 41 (8): 47-49.  doi:10.11896/j.issn.1002-137X.2014.08.009
The image acquisition process, constrained by the imaging system, cannot capture all the information of the original scene. Super-resolution image reconstruction is a technique for boosting image quality without changing the imaging system. POCS (projection onto convex sets) can take advantage of multiple low-resolution frames to reconstruct a high-resolution image. However, the traditional POCS algorithm usually produces "jaggy" edges. Natural images contain many similar edge structures, and exploiting this local self-similarity can effectively eliminate the jaggy edges. Therefore, a non-local POCS-based super-resolution image reconstruction algorithm is proposed. It sharpens image edges effectively and improves the visual quality of the image.
Web Service Concurrent Interaction Mechanism Based on BPMN
JIANG Dong-ming and XUE Jin-yun
Computer Science. 2014, 41 (8): 50-54.  doi:10.11896/j.issn.1002-137X.2014.08.010
Business Process Modeling (BPM), which aims to describe and abstract the interactions of complex associated systems, is an important part of Service-Oriented Computing (SOC). While many studies focus on BPM, they ignore the crucial problem of how to describe the concurrent interactions among the constituent components. Therefore, this paper presents a formal model of the concurrent interaction of Web services based on BPMN. First, BPMN is used to describe the concurrent interactions among Web components. Second, the BPMN concurrent interaction model is mapped to Orc, a functional language for describing the concurrent orchestration of Web services. Finally, the approach is illustrated by means of an example.
Complete Algorithm for 2D Rectangular Packing Problem
HE Kun, YAO Peng-cheng and LI Li-wen
Computer Science. 2014, 41 (8): 55-59.  doi:10.11896/j.issn.1002-137X.2014.08.011
The complexity of the classical complete algorithm for the NP-hard 2D rectangular packing problem depends not only on the number of items but also on the width and height of the rectangular container. By observing the characteristics of feasible layouts for this packing problem, we put each feasible layout in correspondence with a pair of acyclic directed graphs. Then, based on the Prüfer code, we propose a new complete algorithm whose computational complexity depends only on the number of items.
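For background, the Prüfer code the algorithm builds on is the standard bijection between labeled trees on n nodes and sequences of length n-2; a minimal encoder (the generic construction, not the paper's packing-specific encoding):

```python
def prufer_encode(adj):
    """adj: dict node -> set of neighbors for a labeled tree on nodes 0..n-1.
    Repeatedly remove the smallest leaf and record its neighbor."""
    adj = {v: set(ns) for v, ns in adj.items()}
    seq = []
    for _ in range(len(adj) - 2):
        leaf = min(v for v, ns in adj.items() if len(ns) == 1)
        parent = next(iter(adj[leaf]))
        seq.append(parent)
        adj[parent].discard(leaf)
        del adj[leaf]
    return seq

# Example: the path 0-1-2-3 encodes to [1, 2]
print(prufer_encode({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}))
```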
Research on Classification of Precomplete Classes in Partial Multiple-valued Logic Function Sets
WANG Ting, LIU Ren-ren and MA Ke
Computer Science. 2014, 41 (8): 60-62.  doi:10.11896/j.issn.1002-137X.2014.08.012
The simplest decision on Sheffer functions is an important theoretical and practical problem in the structure theory of multiple-valued logic functions. According to the completeness theory of partial multiple-valued logic functions, this paper studies the classification of precomplete classes in partial multiple-valued logic function sets. The obtained results provide a basis for deciding Sheffer functions in partial multiple-valued logic.
Multi_reconfigurable Task Partitioning Method Based on Architecture Matching
HAO Shui-xia and ZENG Guo-sun
Computer Science. 2014, 41 (8): 63-66.  doi:10.11896/j.issn.1002-137X.2014.08.013
Heterogeneous systems, such as cloud platforms, have become the main model of high-performance computing. Their advantage is feasibility, yet their performance cannot be fully exploited when the executed tasks do not match the architecture. Using the idea of reconfiguration, this paper provides a multi-level reconfigurable task partitioning method that takes the architecture into account. First, the concept of multi-level reconfiguration is defined. Second, heterogeneous matching theory and the process of extracting heterogeneous features are analyzed, and a heterogeneous multi-level task partitioning method is proposed. Finally, the effectiveness of the method is verified.
Construction Method of Sentiment Lexicon for News Reviews
ZHOU Yong-mei, YANG Ai-min and YANG Jia-neng
Computer Science. 2014, 41 (8): 67-69.  doi:10.11896/j.issn.1002-137X.2014.08.014
Research on sentiment lexicons is an important part of the text sentiment analysis field, and sentiment analysis based on a sentiment lexicon is very effective, which makes it meaningful for the automatic sentiment analysis of news reviews on the Internet. In this paper, a construction method for a sentiment lexicon for news reviews is proposed that draws on the Graph-Ranking model. The basic idea is as follows: first, obtain the word set of the news reviews and the seed words from the corpus and a basic sentiment lexicon; then calculate the strength and polarity of the words in the set with the proposed PageRank-based algorithm; finally, construct the sentiment lexicon for news reviews. The experiments show the method is effective in two respects: the accuracy of deciding the polarity of sentiment words, and the classification performance achieved using the constructed lexicon.
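As a hedged sketch of the PageRank-style propagation described above (the graph construction, damping factor `d` and seed scoring are assumptions, not the paper's exact algorithm):

```python
def propagate_polarity(graph, seeds, d=0.85, iters=50):
    """graph: dict word -> list of neighboring words (e.g., co-occurrence links).
    seeds: dict word -> +1.0 / -1.0 for known positive/negative seed words.
    PageRank-style update: each word mixes its seed prior with its neighbors'
    scores, damped by d."""
    score = {w: seeds.get(w, 0.0) for w in graph}
    for _ in range(iters):
        new = {}
        for w in graph:
            nbr = sum(score[v] / max(len(graph[v]), 1) for v in graph[w])
            new[w] = (1 - d) * seeds.get(w, 0.0) + d * nbr
        score = new
    return score   # sign gives polarity, magnitude gives strength
```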
Quasi-human Algorithm for Finding Non-interfering Disjoint Paths in Wireless Networks
DONG Gao-xiu, LING Shan and CHEN Wei-dong
Computer Science. 2014, 41 (8): 70-74.  doi:10.11896/j.issn.1002-137X.2014.08.015
For the NP-hard problem of finding two non-interfering disjoint paths between two nodes s and t in a wireless network, a quasi-human algorithm is proposed. Starting with two node-disjoint paths from s to t obtained with flow techniques, the algorithm tries to turn them into two non-interfering disjoint paths by adjusting them gradually through a quasi-human strategy. Experimental results show that, compared with existing algorithms, the quasi-human algorithm can quickly find two shorter non-interfering disjoint paths from s to t with a higher probability.
Cloud Parallel Task Scheduling Algorithm Based on Fuzzy Clustering
ZHANG Qian, LIANG Hong and XING Yong-shan
Computer Science. 2014, 41 (8): 75-80.  doi:10.11896/j.issn.1002-137X.2014.08.016
Parallel task scheduling is one of the key problems in cloud computing research. This paper studies parallel scheduling in a cloud environment with reference to the high-performance computing required for processing massive oil seismic exploration data. Because seismic data are naturally separable, computing resources can be fully used by assigning job files to exactly those resource nodes that meet the tasks' computing requirements. This paper proposes a scheduling optimization strategy based on joint fuzzy clustering of tasks and resources. Taking the matching degree between tasks and resource nodes as a reference, the strategy narrows the task scheduling scale through a clustering partition of concurrent jobs and, at the same time, lays a foundation for dynamic task scheduling. After the partition is completed, an improved Bayes classification algorithm is introduced to quickly match tasks to machines according to real-time load and queue status. Finally, experiments verify that the scheme has higher efficiency.
Priority Edge Ordering Strategy and Performance Analysis
PAN Zhu-sheng, MO Yu-chang and ZHAO Jian-min
Computer Science. 2014, 41 (8): 81-84.  doi:10.11896/j.issn.1002-137X.2014.08.017
The computational complexity of BDD-based network reliability analysis depends linearly on the size of the BDD, which in turn depends largely on the edge ordering strategy; the edge ordering issue is therefore very important to BDD-based network reliability analysis. We first design a Priority Edge Ordering Strategy (PEOS) based on the characteristics of the network structure, and then study the relationship between BDD size and different ordering starting points under PEOS. The experimental results show that high-performance starting points are neither the source nor the center but are always located on the boundary of a given network; on the contrary, the center is the worst starting point. These conclusions provide an important reference for understanding how PEOS affects BDD size and for designing high-performance edge ordering strategies.
Study on Schema Matching with Uncertain Column Names and Data Values
HUANG Dong-mei, FENG Kai, ZHAO Dan-feng and GUO Ying-xin
Computer Science. 2014, 41 (8): 85-89.  doi:10.11896/j.issn.1002-137X.2014.08.018
Schema matching is an important research topic in the field of data integration, and uncertainty in column names and data values is a common situation. The usual approach to schema matching is based on mutual information and Euclidean distance, but it does not solve the mismatching caused by identical or highly similar attributes. To solve this problem, this paper proposes a multiple iterative screening method which, given two relational schemas, first fixes some of the correct attribute pairs in each round and selects the best optimized attribute pair. It then presents a method based on conditional mutual information, which uses the best optimized attribute pair to compute the conditional mutual information of the unmatched attributes and further computes the Euclidean distance between attributes. Finally, the matching result is obtained and the mismatching problem is resolved. The experimental results indicate that the proposed algorithm is correct and effective.
Trust Propagation Based on Probability
ZHANG Shao-wu, LIN Hong-fei, LIU Xiao-xia and DOU Yan-zhao
Computer Science. 2014, 41 (8): 90-93.  doi:10.11896/j.issn.1002-137X.2014.08.019
Trust relationships between users in a social network give a user grounds to judge whether information is trustworthy. Existing trust computation methods usually compute trust by searching the paths from a source node to a target node, possibly with external restrictions such as path length or minimum trust values; few of them consider the similarity between nodes. This paper combines a trust propagation model with node similarity and computes the distribution of similarities between nodes with the Bayesian probability formula. We also analyze the influence of the decay coefficient on the result and find that the similarity between trusting users is much higher than between non-trusting users, which demonstrates through statistical analysis that the Bayesian formula can improve the method. Finally, a probability-based trust propagation model is presented. Experiments on the Epinions dataset prove the validity of the method.
Image Encryption Scheme for Android Mobile Platform
WANG Wei and JIN Cong
Computer Science. 2014, 41 (8): 94-96.  doi:10.11896/j.issn.1002-137X.2014.08.020
Mobile platforms such as smartphones are now widely used in daily life, which makes the image security problem increasingly prominent; protecting the image information on smartphones is extremely urgent. Image encryption on traditional computer platforms has been widely researched and applied, but mobile platforms are limited by their current hardware architecture and cannot directly inherit the security technology of traditional platforms. Aiming at the image security problems of smartphones and other mobile devices, this paper puts forward an image encryption scheme for the Android platform that innovatively combines gray-level transformation with image-based methods. Experiments show that the proposed scheme encrypts images efficiently, can effectively protect the security of image information on mobile platforms, and has extensive application value.
Dependency Parsing for Traditional Mongolian
SU Xiang-dong, GAO Guang-lai and YAN Xue-liang
Computer Science. 2014, 41 (8): 97-100.  doi:10.11896/j.issn.1002-137X.2014.08.021
Dependency parsing has become increasingly popular in natural language processing in recent years; nevertheless, dependency parsing of traditional Mongolian has not attracted much attention. We investigate it with a Maximum Spanning Tree (MST) based model on the Traditional Mongolian Dependency Treebank (TMDT). This paper briefly introduces traditional Mongolian along with TMDT and discusses the details of the MST model. Much emphasis is placed on performance comparisons among eight kinds of features and their combinations, in order to find a suitable feature representation. Evaluation results show that the combination of basic unigram features, basic bigram features and C-C sibling features obtains the best performance. Our work establishes a baseline for dependency parsing of traditional Mongolian.
Attribute Granular Reasoning Based on Attribute Petri Net
ZHOU Ru-qi and FENG Jia-li
Computer Science. 2014, 41 (8): 101-105.  doi:10.11896/j.issn.1002-137X.2014.08.022
Dealing with uncertain knowledge is a frequent concern in artificial intelligence research. The attribute Petri net model based on qualitative mapping has the advantage of dynamically representing uncertain knowledge and the logical reasoning of cognitive thinking. This paper gives the basic definitions and basic reasoning of attribute granules in the attribute topological space, so that uncertain knowledge can be expressed with attribute granules in an attribute Petri net. Finally, the basic form and a basic algorithm of resolution reasoning in attribute Petri nets are given. The results show that this method lets qualitative mapping and Petri nets express the uncertain knowledge of cognition more dynamically and explicitly, and it provides a reference for further study of Petri nets in cognitive models.
Research and Design of Java Exception Information Analysis Plugin
SONG Dao-yuan and BEN Ke-rong
Computer Science. 2014, 41 (8): 106-108.  doi:10.11896/j.issn.1002-137X.2014.08.023
Exception handling is an efficient way to improve software robustness, while inappropriate handling can cause serious software failures. This paper proposes a method for analyzing Java exception information and giving code suggestions to improve development efficiency. We propose a method for constructing the exception control flow graph of Java code with exception constructs, which can be used for code analysis and optimization. Based on Eclipse, we design a Java exception information analysis plug-in that analyzes the exception information, gives code suggestions and produces the exception control flow graph, helping developers write exception-handling code faster and better.
Software Dynamic Execution Network Modeling and Cascading Failure Analysis
WANG Xiao-long, HOU Gang, REN Long-tao, ZHOU Kuan-jiu, CHANG Jun-wang and WANG Zhu
Computer Science. 2014, 41 (8): 109-114.  doi:10.11896/j.issn.1002-137X.2014.08.024
As the functional requirements of software keep growing, the structure and scale of software systems become more and more complicated. To analyze the topology and quality of complex software systems, complex network theory has been introduced to model and solve software engineering problems. This paper treats the functions in the source code as nodes, function calls as directed edges and the number of calls as edge weights, and presents a method for constructing a weighted topological network of the software's dynamic execution routes. Statistical analysis of the networks obtained from three programs, TAR, GEDIT and EMACS, shows that the weighted network of the software execution process exhibits the small-world effect and the scale-free property of complex networks. On this basis, we use the Coupled Map Lattice (CML) model from complex network research to simulate and analyze cascading effects in software systems and identify the main factors that influence cascading failures, which supports research on software quality assurance.
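A small sketch of a CML cascading-failure simulation of the kind described (the logistic map, coupling strength `eps` and failure threshold 1.0 are standard textbook choices, assumed here rather than taken from the paper):

```python
def cml_step(state, adj, failed, eps=0.2, a=3.9):
    """One step of a coupled map lattice on a call graph: each live node mixes
    its own logistic-map dynamics with its live neighbors' states; a node fails
    (cascades) once its state magnitude reaches 1."""
    f = lambda x: a * x * (1 - x)
    new = {}
    for v in state:
        if v in failed:
            new[v] = 0.0
            continue
        nbrs = [u for u in adj.get(v, []) if u not in failed]
        coupling = sum(abs(f(state[u])) for u in nbrs) / len(nbrs) if nbrs else 0.0
        new[v] = abs((1 - eps) * f(state[v]) + eps * coupling)
    failed.update(v for v in new if v not in failed and new[v] >= 1.0)
    return new

# Shock one node and watch how far the failure cascades
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
state = {v: 0.4 for v in adj}
state[0] = 3.0          # external perturbation on node 0
failed = set()
for _ in range(10):
    state = cml_step(state, adj, failed)
print(failed)           # nodes reached by the cascade
```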
Realization of Toffoli Gate Only Using CNOT Gate in Hybrid Multi-value Reversible Logic
FAN Fu-you, YANG Guo-wu, LI Xiao-yu and LUO Qing-bin
Computer Science. 2014, 41 (8): 115-117.  doi:10.11896/j.issn.1002-137X.2014.08.025
Synthesizing the Toffoli gate is a key step in synthesizing hybrid multi-valued quantum reversible logic circuits. To solve the synthesis problem for hybrid multi-valued 5-qubit quantum reversible logic circuits, we construct a special quantum gate PMX and verify the synthesis ability of the CNOT gate, then achieve the synthesis of the Toffoli gate and, using a bidirectional search algorithm, optimize the synthesized quantum circuits.
Research on Management Decision of Project Based on Possibility Measure
LI Zhao-ni, MA Zhan-you and LI Yong-ming
Computer Science. 2014, 41 (8): 118-121.  doi:10.11896/j.issn.1002-137X.2014.08.026
This paper considers an extension of possibilistic Kripke structures called possibilistic Kripke cost structures (PoKCS), focusing on expected measures and multi-attribute decision-making problems on PoKCS. A possibilistic Kripke cost structure is a possibilistic Kripke structure in which transitions (or states) are augmented with costs, natural numbers that can be interpreted as costs or, dually, as bonuses; we consider only costs attached to transitions. Finally, a case study on project management decisions illustrates the application of the methods discussed in the paper.
Improved Algorithm for Finding Weight-constrained Maximum-density Path
LIU Kun-liang, ZHANG Da-kun and WU Ji-gang
Computer Science. 2014, 41 (8): 122-124.  doi:10.11896/j.issn.1002-137X.2014.08.027
Given a tree in which each node is associated with a pair of numbers representing the node's value and weight, the weight-constrained maximum-density path problem is to find the path in the tree that maximizes the ratio of the sum of the nodes' values to the sum of the nodes' weights, subject to the weight constraint. After studying the existing algorithm, we found that it has limitations, and we give a way of bypassing them. Finally, we design an improved algorithm.
Statistic Reduction for Uncertain Time Series
XIAO Rui, LIU Guo-hua, CHEN Ai-dong and SONG Zhuan
Computer Science. 2014, 41 (8): 125-129.  doi:10.11896/j.issn.1002-137X.2014.08.028
Because of the length of uncertain time series and the uncertainty of the value at each sample point, the time complexity of matching two uncertain time series is very high, so dimension reduction is the primary task for fast matching of uncertain series. Currently, the wavelet transform is usually used to reduce the dimension of uncertain time series, but that method does not consider the correlation between sample points. We put forward a new method based on statistics and data correlation. It divides an uncertain time series into a probability dimension and a time dimension and reduces each dimension separately: in the time dimension, one sample point represents the subsequent, highly correlated sample points; in the probability dimension, a large-probability point represents the adjacent small-probability points. Experimental results show that the method achieves a remarkable compression ratio when reducing uncertain time series, and the uncertain time series can be approximately recovered from the reduced result.
Research on Data Association Location Method for Web Services Composition
MA Bing-xian, CUI Ji-peng and ZHANG Zheng-ming
Computer Science. 2014, 41 (8): 130-134.  doi:10.11896/j.issn.1002-137X.2014.08.029
Petri nets play an increasingly important role in Web service composition due to their advantages in describing concurrent and distributed systems. When studying Web service composition with Petri nets, it is crucial to establish the associations between the parameters of individual Web services. This paper first introduces the Petri net description of Web services and a Petri-net-based registration system. Furthermore, the concept of data association is defined to determine the association between Web service arguments. Using data association, fusible places can be found, and a Petri net for the Web service composition can be established by the automatic shared synthesis of Petri nets. Finally, an application example of the use of data association is given.
Research on Wireless Mesh Network QoS Based on M/M/n/m Model under Non-preemptive Limited-priority
ZHANG Ting, LI Tao-shen and GE Zhi-hui
Computer Science. 2014, 41 (8): 135-138.  doi:10.11896/j.issn.1002-137X.2014.08.030
Given the multi-hop characteristics of wireless mesh networks, a simple M/M/1 queueing model cannot describe the performance of a mesh network well. Applying M/M/n/m queueing theory to the traffic modeling of wireless mesh networks, this paper presents a finite, non-preemptive priority M/M/n/m queueing model. By distinguishing different traffic classes and considering fairness across priorities, it addresses the problem that high-priority traffic occupies network resources for a long time while low-priority traffic is starved of service. Simulation experiments show that when network traffic is heavy, the average waiting time of the highest-priority customers changes little while the average waiting time of the second-priority customers is significantly reduced, thus ensuring an equitable distribution of network services.
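For reference, the steady-state distribution of a plain M/M/n/m queue (n servers, at most m customers in the system, no priorities — the paper's priority extension is more involved) follows the standard birth-death solution; a small sketch under those textbook assumptions:

```python
from math import factorial

def mmnm_probs(lam, mu, n, m):
    """Steady-state distribution of an M/M/n/m queue:
    n servers, at most m customers in the system (in service + waiting).
    p_k is proportional to (lam/mu)^k / k!              for k <= n,
    and to           (lam/mu)^k / (n! * n^(k-n))        for n < k <= m."""
    a = lam / mu
    unnorm = [a ** k / factorial(k) if k <= n
              else a ** k / (factorial(n) * n ** (k - n))
              for k in range(m + 1)]
    total = sum(unnorm)
    return [p / total for p in unnorm]

# Blocking probability = p_m; mean number in system = sum(k * p_k)
probs = mmnm_probs(lam=4.0, mu=1.5, n=3, m=10)
print(probs[-1], sum(k * p for k, p in enumerate(probs)))
```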
SSI: A Same Source Identification Model for Multiple IPv6/IPv4 Addresses
WANG Xuan, WANG Zhen-xing, WANG Yu and ZHANG Lian-cheng
Computer Science. 2014, 41 (8): 139-143.  doi:10.11896/j.issn.1002-137X.2014.08.031
Identifying multiple addresses with the same source is a key problem for network management and topology discovery in IPv6/IPv4 coexistence environments. Existing research has focused on discovering dual stacks within a subnet and on alias resolution for a single IP protocol stack, so it is difficult to identify same-source addresses in remote networks where IPv6 and IPv4 coexist. This paper first analyzes the essential connections between same-source addresses and then proposes a same source identification (SSI) model for IPv6/IPv4 addresses, which improves identification ability by combining special-address pattern matching, TCP clock fingerprint matching and short-time paralysis of upper-layer protocols. The experimental results indicate that these methods can effectively identify multiple IPv6/IPv4 addresses with the same source, and that the SSI model achieves a good recognition rate and accuracy.
Energy-balanced Routing Algorithm Based on Cross-layer Design for Inter-PAN Communications in ZigBee Networks
CAO Jian-ling, LIU Wen-peng, REN Zhi and FAN Hai-bin
Computer Science. 2014, 41 (8): 144-147.  doi:10.11896/j.issn.1002-137X.2014.08.032
To reduce redundant overhead and conserve node energy in energy-constrained, multi-PAN ZigBee networks, this paper proposes an Energy-balanced Routing algorithm Based on Cross-layer Design (ERBCD). The algorithm establishes multi-path downstream routes through a feedback mechanism based on gradient detection, disseminates information about nodes' residual energy at little cost by introducing a cross-layer design, and uses a composite routing criterion combining gradient and residual energy to decrease node overhead and balance energy consumption. Theoretical analysis verifies the effectiveness of ERBCD. Simulation results show that, compared with the existing IP-AODV routing algorithm, ERBCD significantly reduces communication overhead and the average energy consumed per data packet while dramatically prolonging the network lifetime.
Congestion Strategy Based on Traffic Trend Forecasting for Ad-hoc Networks
WANG Wen-tao, WANG Hao, ZHU Rong-bo, GUO Feng and ZHENG Fang
Computer Science. 2014, 41 (8): 148-153.  doi:10.11896/j.issn.1002-137X.2014.08.033
For the RREQ message congestion problem of reactive MANET routing protocols under high traffic, this paper proposes a new mechanism. The mechanism sets different thresholds for responding to route requests according to a fitted curve of packet delivery ratio versus packet sending rate, and when the average queue length exceeds the maximum threshold, RREQ packets are dropped randomly with a certain probability. The Hello packet interval is also determined by the fitted function. Simulation results indicate that, without an obvious increase in route discovery frequency, the new mechanism effectively reduces the average end-to-end delay and the Hello packet overhead, thereby improving the packet delivery ratio.
Decision Threshold-based Q Algorithm for RFID Anti-collision
REN Shou-gang, YANG Fan, WANG Hao-yun, XIONG Ying-jun and XU Huan-liang
Computer Science. 2014, 41 (8): 154-157.  doi:10.11896/j.issn.1002-137X.2014.08.034
The speed of tag identification is one of the hottest research topics in RFID technology and is the key to wide, large-scale application, since tag collisions increase the time overhead of system identification. To increase the speed of tag identification, this article proposes a decision-threshold-based anti-collision algorithm, QA-DTCI, and elaborates its concept, its operation and how its threshold parameters are determined. In this algorithm, every reader is augmented with two counters that count collision slots and idle slots respectively: the collision counter increases when a collision slot is detected, and the idle counter increases when an idle slot is detected. By comparing the difference of the two counters against the set threshold, the algorithm adjusts Q dynamically. Simulation results show that, without affecting system throughput, the proposed algorithm shortens the identification delay by up to 4% and increases the identification speed by 10%.
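A minimal sketch of the threshold-style Q adjustment described above (the step size, threshold value and slot encoding are illustrative assumptions, not the exact QA-DTCI parameters):

```python
def run_inventory(slots, q0=4, step=1, threshold=3):
    """Threshold-style Q adjustment: count collision and idle slots and change Q
    only when their difference exceeds the threshold, instead of reacting to
    every slot as the standard EPC Gen2 Q algorithm does.
    (In Gen2, the frame size is 2^Q.)"""
    q = q0
    collisions = idles = 0
    for slot in slots:                 # slot is 'collision', 'idle' or 'single'
        if slot == 'collision':
            collisions += 1
        elif slot == 'idle':
            idles += 1
        if collisions - idles > threshold:     # too many collisions: grow frame
            q = min(15, q + step)
            collisions = idles = 0
        elif idles - collisions > threshold:   # too many idle slots: shrink frame
            q = max(0, q - step)
            collisions = idles = 0
    return q
```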
Path Forging Detection Approach Based on Aggregation
YANG Bin, LU Yu-liang, YANG Guo-zheng and ZHANG Liang
Computer Science. 2014, 41 (8): 158-163.  doi:10.11896/j.issn.1002-137X.2014.08.035
This paper presents a novel aggregation-based algorithm for detecting routing path forging. Taking changes of the AS path as the detection object and the country to which a prefix belongs as the aggregation standard, the AS path changes are aggregated. The notions of AS-link probability deviance, intermediate-country appearance probability and intermediate-country distance deviance are defined; based on these, path-level detection metrics are introduced and integrated to check for routing path forging. The method was tested on data from actual routing path forging events, and the experimental results demonstrate that it is more valid and practical than previous methods.
Routing Algorithm Based on Non-uniform Distribution of Virtual Channel
GUO Lin-lin, LI Guang-shun and WU Jun-hua
Computer Science. 2014, 41 (8): 164-168.  doi:10.11896/j.issn.1002-137X.2014.08.036
With the increasing integration of systems on chip (SoC), communication between IP cores has become an urgent problem. In recent years, the network on chip (NoC) has been proposed as an effective solution to complex on-chip communication problems. Virtual channels and the routing algorithm play an important role in NoC design and have a great impact on latency, throughput and other aspects of NoC performance. According to the load distribution characteristics of NoCs, we present a novel non-uniform virtual channel distribution technique (VCND): virtual channels are used in the interior of the mesh and omitted on its boundary, so the number of buffer units is reduced. A modified deadlock-free routing algorithm, combination XY (CXY), is then proposed. Simulation results show that the CXY routing algorithm improves network throughput and keeps message latency low compared with the XY and XY-YX routing algorithms, and that, compared with uniform virtual channel distribution, VCND saves router area and clearly increases buffer utilization at a small cost in network throughput and latency.
Spectrum Handoff Based on Jointly Considering Channel Switching Idle and Switching Probability
WU Jun-sheng and LI Xue-mei
Computer Science. 2014, 41 (8): 169-171.  doi:10.11896/j.issn.1002-137X.2014.08.037
To reduce the delay of spectrum handoff and improve handoff performance, this paper proposes a novel spectrum handoff method for cognitive networks that jointly considers channel idleness at switching time and the switching probability. Channel scheduling considers not only the probability that a channel is idle when switching occurs, but also the probability that no further switching will be needed while the channel is in use. The performance of the method was tested through simulation. The results show that, compared with other spectrum handoff methods, this method achieves the minimum delay and significantly decreases the number of handoffs compared with the traditional method, so it can provide high-quality service for users.
Authenticated Key Exchange in eCK Model
LIU Xiu-mei, GAO Ke-ning, XUE Li-fang, CHANG Gui-ran and ZHOU Fu-cai
Computer Science. 2014, 41 (8): 172-177.  doi:10.11896/j.issn.1002-137X.2014.08.038
How to construct provably secure key agreement protocols is one of the challenging problems in the information security field. Most security protocols reach only "heuristic" security, and their security assumptions are not ideal. To address these problems, this paper presents a new third-party authenticated key exchange protocol based on the computational Diffie-Hellman (CDH) assumption and, using the trapdoor test technique, formally proves that the protocol is secure under the eCK model and gives better support for adversary queries.
Automatic Classification Mechanism of Malicious Code Based on A_Kohonen Algorithm
XU Xiao-long, XIONG Jing-yi, WANG Xin-heng and WANG Ru-chuan
Computer Science. 2014, 41 (8): 178-182.  doi:10.11896/j.issn.1002-137X.2014.08.039
The current mass of malicious code reports has become a huge burden for cloud-security-based anti-virus network systems. An efficient, scientific and automatic classification mechanism is the basic premise for responding quickly to large-scale known and unknown malicious code and its new variants. To achieve automatic classification of malicious code, we improve the Kohonen algorithm, a classic unsupervised neural network model, and propose a new neural network model, A_Kohonen, in which part of the learning process is supervised. An A_Kohonen-based automatic classification mechanism for malicious code is then provided to support anti-virus experts in further refining and analyzing malicious code. Experimental analysis shows that the mechanism can provide an effective and accurate initial classification of malicious code.
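As a hedged sketch of the classic Kohonen (SOM) update that A_Kohonen builds on (grid size, learning-rate and neighborhood schedules are illustrative, and the partly supervised stage is omitted):

```python
import numpy as np

def train_som(data, grid=(8, 8), iters=1000, lr0=0.5, sigma0=3.0):
    """Classic Kohonen self-organizing map: find the best-matching unit (BMU)
    for a sample, then pull the BMU and its grid neighbors toward the sample
    with a learning rate and neighborhood radius that decay over time."""
    rng = np.random.default_rng(0)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.dstack(np.mgrid[0:h, 0:w])            # (h, w, 2) grid positions
    for t in range(iters):
        x = data[rng.integers(len(data))]
        d = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(np.argmin(d), (h, w))  # best-matching unit
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
        nbh = np.exp(-dist2 / (2 * sigma ** 2))       # Gaussian neighborhood
        weights += lr * nbh[..., None] * (x - weights)
    return weights
```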
Quantum Cloning-based Quantum Algorithm for Dihedral Hidden Subgroup Problem
JIN Guang-long and YUAN Jia-bin
Computer Science. 2014, 41 (8): 183-185.  doi:10.11896/j.issn.1002-137X.2014.08.040
Kuperberg presented a quantum algorithm for the dihedral hidden subgroup problem, but with sub-exponential time and query complexity. We present a quantum-cloning-based quantum algorithm for the dihedral hidden subgroup problem with polynomial time and query complexity. The dihedral hidden subgroup problem is related to the poly(n)-unique shortest vector problem; if it can be solved efficiently, lattice-based cryptography, which is believed secure against quantum computers, can be broken.
Privacy Preservation of Sensitive Edges Based on Dynamic Social Networks
CHEN Wei-he, ZHU Jiang and LI Wen-jing
Computer Science. 2014, 41 (8): 186-191.  doi:10.11896/j.issn.1002-137X.2014.08.041
To address the privacy preservation of sensitive edges in dynamic social network data publication, we propose a novel technique for protecting sensitive edges in dynamic social networks, where the attacker uses the degrees of the target nodes at different times as background knowledge. First, by using k-grouping and (k,Δd)-anonymity, the target nodes cannot be uniquely identified by privacy attackers: the probability of unique identification is no more than 1/k. Second, the method ensures that the leakage probability of sensitive edges does not exceed the user-defined parameter u. Theoretical analysis and experiments show that the presented method can resist sensitive-edge identification attacks; it not only protects users' private information effectively but also preserves the utility of the published data in dynamic social networks.
Independent Identification of Encrypted Data in Data Link Layer Based on Randomness Test
WU Yang, MA Yun-fei, WANG Tao and XING Meng
Computer Science. 2014, 41 (8): 192-196.  doi:10.11896/j.issn.1002-137X.2014.08.042
To identify encrypted data at the data link layer, an independent identification scheme is proposed, built mainly on the frequency test within a block. As the identification rate of the block frequency test is sensitive to the block size, a block-size selection scheme based on the variance principle is provided. Furthermore, since the identification ability of the block frequency test is limited on short bit sequences, random sampling is introduced into the information extraction process to raise the identification rate. Finally, taking the link-layer bit sequences of a wireless network as the identification object, the identification rate of the proposed schemes is verified. Experimental results demonstrate that the proposed schemes identify encrypted link-layer data at a high rate, and this work paves the way for further protocol identification research.
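For reference, the NIST SP 800-22 frequency test within a block that the scheme builds on computes, for N blocks of M bits each, the statistic χ² = 4M·Σ(πi − 1/2)² and a p-value from the regularized incomplete gamma function; a small sketch (the block size `m` is the parameter the paper selects by the variance principle):

```python
from scipy.special import gammaincc

def block_frequency_test(bits, m):
    """NIST SP 800-22 frequency test within a block: split the sequence into
    N = len(bits)//m blocks of m bits, measure the ones-proportion pi_i of
    each block, and test the statistic chi2 = 4*m*sum((pi_i - 0.5)^2)."""
    n_blocks = len(bits) // m
    chi_sq = 0.0
    for i in range(n_blocks):
        block = bits[i * m:(i + 1) * m]
        pi = sum(block) / m
        chi_sq += (pi - 0.5) ** 2
    chi_sq *= 4.0 * m
    return gammaincc(n_blocks / 2.0, chi_sq / 2.0)   # p-value

# Usage: bits is a list of 0/1; encrypted (random-looking) data should give
# a p-value well above the usual 0.01 significance level.
```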
Novel Method for Impossible Differential Cryptanalysis of 9-Round AES_256
HU Zhi-hua, QIN Zhong-ping and ZHANG Qing
Computer Science. 2014, 41 (8): 197-201.  doi:10.11896/j.issn.1002-137X.2014.08.043
Through a deep study of the 4-round encryption characteristics of the Advanced Encryption Standard (AES), a new 4-round differential path with existence probability 2^-30 is derived. Based on this path, a novel method is proposed for impossible differential cryptanalysis of 9-round AES-256. The analysis requires 2^95 pairs of chosen plaintexts, about 2^163 words of memory and 2^193 encryption/decryption computations. The analysis also reveals that the confusion level of the MixColumns transformation in AES is insufficient, which provides a theoretical basis for improving AES security.
Web Service Discovery Model Supporting Service QoS Difference Degree Control
HE Xiao-xia and TAN Liang
Computer Science. 2014, 41 (8): 202-208.  doi:10.11896/j.issn.1002-137X.2014.08.044
Quality of Service (QoS) describes the ability of a service to meet customer needs. In dynamic, open and diverse network environments, QoS uncertainty creates large deviations between service selection results and actual results and makes QoS hard to guarantee. To solve this problem, a Web service discovery model supporting QoS difference-degree control is designed. In this model, a third-party monitor called the Interceptor is added to guarantee that the QoS information provided to the certification center by service providers and customers is real and to supply real credit data to the service registry; acting as a notary, it also ensures that both sides comply with the SLA fairly and justly and that transactions are legitimate. Furthermore, a local QoS-attribute compliance verification mechanism and a global compliance verification mechanism are added at the service user terminal and the QoS certification center, to control the difference between the values users require and the data providers offer, yielding a more accurate and objective reputation for services and providers in the transaction process. Experiments indicate that this method can effectively control the QoS difference and provide more satisfying Web services for users.
Data Dependence Research on Uncertain Relation
ZHOU Yu, LIU Guo-hua and YE Jie-min
Computer Science. 2014, 41 (8): 209-212.  doi:10.11896/j.issn.1002-137X.2014.08.045
Data dependencies are the formal representation of mutual constraints among attribute values, and functional dependencies are among the data dependencies with an important influence on schema design and querying in databases. In an uncertain relation, every attribute of a tuple may take several possible values, so the mutual constraints among attribute values are more complicated than in a certain relation. We propose a formal definition of uncertain relations and of three kinds of uncertain functional dependencies, and prove their inference rules. We also illustrate that these uncertain functional dependencies are significant for guiding normal form design in uncertain relations.
Research of Parallel Component Non-functional Attributes
PENG Yun-feng and WANG Rui-ping
Computer Science. 2014, 41 (8): 213-218.  doi:10.11896/j.issn.1002-137X.2014.08.046
This paper proposes an extension of the CCA component architecture. We define a minimal set of non-functional attributes of parallel components and implement non-functional components to manage these attributes. The paper defines interfaces related to these non-functional attributes, which parallel components may optionally provide. Parallel components register their attributes with the non-functional components and provide their attribute information through the non-functional interfaces; component developers implement the management parts that are specific to certain components, while the non-functional components manage the non-functional attributes of parallel components uniformly. Our approach improves the performance of parallel component applications and provides an easy way to manage parallel component execution.
Modeling and Test Case Generation for Ajax-based Web Applications
HE Tao, MIAO Huai-kou and QIAN Zhong-sheng
Computer Science. 2014, 41 (8): 219-223.  doi:10.11896/j.issn.1002-137X.2014.08.047
Ajax technology enables Web applications to fetch data from the server through asynchronous requests and to refresh a Web page partially, which allows a single page to contain multiple different states. The sharp increase in the number of states makes Web applications more complicated and brings greater difficulty to their modeling and testing. We study modeling and test case generation methods for such Web applications and give a feasible technique for generating test cases. Finally, we verify the method on a project developed by our research group; the verification results show that it can generate test cases effectively.
Metadata Checking and Testing of Web Application Based on Invariance
FU Teng and GAO Jian-hua
Computer Science. 2014, 41 (8): 224-228.  doi:10.11896/j.issn.1002-137X.2014.08.048
Metadata plays a very important role in Web applications. With the increasing scale of metadata, maintaining it takes intensive time and effort; current compilers cannot report faults caused by metadata inconsistency, and the relationship between the metadata and the codebase is hidden. Based on experimental research on metadata invariants, this paper extracts all the invariants of the studied projects, showing how metadata invariants are extracted and compared through different methods, namely framework-based and framework-less invariant discovery. When a program is refactored or enhanced, the metadata invariants are checked, and a message is issued if the program violates an invariant.
Research on Method of Constructing Chinese Character Confusion Set
SHI Heng-li, LIU Liang-liang, WANG Shi, FU Jian-hui, ZHANG Zai-yue and CAO Cun-gen
Computer Science. 2014, 41 (8): 229-232.  doi:10.11896/j.issn.1002-137X.2014.08.049
A set of easily confused Chinese characters is one of the important resources for identifying wrongly written characters. In this study, we first sorted out 11935 possibly wrongly written characters by hand. Then, taking those characters as nodes and "possibly wrongly written" relations as edges, we built the set of easily confused characters into a graph. Because manually collecting wrongly written characters is very limited, on the basis of the graph we designed an internal-expanding algorithm that expands the set of wrongly written characters, and an open-source-data external-supplementing algorithm that supplements the set from a large corpus; in this way, the graph is expanded with new pairs of wrongly written characters. In the experiment, we found 15133 new groups of wrongly written character pairs, and after proofreading random samples, the accuracy reaches 87.35%.
Novel Method of Uncertain Data Modeling and Classification Based on Cloud Model
QIN Li and LI Bing
Computer Science. 2014, 41 (8): 233-240.  doi:10.11896/j.issn.1002-137X.2014.08.050
Data contain inherent uncertainty: sampling errors, data staling and repeated measurements are all sources of uncertainty, so the analysis of uncertain data is receiving more and more attention in many applications. Traditionally, each uncertain value is represented as a domain over which a probability distribution function is defined. Because uncertain data exhibit both fuzziness and randomness, and a traditional probability distribution function is difficult to fit to the actual distribution of the uncertain data, this paper proposes modeling uncertain data with the cloud model, using the distribution of cloud drops, and designs a classification method based on cloud union and cloud similarity computation. The cloud model effectively merges randomness and fuzziness and can analyze uncertain data more effectively; because it realistically reflects the actual distribution of the uncertain data, our experiments also prove the validity of the method.
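A minimal sketch of the standard forward normal cloud generator underlying such modeling (Ex, En and He are the cloud model's expectation, entropy and hyper-entropy; the union and similarity steps of the classification method are not shown):

```python
import numpy as np

def forward_cloud(ex, en, he, n=1000, seed=0):
    """Forward normal cloud generator: for each drop, draw a per-drop entropy
    en_i ~ N(en, he^2), then a drop x ~ N(ex, en_i^2); the certainty degree is
    mu = exp(-(x - ex)^2 / (2 * en_i^2))."""
    rng = np.random.default_rng(seed)
    en_i = rng.normal(en, he, n)
    x = rng.normal(ex, np.abs(en_i), n)
    mu = np.exp(-(x - ex) ** 2 / (2 * en_i ** 2))
    return x, mu

# Example: model a quantity centered at 25.0 with entropy 3.0, hyper-entropy 0.3
drops, certainty = forward_cloud(25.0, 3.0, 0.3)
```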
Incremental Learning Algorithm of Non-negative Matrix Factorization with Sparseness Constraints
WANG Wan-liang and CAI Jing
Computer Science. 2014, 41 (8): 241-244.  doi:10.11896/j.issn.1002-137X.2014.08.051
Non-negative matrix factorization (NMF) is a useful method for subspace dimensionality reduction. However, as training samples accumulate, the computing scale of NMF grows rapidly. To solve this problem, and to improve the sparseness of the factorized data, an incremental learning algorithm for NMF with sparseness constraints is proposed in this paper. By involving the results of previous factorizations in the sparseness-constrained iterative computation, the cost of computation is reduced and the sparseness of the factorized data is greatly improved. Experimental results on the ORL and CBCL face databases show that the proposed method is effective for dimensionality reduction.
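For reference, the classic Lee-Seung multiplicative updates for NMF with an L1 sparseness penalty on the coefficient matrix, a common baseline (the incremental reuse of previous factors and the penalty weight `beta` here are assumptions, not the paper's exact scheme):

```python
import numpy as np

def nmf_sparse(V, r, beta=0.1, iters=200, seed=0):
    """Minimize ||V - W H||_F^2 + beta * sum(H) with W >= 0, H >= 0 using
    multiplicative updates; beta encourages sparse coefficients H."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 1e-4
    H = rng.random((r, n)) + 1e-4
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + beta + 1e-9)   # L1 term in denominator
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H
```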
Cluster Algorithm Based on Edge Density Distance
WU Ming-hui, ZHANG Hong-xi, JING Cang-hong and CAI Wen-ming
Computer Science. 2014, 41 (8): 245-249.  doi:10.11896/j.issn.1002-137X.2014.08.052
Grid-based clustering algorithms suffer from low clustering precision, and most density-based clustering algorithms have high time complexity. To improve clustering performance, a clustering algorithm based on edge density distance is proposed in this paper, with new definitions of density and category. In the clustering process, the data are first divided into grid cells and some initial information is recorded to support finding the k nearest points. Then, when searching for a new cluster center, a method derived from bucket sort is used, which makes finding the center fast. A subsequent procedure iteratively analyzes the k nearest points of a category to judge whether they are density-reachable. Theoretical analysis and experimental results show that the proposed algorithm achieves both high-quality clustering results and low time complexity.
Equalization Fuzzy C-means Clustering Algorithm
WEN Chuan-jun,WANG Qing-miao and ZHAN Yong-zhao
Computer Science. 2014, 41 (8): 250-253.  doi:10.11896/j.issn.1002-137X.2014.08.053
Abstract PDF(330KB) ( 574 )   
References | Related Articles | Metrics
Fuzzy C-means clustering (FCM) is a fast and effective clustering algorithm, but it does not consider differences in sample sizes: when the capacities of the classes differ greatly, the decision of FCM is biased toward the class with fewer samples. A new clustering algorithm, named equalization fuzzy C-means clustering (EFCM), was proposed in this paper. The objective function of FCM was modified by adding a sample-size factor to the EFCM objective function. The optimal parameter solutions of EFCM were computed by a PSO algorithm in which the sample fuzzy memberships are set as the coding object. The properties of EFCM were obtained by theoretical analysis, and the effectiveness of EFCM on balanced and unbalanced datasets was demonstrated by simulation experiments.
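For reference, the standard FCM objective that EFCM modifies is (the size-equalization term shown below is only a schematic placeholder, since the paper defines its own factor):

\[
J_{\mathrm{FCM}}(U,V)=\sum_{i=1}^{c}\sum_{j=1}^{n} u_{ij}^{m}\,\lVert x_j - v_i\rVert^2,
\qquad \sum_{i=1}^{c} u_{ij}=1,\; u_{ij}\in[0,1],
\]

and a size-equalized variant augments it with a term depending on the fuzzy class sizes \(s_i=\sum_{j} u_{ij}\), e.g. \(J_{\mathrm{EFCM}}=J_{\mathrm{FCM}}+\alpha\sum_{i}(s_i-n/c)^2\) (our illustrative form).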
Ecotoxicology Dynamics-based Optimization with Impulsive Toxicant Input
HUANG Guang-qiu,XU Xiao-long and LU Qiu-qin
Computer Science. 2014, 41 (8): 254-262.  doi:10.11896/j.issn.1002-137X.2014.08.054
Abstract PDF(686KB) ( 375 )   
References | Related Articles | Metrics
To solve some function optimization problems, an optimization algorithm based on the impulsive toxicant input model of ecotoxicology dynamics was constructed. In the algorithm, an environment system corresponds to the search space of an optimization problem; the environment system is polluted, and some pollution sources discharge toxicant pollutants into it impulsively and periodically. Many different classes of populations live in the system, with competition and predator-prey relations among them, and each individual in a population is an alternative solution of the optimization problem. The ecotoxicology dynamics model is mapped onto the change of some features of a population. The interactions between environment and populations, as well as among populations, are used to construct the evolution operators of populations, and these operators realize sufficient information exchange between environment and populations as well as among populations. The research results show that environmental pollution influences only a very small part of the features of a population, which means that only a very small part of the features take part in computation, so the convergence speed of the algorithm can be substantially improved; impulsively discharged toxicant pollutants cause fierce changes in the state values of the affected features, which enables a search to jump out of local optima easily; strong populations that can endure pollution keep growing, while weak populations that cannot endure pollution stop growing, which ensures the convergence of the algorithm. The case study shows that for some function optimization problems the algorithm achieves a higher convergence speed and higher accuracy of global optima than existing population-based intelligent optimization algorithms.
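A minimal sketch of the impulsive-perturbation mechanism described above. This is our schematic reading, not the authors' exact operators: every `period` iterations a "toxicant pulse" violently perturbs a small random subset of dimensions of each candidate, and only improving candidates survive:

import numpy as np

def optimize(f, dim=10, pop=30, iters=500, period=50, frac=0.1, seed=0):
    """Minimize f; impulsive pulses help candidates escape local optima."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (pop, dim))
    fit = np.apply_along_axis(f, 1, X)
    for t in range(iters):
        Y = X + rng.normal(0, 0.1, X.shape)        # ordinary growth step
        if t % period == 0:                        # impulsive toxicant input
            k = max(1, int(frac * dim))            # pulse hits only a few features
            idx = rng.choice(dim, k, replace=False)
            Y[:, idx] += rng.normal(0, 3.0, (pop, k))
        fy = np.apply_along_axis(f, 1, Y)
        better = fy < fit                          # weak 'populations' stop growing
        X[better], fit[better] = Y[better], fy[better]
    return X[fit.argmin()], fit.min()

best_x, best_f = optimize(lambda x: np.sum(x ** 2))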
Research on Semantic Similarity Algorithm of Linked Data Based on Dynamic Weight
JIA Li-mei,ZHENG Zhi-yun,LI Dun and WANG Zhen-fei
Computer Science. 2014, 41 (8): 263-266.  doi:10.11896/j.issn.1002-137X.2014.08.055
Abstract PDF(433KB) ( 376 )   
References | Related Articles | Metrics
Semantic similarity calculation plays an important role in information retrieval over linked data, and its results directly affect the effect of data mining. The attribute information of an instance is an essential factor in the semantic similarity computation of linked data. To solve the problem of low computation precision caused by ignoring the importance of attributes and the types of attribute values, this paper proposed a new semantic similarity calculation method based on dynamic weights. The method dynamically computes attribute weights according to the quantity of distinct attribute values, the distribution of attribute values, and the validity of attributes. Then it chooses a matching similarity algorithm for each attribute according to the type of its values. Finally, it combines the dynamic attribute weights to calculate the semantic similarity of instances. The experiments confirm that the computation precision of instance semantic similarity obtained by the proposed method is better than that of existing methods.
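A minimal sketch of the overall scheme. The entropy-based dynamic weight and the per-type similarity functions are our illustrative choices; the paper defines its own weighting formulas:

import math
from collections import Counter

def attribute_weight(values):
    """Weight an attribute by the entropy of its value distribution (assumed rule)."""
    counts, n = Counter(values), len(values)
    return -sum(c / n * math.log(c / n) for c in counts.values())

def value_similarity(a, b):
    """Pick a similarity function by value type."""
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        return 1.0 - abs(a - b) / (abs(a) + abs(b) + 1e-9)    # numeric values
    ta, tb = set(str(a).split()), set(str(b).split())
    return len(ta & tb) / len(ta | tb)                         # token Jaccard for strings

def instance_similarity(x, y, weights):
    shared = set(x) & set(y)
    total = sum(weights[a] for a in shared) or 1.0
    return sum(weights[a] * value_similarity(x[a], y[a]) for a in shared) / total

x = {"name": "Alice Smith", "age": 30}
y = {"name": "Alice Smyth", "age": 32}
w = {"name": 1.2, "age": 0.6}   # in practice from attribute_weight over the dataset
print(instance_similarity(x, y, w))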
Visualization of Association Rules Based on S-C MetaGraph
CHEN Min,ZHAO Shu-liang,GUO Xiao-bo,LIU Meng-meng and LI Xiao-chao
Computer Science. 2014, 41 (8): 267-273.  doi:10.11896/j.issn.1002-137X.2014.08.056
Abstract PDF(2378KB) ( 485 )   
References | Related Articles | Metrics
Traditional association rule visualization approaches are oriented to expert users while ignoring ordinary users' perception; moreover, as the number of rules increases, overlaps among edges and nodes become particularly likely, reducing the performance and readability of the rule representation. To address these problems, this paper proposed a novel visualization method based on the S-C metagraph to display one-to-one, one-to-many and many-to-many association rules. Firstly, the basic definition of the S-C metagraph and the model for showing association rules with it were given. Then the properties and the derivation process of the S-C metagraph for visualizing association rules were presented. Finally, with experimental data obtained from the demographic data of a province, and drawing on Preattentive Processing Theory and Gestalt Theory, we illustrated multi-mode association rules in a combined form of S-C metagraph and spindle, and analyzed the effect of the visualization. The application analysis and experimental results show that the proposed visualization method has excellent visual effects.
Sub-period Considering Vehicle Scheduling Problem with Uncertain Demands
RONG Li-xia
Computer Science. 2014, 41 (8): 274-277.  doi:10.11896/j.issn.1002-137X.2014.08.057
Abstract PDF(409KB) ( 429 )   
References | Related Articles | Metrics
Considering the impact of different traffic conditions on transport, the daily traffic situation was divided into low, normal and peak periods according to the number of vehicles on the road. Based on the speeds of the three periods, an uncertain vehicle routing model with chance constraints was presented based on uncertainty theory. To handle the uncertain measure of the demands, a hybrid intelligent algorithm combining uncertain simulation with a genetic algorithm was provided. Finally, a numerical example was given, in which we analyzed the model solutions under the division of traffic periods and the solutions for different confidence coefficient values related to customer satisfaction.
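A minimal sketch of how a chance constraint on uncertain demands can be checked by simulation inside a metaheuristic's fitness evaluation. This is a generic Monte Carlo illustration; the paper works with uncertainty theory rather than plain probability:

import numpy as np

def route_feasible(route, capacity, demand_sampler, alpha=0.9, n_sim=1000, seed=0):
    """Accept the route if P(total demand <= capacity) >= alpha (simulated)."""
    rng = np.random.default_rng(seed)
    loads = sum(demand_sampler(c, rng, n_sim) for c in route)
    return np.mean(loads <= capacity) >= alpha

# Example: customer demands ~ Normal(mu_c, 1), truncated at 0 (assumed model).
mus = {1: 4.0, 2: 6.0, 3: 5.0}
sampler = lambda c, rng, n: np.maximum(rng.normal(mus[c], 1.0, n), 0.0)
print(route_feasible([1, 2, 3], capacity=18.0, demand_sampler=sampler))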
Image Retrieval Algorithm Based on Laplacian Sparse Coding
WANG Rui-xia,PENG Guo-hua and ZHENG Hong-chan
Computer Science. 2014, 41 (8): 278-280.  doi:10.11896/j.issn.1002-137X.2014.08.058
Abstract PDF(1264KB) ( 529 )   
References | Related Articles | Metrics
Because of the overcomplete codebook and the independent coding of each block, the similarity between image blocks is lost during encoding. To preserve such similarity information, we proposed an image retrieval algorithm based on Laplacian sparse coding. Given an initial sparse coding, the Laplacian matrix is calculated and a similarity-preserving term is incorporated into the sparse coding objective. We used the feature-sign search algorithm and the golden section line search algorithm to update the coefficients of the sparse codes one by one. The experiments show that Laplacian sparse coding can enhance the robustness of sparse coding. Compared with the improved SPM model, the new image retrieval algorithm markedly improves the retrieval accuracy.
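For reference, the Laplacian sparse coding objective commonly takes the form (notation ours; D is the codebook, S the codes, L the graph Laplacian built from block similarities):

\[
\min_{D,S}\;\lVert X - DS\rVert_F^{2}
+\lambda\sum_{i}\lVert s_i\rVert_{1}
+\beta\,\operatorname{tr}\!\left(S L S^{\mathsf T}\right),
\]

where the trace term \(\operatorname{tr}(SLS^{\mathsf T})=\tfrac12\sum_{i,j}w_{ij}\lVert s_i-s_j\rVert^2\) penalizes similar blocks receiving dissimilar codes.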
Research on Extended BoF Model
LIANG Ye,LIU Hong-zhe and YU Jian
Computer Science. 2014, 41 (8): 281-285.  doi:10.11896/j.issn.1002-137X.2014.08.059
Abstract PDF(443KB) ( 376 )   
References | Related Articles | Metrics
The BoF feature is one of the most popular image representation methods at present. Aiming at the weaknesses of hard assignment coding and the discarding of spatial information, improvements to feature coding and pooling in the traditional BoF paradigm were proposed, and the new image representation can be used for image classification. First, a multi-annulus partition method was proposed for feature pooling, which embeds more spatial information. Second, a multi-word hard assignment coding method was proposed according to the long-tail distribution of dense samples and the relatively even distribution of features in scene images. The new representation not only preserves the merits of the BoF paradigm but also is more compact and carries more spatial information. The experimental results prove the efficiency of the new method.
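A minimal sketch combining the two ideas: assign each descriptor to its k nearest codewords (multi-word hard assignment) and pool one histogram per concentric annulus. The vote splitting and annulus radii are our illustrative choices:

import numpy as np

def multi_word_annulus_bof(descs, locs, codebook, center, radii, k=3):
    """Multi-word hard assignment coding with multi-annulus pooling."""
    d2 = ((descs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = np.argsort(d2, axis=1)[:, :k]        # k codewords per descriptor
    r = np.linalg.norm(locs - center, axis=1)
    ring = np.digitize(r, radii)                 # annulus index per descriptor
    K, R = len(codebook), len(radii) + 1
    hist = np.zeros((R, K))
    for ws, rg in zip(words, ring):
        hist[rg, ws] += 1.0 / k                  # split the vote over the k words
    return hist.ravel() / max(hist.sum(), 1e-9)

rng = np.random.default_rng(0)
h = multi_word_annulus_bof(rng.random((500, 32)), rng.random((500, 2)),
                           rng.random((64, 32)), center=np.array([0.5, 0.5]),
                           radii=[0.15, 0.3, 0.5])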
Keyframe Extraction Based on Representative Evaluation of Contents
GU Yi-jun,XIE Yi and XIA Tian
Computer Science. 2014, 41 (8): 286-288.  doi:10.11896/j.issn.1002-137X.2014.08.060
Abstract PDF(2073KB) ( 660 )   
References | Related Articles | Metrics
Keyframe extraction is a visual summarization method that enhances the accessibility of visual content. Traditional methods extract keyframes through clustering, but they do not provide reliable descriptions of keyframe representativeness. This paper proposed a novel keyframe extraction method based on a graph model that represents the candidate keyframes and the correlations between them. The representativeness of a candidate keyframe is calculated by iteratively propagating grades between correlated candidate keyframes. To support this calculation, the paper introduced a correlation measure based on how well the maximally stable colour regions of two frames match each other. Experiments were conducted on several test videos and the results validate the proposed keyframe extraction method.
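A minimal sketch of the grade-propagation idea, read as a PageRank-style iteration over the frame-correlation graph (our generic reading; the correlation matrix W is assumed precomputed from region matching):

import numpy as np

def representativeness(W, damping=0.85, iters=100):
    """Iteratively propagate grade over the correlation graph W (symmetric, >= 0)."""
    n = len(W)
    P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)  # row-normalize
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (P.T @ r)
    return r

W = np.array([[0, .9, .1], [.9, 0, .2], [.1, .2, 0]])
scores = representativeness(W)
keyframe = scores.argmax()   # most representative candidate frame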
Gaussian Mixture Model Terrain Classification Based on Hybrid PSO
HAN Guang,SUN Ning,LI Xiao-fei and ZHAO Chun-xia
Computer Science. 2014, 41 (8): 289-292.  doi:10.11896/j.issn.1002-137X.2014.08.061
Abstract PDF(1176KB) ( 437 )   
References | Related Articles | Metrics
A Gaussian mixture model (GMM) terrain classification method based on an improved hybrid particle swarm optimization (PSO) algorithm was presented. The expectation-maximization (EM) method is a popular way to fit a Gaussian mixture model, but it is a local optimization method with an unstable convergence rate and sensitivity to initial values. Therefore the hybrid PSO algorithm was introduced and a series of improvements was made. Experimental results show that the improved algorithm greatly improves the global convergence ability and the convergence rate. Using the improved algorithm to fit the Gaussian mixture model improves the accuracy of parameter estimation, and the proposed terrain classification method also performs excellently in terrain classification experiments on outdoor scene images.
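A minimal sketch of the hybrid idea: PSO searches for good component means, then EM refines them (scikit-learn's GaussianMixture accepts the PSO result via means_init). This is our schematic combination with textbook PSO constants, not the paper's improved operators:

import numpy as np
from sklearn.mixture import GaussianMixture

def pso_gmm(X, k, particles=20, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    P = rng.uniform(X.min(0), X.max(0), (particles, k, d))  # candidate mean sets
    V = np.zeros_like(P)
    def score(means):            # short EM run just to score an initialization
        gm = GaussianMixture(k, means_init=means, max_iter=5, random_state=0).fit(X)
        return -gm.score(X)      # negative mean log-likelihood (minimize)
    best = np.array([score(p) for p in P])
    pbest = P.copy()
    g = pbest[best.argmin()].copy()
    for _ in range(iters):
        V = 0.7 * V + 1.5 * rng.random(P.shape) * (pbest - P) \
                    + 1.5 * rng.random(P.shape) * (g - P)
        P += V
        s = np.array([score(p) for p in P])
        improved = s < best
        pbest[improved], best[improved] = P[improved], s[improved]
        g = pbest[best.argmin()].copy()
    return GaussianMixture(k, means_init=g, max_iter=200, random_state=0).fit(X)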
Palmprint Recognition Method Based on Energy Spectrum of GLBP
ZHAO Zhi-gang,WU Xin,HONG Dan-feng and PAN Zhen-kuan
Computer Science. 2014, 41 (8): 293-296.  doi:10.11896/j.issn.1002-137X.2014.08.062
Abstract PDF(1237KB) ( 539 )   
References | Related Articles | Metrics
A novel palmprint recognition algorithm based on information entropy under GLBP (EGLBP) was proposed in this paper, and it is first introduced into palmprint recognition here. To improve the recognition accuracy and reduce the complexity of the algorithm, information entropy is used to measure the information of the palmprint: the bigger the entropy, the more information is contained. Firstly, the images are decomposed through the Gabor transform and their information entropies are calculated, and the sub-images with little information are removed. Secondly, LBP features are extracted for each block using the block idea, and the features are fused in parallel. Finally, the chi-square distance is used to determine the palmprint category. In verification on the region of interest of the PolyU palmprint database, compared with traditional palmprint recognition algorithms, the EGLBP algorithm achieves a recognition rate of 99.89% and a recognition time of 113.9ms, demonstrating its superiority and effectiveness.
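A minimal sketch of a pipeline in this spirit, using scikit-image's Gabor filter and LBP. The filter parameters, entropy estimate, blocking scheme and histogram sizes are all our illustrative assumptions:

import numpy as np
from skimage.filters import gabor
from skimage.feature import local_binary_pattern

def eglbp_like_feature(img, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4), keep=3):
    """Gabor-decompose, keep the highest-entropy responses, block-LBP, concatenate."""
    responses = [gabor(img, frequency=0.2, theta=t)[0] for t in thetas]
    def entropy(r):
        h, _ = np.histogram(r, bins=32, density=True)
        h = h[h > 0]
        return -np.sum(h * np.log(h))
    responses.sort(key=entropy, reverse=True)
    feats = []
    for r in responses[:keep]:                    # drop low-entropy sub-images
        lbp = local_binary_pattern(r, P=8, R=1, method="uniform")
        for block in np.array_split(lbp, 4):      # simple 4-strip blocking
            h, _ = np.histogram(block, bins=10, range=(0, 10), density=True)
            feats.append(h)
    return np.concatenate(feats)

def chi_square(a, b, eps=1e-9):
    """Chi-square distance used to match feature histograms."""
    return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))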
Low-rank Graph with Spatial Constraint for Face Recognition
YANG Guo-liang,XIE Nai-jun,LUO Lu and LIANG Li-ming
Computer Science. 2014, 41 (8): 297-300.  doi:10.11896/j.issn.1002-137X.2014.08.063
Abstract PDF(435KB) ( 434 )   
References | Related Articles | Metrics
The low-rank representation (LRR) model can reveal subtle structural information in data and shows strong robustness to noise. Within the framework of graph-embedding dimensionality reduction, we proposed a face recognition algorithm that builds a low-rank graph using the low-rank representation model. In addition, we constructed a novel low-rank graph with a spatial constraint by using the spatial information of tracked points to improve recognition performance. To demonstrate the effectiveness of the presented algorithm, comparative experiments were conducted on the ORL and PIE face image databases. Experimental results show that its effectiveness and robustness to noise are consistently better than those of other state-of-the-art methods.
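For reference, the standard LRR problem from which a low-rank graph is built is (notation ours; the learned coefficient matrix Z serves as the graph affinity after symmetrization):

\[
\min_{Z,E}\;\lVert Z\rVert_{*}+\lambda\lVert E\rVert_{2,1}
\quad\text{s.t.}\quad X = XZ + E,
\]

where \(\lVert\cdot\rVert_{*}\) is the nuclear norm and \(E\) absorbs sample-specific corruptions; a typical affinity is \(W=\tfrac12\left(|Z|+|Z|^{\mathsf T}\right)\).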
Image Denoising by Principal Component Analysis with Structural Information
ZHENG Xiu-qing,HE Kun and ZHANG Jian
Computer Science. 2014, 41 (8): 301-305.  doi:10.11896/j.issn.1002-137X.2014.08.064
Abstract PDF(1217KB) ( 503 )   
References | Related Articles | Metrics
Noise is inevitable in image acquisition, storage and transmission. To suppress noise effectively while preserving the structural information of the image, a new method based on structural information was presented. The method extracts sub-block samples from clean images to build a structural element library, and the structure elements are regarded as samples drawn from the same population. Through the RPCA transform, a sparse representation kernel function of the structure element library is built adaptively for samples of the same type, which smooths the image while effectively preserving edges. The method analyses the distributions of noise and structural information in the kernel space theoretically. Experimental results show that the proposed algorithm suppresses image noise and protects the structural information of the image more effectively.
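A minimal sketch of patch-based PCA denoising in this spirit: project noisy patches onto the leading principal components learned from clean structure samples and reconstruct. This is a generic illustration; the paper's adaptive RPCA kernel is not reproduced here:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def pca_denoise(noisy, clean_library, patch=(8, 8), n_components=16):
    """Learn a patch basis from clean structure samples, then project noisy patches."""
    train = extract_patches_2d(clean_library, patch, max_patches=5000, random_state=0)
    pca = PCA(n_components).fit(train.reshape(len(train), -1))
    P = extract_patches_2d(noisy, patch).reshape(-1, patch[0] * patch[1])
    P_hat = pca.inverse_transform(pca.transform(P))   # keep structural components only
    return reconstruct_from_patches_2d(P_hat.reshape(-1, *patch), noisy.shape)

rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = pca_denoise(noisy, clean)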
Multilevel Thresholding Image Segmentation Based on Improved Artificial Fish Swarm Algorithm
CUI Li-qun,SONG Xiao,LI Hong-xu and ZHANG Ming-jie
Computer Science. 2014, 41 (8): 306-310.  doi:10.11896/j.issn.1002-137X.2014.08.065
Abstract PDF(1324KB) ( 514 )   
References | Related Articles | Metrics
In order to realize effective image segmentation, this paper proposed a new multilevel thresholding algorithm based on an improved artificial fish swarm algorithm (AFSA). It introduces the idea of neighbourhood search into the basic AFSA and then performs global optimization of the maximum entropy function. The improved algorithm automatically adjusts the parameters of the fish swarm according to individual fitness and the degree of dispersion of the population, ensuring population diversity while accelerating convergence, and finally obtains the optimal thresholds for image segmentation. It overcomes the shortcomings of the basic AFSA, including poor convergence and an inclination to fall into local optima. Experimental results show that the proposed algorithm achieves more stable, fast and accurate image segmentation.
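For reference, the maximum entropy function that the swarm optimizes is typically Kapur's criterion; a minimal sketch of that objective over a grey-level histogram (the improved AFSA operators themselves are not reproduced):

import numpy as np

def kapur_entropy(hist, thresholds):
    """Total entropy of the classes induced on the histogram by the thresholds;
    the swarm searches for the threshold vector maximizing this value."""
    p = hist / hist.sum()
    bounds = [0, *sorted(thresholds), len(p)]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()
        if w <= 0:
            return -np.inf                  # empty class: infeasible thresholds
        q = p[lo:hi] / w
        q = q[q > 0]
        total += -np.sum(q * np.log(q))
    return total

hist = np.bincount(np.random.default_rng(0).integers(0, 256, 10000), minlength=256)
print(kapur_entropy(hist, thresholds=[85, 170]))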
Graph Embedding Projective Non-negative Matrix Factorization Method for Image Feature Extraction
WANG Juan,DU Hai-shun,HOU Yan-dong and JIN Yong
Computer Science. 2014, 41 (8): 311-315.  doi:10.11896/j.issn.1002-137X.2014.08.066
Abstract PDF(1302KB) ( 494 )   
References | Related Articles | Metrics
To overcome the disadvantage that projective non-negative matrix factorization (PNMF) fails to discover the intrinsic geometrical and discriminating structure of data, a novel graph embedding projective non-negative matrix factorization (GEPNMF) method was proposed for image feature extraction. Two adjacency graphs are constructed, characterizing the intrinsic geometrical structure of the data and the interclass separability, respectively. Using the Laplacian matrices of these graphs, a graph embedding regularization is designed and incorporated into the PNMF objective function to form the GEPNMF objective function. Because the objective function adopts the graph embedding regularization, the subspace learned by GEPNMF preserves the geometrical structure of the data while maximizing the margins between different classes; that is, it is more discriminative. In addition, an orthogonal regularization is introduced into the objective function to ensure that the learned bases are parts-based. A multiplicative update rule (MUR) is deduced to optimize the objective function. Experimental results on the Yale and CMU PIE face image datasets demonstrate the effectiveness of GEPNMF.
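For reference, a schematic form of such an objective is (notation ours; \(L\) and \(L_p\) are the Laplacians of the intrinsic and penalty graphs, and the paper's exact regularizers may differ):

\[
\min_{W\ge 0}\;\lVert X - WW^{\mathsf T}X\rVert_F^{2}
+\alpha\,\operatorname{tr}\!\left(W^{\mathsf T}X\,(L-\mu L_p)\,X^{\mathsf T}W\right)
+\beta\,\lVert W^{\mathsf T}W - I\rVert_F^{2},
\]

where the first term is the PNMF reconstruction error, the trace term pulls same-class neighbours together while pushing different classes apart, and the last term enforces near-orthogonal, parts-based bases.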
Forest Fire Identification Method Based on Improved Genetic Algorithm and SVM
LOU Xiong-wei,HUANG De-cai,FANG Lu-ming and XU Ai-jun
Computer Science. 2014, 41 (8): 316-321.  doi:10.11896/j.issn.1002-137X.2014.08.067
Abstract PDF(2174KB) ( 506 )   
References | Related Articles | Metrics
Computer vision is one of the popular topics in computer technology research, and the selection and extraction of target-object features are core issues of computer vision. Based on an analysis of existing flame detection methods, this article put forward a method that selects a combined set of flame image features from three aspects: texture, dynamics and geometry, and specifically introduced the extraction method of the dynamic features. The article studied feature selection for flame recognition in depth, made comprehensive use of the content features of the flame, and proposed a new fitness function for the genetic algorithm, making the feature selection more scientific. Since SVM can exploit its advantages in the identification process based on the theory of structural risk minimization, the desired effect is achieved in the experiment.
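A minimal sketch of GA-based feature selection with an SVM-accuracy fitness, in the spirit of the method described (mutation-only GA for brevity, crossover omitted; the fitness function shown is a generic accuracy-minus-size form, not the paper's):

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def ga_select(X, y, gens=30, pop=20, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    P = rng.random((pop, n)) < 0.5                   # bit mask = feature subset
    def fitness(mask):
        if not mask.any():
            return 0.0
        acc = cross_val_score(SVC(), X[:, mask], y, cv=3).mean()
        return acc - 0.01 * mask.mean()              # accuracy with a small size penalty
    for _ in range(gens):
        f = np.array([fitness(m) for m in P])
        P = P[f.argsort()[::-1]][: pop // 2]         # keep the best half
        children = P.copy()
        flip = rng.random(children.shape) < 1.0 / n  # bit-flip mutation
        children[flip] = ~children[flip]
        P = np.vstack([P, children])
    f = np.array([fitness(m) for m in P])
    return P[f.argmax()]

# mask = ga_select(X, y)  # boolean vector of the selected features for the SVM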
Correction of Interpolation Methods in ENVI for Remotely Sensed Imagery
MI Hui-chao,ZHANG Xiao-di and LAI Kai
Computer Science. 2014, 41 (8): 322-326.  doi:10.11896/j.issn.1002-137X.2014.08.068
Abstract PDF(438KB) ( 556 )   
References | Related Articles | Metrics
ENVI software, developed in IDL (Interactive Data Language) for processing remotely sensed imagery, can extract useful information quickly, conveniently and accurately. ENVI provides a set of image interpolation methods for analysing and processing imagery. However, owing to a theoretical flaw, a bias exists in the interpolated images obtained with ENVI's interpolation methods, and this bias is transmitted to subsequent image processing procedures. In this paper, the interpolation model was corrected theoretically, and the experimental results indicate that the proposed corrected interpolation algorithms outperform the original ones and therefore produce more accurate results.