Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
Current Issue
Volume 42, Issue 7, 14 November 2018
Overview of Time Synchronization in Wireless Sensor Networks
HU Bing SUN Zhi-xin
Computer Science. 2015, 42 (7): 1-4, 11.  doi:10.11896/j.issn.1002-137X.2015.07.001
Time synchronization in wireless sensor networks is an important underlying technology and a precondition for realizing collaborative awareness, communication, energy management of nodes and other network functions. In light of the different application requirements of WSNs, many time synchronization algorithms have been studied. Classification methods for time synchronization mechanisms in wireless sensor networks were summarized, and typical time synchronization mechanisms of each type were analyzed in detail. Finally, the development direction of time synchronization mechanisms and the open research issues were summarized.
Detecting Snapshots for Web Tables
WANG Ning REN Hong-wei
Computer Science. 2015, 42 (7): 5-11.  doi:10.11896/j.issn.1002-137X.2015.07.002
In recent years, a large amount of structured tabular data has constantly emerged on the Internet. However, the value of Web tables depends not only on the data itself, but also on the relatedness between the data. Only when the potential relatedness between them is detected can these structured data be fully utilized. We proposed a new type of relatedness between Web tables called the snapshot relationship, and a framework for capturing snapshots that meet a certain matching condition with a given table. The snapshots are beneficial for query optimization, and also helpful for rapidly returning partial results when querying big data. The relatedness between an original Web table and its snapshot can be computed based on entity consistency and schema consistency. In order to assign high weights to tables which provide more fresh entities, the concept of entity freshness was introduced into our scoring method. Meanwhile, the content consistency of Web tables can be enhanced by applying Bayesian analysis to our relatedness capturing framework. As a consequence, the accuracy of finding snapshots is improved. Extensive experiments demonstrate that the algorithms can capture snapshots with high quality, and perform well in query precision and recall.
On NP-completeness of MSP Problem
WU Tian-jun JIANG Xin-wen
Computer Science. 2015, 42 (7): 12-14, 27.  doi:10.11896/j.issn.1002-137X.2015.07.003
Based on the MSP (multistage graph simple path) problem in [1,2], we studied the correlation between MSP, K-Coloring and sub-graph isomorphism, revealed the expressive property of MSP to better understand NP-completeness, and analyzed the phase-transition phenomenon of the MSP problem in order to generate hard test cases for the relevant polynomial-time algorithm proposed in [1,2].
Named Entity Recognition for Military Text
FENG Yun-tian ZHANG Hong-jun HAO Wen-ning
Computer Science. 2015, 42 (7): 15-18, 47.  doi:10.11896/j.issn.1002-137X.2015.07.004
This paper presented a semi-supervised named entity recognition method based on the conditional random field model for named entities in military text, which aims at merging military named entities such as military appointments and ranks, military equipment, military supplies, military facilities, military institutions (including code designations) and military place names into a unified technical framework. The method establishes an efficient feature set according to the grammatical features of military text, builds a conditional random field model to identify the military named entities, and applies a dictionary-based method and a rule-based method in turn to improve the results. Experiments show that the method can complete the named entity recognition task in military text well, achieving an F-value of 90.9% on the test corpus, which is close to the level of named entity recognition in the general domain.
GPU Application on the Phase-field Simulation
HU Yan-su, GAO Ang, WANG Zhi-jun and MU De-jun
Computer Science. 2015, 42 (7): 19-21, 56.  doi:10.11896/j.issn.1002-137X.2015.07.005
As a highly advantageous microstructure numerical simulation method, the phase field method has been widely used in research on solidification microstructure evolution mechanisms. However, in terms of both the simulation scale and the evolution time, the huge amount of computation places high demands on the computer. Compared with the traditional CPU, the GPU is much more efficient at parallel computing. This paper proposed a GPU-based acceleration strategy to carry out large-scale directional solidification phase field simulation. The results show that the computation time can be reduced by a factor of 30 compared with a single CPU, which brings a new opportunity for large-scale phase field simulation.
Geographical Watermarking Method Supporting Incremental and Multiple Watermark Embedding
LIAO Zhang XIONG Sheng-chao PENG Zhi-yong PENG Yu-wei
Computer Science. 2015, 42 (7): 22-27.  doi:10.11896/j.issn.1002-137X.2015.07.006
There are many geographical data watermarking methods, but most of them lack consideration for the protection of fractional geographical data. This paper analyzed the incremental and multiple watermarking problems on geographical data. Based on a variable-step quantization modulation technique, this paper proposed a watermarking method which can be used for both incremental and multiple watermark embedding. By dividing the feature point set into two disjoint subsets and scaling all points in each subset up or down respectively, the method ensures watermark embedding without disturbing the multiple watermarks, and guarantees that all the vertices are modified within the scope of fidelity only once, so fidelity is controlled. In addition, by using the zoom modulation strategy, the shape of the geographical data is preserved as well as possible. Experiments show that the method can meet the requirements of incremental and multiple watermarking, and has good robustness as well.
Singleton Sub-problem Arc Consistency Based on Order
CHEN De-quan ZHANG Yong-gang XIN Ying LIU Wen-zhuang
Computer Science. 2015, 42 (7): 28-31, 37.  doi:10.11896/j.issn.1002-137X.2015.07.007
In an investigation of consistency techniques, a new consistency concept, SSAC, was proposed based on recent propagation levels. The algorithm and properties of SSAC are given in the paper. We conclude that the simplification performed by SSAC doesn't change the solution set of the original constraint satisfaction problem, while the propagation ability of SSAC lies between SAC and AC. Experimental results show that the performance of SSAC is 2 to 3 times that of existing algorithms.
Analysis of Programs with Pointer Arithmetic by Combining Points-to and Numerical Abstractions
YIN Banghu, CHEN Liqian and WANG Ji
Computer Science. 2015, 42 (7): 32-37.  doi:10.11896/j.issn.1002-137X.2015.07.008
(National Laboratory for Parallel and Distributed Processing, School of Computer Science, National University of Defense Technology, Changsha 410073, China)
Programs with pointer arithmetic often involve runtime errors such as array out of bounds, buffer overflow, etc. Pure pointer analysis and pure numerical analysis cannot deal with pointer arithmetic. To combine pointer analysis and numerical analysis, we proposed a new pointer memory model. On this basis, we presented an abstract domain to capture the points-to and offset information of pointers. Finally, under the framework of abstract interpretation, we implemented a static analyzer prototype named PAA for analyzing C programs with pointer arithmetic. Experimental results show that PAA can analyze points-to and numerical properties of programs with pointer arithmetic effectively. Moreover, PAA can achieve a reasonable trade-off between efficiency and accuracy.
Visualization Method of BDL Model to UML State Diagram
MA Li, WU Guo-qing, HUANG Bo, CHENG Ming and CUI Meng-tian
Computer Science. 2015, 42 (7): 38-43.  doi:10.11896/j.issn.1002-137X.2015.07.009
To address the difficulty of understanding the requirement model of complex software, this paper presented a visualization method that describes the requirement model with a UML state diagram. The method is based on system behavior sequences, which are expressed in a behavior description language (BDL), and builds up a system behavior model. Defined mapping rules construct the relation between behaviors of the BDL model and the transitions of the UML state diagram, and each state of the UML state diagram is associated with the state produced by a behavior. Then, according to the transformation algorithm, the information for each node is extracted automatically to visualize the BDL requirement model. Finally, an instance was presented to verify the effectiveness of the method.
Research on Decidability of CoL2 in Computability Logic
LI Xing-xiang LUAN Jun-feng
Computer Science. 2015, 42 (7): 44-47.  doi:10.11896/j.issn.1002-137X.2015.07.010
Computability, or algorithmic solvability, is one of the important concepts in mathematics and computer science. Computability logic (abbreviated as CoL) is a formal theory of computability and an interactive resource logic. The logic CoL2 uses game semantics and is an extension of classical propositional logic. What makes CoL2 more expressive than classical propositional logic is the presence of choice operators and general atoms. CoL2 also has higher proof efficiency. In this paper, the decidability of CoL2 was established: by presenting an algorithm, it can be determined whether any CoL2 formula is provable, and we proved that the algorithm runs in polynomial space.
Occasion Determination of Clustering Ensemble
MENG Xiao-long YANG Yan WANG Hong-jun XIAO Wen-chao
Computer Science. 2015, 42 (7): 48-51, 84.  doi:10.11896/j.issn.1002-137X.2015.07.011
Ensemble learning techniques may improve clustering performance. In our experiments, we discovered that combining the mid-to-late solutions of cluster members under different initial conditions probably gets better ensemble results than combining the final ones. We used the bias/variance trade-off of generalization ability in ensemble networks to explain this phenomenon, applied early stopping rules to the clustering ensemble, and proposed the concept of the clustering ensemble occasion. The experimental results show that the performance of a clustering ensemble based on the early stopping rules is superior to one based on the final solutions of cluster members, while the former takes less time, thus giving some useful suggestions for seeking the best clustering ensemble occasion.
Multi-label Feature Selection Algorithm Based on Information Gain
LI Ling, LIU Hua-wen, XU Xiao-dan and ZHAO Jian-min
Computer Science. 2015, 42 (7): 52-56.  doi:10.11896/j.issn.1002-137X.2015.07.012
Multi-label feature selection is a technology used to improve the performance of multi-label classifiers. However, the existing multi-label feature selection methods fail to make a trade-off between the possible dependence among the labels and computational complexity in the process of obtaining reasonable feature subsets. Therefore, a novel multi-label feature selection algorithm based on information gain was proposed in this paper. It assumes that the features are independent of each other. The proposed method first uses the information gain between a single feature and the set of labels to measure their degree of correlation, and then removes the irrelevant and redundant features according to a threshold value. The experimental results show that the proposed algorithm can more effectively improve the performance of multi-label classifiers.
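The abstract gives no formulas, so as an illustrative sketch (not the authors' implementation), the information gain between a discrete feature F and a label variable L can be computed as IG(F;L) = H(L) − H(L|F) and thresholded. The data below is invented, and a single label variable stands in for the label set for simplicity:

```python
from collections import Counter
from math import log2

def entropy(values):
    """Shannon entropy of a discrete value sequence."""
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in Counter(values).values())

def information_gain(feature, labels):
    """IG(F; L) = H(L) - H(L | F) for discrete feature/label sequences."""
    n = len(labels)
    conditional = 0.0
    for f_val in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == f_val]
        conditional += len(subset) / n * entropy(subset)
    return entropy(labels) - conditional

def select_features(X, labels, threshold):
    """Keep indices of feature columns whose gain exceeds the threshold."""
    return [i for i, col in enumerate(zip(*X))
            if information_gain(col, labels) > threshold]
```

A feature identical to the labels yields the full label entropy as gain, while a constant feature yields zero, so thresholding removes it as irrelevant.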
Three-valued Quantum Elementary Gates and Implementation of Quantum Fourier Transform Circuit
FAN Fu-you, YANG Guo-wu, ZHANG Yan and YANG Gang
Computer Science. 2015, 42 (7): 57-61.  doi:10.11896/j.issn.1002-137X.2015.07.013
In theory, quantum elementary gates can be put together to implement any quantum circuit and build a scalable quantum computer. Because the number of quantum elementary gates required to build quantum logic circuits is very large, exactly controlling them is not easy. Therefore, how to reduce the number of quantum elementary gates needed to build quantum circuits is a very important and significant topic. A three-level quantum system was proposed for building a quantum computer in this paper, and a set of three-valued quantum elementary gates was defined, including their functions, operator matrices and quantum circuit diagrams. These elementary gates mainly include the three-valued quantum NOT gate, three-valued quantum controlled-NOT gate, three-valued Hadamard gate, three-valued quantum SWAP gate, three-valued CRk gate and so on. This paper extended the quantum Fourier transform (QFT) to three-valued quantum states, and quantum circuits were successfully built to implement the QFT with some of the three-valued quantum elementary gates. By quantitative analysis, the complexity of the three-valued QFT circuit is at least 50% lower than in the two-valued case. The result indicates that three-valued quantum elementary gates have a huge advantage in reducing the circuit complexity of quantum computation.
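The gate definitions themselves are not reproduced in the abstract; as a hedged sketch, the textbook qutrit generalizations (which may differ in detail from the gate set defined in the paper) are the three-valued Hadamard (Chrestenson) gate, the controlled rotation, and the QFT on an $n$-qutrit register:

```latex
% Three-valued Hadamard (Chrestenson) gate, with \omega = e^{2\pi i/3}:
H_3 = \frac{1}{\sqrt{3}}
\begin{pmatrix}
1 & 1 & 1 \\
1 & \omega & \omega^{2} \\
1 & \omega^{2} & \omega
\end{pmatrix},
\qquad \omega = e^{2\pi i/3}

% Phase rotation gate used in the QFT decomposition:
R_k = \mathrm{diag}\!\left(1,\; e^{2\pi i/3^{k}},\; e^{4\pi i/3^{k}}\right)

% QFT on n qutrits (dimension 3^n):
\mathrm{QFT}_3 \,\lvert j \rangle
  = \frac{1}{\sqrt{3^{n}}} \sum_{k=0}^{3^{n}-1} e^{2\pi i\, jk/3^{n}} \,\lvert k \rangle
```

With $\omega^3 = 1$, the entry $\omega^{2\cdot 2} = \omega^4$ reduces to $\omega$, which is why the bottom-right entry of $H_3$ is $\omega$ rather than $\omega^4$.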
Method of Behavior Analysis for Complex System Based on Hierarchical Bayesian Petri Net with Time Factor
CHEN Qian, SHE Wei and YE Yang-dong
Computer Science. 2015, 42 (7): 62-67, 102.  doi:10.11896/j.issn.1002-137X.2015.07.014
To address the problem that the number of states is prone to explosive growth when modeling and analyzing a large-scale complex system, this paper proposed a hierarchical Bayesian Petri net model extended with a time factor (TF-HBPN), and based on this model proposed a recursive construction method and a recursive abductive behavior analysis method. First, our method creates the top-level TF-HBPN according to the observed system behavior and decomposes the behavior analysis problem of the complex system through hierarchical recursion. Then it calculates the fault probability of the correct time sequence chain of fault events by recursive abductive reasoning. Finally, it calculates the Bayesian probability of the event chain of the system's behavior obtained by recursive abductive reasoning and time series analysis. The analysis results are compared with the correct event chain to separate interference information. The experimental cases show that this method can model and analyze complex faults quickly and can still perform system behavior analysis and abductive reasoning when alarms are missing. Compared with general Petri nets, this method has a lower degree of modeling difficulty and is more concise and simple.
DRPFSP Algorithm for Solving Permutation Flow Shop Scheduling Problem
WEI Jia-yin QIN Yong-bin XU Dao-yun
Computer Science. 2015, 42 (7): 68-73, 107.  doi:10.11896/j.issn.1002-137X.2015.07.015
For the permutation flow shop scheduling problem, a new algorithm named DRPFSP, based on a study of the classic heuristic algorithms, was proposed in this paper. The algorithm first normalizes the matrix A of processing times. Second, it transforms the original problem containing m machines into a new problem containing 2 machines by introducing a probability matrix P2×m and a corresponding dimension reduction function fp(A)=PA. Third, it uses the Johnson algorithm to solve the new problem and finds a scheduling sequence π0. Finally, it processes π0 with the insert-neighborhood fast evaluation method to obtain a scheduling scheme π for the original problem. The experimental results show that, compared with the classical heuristic algorithms, the DRPFSP algorithm is more effective for the permutation flow shop scheduling problem.
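The abstract relies on Johnson's classic algorithm for the two-machine flow shop; a minimal sketch of Johnson's rule itself (the subroutine, not the DRPFSP reduction) follows, with made-up processing times:

```python
def johnson_order(jobs):
    """Johnson's rule for the 2-machine flow shop.
    jobs: list of (p1, p2) processing times on machines 1 and 2.
    Jobs with p1 <= p2 go first, sorted by ascending p1;
    the remaining jobs go last, sorted by descending p2."""
    first = sorted((i for i, (a, b) in enumerate(jobs) if a <= b),
                   key=lambda i: jobs[i][0])
    last = sorted((i for i, (a, b) in enumerate(jobs) if a > b),
                  key=lambda i: jobs[i][1], reverse=True)
    return first + last

def makespan(jobs, order):
    """Completion time of the last job on machine 2 for a given order."""
    t1 = t2 = 0
    for i in order:
        t1 += jobs[i][0]               # machine 1 processes jobs back to back
        t2 = max(t2, t1) + jobs[i][1]  # machine 2 waits for machine 1's output
    return t2
```

Johnson's rule is optimal for the two-machine case, which is what makes the m-to-2 dimension reduction in DRPFSP attractive: after the reduction, the reduced instance can be solved exactly in O(n log n).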
Cardinality-constrained Load Balancing Problem
LI Wei-dong LI Jian-ping
Computer Science. 2015, 42 (7): 74-77, 90.  doi:10.11896/j.issn.1002-137X.2015.07.016
A special case of the cardinality-constrained load balancing problem, called the 2-semi-matching problem, was considered. The computational complexity and three approximation algorithms were presented under three different objectives, respectively.
Efficient Reduction Algorithms for Directed Acyclic Graph
HOU Rui WU Ji-gang
Computer Science. 2015, 42 (7): 78-84.  doi:10.11896/j.issn.1002-137X.2015.07.017
When deploying an application to a given network-on-chip (NoC) for execution, each task of the application should first be assigned to a tile in the NoC separately. The application is generally modeled as a directed acyclic graph (DAG) in which tasks are represented as vertices, and the deployment process is equivalent to mapping a DAG onto the topology of a NoC. With the increasing scale of applications and NoCs, finding an optimal mapping scheme is a typically intractable problem. To accelerate the process of mapping DAGs to a given NoC system, this paper proposed an efficient reduction algorithm for DAGs, so that the number of vertices in the DAG after reduction is close to the number of tiles in the NoC system. The proposed algorithm can effectively identify all reducible subgraphs, each of which can be reduced to a single vertex. The new algorithm extends the applicable scope from nested graphs to arbitrary DAGs, with the same level of computational complexity as the original algorithm. This paper also presented a parallelized algorithm which can shorten the process of searching for reducible subgraphs.
Digital Watermarking Algorithm for Android Smart Phones
Computer Science. 2015, 42 (7): 85-90.  doi:10.11896/j.issn.1002-137X.2015.07.018
In this paper, a digital watermarking algorithm based on the mean gray value was proposed for Android smart phones. In the embedding phase, the watermark, after pixel reorganization, is embedded into the carrier image. In the extraction phase, the initial watermark is extracted first, and then it is determined whether any of its areas need to be replaced; if so, those areas are replaced with better ones, and otherwise no areas are replaced. Finally, the initial watermark is transformed into the desired watermark. Experimental results show that the algorithm gives the embedded watermark good transparency and is robust to attacks such as noise addition, compression and cropping.
Research on Compensating Transaction Based Business Processes Exception Handling Model
LEI Yi-wei BEN Ke-rong
Computer Science. 2015, 42 (7): 91-94, 113.  doi:10.11896/j.issn.1002-137X.2015.07.019
The determination of compensation trigger conditions affects the correctness of compensation. Due to the compensation relations among business process activities, especially the compensation relations in concurrent structures, process developers are prone to making mistakes when designing compensation processes. This paper analyzed the compensation dependences in several basic process structures such as sequence, selection and concurrency, and their composite structures. Computation methods for compensation trigger conditions, as well as the construction process of the Petri-net models, were given, and the feasibility of the method was illustrated by an example.
Heuristic Algorithm for Server Placement in Distributed Interactive Applications
ZHENG Jing-jing ZHANG Jing WU Ji-gang
Computer Science. 2015, 42 (7): 95-98, 121.  doi:10.11896/j.issn.1002-137X.2015.07.020
Distributed interactive applications (DIAs) are networked systems that allow multiple participants at different locations to interact with each other in real time. The interaction quality heavily depends on network latencies, which can be reduced by a reasonable distribution of the locations where the servers of the DIAs are placed. Thus, the placement of servers is a key factor in the interactivity performance of DIAs. The simulated annealing algorithm and the tabu search algorithm were used to solve the server placement problem in this paper. These two algorithms were then compared with the genetic algorithm. The experimental results show that the genetic algorithm is much faster at obtaining a good solution, but the simulated annealing algorithm and the tabu search algorithm outperform the genetic algorithm in the quality of the solution, obtaining average latency reductions of 15.5% and 15.2% respectively for the same number of servers, which effectively improves the quality of the interaction.
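As a hedged illustration of the simulated annealing approach (the paper's exact neighborhood structure, cooling schedule and latency model are not given in the abstract), server placement can be searched over k-subsets of candidate sites; the latency matrix below is invented:

```python
import math
import random

def avg_latency(latency, placement):
    """Mean over clients of the latency to the nearest placed server.
    latency[c][s] is the latency from client c to candidate site s."""
    return sum(min(row[s] for s in placement) for row in latency) / len(latency)

def anneal_placement(latency, k, t0=1.0, cooling=0.995, steps=2000, seed=0):
    """Simulated annealing over k-subsets of the candidate server sites."""
    rng = random.Random(seed)
    n = len(latency[0])
    cur = rng.sample(range(n), k)
    cur_cost = avg_latency(latency, cur)
    best, best_cost, t = list(cur), cur_cost, t0
    for _ in range(steps):
        cand = list(cur)
        # neighbour move: swap one placed site for an unplaced one
        cand[rng.randrange(k)] = rng.choice([s for s in range(n) if s not in cur])
        cost = avg_latency(latency, cand)
        # accept improvements always, worse moves with Boltzmann probability
        if cost < cur_cost or rng.random() < math.exp((cur_cost - cost) / t):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = list(cand), cost
        t *= cooling
    return best, best_cost
```

Tabu search would differ only in the acceptance rule: instead of a temperature-controlled probability, it forbids recently swapped sites for a fixed number of iterations.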
Survey on Access Control Technology of Composite Web Services Business Process
SHANG Chao-wang, LIU Qing-tang and WANG Yan-feng
Computer Science. 2015, 42 (7): 99-102.  doi:10.11896/j.issn.1002-137X.2015.07.021
Access control of business processes is one of the key technologies in secure and reliable value-added applications of Web service composition. This paper briefly reviewed the state of research on access control of business processes in Web service composition. We first analyzed the security problems concerning business processes. Then, we discussed the research progress on the key access control technologies from three respects: the access control model of the composite Web services business process, authorization constraints of the business process at run time, and consistency detection in authorization coordination. Finally, a discussion of future directions and challenges was presented.
Model Checking of Parallel Attack in Andrew Secure RPC Protocol Based on SPIN
XIAO Mei-hua ZHU Ke MA Cheng-lin
Computer Science. 2015, 42 (7): 103-107.  doi:10.11896/j.issn.1002-137X.2015.07.022
The Andrew Secure RPC protocol is a protocol with the functions of identity authentication and key exchange, which is widely used in symmetric cryptography because of its conciseness. Model checking technology is widely used in the verification of protocols due to its high degree of automation; however, it has the disadvantage that it can only find attacks in a single round of a protocol session, making it hard to apply to multiple rounds of protocol sessions. We proposed a modeling method, called the combinatorial identity modeling method, which uses SPIN to verify the Andrew Secure RPC protocol in consideration of the parallel environment and the potential dangers in the protocol. From this research, we found two kinds of attacks on the Andrew Secure RPC protocol: a reflection attack and a type flaw attack. With this conclusion, we offered a new direction for model checking research in verifying protocols under complicated environments.
Improved MCL Clustering Algorithm in PPI Networks
HU Qing-sheng LEI Xiu-juan
Computer Science. 2015, 42 (7): 108-113.  doi:10.11896/j.issn.1002-137X.2015.07.023
The protein-protein interaction (PPI) network is a new research field in bioinformatics. Recently, the MCL clustering algorithm has played an important role in predicting the functions of unknown proteins. However, the quality of its clustering results is low. An improved MCL clustering algorithm was proposed in this paper, in which penalty and mutation factors are introduced. A new structured regulation operation replaces the expansion operation in the basic MCL algorithm. Experiments were performed on PPI data, and the results show that the new algorithm can not only minimize the number of small clusters, but also improve the F-measure and Avg.F value compared with the basic MCL algorithm.
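For reference, the basic MCL algorithm that the paper improves alternates expansion (matrix powering) and inflation (elementwise powering plus renormalization) on a column-stochastic matrix; the sketch below is the textbook version, not the improved algorithm with penalty and mutation factors:

```python
import numpy as np

def mcl_labels(adj, expansion=2, inflation=2.0, iters=200, tol=1e-12):
    """Basic Markov Cluster (MCL) iteration: alternate expansion and inflation
    on a column-stochastic matrix until convergence, then read off clusters."""
    m = np.asarray(adj, dtype=float)
    np.fill_diagonal(m, 1.0)          # self-loops stabilise the iteration
    m = m / m.sum(axis=0)             # column-stochastic transition matrix
    for _ in range(iters):
        prev = m
        m = np.linalg.matrix_power(m, expansion)  # expansion: flow spreads out
        m = m ** inflation                        # inflation: strong flows win
        m = m / m.sum(axis=0)
        if np.abs(m - prev).max() < tol:
            break
    # each column's dominant attractor row labels that node's cluster
    return np.round(m, 9).argmax(axis=0)
```

On the classic test case of two triangles joined by a single bridge edge, the iteration converges to an idempotent matrix whose attractor rows separate the two triangles.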
Improved Algorithm for 3D NoC Floorplan Based on Particle Swarm Optimization
SONG Guo-zhi ZHANG Da-kun TU Yao HUANG Cui WANG Lian-lian
Computer Science. 2015, 42 (7): 114-117, 124.  doi:10.11896/j.issn.1002-137X.2015.07.024
We presented an improved particle swarm optimization (IPSO) algorithm to optimize the floorplan, replacing the original floorplan optimization algorithm based on simulated annealing (the SA algorithm), to make it more suitable for large-scale three-dimensional network-on-chip simulation. We compared the basic ideas of these two algorithms. The details of the implementation steps of the two algorithms and the improvement ideas of the IPSO algorithm were given. We verified the performance improvement using an existing 3D NoC simulator. The simulation results show that the proposed IPSO algorithm is more suitable for large-scale simulation of 3D NoCs than the original SA algorithm.
Verification of Network Protocols Based on Abstraction and Composition
CHEN Daoxi, ZHANG Guangquan, XU Chengkai and CHEN Guobin
Computer Science. 2015, 42 (7): 118-121.  doi:10.11896/j.issn.1002-137X.2015.07.025
(Suzhou Senior Technician Institute, Suzhou 215009, China; School of Computer Science & Technology, Soochow University, Suzhou 215006, China; Rongzhi College, Chongqing Technology and Business University, Chongqing 400033, China)
Due to the state explosion problem in model checking, it is often impossible to verify the composition model of a multi-agent protocol. To relieve this problem, we analyzed the impact of an increase in the number of agents on the number of states and then proposed an approach based on abstraction and composition. First, Kripke structures of individual agents are established according to the LTL properties to be verified, and these structures are abstracted. Then, the abstracted models are composed. Finally, the tool Spin is used to verify the composed model. To validate the efficiency of this approach, we verified the NSPK protocol. The results show that there are significant decreases in the length of the state vector, the depth searched, and the number of states stored and transitions, which helps relieve the state explosion problem.
Computing Approximate Network Reliability Based on Truncated Edge Expansion Diagram
SUN Yun ZHONG Fa-rong MO Yu-chang PAN Zhu-sheng
Computer Science. 2015, 42 (7): 122-124.  doi:10.11896/j.issn.1002-137X.2015.07.026
The exact reliability of a small-scale network can be calculated quickly, but for large-scale networks, reliability calculation is difficult. We proposed an approximate analysis algorithm for network reliability based on truncated edge expansion diagrams. The experimental results show that our algorithm can obtain the approximate network reliability by generating smaller edge expansion diagrams and equivalent binary decision diagrams.
Application of Generalized Fuzzy Sets GFScom to Fuzzy Comprehensive Evaluation
ZHANG Sheng-li and LI Yong-ming
Computer Science. 2015, 42 (7): 125-128, 161.  doi:10.11896/j.issn.1002-137X.2015.07.027
Since the novel fuzzy set FScom given by Pan has several shortcomings in sketching the relationships among fuzzy knowledge and its three different sorts of negation, the generalized fuzzy set GFScom was proposed, which describes the above relationships well from the viewpoint of philosophy. Based on GFScom, this paper presented a novel sort of fuzzy comprehensive evaluation. The demonstration shows that, in terms of processing the three kinds of negation, GFScom is not only able to simplify the computation of the fuzzy comprehensive evaluation, but also makes the result reasonable and effective.
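As a reference point, the standard fuzzy comprehensive evaluation that GFScom extends computes B = A ∘ R by max-min composition of a factor weight vector A with a factor-grade relation matrix R; the sketch below uses invented membership values and is not GFScom-specific:

```python
def fuzzy_evaluate(weights, relation):
    """Standard max-min fuzzy comprehensive evaluation: b_j = max_i min(a_i, r_ij).
    weights: membership degree of each evaluation factor;
    relation[i][j]: degree to which factor i supports evaluation grade j."""
    grades = len(relation[0])
    return [max(min(a, row[j]) for a, row in zip(weights, relation))
            for j in range(grades)]
```

The grade with the largest resulting membership is taken as the overall evaluation; with three factors weighted (0.5, 0.3, 0.2) and a 3x3 relation matrix, the middle grade can dominate even when no single factor rates it highest.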
Automatic Configuration and Management of Business Process
WU Ya-zhou YANG Qi-fan JIANG Jian-min ZHANG Shi
Computer Science. 2015, 42 (7): 129-133, 141.  doi:10.11896/j.issn.1002-137X.2015.07.028
A business process model can be configured to meet the specific requirements of an organization. Automatic configuration activities require identifying the variants of a process model that can be configured, while ensuring the correctness of each particular process model. However, there are few methods that solve this problem. This paper proposed an innovative approach for automatically separating a configurable process model into atomic and correct sub-process models without abnormal behavioral problems. Compared with existing approaches, our approach avoids handling each configuration activity independently and reduces computational complexity. Moreover, our approach is language-independent.
Verification of Termination for Nondeterministic Quantum Programs Constituted by Two Kinds of Quantum Walks
LEI Hong-xuan
Computer Science. 2015, 42 (7): 134-137.  doi:10.11896/j.issn.1002-137X.2015.07.029
First, the model of nondeterministic quantum programs constituted by quantum walks on C3 and C4 was proposed. Second, the sets of reachable states, terminating states and diverging states of nondeterministic quantum programs starting from initial states under different measurement operators were discussed. It is shown that the termination, divergence, and the sets of reachable states and diverging states of nondeterministic quantum programs depend closely on the selection of measurement operators. A nondeterministic quantum program starting from a common initial state under different measurement operators may either terminate or diverge, and the terminating states and diverging states of nondeterministic quantum programs coexist in the sets of reachable states starting from common initial states.
Virtual Network Mapping Algorithm of Searching Virtual and Substrate Network Synchronously
PENG Li-min
Computer Science. 2015, 42 (7): 138-141.  doi:10.11896/j.issn.1002-137X.2015.07.030
Aiming at the resource allocation problem in virtual network mapping, a mapping model that searches the virtual network and the substrate network synchronously was proposed in this paper. By using the principle of Prim's minimum spanning tree algorithm, the virtual node waiting to be mapped in the virtual network and a suitable substrate node satisfying the current virtual node in the substrate network are searched simultaneously, so that adjacent nodes in the virtual network are mapped to adjacent nodes in the substrate network in a proper order; at the same time, the virtual nodes and their adjacent links are mapped in a harmonious way, so that the topological properties of the virtual network are also kept in the substrate network after mapping. Simulation results show that the DS-VNM algorithm can effectively reduce the mapping path length of virtual links, improve the ratio of network revenue to network cost and the acceptance ratio of virtual network requests, and achieve good performance when allocating network resources.
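The mapping model borrows the greedy frontier-expansion principle of Prim's algorithm; a minimal sketch of Prim's minimum spanning tree itself (not the DS-VNM mapping procedure) follows, with an invented weighted graph:

```python
import heapq

def prim_mst(n, edges):
    """Prim's algorithm. edges maps (u, v) pairs to weights; the graph must
    be connected. Returns (total weight, list of chosen tree edges)."""
    adj = {u: [] for u in range(n)}
    for (u, v), w in edges.items():
        adj[u].append((w, u, v))
        adj[v].append((w, v, u))
    visited = {0}
    frontier = list(adj[0])           # edges crossing from the tree outward
    heapq.heapify(frontier)
    total, chosen = 0, []
    while frontier and len(visited) < n:
        w, u, v = heapq.heappop(frontier)
        if v in visited:
            continue                  # stale edge: endpoint already absorbed
        visited.add(v)
        total += w
        chosen.append((u, v))
        for e in adj[v]:
            if e[2] not in visited:
                heapq.heappush(frontier, e)
    return total, chosen
```

In DS-VNM terms, the key idea carried over is that the next node to place is always chosen from the frontier adjacent to the already-mapped part, which keeps adjacent virtual nodes close in the substrate.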
Improved Power Allocation Strategy in OFDM Two-way Relaying
PAN Pei-sheng ZHAO Xi-feng
Computer Science. 2015, 42 (7): 142-145, 169.  doi:10.11896/j.issn.1002-137X.2015.07.031
Abstract PDF(396KB) ( 68 )   
In order to improve the system sum-capacity of the amplify-and-forward scheme for two-way relaying over orthogonal frequency division multiplexing (OFDM), this paper developed a new low-complexity power allocation strategy that maximizes the total system rate. The strategy first builds a linear power relation between the S-R and R-S links to reduce the optimization problem from three layers to one, so the power allocation problem can be reformulated into a more tractable form and the power proportion between each node and the sub-carrier is obtained with low complexity. Then, under the total power constraint, it employs the primal-dual interior-point method to obtain the power allocation of each sub-carrier pair, and finally it allocates the power of each sub-carrier pair to all nodes. Simulation results show that the proposed power allocation strategy can significantly improve the total system rate, and the performance improves as the number of sub-carriers increases.
Analysis of Fractal Property on GitHub Network Based on Weak Ties Theory
KUANG Li, YI Yun-fei and LI Yuan-xiang
Computer Science. 2015, 42 (7): 146-149.  doi:10.11896/j.issn.1002-137X.2015.07.032
Abstract PDF(302KB) ( 110 )   
Researchers have found that many real-world networks have fractal properties. It is widely believed that the fractal property originates from disassortative mixing; thus the fractal property of social networks, which are mostly assortative mixing, is rarely investigated. In open-source collaboration platforms there are many large projects in which many developers have no real collaboration activity. In this paper, we introduced the power of links to remove the weak links in the network. Based on renormalization group analysis, we found that the network transfers from a small-world network to a fractal network when the weak links are removed. Furthermore, by analyzing the Pearson correlation coefficient and neighbor connectivity, we found that the fractal networks formed by strong links are assortative mixing. This reinforces our former conclusion on the origin of fractality.
Dynamic Service Scheduling and Migration Scenarios Based on JBPM
WANG Yong, CHANG Jing-bo, QIANG Bao-hua, WANG Yu-feng and ZHANG Xue-qing
Computer Science. 2015, 42 (7): 150-155.  doi:10.11896/j.issn.1002-137X.2015.07.033
Abstract PDF(463KB) ( 151 )   
In order to adapt to dynamically changing business needs in complex environments and to support dynamic process scheduling and recycling for centralized services, a dynamic process scheduling and migration scheme based on the JBPM workflow engine was proposed. The scheme analyzes the limitations of current workflow process scheduling, presents a dynamic process orchestration model that draws on the characteristics of flexible workflow, and formally describes the four operations produced by process changes. Through a study of the issues triggered by process migration and continued execution, a process migration algorithm was proposed. Finally, instances and performance testing tools were employed to verify the feasibility and efficiency of the algorithm.
Network Behavior-oriented CDN Cache Allocation Strategy
FENG Xiang YANG Tan LI Song
Computer Science. 2015, 42 (7): 156-161.  doi:10.11896/j.issn.1002-137X.2015.07.034
Abstract PDF(486KB) ( 70 )   
Lying behavior may destroy the fairness of CDN cache allocation. The selfish lying behavior of servers during CDN cache allocation was studied with game theory. The essence of lying behavior is that servers apply for more cache than needed when the total cache is insufficient, and honestly apply for the requisite cache volume only when the total cache is sufficient. We proposed a fairness algorithm to deal with lying behaviors: the historical application volume is considered when calculating the new one, and an age factor is introduced to weight an application's effectiveness in different phases. In this way, lying servers are urged to stop lying because they lose more than honest servers. At the same time, we guaranteed the optimal throughput of the system and introduced a price mechanism to better satisfy the demands of honest servers. Experiments show that the fairness algorithm substantially mitigates lying behavior.
Adaptive Sampling Algorithm Based on TCP Congestion Strategy
YANG Ming-xia, WANG Wan-liang and SHAO Peng-fei
Computer Science. 2015, 42 (7): 162-164, 181.  doi:10.11896/j.issn.1002-137X.2015.07.035
Abstract PDF(310KB) ( 84 )   
We presented the design of a novel adaptive sampling technique based on the TCP congestion strategy, in which temporal data correlations provide an indication of the prevailing environmental conditions and are used to adapt the sensing rate of a sensor node. It uses irregular data series prediction to reduce the sampling rate, in combination with change detection to maintain data fidelity. The prediction method employs Wright's extension to Holt's method of exponential double smoothing (EDS), coupled with a change detection mechanism based on exponentially weighted moving averages (EWMA). The main advantages are that it does not require heavy computation, incurs low memory and communication overhead, and the prediction model can be implemented with ease on resource-constrained sensor nodes.
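The prediction pipeline described above (Holt's double exponential smoothing for the one-step forecast, an EWMA of prediction errors for change detection) can be sketched as follows; the smoothing parameters and threshold are illustrative values, not taken from the paper:

```python
def holt_predict(history, alpha=0.5, beta=0.3):
    """Holt's double exponential smoothing: maintain level and trend
    estimates, then return the one-step-ahead forecast level + trend."""
    level, trend = history[0], 0.0
    for x in history[1:]:
        prev = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return level + trend

def ewma_change(errors, lam=0.2, threshold=1.0):
    """EWMA over absolute prediction errors; flag a change (forcing a
    fresh sample) when the smoothed error exceeds the threshold."""
    z = 0.0
    for e in errors:
        z = lam * abs(e) + (1 - lam) * z
    return z > threshold
```

A node would raise its sampling rate when `ewma_change` fires and lower it while the forecast tracks the data well, analogous to TCP backing off on congestion and probing otherwise.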
Suppression Algorithm of Network Fluctuation Hop Signal Based on Perturbation Characteristic Decomposition and Feedforward Modulation
CHEN Wei-jun and SUI Dan
Computer Science. 2015, 42 (7): 165-169.  doi:10.11896/j.issn.1002-137X.2015.07.036
Abstract PDF(417KB) ( 84 )   
In network switching and data communication, a network time-frequency hop resonant signal can be produced, and such a fluctuation hop signal should be suppressed to improve network stability. An improved suppression algorithm for network fluctuation hop signals was proposed based on perturbation characteristic decomposition and feedforward modulation, and a resonant mathematical model of the network fluctuation hop signal was constructed. The ray model is used to estimate the transmission loss, a Doppler frequency shift algorithm is used to extract the disturbance characteristics, and slowly varying envelope slicing is used to gather the signal energy in the disturbance direction. In Hilbert space, the perturbation characteristic decomposition is obtained, the feedforward filter is designed, and signal suppression is completed. Simulation results show that this algorithm can effectively suppress the resonant signal in network fluctuation hops, avoid information loss and reduce the packet loss rate, with good real-time performance. Problems such as network startup delay, server load and jitter are alleviated.
Fog Computing Based Hospital Information Service System
CHEN Dong-mei and LI Zhi
Computer Science. 2015, 42 (7): 170-173, 190.  doi:10.11896/j.issn.1002-137X.2015.07.037
Abstract PDF(471KB) ( 60 )   
To address local information requirements that lack support in the cloud computing framework, this paper proposed a fog computing framework based on an "intelligent front-terminal" for mobile applications. The framework inserts a fog layer between cloud services and mobile devices, aiming at smooth, low-latency and economical service delivery from cloud to mobile. For the hospital scene, this paper designed and developed an information service system which provides medical information, queuing waiting time and multimedia services. Moreover, a multi-band load balancing algorithm and a queuing waiting time estimation algorithm based on local information were proposed for the system. The availability of the system and the efficiency of the algorithms were verified in real-world experiments.
Duplicate Data Remove Algorithm of Cloud Storage System Based on Fractional Fourier Transform
XU Yi-yi and TANG Pei-he
Computer Science. 2015, 42 (7): 174-177, 209.  doi:10.11896/j.issn.1002-137X.2015.07.038
Abstract PDF(671KB) ( 47 )   
Duplicate data in a cloud storage system is a major form of redundant data, and removing it effectively and promptly helps guarantee the stability and operation of the system. Because of data interference the SNR is low, so traditional methods produce false peaks in the fractional Fourier domain and cannot effectively detect and remove duplicate data. An improved duplicate data removal algorithm for cloud storage systems was proposed based on fractional Fourier transform cumulant detection. Firstly, the deletion architecture of the cloud storage system is established, the fitness function of data storage points is defined, and the random probability distribution function of system subsets over cloud storage nodes is obtained. The constraint function is used to block and calibrate the data of storage nodes, duplicate data removal detection is performed, and the fractional Fourier transform is used to pre-filter the residual signal in the cloud storage system. A fourth-order cumulant slice operator is used to divide each file into blocks; for each file block to be deleted, post-filtered duplicate data detection is obtained, and detection and deletion of data storage resources are realized. Simulation results show that this algorithm improves the resource utilization of clustered cloud storage systems and removes duplicate data accurately at a higher rate. It effectively avoids both erroneous removal caused by interference and missed removal, and has superior performance.
Security Model for Distributed System Based on Seal Calculus
HUANG Yong WU Jin-zhao
Computer Science. 2015, 42 (7): 178-181.  doi:10.11896/j.issn.1002-137X.2015.07.039
Abstract PDF(338KB) ( 58 )   
To address the weaknesses of current security models for distributed computation, this paper proposed a non-interference security model described in the setting of the Seal calculus. The new model reduces the characterization of system security to the location bisimulation equivalence of certain processes, taking the position and mobility of systems into consideration. This paper also proved that the model can define a composite security property according to the security requirements of distributed systems. Finally, a case study was presented to show the practical application of this model.
3D Chaos Map and Cellular Automata Based Image Encryption
LI Jing-yi CHEN Ju-hua
Computer Science. 2015, 42 (7): 182-185, 203.  doi:10.11896/j.issn.1002-137X.2015.07.040
Abstract PDF(388KB) ( 65 )   
To design a high-efficiency, high-performance image encryption system, a novel scheme was proposed based on a 3D chaos map and 2D second-order cellular automata. In this algorithm, image pixels are permuted using a 3D cat chaos map. Then, in the diffusion stage, each pixel value is transformed at the bit level using 2D second-order cellular automata. The scheme belongs to the class of symmetric systems, has a large key space, and both encryption and decryption are parallel. The results of several experiments on brute-force attack, statistical analysis and differential attack show that the scheme achieves high security in few iterations and does not enlarge the cipher image.
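The permutation stage can be illustrated with the classic 2D Arnold cat map, a simplified analogue of the 3D cat chaos map the paper uses; because the map is a bijection on the pixel grid, iterating it enough times restores the original image:

```python
def cat_map_permute(image, iterations=1):
    """Permute an N x N image with the 2D Arnold cat map:
    (x, y) -> (x + y mod N, x + 2y mod N).
    The map is a bijection, so the permutation is exactly invertible."""
    n = len(image)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = image[x][y]
        image = out
    return image
```

On a 2 x 2 grid the map has period 3, so three iterations return the original image; real schemes stop well before the period and key the iteration count.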
Pairing-free Certificateless Group Key Agreement Protocol for Wireless Sensor Network
QIAN Qi-feng CHENG Chun-ling
Computer Science. 2015, 42 (7): 186-190.  doi:10.11896/j.issn.1002-137X.2015.07.041
Abstract PDF(415KB) ( 89 )   
Due to the high computational overhead of group key agreement protocols in wireless sensor networks (WSN), this paper presented a pairing-free certificateless group key agreement protocol. During the system initialization phase, the partial private key is generated by the key generation center of the certificateless public key cryptosystem, and each node generates its private key by multiplying its secret value by the corresponding partial private key. During the node authentication phase, the protocol introduces a node authentication mechanism based on scalar multiplication on elliptic curves, verifying a node's identity by computing the scalar multiplication of the partial private key and the temporary public key with the authentication information. During the session key generation phase, the session key is generated using scalar multiplication to reduce the computational overhead. Finally, we analyzed the computational overhead and communication cost. The results show that this protocol not only ensures the security of node communication but also reduces computational overhead.
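The elliptic curve scalar multiplication that underpins both the authentication and session key phases can be sketched with the standard double-and-add method; the tiny curve y^2 = x^3 + 2x + 2 over F_17 with base point (5, 1) is a textbook toy example, not the protocol's actual parameters:

```python
def ec_add(P, Q, a, p):
    """Point addition on y^2 = x^3 + a*x + b over F_p (None = infinity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None  # P + (-P) = infinity
    if P == Q:
        s = (3 * P[0] ** 2 + a) * pow(2 * P[1], -1, p) % p  # tangent slope
    else:
        s = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p     # chord slope
    x = (s * s - P[0] - Q[0]) % p
    return (x, (s * (P[0] - x) - P[1]) % p)

def scalar_mult(k, P, a, p):
    """Double-and-add computation of k*P."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R
```

The key agreement relies on commutativity: a*(b*G) equals b*(a*G), so two parties holding secret scalars reach the same point.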
Research on Impossible Differential Attack of Cipher SMS4
SUN Cui-ling WEI Hong-ru
Computer Science. 2015, 42 (7): 191-193, 228.  doi:10.11896/j.issn.1002-137X.2015.07.042
Abstract PDF(299KB) ( 60 )   
To analyze impossible differential cryptanalysis of the block cipher SMS4, results were presented based on one 14-round impossible differential path. An impossible differential attack was applied to 16-round and 18-round reduced SMS4, and an improved result on 17-round CLEFIA-256 was given with the number of chosen plaintexts reduced to O(2^69.47). The computation shows that the attack on 16-round SMS4 needs O(2^103) chosen plaintexts and O(2^92) encryptions, and the attack on 18-round SMS4 needs O(2^104) chosen plaintexts and O(2^123.84) encryptions.
Declassification Policy Based on Automaton Monitoring
JIN Li and ZHU Hao
Computer Science. 2015, 42 (7): 194-199.  doi:10.11896/j.issn.1002-137X.2015.07.043
Abstract PDF(488KB) ( 47 )   
Static enforcement mechanisms for declassification policies have the flaw of being overly restrictive: they exclude programs judged secure by the semantic conditions of the declassification policies. In order to provide more permissive enforcement, we established dynamic monitoring mechanisms for the two-dimensional declassification policy based on automaton theory. Command events generated while a program runs are abstracted as inputs to the automaton, which uses them to track the information flow during execution. Any command that violates the declassification policy is forbidden. Additionally, we proved that the mechanisms based on automaton monitoring are sound.
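A minimal illustration of automaton-based monitoring, with hypothetical event names rather than the paper's actual command events: the automaton tracks whether a declassification has occurred and forbids releasing secret data before it.

```python
class DeclassificationMonitor:
    """Toy security automaton. State: whether 'declassify' has fired.
    Event names ('declassify', 'output_secret') are illustrative
    assumptions, not the paper's command syntax."""

    def __init__(self):
        self.declassified = False

    def step(self, event):
        """Consume one command event; return True if allowed, False if
        the monitor forbids (blocks) the command."""
        if event == 'declassify':
            self.declassified = True
            return True
        if event == 'output_secret':
            return self.declassified  # only allowed after declassification
        return True  # all other commands are permitted
```

A dynamic monitor like this accepts runs a static analysis would reject, since permission depends on the events actually observed at run time.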
Research on Revising Conflict Evidence of D-S Evidence Theory in Network Security Situation Awareness
KOU Guang, TANG Guang-ming and XU Zi-liang
Computer Science. 2015, 42 (7): 200-203.  doi:10.11896/j.issn.1002-137X.2015.07.044
Abstract PDF(334KB) ( 60 )   
D-S evidence theory is an important uncertainty reasoning method that has been widely applied. This paper mainly studied the application of D-S evidence theory in the data fusion process of network security situation awareness. Examining the contradiction between the combination results of multi-source evidence and intuition, it put forward a new solution: the scheme revises the original conflicting evidence using the idea of support, in order to resolve the conflict of evidence. Finally, a numerical example in a network security situation awareness environment demonstrated the feasibility of this method.
Self-adaptive Bit-level Colour Image Encryption Algorithm Based on Spatiotemporal Chaotic System
CHAI Xiu-li and GAN Zhi-hua
Computer Science. 2015, 42 (7): 204-209.  doi:10.11896/j.issn.1002-137X.2015.07.045
Abstract PDF(1014KB) ( 59 )   
A new self-adaptive colour image encryption scheme based on a spatiotemporal chaotic system was introduced, operating at the bit level. Firstly, the plain image is converted into R, G, B vectors, and then a self-adaptive method is employed to encrypt the image: the B vector is used to encrypt the R vector, giving R′; R′ is used to encrypt the G vector, giving G′; G′ is used to encrypt the B vector, giving B′; and B′ is used to encrypt R′, giving R″, after which the cipher image is obtained in one round. The encryption scheme is composed of a confusion process and a diffusion process. A spatiotemporal chaotic system, coupled map lattices (CML), is used to permute the positions of the image pixels at the bit level, and the logistic chaotic system is adopted to diffuse the shuffled bit image. The initial values of the chaotic systems are derived from the plain image, so the method is sensitive to the plain image. Tests and analyses of key space, image histogram, key sensitivity, correlation, information entropy, plain-image sensitivity and steganogram attack were carried out, and the results demonstrate the superior security and high efficiency of the proposed scheme. Moreover, the encryption scheme has great application potential in the field of secure image communication.
Secure Model of Cloud Storage Supporting Attribute Revocation
ZHANG Bing-hong, ZHANG Chuan-rong, JIAO He-ping and ZHANG Xin-wei
Computer Science. 2015, 42 (7): 210-215.  doi:10.11896/j.issn.1002-137X.2015.07.046
Abstract PDF(524KB) ( 89 )   
To solve the problems of coarse-grained attribute revocation for data users and the huge computation of key distribution in existing cloud storage models, we proposed a new secure cloud storage model supporting fine-grained attribute revocation over composite-order bilinear groups. The data owner is also the attribute distributing authority, ensuring absolute control over the data in the cloud, which keeps data stored in an open environment secure even when the cloud service provider is untrusted. We studied the model in two aspects: the framework of the model and key distribution. Strict mathematical proofs show that the scheme is adaptively secure. Based on the model, the data access strategy is flexible and diverse, making it suitable for open environments such as cloud storage.
Implementation and Detection of Network Covert Channel
DONG Li-peng CHEN Xing-yuan YANG Ying-jie SHI Wang
Computer Science. 2015, 42 (7): 216-221, 244.  doi:10.11896/j.issn.1002-137X.2015.07.047
Abstract PDF(870KB) ( 529 )   
A network covert channel uses normal network protocols to pass hidden information, providing carriers through which Trojans and spyware can circumvent security detection. Aiming at the problems that covert channels are numerous, their features are complicated and their detection is inconvenient, we analyzed their communication and application models, proposed a classification method based on the implementation mechanisms and abnormal features of network covert channels according to the basic features of protocols and fields, and analyzed existing detection methods and their weaknesses. Future research directions were also given.
Research on Framework of Safety Verification Based on Fault-extended SysML Activity Diagram
WU Zhi-peng HUANG Zhi-qiu WANG Shan-shan CAO De-jian
Computer Science. 2015, 42 (7): 222-228.  doi:10.11896/j.issn.1002-137X.2015.07.048
Abstract PDF(855KB) ( 50 )   
Embedded systems have been widely used in safety-critical areas such as energy and transportation, and safety analysis and verification of embedded software have long been a hot topic in both academia and industry. In order to unify the function model and the safety requirements analysis model of a system, we introduced SysML activity diagrams extended with the semantic information of fault trees. While retaining the semantics of both the fault tree and the SysML activity diagram, we presented a safety verification framework based on fault-extended SysML activity diagrams. Firstly, we used minimal cut sets to extract fault information and presented transformation rules for fault tree logic gates. Then, we presented the construction steps of fault-extended SysML activity diagrams. Finally, we modelled the fault-extended SysML activity diagrams in Promela and used the model checking tool SPIN to analyze and verify them. We verified the effectiveness of the method by applying it to a gas stove control system.
Research on Non-full Length Usage of SIMD Vector Instruction
XU Jin-long ZHAO Rong-cai ZHAO Bo
Computer Science. 2015, 42 (7): 229-233.  doi:10.11896/j.issn.1002-137X.2015.07.049
Abstract PDF(927KB) ( 57 )   
Large-scale SIMD architectures provide stronger hardware support for vector parallelism. However, a large number of loops with few iterations cannot provide sufficient parallelism and are difficult to vectorize in the equivalent full-width vector mode. In order to make full use of SIMD, this paper presented a vectorization method that uses SIMD vector instructions at non-full length. By studying vector register usage, we achieved non-full vector operations based on the non-full-length use of vector registers, which makes short loops vectorizable. This method was then used to vectorize common loops. Moreover, this paper provided a benefit analysis method to guide the vectorization. Experimental results show that the method is effective: the target loops of the selected test programs are vectorized and the average speedup is about 1.2.
Research of Keyword Search Model over RDF Data Graph
ZHENG Zhi-yun LIU Bo LI Lun WANG Zhen-fei
Computer Science. 2015, 42 (7): 234-239, 249.  doi:10.11896/j.issn.1002-137X.2015.07.050
Abstract PDF(569KB) ( 69 )   
As huge amounts of semantic Web data have sprung up, query efficiency over RDF data graphs has attracted growing attention, and retrieving RDF data graphs directly by keyword matching has become a research focus. In this paper, a retrieval model was proposed that enables keyword search over RDF graphs. First, to improve query efficiency, an algorithm named ISGR (an Iterative way to SubGraph Retrieval) was proposed, in which query keywords are matched with subgraphs of the RDF data graph, yielding a collection of subgraphs that are unique and maximal. Next, to address the redundant results and deviation that frequently arise in keyword search, a mixture ranking model (SimLM) was proposed, which considers the structural information between the keyword graph and the result graph and mixes in a statistical language model. A number of comparative experiments on two kinds of open-source real datasets show that the proposed retrieval and ranking models outperform well-known techniques in terms of consistency and relevance.
Geo-temporal Intent and Preference-based Personalized Web Search Framework GT-WSearch
YANG Dan, SHEN De-rong and CHEN Mo
Computer Science. 2015, 42 (7): 240-244.  doi:10.11896/j.issn.1002-137X.2015.07.051
Abstract PDF(430KB) ( 76 )   
Traditional Web search ignores users' geographic (geo) and temporal preferences. Personalizing Web search results according to a query's implicit geo-temporal intent and preferences can greatly help satisfy users' different information needs. The GT-WSearch framework was proposed to capture users' geo-temporal preferences by leveraging both a query profile and a user profile. The query profile signals the geo-temporal characteristics of the query itself; the user profile is obtained by mining not only search result snippets but also user click-through data. GT-WSearch considers the geo-temporal relevance of documents when re-ranking search results. Experimental results show that our personalization approach effectively improves the quality of Web search results.
Spatiotemporal Convolutional Neural Networks and its Application in Action Recognition
LIU Cong XU Wei-sheng WU Qi-di
Computer Science. 2015, 42 (7): 245-249.  doi:10.11896/j.issn.1002-137X.2015.07.052
Abstract PDF(960KB) ( 97 )   
The key thing that distinguishes action recognition from other recognition tasks is encoding motion explicitly, but so far most works based on convolutional neural networks (CNN) cannot properly handle the spatiotemporal interactions in video. We developed a spatiotemporal CNN that explicitly exploits this important cue: our approach multiplies filter responses instead of summing them. Specifically, the spatiotemporal CNN divides the convolutional kernels into two groups forming the sinusoids of a Fourier transform. The responses of the convolutional kernels are then multiplied in a multiplicative layer, analogous to computing covariance, and the outputs are fed into a sum layer. In this way, the inputs and adjacent frames are mapped into the subspaces spanned by the eigenvectors, and particular geometric transformations or motion features can be detected from the rotation angles in that space. A theoretical analysis further shows that the spatiotemporal CNN is sensitive to both motion and content. Experiments show that our approach produces more accurate classification than current algorithms.
Two Novel Tree Structure-based Methods for Gene Selection
XIE Qian-qian, LI Ding-fang and ZHANG Wen
Computer Science. 2015, 42 (7): 250-253.  doi:10.11896/j.issn.1002-137X.2015.07.053
Abstract PDF(359KB) ( 74 )   
Cancer diagnosis is one of the most significant topics in bioinformatics. For microarray datasets, selecting a small subset of genes from thousands (gene selection) is helpful for accurate identification and treatment of cancerous tumors. Motivated by the built-in variable importance measure of random forests (named "PBM"), we proposed two novel tree-structure-based methods for gene selection, namely FBM and ABM. They make use of, respectively, gene frequency and average scores yielded by a large number of decision trees constructed on the microarray datasets. In computational experiments, the optimal gene subsets were determined by the three methods, and random-forest classifiers were built on the subsets to evaluate the gene selection methods. AUC scores of PBM exceed 0.900 when selecting 26 genes for the leukemia dataset and 48 genes for the colon cancer dataset, while the classifiers with FBM and ABM achieve AUC scores of 0.989 for the leukemia dataset and 0.900 for the colon cancer dataset, respectively, with only the top ten selected genes. In addition, the proposed methods outperform established methods such as mRMR and ECRP, and can play a critical role in the accurate diagnosis and treatment of cancer.
Weighted Nuclear Norm Minimization Model for Matrix Completion
ZHANG Wei-qi, ZHANG Hong-zhi, ZUO Wang-meng and CUI Meng-tian
Computer Science. 2015, 42 (7): 254-257, 290.  doi:10.11896/j.issn.1002-137X.2015.07.054
Abstract PDF(385KB) ( 82 )   
Collaborative filtering is one of the popular techniques used in recommendation systems and has some advantages over traditional recommendation technologies, but it is constrained by data sparsity. Matrix completion can be used to solve this problem. This paper proposed a weighted nuclear norm minimization (WNNM) model for matrix completion that improves the flexibility of the nuclear norm. Under certain conditions, it can be proved to attain the global optimal solution, and convergence of another form of the proposed model was also confirmed. On two real datasets, a convex optimization algorithm for nuclear norm minimization was implemented to verify the proposed model. The results show that it improves computational speed and accuracy to some extent.
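The flexibility the weighted nuclear norm adds can be seen in its proximal step: each singular value is soft-thresholded by its own weight. Choosing weights inversely proportional to the singular values (a common WNNM-style choice, assumed here rather than taken from the paper) shrinks small, noise-dominated values more than large, informative ones:

```python
def wnnm_shrink(singular_values, c=1.0, eps=1e-8):
    """WNNM-style shrinkage of a matrix's singular values.
    Weight w_i = c / (sigma_i + eps): large singular values get small
    weights (shrunk little), small ones get large weights (often zeroed).
    The plain nuclear norm is the special case of one uniform weight."""
    shrunk = []
    for s in singular_values:
        w = c / (s + eps)
        shrunk.append(max(s - w, 0.0))
    return shrunk
```

In a full solver, the completed matrix is rebuilt from the shrunk singular values at each iteration while the observed entries are kept fixed.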
Video Prefetching Strategy Based on Congestion Finding with Reinforcement Learning in P2P VOD Networks
SHEN Xiang-jun, YAO Yin and ZHA Zheng-jun
Computer Science. 2015, 42 (7): 258-261, 275.  doi:10.11896/j.issn.1002-137X.2015.07.055
Abstract PDF(408KB) ( 63 )   
A proper video prefetching strategy in P2P (peer-to-peer) VOD (video-on-demand) networks can effectively address problems such as overlong buffering times and overloaded servers. Existing video prefetching methods in P2P networks only consider finding video content and ignore monitoring node status, which leads to poor playback when network congestion occurs. This paper presented a video prefetching strategy based on congestion discovery with reinforcement learning in P2P VOD networks. By monitoring node status such as congestion and bandwidth, the Q-learning algorithm is used to evaluate network nodes, which guides node selection and reduces video prefetching from congested nodes. Simulation results show that the proposed method makes video playback smooth, avoids overlong waiting times when congestion happens, and improves the efficiency of video playback.
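The node evaluation step can be sketched with the standard tabular Q-learning update; the states, actions and rewards (node identifiers, a prefetch action, a congestion-based reward) are illustrative assumptions, not the paper's exact formulation:

```python
def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    q is a dict mapping (state, action) -> value; unseen pairs default to 0."""
    best_next = max((v for (s, a), v in q.items() if s == next_state),
                    default=0.0)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q[(state, action)]
```

A peer would reward fast, uncongested transfers and penalize congested ones, then prefetch from the neighbour with the highest learned Q-value.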
Mining Residues Distance Constraints from Protein Evolution Couplings by Classification
WANG Cai-xia, LV Qiang, LI Hai-ou and LUO Sheng
Computer Science. 2015, 42 (7): 262-264, 299.  doi:10.11896/j.issn.1002-137X.2015.07.056
Abstract PDF(354KB) ( 42 )   
Protein evolution couplings refer to the relatively stable interactions between residues formed in the process of evolution. Based on known evolution couplings, we adopted machine learning classification techniques to convert evolution couplings into distance constraints; thus quantitative residue distance constraints are extracted from qualitative protein evolution couplings, which can serve as new guidance for protein structure prediction.
Positional Language Model-based Chinese IR System
CHEN Ya-lan, HU Xiao-hua, TU Xin-hui and HE Ting-ting
Computer Science. 2015, 42 (7): 265-269.  doi:10.11896/j.issn.1002-137X.2015.07.057
Abstract PDF(450KB) ( 107 )   
Most existing retrieval models overlook the fact that the proximity of matched query terms in a document, as exploited in passage retrieval, can also be used to promote document scores. Inspired by this, a Chinese information retrieval system based on the positional language model was proposed. Firstly, we defined the concept of propagated count to establish a positional language model for each position. Then, by combining the KL-divergence retrieval model with the positional language model, we scored each individual position. Finally, we scored the document using a multi-parameter strategy. The experiments also compare the retrieval effectiveness of two Chinese indexing approaches, multi-character-based and dictionary-based, on positional language models. Experiments on the standard NTCIR5 and NTCIR6 test sets show that both indexing approaches improve the IR system greatly, and it performs better than the vector space model, Okapi BM25 model and classical language model.
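The propagated count underlying a positional language model can be sketched as follows: every occurrence of a term contributes to every position through a proximity kernel, here the commonly used Gaussian (the kernel choice and bandwidth are illustrative assumptions):

```python
import math

def propagated_count(term_positions, i, sigma=2.0):
    """Propagated count of a term at position i: each occurrence at
    position j contributes exp(-(i-j)^2 / (2*sigma^2)), so nearby
    occurrences count almost fully and distant ones almost not at all."""
    return sum(math.exp(-(i - j) ** 2 / (2 * sigma ** 2))
               for j in term_positions)
```

Normalizing these counts over the vocabulary at each position yields the per-position language model that the KL-divergence retrieval model then scores.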
Multi-label Text Classification Based on Robust Fuzzy Rough Set Model
ZHANG Jing, LI De-yu, WANG Su-ge and LI Hua
Computer Science. 2015, 42 (7): 270-275.  doi:10.11896/j.issn.1002-137X.2015.07.058
Abstract PDF(494KB) ( 58 )   
Owing to the uncertainty of multi-label data and noisy data, a novel multi-label robust fuzzy rough classification model was proposed, which extends the k-means robust statistics fuzzy rough classification model used for single-label classification. First, for each unlabeled instance, the membership with respect to each label is obtained by similarity measures. Second, according to the membership, the degree of correlation is defined. Finally, an appropriate threshold is given to demarcate correlated and uncorrelated labels. Experimental results on three benchmark multi-label datasets and one real-world multi-label dataset indicate that the proposed model is superior to ML-kNN and Rank-SVM across six popular multi-label evaluation metrics.
Annotation and Classification of Temporal Relation between Chinese Events
ZHENG Xin LI Pei-feng ZHU Qiao-ming
Computer Science. 2015, 42 (7): 276-279, 313.  doi:10.11896/j.issn.1002-137X.2015.07.059
Abstract PDF(414KB) ( 226 )   
References | Related Articles | Metrics
The research on temporal relation between events plays an important role in natural language processing,such as question answering system,information extraction and text summarization.We first focused on the temporal relation between Chinese events,divided it into four categories and presented an annotation method.Then we proposed a method to classify the temporal relation between Chinese events.Finally,the results of the classification on the annotated corpus show that our method outperforms the rule-base baseline.
Constrained Nonnegative Matrix Factorization with Sparseness for Image Representation
HU Xue-kao, SUN Fu-ming and LI Hao-jie
Computer Science. 2015, 42 (7): 280-284, 304.  doi:10.11896/j.issn.1002-137X.2015.07.060
Abstract PDF(997KB) ( 84 )   
References | Related Articles | Metrics
Matrix decomposition is widely applied in many domains since it is exploited to process the large-scale data.To the best of our knowledge,nonnegative matrix factorization (NMF) is a non-negative decomposition method under the condition that constraint matrix elements are non-negative.By using the informati on provided by a few known labeled examples and large number of unlabeled examples as well as imposing a certain sparseness constraint on NMF, this paper put forward a method called constraint nonnegative matrix factorization with sparseness (CNMFS).In the algorithm,an effective update approach is constructed,whose convergence is proved subsequently.Extensive experiments were conducted on common face databases,and the comparisons with two state-of-the-art algorithms including CNMF and NMF demonstrate that CNMFS has superiority in both sparseness and clustering.
Group Feature Selection Algorithm for Data Sets with Missing Data
Computer Science. 2015, 42 (7): 285-290.  doi:10.11896/j.issn.1002-137X.2015.07.061
Abstract PDF(460KB) ( 61 )   
References | Related Articles | Metrics
Many real data increase dynamically in size.With the rapid development of data processing tools,new data are usually increased in groups.In this paper,based on rough set theory,a group rough feature selection algorithm was proposed to deal with dynamic data sets with missing data.Firstly,the group incremental mechanism of information entropy was analyzed,and then significance of feature was defined based on the mechanism.On this basis,a group feature selection algorithm was constructed,which can be used to deal with dynamic data sets with missing data effectively.Experimental results show that the new algorithm is feasible and efficient.
Automatic Identification and Rule Mining for Relation Words of Chinese Compound Sentences Based on Bayesian Model
YANG Jin-cai GUO Kai-kai SHEN Xian-jun HU Jin-zhu
Computer Science. 2015, 42 (7): 291-294.  doi:10.11896/j.issn.1002-137X.2015.07.062
Abstract PDF(417KB) ( 53 )   
References | Related Articles | Metrics
The compound sentence is an important unit of the Chinese sentence and its annotation is important to the research on comprehending Chinese texts.Identification of relation words is the basis of compound sentence annotation.Based on a comprehensive analysis of Chinese compounds corpus,this paper extracted features of relation words from their context and collocation.Those features are described in formulas.A combination of mutual information with information gains is used for selecting features and eliminating redundant features.The Bayesian model is used for training and testing feature sets.Rules are created from the statistics results,and rule base is configured with rules,which are used for automatic identification of relation words.The experimental results show that our method obtains a high accuracy in identification,which proves the feasibility and effectiveness of the method.
Anti Congestion Vehicle Path Planning Algorithm Based on Cloud Grid Integrated Scheduling
XUE Ming XU De-gang
Computer Science. 2015, 42 (7): 295-299.  doi:10.11896/j.issn.1002-137X.2015.07.063
Abstract PDF(946KB) ( 71 )   
References | Related Articles | Metrics
In the road traffic network,traffic congestion problem is a complicated dynamic process of interaction between flow and the structure of the network.Through the vehicle path planning,the integration of the road network grid scheduling is realized,and traffic throughput can be improved.The traditional method adopts parallel microscopic traffic dynamic prediction algorithm to realize the vehicle congestion scheduling and vehicle routing planning,but the algorithm can not accurately judge the density of vehicles,and the performance is not good.An improved anti congestion vehicle path planning algorithm was proposed based on cloud grid integrated scheduling.The cloud road network model is constructed based on Small-World model,and RFID label is used to collect the traffic information.The intrinsic mode function weighted average is used to calculate the vehicle congestion state function of each lane,and the density of vehicles in all lanes is obtained from the statistical average available vehicle density cluster.The traffic road network congestion detection algorithm was designed,searching for the current road information of individual one-dimensional neighbor,then the vehicle path planning and best objective function optimization are realized.The dynamic game way is used to get the approximate optimal trajectory to improve the path planning algorithm.The simulation results show that the algorithm can accurately achieve the optimal vehicle path planning and control,and traffic speed and network throughput performance are improved in severe congestion state.It has better performance than traditional method.
Study on High Compact Recognition Method for Continuously Overlaid Chinese Handwriting
SU Tong-hua, DAI Hong-liang, ZHANG Jian, MA Pei-jun and DENG Sheng-chun
Computer Science. 2015, 42 (7): 300-304.  doi:10.11896/j.issn.1002-137X.2015.07.064
Abstract PDF(681KB) ( 95 )   
References | Related Articles | Metrics
Continuous Chinese handwriting recognition is the primary bottleneck for Chinese handwritten character input method.Naturally and quickly inputting Chinese text is the fundamental goal to the pattern recognition field even to the artificial intelligence.A novel recognition method was proposed for overlaid Chinese handwriting.It follows a segmentation-recognition integrated framework.Firstly,an over-segmentation algorithm is used to partition the handwriting trajectory.Then a perceptron algorithm is developed to locate the candidate character boundaries.Finally,multiple contexts including character recognition score,geometrical score and linguistic score,are utilized to decode the optimal recognition path.To match different mobile terminals,an appealing compression algorithm was proposed to make the character classifier compact,which reduces the storage consumption both in memory fingerprint and disk storage.The principled method is successfully ported to Android platform,enabling overlaid Chinese handwriting to be input on smart phones and further tested on large overlaid Chinese handwriting samples.Experimental results verify the effectiveness and efficiency of the method.It also works smoothly on smart phone,whose overlapped handwriting input function makes handwriting input remarkably efficient.
Soft Combination of Probabilistic Neural Network Classifiers for Face Recognition
ZHAI Jun-hai ZHAO Wen-xiu
Computer Science. 2015, 42 (7): 305-308.  doi:10.11896/j.issn.1002-137X.2015.07.065
Abstract PDF(314KB) ( 56 )   
References | Related Articles | Metrics
Probabilistic neural network (PNN) classifiers have fast learning speed and can be easily implemented.The outputs of PNN are posterior probabilities which facilitate the soft combination of classifiers.We proposed a face recognition algorithm named SCPNN,which combines PNN classifiers with fuzzy integral,and makes full use of the superiori-ty of PNN and ensemble learning.The main steps of the proposed method include:the incomplete wavelet packet decomposition of face images,training PNN classifiers with wavelet subspace images which include low frequency components and combination of the trained PNN classifiers by fuzzy integral.The proposed algorithm SCPNN was compared with 3 matrix subspace algorithms on 4 face databases,which are JAFFE,YALE,ORL and FERET.The experimental results confirm that the proposed method outperforms the 3 matrix subspace algorithms in recognition accuracy and CPU time.
Image Object Detection Based on SC-AdaBoost
ZHANG Zhao-hui, LIU Yong-xia and LEI Qian
Computer Science. 2015, 42 (7): 309-313.  doi:10.11896/j.issn.1002-137X.2015.07.066
Abstract PDF(447KB) ( 62 )   
References | Related Articles | Metrics
Although AdaBoost-based object detection from image/video dada holds the characteristics of good detection precision and high detection speed,the training procedure is much more slowly especially when the number of both samples and feature dimensionality is high.With the aim of efficiently improving the training performance,this paper proposed an algorithm called SC-AdaBoost.Experimental results for car detection based on VOC2006 datasets show that when the number of training samples is very large,the proposed algorithm can evidently reduce the whole training time without loss of detection performance.
Dense Depth Map Reconstruction via Image Guided Second-order Total Generalized Variation
WU Shao-qun YUAN Hong-xing AN Peng CHENG Pei-hong
Computer Science. 2015, 42 (7): 314-319.  doi:10.11896/j.issn.1002-137X.2015.07.067
Abstract PDF(954KB) ( 52 )   
References | Related Articles | Metrics
The depth map reconstruction using image colors may recovery the depth discontinuities at object boundaries,but will damage depth uniformities inside objects.In order to solve this problem,we formulated depth reconstruction as a convex optimization problem which is regularized by image guided total generalized variation.By incorporating image diffusion tensor into the variation regularizer,the proposed method generates piecewise smooth depth while preserving discontinuities at object boundaries.To efficiently solve the problem,a first-order primal-dual scheme was derived based on the Legendre-Fenchel transformation.Experimental results demonstrate that our method can preserve depth discontinuities at object boundaries and uniformities inside objects and outperform existing methods in terms of peak signal-to-noise ratio,normalized cross-covariance and mean absolute error.