Started in January 1974 (Monthly)
Supervised and Sponsored by Chongqing Southwest Information Co., Ltd.
ISSN 1002-137X
CN 50-1075/TP
Current Issue
Volume 43 Issue 11, 01 December 2018
Visual Object Tracking Based on Correlation Filters:A Survey
Wei Quanlu, Lao Songyang and Bai Liang
Computer Science. 2016, 43 (11): 1-5, 18.  doi:10.11896/j.issn.1002-137X.2016.11.001
Abstract PDF(2017KB) ( 188 )   
References | Related Articles | Metrics
WEI Quan-lu, male, is a Ph.D. candidate; his main research interests include computer vision and multimedia information systems. LAO Song-yang, male, Ph.D., is a professor; his main research interests include multimedia information systems and virtual reality, human-computer interaction, and command decision analysis. BAI Liang, male, Ph.D., is an associate professor; his main research interests include digital video and audio processing and retrieval. (Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha 410073, China. Received: 2015-10-30; Revised: 2016-03-15.)
Visual object tracking is a fundamental task in many computer vision applications, and it remains an active research field because of the many difficulties present in real scenes. The core component of most modern trackers is a discriminative classifier, which learns a binary classifier online to separate the object from the surrounding background in each frame; by drawing on the abundant learning algorithms of machine learning, such trackers have achieved many breakthroughs. Correlation filters have been successfully applied to a variety of pattern recognition tasks such as object detection and recognition, and, owing to their high computational efficiency, they have been introduced into visual tracking in recent years as a discriminative tracking method, achieving very good results. This paper first briefly introduces discriminative tracking algorithms, then describes the fundamental theory of correlation filters and several typical methods for constructing them, and finally reviews in detail the recent applications and research progress of correlation filters in visual tracking, summarizing possible research directions and development trends.
Keywords: Visual tracking, Discriminative learning methods, Correlation filters
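The tracking-by-correlation principle at the heart of this survey can be sketched in one dimension: correlate a template against a frame and read the object's displacement off the response peak. This toy uses the simplest possible filter (a matched filter, not a learned MOSSE/KCF-style filter), and all signals and names are illustrative, not taken from the surveyed work.

```python
def circular_correlation(signal, template):
    """Circular cross-correlation; response[k] scores the hypothesis
    that the template appears in the signal at shift k."""
    n = len(signal)
    return [sum(signal[(k + i) % n] * template[i] for i in range(n))
            for k in range(n)]

def locate(signal, template):
    """Return the shift with the strongest correlation response."""
    response = circular_correlation(signal, template)
    return max(range(len(response)), key=lambda k: response[k])

# "Frame 1": the object appearance, used directly as the template.
template = [0.0, 1.0, 3.0, 1.0, 0.0, 0.0, 0.0, 0.0]
# "Frame 2": the same pattern shifted right by 3 samples.
frame = [0.0, 0.0, 0.0, 0.0, 1.0, 3.0, 1.0, 0.0]

shift = locate(frame, template)  # estimated displacement of the object
```

Real correlation filter trackers compute this response in the frequency domain via the FFT, which is where their speed advantage comes from.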
Review of Research and Application on Hadoop in Cloud Computing
XIA Jing-bo, WEI Ze-kun, FU Kai and CHEN Zhen
Computer Science. 2016, 43 (11): 6-11, 48.  doi:10.11896/j.issn.1002-137X.2016.11.002
Abstract PDF(614KB) ( 341 )   
Hadoop is one of the most popular technologies in cloud computing and big data today, and the combination of its software ecosystem with Spark influences both academic research and business models. This paper first introduces the origin and advantages of Hadoop and clarifies the relevant technical principles, such as MapReduce, HDFS, YARN and Spark. It then analyzes current academic research on Hadoop, summarized in four aspects: improvement and innovation of the MapReduce algorithm, optimization and innovation of HDFS technology, secondary development and integration with other systems, and innovation and practice in application fields. The state of domestic and foreign applications is then described. Hadoop combined with Spark is the trend of the future. The paper finally discusses future research directions and some crucial problems that urgently need to be solved.
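The MapReduce principle this survey refers to can be sketched as a toy, single-process word count; real Hadoop distributes the map, shuffle and reduce phases across a cluster, and the function names here are illustrative only.

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in the input split.
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Shuffle: group intermediate values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["hadoop spark hadoop", "spark yarn"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
```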
Advances Research on Color Constancy Computation under Single Illuminant
TANG Zheng, LIU Hong-zhe and YUAN Jia-zheng
Computer Science. 2016, 43 (11): 12-18.  doi:10.11896/j.issn.1002-137X.2016.11.003
Abstract PDF(1280KB) ( 228 )   
Color constancy computation under a single illuminant is one of the most popular topics in computer vision and machine learning. The soundness of the algorithm directly affects the quality of automatic white balancing and other applications. Many color constancy methods currently exist; this paper divides them into unsupervised methods, supervised methods, algorithm fusion, and color-invariant description. It highlights four typical approaches to color constancy computation and analyzes their respective advantages and disadvantages. Finally, we briefly discuss evaluation methods and trends in color constancy computation.
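One classical unsupervised method in this family is the gray-world assumption: the average scene reflectance is assumed achromatic, so each channel is rescaled until the channel means agree. A minimal sketch on a hypothetical 3-pixel image (not an algorithm from the surveyed paper itself):

```python
def gray_world(pixels):
    """pixels: list of (r, g, b) tuples. Returns white-balanced pixels
    whose per-channel means all equal the overall gray level."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    gains = [gray / m for m in means]          # per-channel correction gains
    return [tuple(p[c] * gains[c] for c in range(3)) for p in pixels]

# Image under a reddish illuminant: the red channel is uniformly doubled.
image = [(200, 100, 100), (120, 60, 60), (40, 20, 20)]
balanced = gray_world(image)
```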
Representative Selection Framework Approach for Videos
JIANG Yong and ZHANG Hai-tao
Computer Science. 2016, 43 (11): 19-23, 60.  doi:10.11896/j.issn.1002-137X.2016.11.004
Abstract PDF(513KB) ( 49 )   
In order to handle massive video data, a representative selection method is proposed that identifies an optimal subset of data points to represent the original massive dataset. The selected subset captures the inner structure of the original dataset. The method builds a 1-norm non-negative sparse graph over the dataset, and the data points are partitioned into clusters by a spectral clustering algorithm based on this graph. Each cluster is viewed as a point on a Grassmann manifold, and the geodesic distances among these points are measured. A min-max algorithm analyzes the geodesic distances to build an optimal subset of clusters. Finally, the principal component centrality method detects a representative after analyzing the sparse graph of the selected clusters. The proposed framework is validated on video summarization, where a few key frames must be selected from long video clips containing massive numbers of frames. A comparison between the proposed algorithm and several state-of-the-art methods was conducted; the results indicate the effectiveness and feasibility of the proposed framework.
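The min-max idea behind selecting representatives can be sketched with greedy farthest-point selection: pick k representatives so the largest distance from any point to its nearest representative stays small. This is only the min-max step under simplifying assumptions; the full framework's sparse graphs and Grassmann geodesics are omitted, and the data points are made up.

```python
import math

def farthest_point_selection(points, k):
    """Greedy min-max selection of k representative indices."""
    reps = [0]                          # seed with the first point
    while len(reps) < k:
        # For each point, its gap is the distance to its nearest representative;
        # choose the point whose gap is largest.
        def gap(i):
            return min(math.dist(points[i], points[r]) for r in reps)
        reps.append(max(range(len(points)), key=gap))
    return reps

# Two tight pairs plus one outlier: representatives should spread across them.
points = [(0, 0), (0.1, 0.1), (10, 0), (10.2, 0.1), (5, 9)]
reps = farthest_point_selection(points, 3)
```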
Higher-order Logic Formalization of Function Matrix and its Calculus
YANG Xiu-mei, GUAN Yong, SHI Zhi-ping, WU Ai-xuan, ZHANG Qian-ying and ZHANG Jie
Computer Science. 2016, 43 (11): 24-29.  doi:10.11896/j.issn.1002-137X.2016.11.005
Abstract PDF(437KB) ( 182 )   
Function matrices are widely employed in the modeling and analysis of dynamic systems. Paper-and-pencil calculations, numerical calculations and symbolic algebra methods, which are used in traditional analyses of function matrices, cannot guarantee accurate or correct results. Higher-order logic theorem proving, as a highly reliable formal verification method, can overcome these shortcomings. This paper formalizes the related theories of function vectors and function matrices in the higher-order logic theorem prover HOL4. The formalization includes the formal definitions of function vectors and function matrices, together with proofs of their properties of continuity, differentiation and integration. A formal verification of the differential equation of the rotation matrix in robot kinematics is given to illustrate the application of the formalization.
High-precision Architecture-level Power Model Based on GPU
WANG Zhuo-wei, CHENG Liang-lun and XIAO Hong
Computer Science. 2016, 43 (11): 30-35.  doi:10.11896/j.issn.1002-137X.2016.11.006
Abstract PDF(505KB) ( 77 )   
As hardware functions constantly develop and software development environments gradually mature, the graphics processing unit (GPU) has been applied to general-purpose computation to help the central processing unit (CPU) accelerate programs. To obtain high performance, a GPU generally contains hundreds of core arithmetic units. Owing to these high-density computing resources, the performance of a GPU is much superior to that of a CPU, but its power consumption is also larger. Power consumption has become one of the important issues restricting the development of GPUs. Based on a study of the Fermi GPU architecture, a high-precision architecture-level power model is proposed in this research. In this model, the power consumed by each kind of native instruction and each kind of memory access is first calculated; the power consumption is then analysed and predicted according to the instructions executed on the hardware, and sampling results are acquired using sampling instruments. Finally, the results obtained from practical testing and from the power model are compared using 13 benchmark applications. It is demonstrated that the prediction accuracy of the model can reach approximately 90%.
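The shape of such an instruction-level power model can be sketched as energy-per-operation times operation count, divided by runtime. The energy costs and counts below are invented for illustration, not the paper's measured Fermi values.

```python
def predict_power(instr_counts, energy_nj, runtime_s):
    """Average power (W) = total energy (J) / runtime (s).
    instr_counts: executed count per operation type;
    energy_nj: assumed energy cost per operation, in nanojoules."""
    total_nj = sum(count * energy_nj[op] for op, count in instr_counts.items())
    return total_nj * 1e-9 / runtime_s

# Hypothetical per-operation energies (nJ) and execution counts for one kernel.
energy_nj = {"fadd": 0.9, "fmul": 1.1, "global_load": 45.0}
counts = {"fadd": 4_000_000_000, "fmul": 2_000_000_000, "global_load": 100_000_000}
power_w = predict_power(counts, energy_nj, runtime_s=0.1)
```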
Change Propagation Analysis in Business Process Based on Behavior Inclusion and Behavior Inheritance of Petri
FANG Xian-wen, ZHAO Fang, LIU Xiang-wei and FANG Huan
Computer Science. 2016, 43 (11): 36-39.  doi:10.11896/j.issn.1002-137X.2016.11.007
Abstract PDF(321KB) ( 52 )   
Business process modeling is the core problem of business process management, the purpose of which is to adapt flexibly to changing business needs. However, a series of problems can arise during modeling, and the same problem may exist in several models, which in turn gives rise to the propagation of changes. Existing methods analyze change propagation mainly through the reduction of boundary nodes and inter-boundary nodes, and therefore have some limitations. To study the propagation of changes, based on the trace equivalence and behavior inclusion relations between the source and target models, we give the change region of the source model and use a change propagation analysis method based on behavior inheritance to find the change region of the target model. Finally, a concrete process instance is used to show the feasibility of this method.
BPMN Formalization Based on Extended Petri Nets Model
LI Zong-hua, ZHOU Xiao-feng, WU Ke-li and CHEN Fu-bing
Computer Science. 2016, 43 (11): 40-48.  doi:10.11896/j.issn.1002-137X.2016.11.008
Abstract PDF(3421KB) ( 148 )   
The business process modeling notation (BPMN), as a standard that captures business processes in the early phases of system development, guides design and development. The correctness of a BPMN model is key to software success. Since a formal BPMN model can be used to verify the model's correctness, an extended Petri net model is proposed, applying model-driven development technology to realize automatic execution of the formalization of BPMN models. By refining the Transition and Place elements of basic Petri nets and adding Organization Identifier and Group Identifier containers, the model can describe not only the dynamic behaviour of a BPMN model but also the cooperation of dynamic behaviours and the static organization structure of the BPMN model. This paper analyzes the elements of the extended Petri net model in detail in terms of metamodel structure, grammar and notation, and uses model-driven development to design the transformation rules from BPMN model elements to extended Petri net model elements. To carry out the automatic execution of the formalization, the transformation rules are implemented in the ATL model transformation language, and the ATL transformation code runs on the Eclipse framework. Finally, a Travel Agency system is used to demonstrate the execution result of the formalization plug-in (BPMN2 Extend Petrinets).
Transforming UML to GSPN for Performance Analysis
HU Xiang, JIAO Li and CHAI Ye-sheng
Computer Science. 2016, 43 (11): 49-54.  doi:10.11896/j.issn.1002-137X.2016.11.009
Abstract PDF(489KB) ( 113 )   
A UML model cannot be analyzed for performance requirements directly; it should be transformed into analyzable models such as queueing models, stochastic process algebra models or stochastic Petri net models. In this paper, three kinds of UML models (use case diagrams, deployment diagrams and activity diagrams), together with suitable annotations from the MARTE profile, are chosen to build performance models with the tool Papyrus on the Eclipse platform. The UML models are transformed into GSPN models by ATL, and the obtained GSPN models are further transformed into formats that existing analyzers support. Finally, performance can be analyzed using GSPN-based performance analysis methods. Several performance metrics are given to investigate the system, including utilization, throughput, the average number of waiting requests and response time, which system designers and developers can refer to when analyzing and optimizing performance.
Reachability Tree of Mobile Net
ZHANG Rui-hua, YANG Ru and DING Zhi-jun
Computer Science. 2016, 43 (11): 55-60.  doi:10.11896/j.issn.1002-137X.2016.11.010
Abstract PDF(472KB) ( 115 )   
With the fast development of computer technology and network communication technology, process algebra, Petri nets and other formal analysis methods have been proposed for concurrent distributed systems. Due to the emergence and rapid development of the mobile Internet in recent years, the pi calculus was developed by adding mobility to process algebra, while the Petri net field uses predicate/transition nets and colored nets to model mobile systems; however, these approaches still have some weaknesses. Building on these previous works, A. Asperti and N. Busi proposed the mobile net, which is obtained by adding mobility to Petri nets while combining the strengths of process algebra, and which is suitable for describing mobile computing systems. However, no corresponding analysis method for mobile nets has been developed so far. Hence, this paper studies an analysis method for mobile nets. A construction algorithm for the reachability tree of a mobile net is proposed, and a reachability analysis method based on this reachability tree is provided. In addition, an example of a mobile vehicle telephone communication system is presented to demonstrate the effectiveness of the method.
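For an ordinary (non-mobile) Petri net, reachability analysis reduces to a breadth-first search over markings; mobile nets additionally move names and channels, which this sketch deliberately omits. The two-transition net below is invented for illustration.

```python
from collections import deque

def fire(marking, pre, post):
    """Fire a transition if enabled; return the new marking, else None."""
    if all(marking[p] >= n for p, n in pre.items()):
        new = dict(marking)
        for p, n in pre.items():
            new[p] -= n
        for p, n in post.items():
            new[p] = new.get(p, 0) + n
        return new
    return None

def reachability_graph(initial, transitions):
    """BFS over markings; returns the set of reachable markings."""
    seen = {tuple(sorted(initial.items()))}
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        for pre, post in transitions:
            nxt = fire(m, pre, post)
            if nxt is not None:
                key = tuple(sorted(nxt.items()))
                if key not in seen:
                    seen.add(key)
                    queue.append(nxt)
    return seen

# A simple chain: p1 --t1--> p2 --t2--> p3
transitions = [({"p1": 1}, {"p2": 1}), ({"p2": 1}, {"p3": 1})]
reachable = reachability_graph({"p1": 1, "p2": 0, "p3": 0}, transitions)
```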
Safety Analysis Method Based on Stochastic Time Petri Nets
PENG Ying, YAO Shu-zhen and TAN Huo-bin
Computer Science. 2016, 43 (11): 61-65, 76.  doi:10.11896/j.issn.1002-137X.2016.11.011
Abstract PDF(480KB) ( 53 )   
After analyzing the shortcomings of current methods that combine safety analysis with Petri nets, a system safety analysis method based on stochastic time Petri nets (sTPN) is proposed. A system model built with sTPN is neither limited to exponential and deterministic transitions nor subject to enabling restrictions on generally distributed transitions. Path-based safety metrics can be obtained through a modified transient stochastic state class graph and the transient analysis algorithm of sTPN. Experimental results are reported to show the usability and reasonability of the method.
Modeling OpenStack Single Plane Network Based on Token Selection
LI Hua, XING Yi and ZHANG Yu-rong
Computer Science. 2016, 43 (11): 66-70, 106.  doi:10.11896/j.issn.1002-137X.2016.11.012
Abstract PDF(495KB) ( 101 )   
CPN is suitable for modelling hardware and software systems that contain a large number of concurrent, communicating and synchronously shared behaviors, and for analyzing system function and performance. In traditional CPN modelling, an exhaustive method of selecting tokens is adopted, and a large number of tokens can make the generated state space huge, leading to reachability state explosion. To address this problem, this paper combines symbolic execution with CPN modelling, applies a selection method during the execution of the CPN model, and then obtains the reachability state space of the CPN model. Furthermore, a single-plane network created by an OpenStack cloud platform is modeled as an example, and the state spaces and reachability states generated by the traditional method and the new method are compared. The results show that the proposed method is effective.
Analysis of Mobile Application Monitoring Platform Based on Message Sequence Diagram and Petri Net
JI Jian-wei, CHEN Xin and HUANG Hao-jun
Computer Science. 2016, 43 (11): 71-76.  doi:10.11896/j.issn.1002-137X.2016.11.013
Abstract PDF(416KB) ( 151 )   
With the rapid development of the mobile Internet, the number of mobile applications is growing explosively, and real-time, effective monitoring and analysis of the performance, failures and weak points of mobile applications is key to ensuring normal system operation. The Unified Modeling Language (UML), as a powerful object-oriented graphical modeling tool, can be applied to the modeling and analysis of a mobile monitoring platform; however, its process descriptions lack strict semantics. As a modeling and analysis method for discrete event dynamic systems, the Petri net offers an intuitive and logically rigorous graphical approach, and it provides an effective way to study the characteristics and performance of a system in logical time. A modeling method based on UML message sequence diagrams and Petri nets is applied to analyze the processes of a mobile application monitoring platform. A message sequence diagram and a Petri net model for a user's monitoring task are constructed. The interactive relationships between platform objects and their time sequence are verified using the message sequence diagram, and the correctness and reachability of the model are analyzed by means of Petri net simplification rules and the state equation.
Multi-state System Reliability Analysis Based on Fuzzy-colored Petri Nets
ZHANG Xin-ju and YAO Shu-zhen
Computer Science. 2016, 43 (11): 77-82, 101.  doi:10.11896/j.issn.1002-137X.2016.11.014
Abstract PDF(546KB) ( 72 )   
In a multi-state system, due to performance degradation and partial failure, the system or its components may pass through a series of intermediate states between the perfect state and the failure state, and these intermediate states directly influence system reliability. To address this problem, fuzzy-colored Petri nets are put forward. The model fully characterizes multi-state reliability through fuzzy state information and dynamic transitions between state nodes. In fuzzy-colored Petri nets, the weighted threshold changes with the dynamic change of fuzzy information and node states, so an adaptive fuzzy reasoning algorithm with threshold adjustment is proposed, which provides model guidance for multi-state system reliability analysis and performance optimization. The multi-state reliability is verified, showing that fuzzy-colored Petri nets and the parameter adjustment strategy are reasonable and effective, and that the overall performance of the multi-state system is improved.
Hierarchy Structure of Process Net
GUO Feng, QIAO Lei and MAO Wen-xiang
Computer Science. 2016, 43 (11): 83-87.  doi:10.11896/j.issn.1002-137X.2016.11.015
Abstract PDF(331KB) ( 65 )   
The process net is a novel Petri net model that combines process algebra with Petri net theory. Process net system models are used in practical applications; however, when the system is too complicated, the node explosion problem is encountered, and it becomes necessary to introduce hierarchy into process nets. A process net with hierarchical structure is presented, and its hierarchical model and modeling algorithm are given. It solves the problem of modeling large, complex systems and mitigates state space explosion. It clearly reflects the levels of the model, facilitating the refinement of the process net to obtain an accurate model. It also makes it easy to use a stepwise-refinement, top-down approach to complete the modeling of the simulated system, and helps users achieve simulation services of various sizes.
Cell Selection GSPN Model Constrained by Call Admission in Macro-Femto Networks
WANG Kai, CHEN Xin and XIANG Xu-dong
Computer Science. 2016, 43 (11): 88-93.  doi:10.11896/j.issn.1002-137X.2016.11.016
Abstract PDF(480KB) ( 54 )   
Along with the increase of data services, macrocell deployment alone cannot meet traffic demand. As an efficient means of offloading macrocell traffic, femtocell networks have been introduced, forming the Macro-Femto architecture. Cell selection is an open challenge when a Macro-Femto network is deployed, and because channel resources in femto base stations are scarce, call admission in femtocell networks is also of great concern. The graphical generalized stochastic Petri net (GSPN) is characterized by parallelism, uncertainty and asynchrony, and is suitable for analyzing complicated systems. For call admission constraints in Macro-Femto networks, the occupancy of channels by calls is first studied, and a GSPN model of call admission is built by introducing call retrials. Based on the GSPN approach, a modified part-retrial call admission control strategy is introduced after analyzing the influence of the call admission control strategy on the call blocking rate. Considering the high-speed service characteristics of femtocells and the high capacity of macrocells, a load-based femto-priority (LFP) selection scheme is proposed that adopts the part-retrial call admission control strategy and GSPN. Simulation results show that, compared with the femto-priority selection scheme, the proposed LFP scheme lowers the blocking probabilities of new and handover calls by 2.7% and 4.6%, respectively.
Method of Process Models Mining Based on Quasi Indirect Dependence
HUA Pei, FANG Xian-wen and LIU Xiang-wei
Computer Science. 2016, 43 (11): 94-97.  doi:10.11896/j.issn.1002-137X.2016.11.017
Abstract PDF(303KB) ( 79 )   
Process mining aims to discover, from the event logs recorded by information systems, the reasonable process model we need, which helps us improve or rebuild the business process. Previously, process models based on direct dependences between tasks had many limitations; although many methods can discover indirect dependences, they do not analyze them from the perspective of process behavior. The process mining algorithm based on quasi indirect dependence takes behavioral profiles into account: an initial model is established according to the behavioral profiles, the model is then adjusted based on incremental logs and quasi indirect dependence, and finally the optimal model is selected according to the evaluation criteria. This method is especially suitable for mining process models with indirect dependences.
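Behavioral-profile-style mining starts from the directly-follows relation: for each trace in the log, record which task immediately follows which. The sketch below (with a made-up two-trace log) shows exactly what this relation captures, and by omission what it misses: an indirect dependence such as a…d never appears in it, which is what motivates extensions like the one above.

```python
def directly_follows(log):
    """log: list of traces (task sequences). Returns the set of pairs
    (a, b) where b immediately follows a in some trace."""
    relation = set()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            relation.add((a, b))
    return relation

# Two traces of a process that starts with a, ends with d,
# and chooses b or c in between.
log = [["a", "b", "d"], ["a", "c", "d"]]
df = directly_follows(log)
```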
Developing Graph and Generating Algorithm for Constant Continuous Petri Net with Effective Conflict
ZHAO Yi-jun and ZHANG Xiao-xuan
Computer Science. 2016, 43 (11): 98-101.  doi:10.11896/j.issn.1002-137X.2016.11.018
Abstract PDF(281KB) ( 61 )   
The maximum constant continuous Petri net (CCPN), one of the timed continuous Petri net models, was first put forward by David. An efficient way to analyze the properties of a CCPN is its developing graph. However, for a CCPN with effective conflict, the uncertainty of transition firing caused by the conflict makes the construction of the developing graph more difficult. Based on the two kinds of solutions for effective conflict, determining priority or distributing flow in proportion, an algorithm (Algorithm 2) for calculating the velocity of every transition is presented, and a generating algorithm (Algorithm 3) for the developing graph is further put forward for bounded CCPN with effective conflict. By means of these, the properties of CCPN with effective conflict can be analyzed.
Modeling and Scheduling of Virtual Enterprises Based on Petri Nets
WAN Jun and ZHAO Bu-hui
Computer Science. 2016, 43 (11): 102-106.  doi:10.11896/j.issn.1002-137X.2016.11.019
Abstract PDF(1031KB) ( 57 )   
Based on an analysis of existing modeling and scheduling methods for virtual enterprises, this paper discusses a modeling and scheduling method for virtual enterprises based on an extended class of Petri nets. The formal definition and transition rules of a new class of Petri nets, called the T-timed generalized cyber net, are developed. Considering the characteristics of virtual enterprise projects, the modeling process of a virtual enterprise is described. Based on the established model, and aiming at the scheduling target of shortest time or minimum cost, an A* search algorithm is designed to solve the scheduling scheme for a virtual enterprise project. The validity of the proposed model and scheduling algorithm is verified by a practical example.
Study of Events Collaborative Control Method Based on Petri Nets
FANG Huan, WANG Su-cheng, FANG Xian-wen and WANG Li-li
Computer Science. 2016, 43 (11): 107-110, 116.  doi:10.11896/j.issn.1002-137X.2016.11.020
Abstract PDF(357KB) ( 45 )   
The collaborative control of events is an important research topic in the area of discrete event systems (DES), while hybrid restrictions involving both resource constraints and event constraints are especially difficult problems in system construction. Based on the weighted Petri net model of a DES, and taking hybrid restrictions on transitions and places into consideration, the event collaborative constraints are transformed into structure constraints of the corresponding Petri net model, so that the hybrid restrictions can be converted into place restrictions. First, the method of converting hybrid restriction conditions on places and transitions in a DES is studied, and transition collaborative control algorithms based on Petri nets are proposed. Through structural conversion of the Petri net, the hybrid restriction conditions on places and transitions of the DES are converted into pure place restrictions, realizing the goal of studying event collaborative control from the viewpoint of pure place restrictions. Second, for three characteristic situations with different coefficient restrictions in the ≤ restriction expression, conversion rules based on structural conversion of Petri nets are proposed, the ≤ restriction conditions on transitions are converted into place restrictions, and the conversion algorithms and procedures are elaborated in detail. Finally, the key difficult problems of event collaborative control are studied, and research directions and application problems for further study are discussed.
DAS:Generalized Stochastic Petri Nets Approach to Component Carrier Dynamic Adaptive Scheduling
JIA Yu-dong, CHEN Xin and XIANG Xu-dong
Computer Science. 2016, 43 (11): 111-116.  doi:10.11896/j.issn.1002-137X.2016.11.021
Abstract PDF(474KB) ( 115 )   
As users’ demand for mobile services diversifies, the limited wireless spectrum resource has been unable to meet the demand for high-quality, high-efficiency and high-bandwidth data services. Introducing high-frequency carriers into a low-frequency carrier environment improves system capacity. However, with multiple component carriers deployed in a cell, designing an effective scheduling strategy becomes one of the key issues in improving quality of service and making radio resources more efficient for LTE users. To address the component carrier scheduling problem, we model the multiple component carrier system with generalized stochastic Petri nets. Meanwhile, based on an analysis of the types of user services and the characteristics of component carriers, a dynamic adaptive scheduling (DAS) strategy is proposed. Using the TimeNets simulation tool, an emulation environment including component carrier selection, an LTE user queueing mechanism and resource block allocation is set up. The simulation results show that, compared with the service-based scheduling (BSS) strategy and the shortest queue priority scheduling (SQP) strategy, the drop probability of the DAS strategy is lower. Besides, in terms of throughput, the DAS strategy is close to the BSS strategy when the arrival rate is low and close to the SQP strategy when the arrival rate is high.
Study on Petri Net Platform for Web Service Composition Construction and Execution
SUN Qiang, MA Bing-xian and SUN Hua-qiang
Computer Science. 2016, 43 (11): 117-120, 134.  doi:10.11896/j.issn.1002-137X.2016.11.022
Abstract PDF(402KB) ( 43 )   
For the specific application of Petri nets in service composition, a Petri-net-based service composition software platform is constructed in this paper. First, a semantics-based service function system is established for domain services; atomic services are then registered and published, each tied to a concrete function in the semantic function system. Second, view-based service composition construction is realized: users can select services, construct service compositions through the related views, and obtain the service composition function process. Furthermore, the Petri net of the service composition can be obtained and submitted to an execution engine, completing the execution of the service composition. The work of this paper realizes the process from construction to execution for service composition and provides software support for the use of Petri nets in the dynamic execution of service compositions.
Method for Finding Critical Paths Based on Concurrent Reachable Marking Graph with Tags
HAN Yao-jun
Computer Science. 2016, 43 (11): 121-125, 141.  doi:10.11896/j.issn.1002-137X.2016.11.023
Abstract PDF(453KB) ( 185 )   
A colored timed Petri net model is obtained by transforming the AOE network. The earliest event start times are calculated while constructing the Petri net model. An algorithm for constructing the concurrent reachable marking graph with tags for the colored timed Petri net modeling the AOE network is given. The critical paths are obtained, and the shortest time to complete all activities is calculated from the tags of the concurrent reachable marking graph. The example and simulation show that the execution efficiency of the algorithm is better than that of the traditional algorithm for finding critical paths when there are more than three concurrent activities in the AOE network; the more concurrent activities there are, the higher the efficiency.
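The classical computation being accelerated here is the critical path method on an AOE network: a forward pass gives earliest event times, a backward pass gives latest event times, and critical activities are those with zero slack. A small sequential sketch (the four-event graph is invented; events are assumed numbered in topological order):

```python
def critical_activities(n, edges):
    """Events 0..n-1; edges = (u, v, duration), listed in topological order.
    Returns (project length, list of critical activities)."""
    earliest = [0] * n
    for u, v, d in edges:                       # forward pass
        earliest[v] = max(earliest[v], earliest[u] + d)
    latest = [earliest[n - 1]] * n
    for u, v, d in reversed(edges):             # backward pass
        latest[u] = min(latest[u], latest[v] - d)
    # An activity is critical when it has zero slack.
    critical = [(u, v) for u, v, d in edges if latest[v] - d == earliest[u]]
    return earliest[n - 1], critical

# Two routes from event 0 to event 3: 0->1->3 (length 6) and 0->2->3 (length 7).
edges = [(0, 1, 3), (0, 2, 5), (1, 3, 3), (2, 3, 2)]
total, critical = critical_activities(4, edges)
```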
Stochastic Simulation of Time Petri Nets
PAN Li and YANG Bo
Computer Science. 2016, 43 (11): 126-129, 159.  doi:10.11896/j.issn.1002-137X.2016.11.024
Abstract PDF(386KB) ( 94 )   
Simulation is a common method of system analysis for Petri nets. Time Petri nets describe the firing times of transitions by time intervals, so the firing time points of transitions are uncertain within those intervals. A stochastic simulation method for time Petri nets is proposed: when a transition becomes enabled, a firing time point within its firing interval is determined according to a certain random distribution, such as the uniform distribution. Based on experimental data from Petri net simulation, we present an algorithm for constructing a state class tree of the Petri net model and use a statistical analysis method for evaluating the firing time intervals of transition sequences and their probabilities. The information obtained by the stochastic simulation is of good value for system analysis using time Petri nets.
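The core sampling step can be sketched as a race between two enabled transitions: each draws its firing time uniformly from its static interval, the earliest fires, and repeated runs estimate the firing probabilities. The intervals and transition names below are illustrative, not from the paper.

```python
import random

def simulate_race(intervals, runs=10_000, seed=42):
    """intervals: {transition: (earliest, latest)} firing bounds.
    Returns the estimated probability that each transition fires first."""
    rng = random.Random(seed)
    wins = {t: 0 for t in intervals}
    for _ in range(runs):
        # Sample a firing time uniformly within each transition's interval.
        samples = {t: rng.uniform(lo, hi) for t, (lo, hi) in intervals.items()}
        wins[min(samples, key=samples.get)] += 1    # the earliest one fires
    return {t: w / runs for t, w in wins.items()}

# t1 may fire in [0, 2], t2 in [1, 3]; analytically P(t1 first) = 0.875.
freq = simulate_race({"t1": (0.0, 2.0), "t2": (1.0, 3.0)})
```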
Design of Multicast Code Division Multiple Access Information System Based on Cooperative Relay
LV Min-hui, XIONG Wei and SHEN Lai-xin
Computer Science. 2016, 43 (11): 130-134.  doi:10.11896/j.issn.1002-137X.2016.11.025
Abstract PDF(385KB) ( 61 )   
In order to improve the performance of multigroup multicast relay networks, an improved cooperative-relay multipoint broadcast code division multiple access (CDMA) system was proposed. In this scheme, distributed beamforming based on cooperative relaying is used to realize multicast from single-antenna base stations, obtaining a better spatial diversity gain. In this system, multiple base stations use multiple relay nodes to broadcast messages to multiple destinations in each group. CDMA techniques are used to reduce the multiple access interference (MAI) at the relay nodes and the destination nodes. Each relay node acts as a linear precoding beamformer that reshapes the base station signal in the appropriate code space. The linear beamforming matrix is optimized to minimize the power of the relay nodes while meeting the QoS requirements on signal-to-interference-plus-noise ratio. Contrast experiments and system simulation show that the proposed scheme outperforms the conventional orthogonal multiplexing schemes (FDMA/TDMA).
Multipath Routing Protocol Based on Braid-multipath Network Coding Model for Wireless Sensor Networks
WANG Jun, DU Wei-qi, LIU Hui and WANG Lei
Computer Science. 2016, 43 (11): 135-141.  doi:10.11896/j.issn.1002-137X.2016.11.026
Abstract PDF(613KB) ( 68 )   
How to use network coding technology in wireless sensor networks to improve network performance has become a hot research area in recent years. Multipath transmission is an effective method to improve transmission reliability. Among multipath network coding models, the braid-multipath model is a reliable routing model well suited to large-scale networks. In this paper, we designed a new routing protocol called BRGNC (Braided multipath Routing protocol based on Grid with Network Coding) for large-scale wireless sensor networks. Firstly, we divided the network into virtual grids using the nodes' location information, and selected the next-hop grid based on factors including the nodes' residual energy, link quality and network density. Then we selected the "optimal" nodes as the candidate set. At the same time, we used network coding to improve transmission reliability. Simulation results show that the BRGNC protocol can greatly improve transmission reliability and extend the network lifetime in large-scale sensor networks with unstable links.
Research and Improvement of ECG Compression Algorithm Based on EZW
PENG Zi-ran and WANG Gou-jun
Computer Science. 2016, 43 (11): 142-147.  doi:10.11896/j.issn.1002-137X.2016.11.027
Abstract PDF(456KB) ( 62 )   
Embedded zerotree wavelet (EZW) coding is an efficient compression method, but although it has certain advantages in coding, the coding of its multi-tree structure information reduces the signal compression ratio. In this paper, the optimization and improvement of the EZW compression algorithm were studied. First, the ECG signal is processed with the lifting wavelet scheme, and the lifting rules and the lifting algorithm of the wavelet transform are studied. Second, an improvement of the coding method of the EZW compression algorithm is studied: feature detection values are extracted from the decomposed ECG information, and the wavelet coefficients of the ECG are weighted according to this feature information. By measuring the weight of each coefficient and optimizing the coding, the goal of improving compression efficiency is achieved.
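The lifting scheme mentioned in the abstract factors a wavelet transform into predict and update steps. A minimal one-level Haar lifting sketch (not the specific lifting filter or EZW coder used in the paper) is:

```python
# One-level Haar lifting: split into even/odd samples, predict the odd ones
# from the even ones (detail), then update the even ones (approximation).

def haar_lifting(signal):
    even, odd = signal[::2], signal[1::2]
    detail = [o - e for e, o in zip(even, odd)]          # predict: odd - even
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update: preserve mean
    return approx, detail

def haar_lifting_inverse(approx, detail):
    even = [a - d / 2 for a, d in zip(approx, detail)]   # undo update
    odd = [e + d for e, d in zip(even, detail)]          # undo predict
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

sig = [2.0, 4.0, 6.0, 8.0]
a, d = haar_lifting(sig)
print(a, d)                        # [3.0, 7.0] [2.0, 2.0]
print(haar_lifting_inverse(a, d))  # perfect reconstruction of sig
```

Integer-to-integer variants of the same factorization make lossless ECG coding possible, which is why lifting is a natural front end for EZW.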
Community Detecting of Internet Macroscopic Topology
ZHANG Jun, ZHAO Hai, LIN Chuan and LIU Xiao
Computer Science. 2016, 43 (11): 148-151.  doi:10.11896/j.issn.1002-137X.2016.11.028
Abstract PDF(322KB) ( 59 )   
A large number of complex systems in nature can be described by complex networks. Community structure is the most important feature of complex networks after the small-world and scale-free properties. Community detection is very important for understanding the macroscopic topology structure of the Internet. Aiming at the structural features of the Internet's macroscopic topology and based on the link clustering method, we proposed a community detection algorithm that redefines link similarity with routing features to transform the link clustering process. It gives a better community structure for the Internet macroscopic topology. It is further applied to other types of networks by using link betweenness instead of link frequency, and a better community structure can be obtained.
Study on WiFi Location Technology under Complex Indoor Environment
LU Yin and MIAO Hui-hui
Computer Science. 2016, 43 (11): 152-154.  doi:10.11896/j.issn.1002-137X.2016.11.029
Abstract PDF(259KB) ( 63 )   
In complex indoor wireless local area network (WLAN) environments, the improved Motley-Keenan (MK) model and the weighted K nearest neighbors (WKNN) positioning method are used to improve localization performance. Firstly, indoor propagation models and the improved MK model are introduced. Then, fingerprint positioning methods and the basic theories of the two positioning algorithms are illustrated. According to the experimental data, the positioning performance and error sources of these methods are analyzed. Finally, a hybrid positioning method combining the MK model and WKNN was proposed, and simulation and analysis were carried out. Simulation results show that the proposed algorithm improves accuracy and reduces the positioning error.
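WKNN itself is compact enough to sketch: the K reference points whose stored RSS fingerprints are closest to the observed vector are averaged with inverse-distance weights. The fingerprint database below is hypothetical, and the MK propagation model is omitted:

```python
# Weighted K-nearest-neighbors (WKNN) fingerprint positioning sketch.
import math

def wknn_locate(fingerprints, observed, k=3):
    """fingerprints: list of ((x, y), rss_vector); observed: rss_vector."""
    scored = sorted((math.dist(rss, observed), pos)   # distance in signal space
                    for pos, rss in fingerprints)
    top = scored[:k]
    weights = [1.0 / (d + 1e-9) for d, _ in top]      # inverse-distance weights
    wsum = sum(weights)
    x = sum(w * p[0] for w, (_, p) in zip(weights, top)) / wsum
    y = sum(w * p[1] for w, (_, p) in zip(weights, top)) / wsum
    return x, y

db = [((0, 0), [-40, -70]), ((0, 5), [-50, -60]),
      ((5, 0), [-60, -50]), ((5, 5), [-70, -40])]
print(wknn_locate(db, [-45, -65], k=2))   # between the two closest reference points
```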
Improved Multi-channel Access Protocol Study Based on Ad Hoc
LIU Hai-yan, TAN Liang and LIU Chun-ling
Computer Science. 2016, 43 (11): 155-159.  doi:10.11896/j.issn.1002-137X.2016.11.030
Abstract PDF(393KB) ( 52 )   
In order to solve the multi-channel hidden terminal problem and the multi-channel deafness problem faced by multi-channel access protocols, this paper introduced probe packets and wait packets and adopted a node status table and a channel idle state table. We also used the NS2 simulation software to simulate the protocol. The simulation results show that, compared with the DPC protocol, the improved protocol resolves the multi-channel hidden terminal and multi-channel deafness problems, raises the network throughput and channel utilization, reduces network transmission delays, and enhances the overall performance of the network.
Measurement Matrix Design for Multiple Target Localization Based on Compressive Sensing
GUO Yan, QIAN Peng, LI Ning and SUN Bao-ming
Computer Science. 2016, 43 (11): 160-163.  doi:10.11896/j.issn.1002-137X.2016.11.031
Abstract PDF(356KB) ( 52 )   
A compressive sensing based multiple target localization approach was proposed by exploiting the intrinsic sparse nature of the localization problem in wireless sensor networks. The sparsity in our localization approach is reflected by the locations of the targets, which can be formulated as a sparse vector. We used the received signal strength (RSS) to achieve target localization. Only a small number of measurements are required to accurately recover the location vector by solving an l1-minimization program. Moreover, we proposed an improved measurement matrix design method, which determines the distribution of sensors. Simulation results demonstrate that the localization accuracy and stability of the proposed measurement matrix design method have a significant advantage over the random measurement matrix widely used in CS-based localization.
Adaptive Cache Algorithm Based on Local Content Activity in Information-centric Networks
TIAN Ming, WU Jiang-xing and LAN Ju-long
Computer Science. 2016, 43 (11): 164-171.  doi:10.11896/j.issn.1002-137X.2016.11.032
Abstract PDF(706KB) ( 103 )   
After modeling and analysis, we found that replacement policies based on global content popularity do not suit the distributed model of information-centric networks. This paper presented a cache replacement policy based on local content activity, called LAU, and proposed an adaptive path caching algorithm named ACAP, so that contents are cached successively along the access path on the basis of local activity. The simulation results show that the LAU strategy improves the cache hit rate of a single node. Compared with existing path caching algorithms, ACAP achieves a lower server response hit rate and a lower hop ratio. Finally, the cache and topology structures to which this algorithm is applicable were discussed and analyzed.
New Optimal Power Allocation Scheme in HDAF Protocol under High Signal to Noise Ratio Conditions
DUANMU Chun-jiang and WANG Zhen-yu
Computer Science. 2016, 43 (11): 172-175.  doi:10.11896/j.issn.1002-137X.2016.11.033
Abstract PDF(281KB) ( 60 )   
The hybrid decode-amplify-forward (HDAF) protocol has recently been proposed in the literature, and it can remarkably outperform the decode-and-forward and amplify-and-forward protocols. In this paper, the optimal power allocation problem in the HDAF protocol was studied in depth. Firstly, a practical formula for the symbol error rate of the HDAF protocol is derived from the literature. Then, the power allocation problem in the protocol is mathematically formulated as minimizing the symbol error rate under a total power constraint. Finally, Matlab simulations are designed to verify the proposed formulas and confirm the proposed approach to solving the problem. Numerical results demonstrate that the optimal power allocation factor in the HDAF protocol is related not only to the channel conditions of the source-to-relay and relay-to-destination channels, but also to the total power of the system. It is also discovered in our research that with a sufficient increase of the allowed total power, the optimal power allocation factor asymptotically approaches 1/2.
Cluster Scheduling Algorithm for Real Time Tasks Transmission in Internet of Things
QIAN Xiao-jun, FAN Dong-ping and JI Gen-lin
Computer Science. 2016, 43 (11): 176-179, 189.  doi:10.11896/j.issn.1002-137X.2016.11.034
Abstract PDF(1062KB) ( 64 )   
In the Internet of Things environment, it is necessary to optimize the transmission scheduling of link nodes in order to improve the efficiency of process management and memory management of sensor nodes. The traditional method adopts priority-list control for real-time task scheduling in the Internet of Things, so dynamic load balance is affected by the spatial distribution of nodes, and the accuracy of task allocation during resource allocation is not high. A cluster scheduling algorithm based on efficient time division multiple access (TDMA) time slot allocation for real-time task scheduling in the Internet of Things was proposed. A real-time task scheduling model is designed to extract the time slot allocation scale features of the cluster task information flow as the transfer direction function. An efficient TDMA protocol is designed and the coordinated cluster scheduling of real-time tasks in the Internet of Things is improved, improving the balance and accuracy of real-time task scheduling. Simulation results show that when the proposed method is used to transfer real-time tasks in the Internet of Things, its performance is superior and reliable.
Research on Degrees of Freedom of Interference Channel (IC) of Two Parallel Users in Cognition between Transmitting End Nets
QIN Wen-ming, WANG Xiao-feng and LIU Feng
Computer Science. 2016, 43 (11): 180-183.  doi:10.11896/j.issn.1002-137X.2016.11.035
Abstract PDF(279KB) ( 63 )   
Applying a strategic research method that combines interference alignment, interference neutralization and network perception, this paper studies the degrees of freedom of the interference channel (IC) when it is paralleled with the main network and with the sub-network respectively, considering the cognition between the main network and the sub-network. It summarizes all 11 cases in the related studies, makes an ergodic analysis of the interference management strategy of every case, and finds that the inner-bound degrees of freedom of the IC in Case 11 have increased compared with the 4-user case. The final conclusion is that the maximum inner-bound degree of freedom is 3 under the condition of inter-network cognition of the IC of two parallel users.
User Incentive Mechanism Based on Crowd Search Optimization and Cooperative Competition for Mobile Crowd Sensing Networks
LI Zhi-gang and TANG Xue-ming
Computer Science. 2016, 43 (11): 184-189.  doi:10.11896/j.issn.1002-137X.2016.11.036
Abstract PDF(438KB) ( 67 )   
To address the issues of user incentive and mobile node protection, a user incentive mechanism based on crowd search optimization and cooperative competition was studied. In this mechanism, a definition of position information is given for mobile nodes in the sensing area according to the time, space and frequency domains, and population search optimization is conducted on the transmitted and received signals. At the same time, cooperation and competition are carried out according to the characteristic values. Finally, user incentives and protection are achieved by adjusting the feature values, competition and cooperation, and exiting. Experimental results prove that, compared with an incentive mechanism based on static path channel priority assignment and optimal power allocation without cooperative competition, the proposed mechanism has obvious advantages in system energy usage rate, transmission delay and system load.
Sniffer-feedback Mechanism of Improving SPICE Protocol
CHENG Chang-jun and LIU Dan
Computer Science. 2016, 43 (11): 190-192, 209.  doi:10.11896/j.issn.1002-137X.2016.11.037
Abstract PDF(325KB) ( 213 )   
To address the current problems of SPICE (Simple Protocol for Independent Computing Environments), namely excessive delay in multi-interaction scenes and occasional non-response during video playback, a sniffer channel was built into the original structure, and flexible policies for multi-interaction and video scenes based on the feedback data from this channel were added. During video playback, the SPICE server calculates a proper bandwidth from the time delay and packet loss rate collected from the other channels. In multi-interaction scenes, the server changes the refresh interval dynamically based on the activity of the input devices. The experimental results in different scenes show that the mechanism can improve performance effectively.
System Safety Modeling and Analysis Method Based on Four-variable Model
HU Jun, SHI Jiao-jie, CHENG Zhen, CHEN Song and WANG Ming-ming
Computer Science. 2016, 43 (11): 193-199, 229.  doi:10.11896/j.issn.1002-137X.2016.11.038
Abstract PDF(1342KB) ( 75 )   
Recently, model-based system safety analysis and verification has become an important research direction in the field of safety-critical systems engineering. A system safety modeling, analysis and verification method based on the four-variable model was proposed, built on the AltaRica modeling language. The mapping rule between the four-variable model and AltaRica was constructed through a study of their semantics. A wheel brake system (WBS) of a civil aircraft was used as an example to illustrate the entire validation process. Namely, first we used the four-variable model to analyze the requirements of the WBS at the level of system requirements, and constructed the AltaRica model according to the mapping rule. Next, we used the fault tree analysis method to study the safety of the WBS. Finally, based on the tool ARC, which is associated with AltaRica, the system safety attributes were validated. The verification results illustrate the practicability of the proposed method in the field of system safety engineering.
Robust Watermarking Algorithm Based on Fractional Fourier Transform and Spread Transform Dither Modulation
ZHANG Yan-hua and MA Xiao-hu
Computer Science. 2016, 43 (11): 200-204.  doi:10.11896/j.issn.1002-137X.2016.11.039
Abstract PDF(2274KB) ( 62 )   
Aiming at existing algorithms' defects in transparency and robustness, a new robust image watermarking algorithm based on the fractional Fourier transform and spread transform dither modulation was presented in this paper. First, a lifting wavelet transform is performed on the original image. The lowpass subband is segmented into non-overlapping blocks, and each block is transformed by the fractional Fourier transform. Then, the low-frequency coefficients of the amplitude information are chosen to construct the host vectors depending on frequency. In the embedding process, a spread transform is applied to the host vectors to obtain the projected host vectors, and the binary watermark is then embedded using dither modulation. A minimum-distance decoder is used to decode the watermark. The experimental results show that the proposed algorithm obtains better transparency as well as robustness against common image attacks such as JPEG compression, filtering, noise and cropping. Compared with previous similar schemes, the presented method achieves higher practicality and reliability.
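Dither modulation is a form of quantization index modulation: each watermark bit selects one of two shifted quantizers, and decoding picks the lattice closest to the received value. A minimal sketch (step size and host values are illustrative; the paper's spread transform and FrFT stages are omitted) is:

```python
# Dither-modulation (QIM) watermark embedding and minimum-distance decoding.

DELTA = 4.0   # quantization step (controls the robustness/transparency trade-off)

def embed_bit(x, bit):
    dither = 0.0 if bit == 0 else DELTA / 2   # bit selects one of two lattices
    return round((x - dither) / DELTA) * DELTA + dither

def decode_bit(y):
    d0 = abs(y - embed_bit(y, 0))   # distance to the nearest bit-0 lattice point
    d1 = abs(y - embed_bit(y, 1))   # distance to the nearest bit-1 lattice point
    return 0 if d0 <= d1 else 1

host = [10.3, -2.7, 5.1, 0.4]
bits = [1, 0, 1, 1]
marked = [embed_bit(x, b) for x, b in zip(host, bits)]
noisy = [y + 0.6 for y in marked]          # mild attack: additive offset
print([decode_bit(y) for y in noisy])      # [1, 0, 1, 1] -- the bits survive
```

Any perturbation smaller than DELTA/4 per coefficient leaves the decoded bits intact, which is the robustness margin the step size buys.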
Novel Two-way Security Authentication Wireless Scheme Based on Hash Function
WANG Jie-hua, LIU Hui-ping, SHAO Hao-ran and XIA Hai-yan
Computer Science. 2016, 43 (11): 205-209.  doi:10.11896/j.issn.1002-137X.2016.11.040
Abstract PDF(386KB) ( 48 )   
With the development of science and technology, more and more network devices are connected to wireless networks. In order to ensure the correct identification and connection of legitimate users, a novel two-way security authentication wireless scheme based on a hash function (TSAWSH) was proposed, building on the Wen-Li authentication scheme. By using the packet sequence number instead of a timestamp, TSAWSH avoids the influence of network delay on the certification process and does not require strict clock synchronization between devices. Security and complexity analyses in comparison with the Wen-Li scheme show that TSAWSH can effectively resist common network attacks, has higher safety and lower computational cost and complexity, and can effectively reduce the overhead of the actual system.
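The core idea of replacing a timestamp with a monotonically increasing sequence number can be sketched as follows. The message layout and names here are hypothetical, not the TSAWSH wire format:

```python
# Both sides prove knowledge of a shared key by hashing it with an increasing
# packet sequence number; replayed or stale sequence numbers are rejected,
# and no clock synchronization is needed.
import hashlib

SHARED_KEY = b"pre-shared-secret"   # illustrative; provisioned out of band

def auth_token(seq, role):
    msg = SHARED_KEY + seq.to_bytes(8, "big") + role
    return hashlib.sha256(msg).hexdigest()

def verify(seq, role, token, last_seq):
    if seq <= last_seq:               # replayed or stale sequence number
        return False
    return token == auth_token(seq, role)

# two-way exchange: device authenticates to server, then server to device
assert verify(42, b"device", auth_token(42, b"device"), 41)
assert verify(43, b"server", auth_token(43, b"server"), 42)
assert not verify(42, b"device", auth_token(42, b"device"), 42)   # replay fails
print("mutual authentication ok, replay rejected")
```

A production version would use an HMAC and constant-time comparison; the sketch only shows why sequence numbers remove the clock-synchronization requirement.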
Workload Type-dependent Xen Virtual Machine System Performance Models
YU Yong, CHE Jian-hua, XU Huan-liang and JIANG Cheng-zhi
Computer Science. 2016, 43 (11): 210-214.  doi:10.11896/j.issn.1002-137X.2016.11.041
Abstract PDF(417KB) ( 86 )   
In order to avoid overload of the Xen virtual machine system caused by the exhaustion of Domain0's CPU resources when executing network I/O-intensive workloads, and to capture the relation between average performance and the number of guest domains when executing computing-intensive workloads, two workload-type-dependent performance models were proposed. Firstly, models for computing the number of network I/O requests under the constraint of CPU core sharing or CPU core isolation were established by analyzing the CPU resource usage of the Xen virtual machine system executing network I/O-intensive workloads. Secondly, models for analyzing the average performance of guest domains executing computing-intensive workloads were established by analyzing the relation between the average performance of multiple uniform guest domains concurrently executing a computing-intensive workload and the performance of a single such guest domain executing the same workload. The experimental results indicate that the two performance models can effectively limit the number of network I/O requests of guest domains to prevent the Xen virtual machine system from overloading, and can determine the scalable number of guest domains executing computing-intensive workloads with given computing resources.
Key Management Scheme for Three-dimensional Acoustic Sensor Network Based on Cluster
HUANG Bin, LIU Guang-zhong and XU Ming
Computer Science. 2016, 43 (11): 215-220.  doi:10.11896/j.issn.1002-137X.2016.11.042
Abstract PDF(478KB) ( 51 )   
In the cluster-based three-dimensional acoustic sensor network, the deployment area is divided into multiple cubic rooms. Among cluster head nodes, a symmetric key for node verification is generated from relative coordinates. Between nodes within a cluster, a symmetric key for node verification is generated using modified symmetric polynomials. When nodes communicate in the network, they generate the symmetric communication key using symmetric matrices, which achieves complete connectivity in communication. Compared with previous key management schemes for sensor networks, this scheme supports node revocation and key update, and it has high scalability, high reliability and low cost. Owing to the lightweight node authentication mechanism performed first, this scheme can also effectively resist DoS attacks, especially gather attacks, on inter-node communication. Compared with the one-way hash method, this scheme resists cluster head compromise more efficiently because of the three-dimensional key.
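The symmetric-polynomial technique the scheme modifies is the classical Blundo-style construction: a trusted setup picks a symmetric bivariate polynomial f(x, y) mod p, node i is preloaded with the univariate share f(i, ·), and any pair i, j can both compute the pairwise key f(i, j) = f(j, i). A tiny sketch with illustrative coefficients (not the paper's modified polynomials) is:

```python
# Blundo-style symmetric-polynomial pairwise key agreement.

P = 7919   # small illustrative prime; real deployments use a large modulus

def f(x, y):
    """Symmetric bivariate polynomial f(x, y) = 17 + 23(x + y) + 5xy mod P."""
    return (17 + 23 * (x + y) + 5 * x * y) % P

def share(i):
    """The univariate share g_i(y) = f(i, y) preloaded into node i."""
    return lambda y: f(i, y)

node3, node8 = share(3), share(8)
print(node3(8), node8(3))   # both nodes derive the same pairwise key: 390 390
```

Symmetry of f guarantees agreement without any message carrying the key itself; the polynomial degree bounds how many compromised nodes an attacker needs before the whole polynomial is recoverable.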
Attribute-based Online/Offline Signcryption for Mobile Network
JIANG Di and HAN Yi-liang
Computer Science. 2016, 43 (11): 221-225.  doi:10.11896/j.issn.1002-137X.2016.11.043
Abstract PDF(374KB) ( 57 )   
Signcryption combines the functionalities of signature and encryption and costs less than the traditional sign-then-encrypt approach. Online/offline techniques can improve the efficiency of signature and encryption, and one motivating application of this technology is mobile network devices. Since previous attribute-based signcryption schemes are not practical and efficient, an efficient attribute-based online/offline signcryption scheme was proposed in this paper on the basis of Li's attribute-based signature. The scheme is provably secure in the random oracle model: it satisfies indistinguishability against adaptive chosen ciphertext attacks based on the l-th decisional bilinear Diffie-Hellman inversion (l-DBDHI) assumption, and existential unforgeability against adaptive chosen message attacks based on the standard computational Diffie-Hellman (CDH) assumption. Although the ciphertext size increases, the new scheme achieves confidentiality and authentication, making it more suitable for practical applications.
Dynamic Integer Tent Map and its Characteristics Analysis
LIU Jian-dong, ZHANG Xiao, ZHAO Chen and SHANG Kai
Computer Science. 2016, 43 (11): 226-229.  doi:10.11896/j.issn.1002-137X.2016.11.044
Abstract PDF(296KB) ( 100 )   
A dynamic integer tent map model was established by introducing dynamic parameters into the integer tent map, which suffers from short cycles. Its uniform distribution property was demonstrated and compared with the integer logistic map. The periodicity and interdependency of the dynamic integer tent map were analysed. This model eliminates the short-period defects of the integer tent map and is easy to realize in hardware. Experiments and simulation analysis show that this model has excellent cryptographic properties and possesses high value in cryptographic applications.
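The idea can be sketched in a few lines: iterate an integer tent map but perturb each step with a dynamically varying parameter so the orbit cannot settle into a short cycle. The perturbation rule below is illustrative, not the paper's exact model:

```python
# Integer tent map with a dynamic additive parameter injected at each step.

N = 2**16   # state space size (illustrative)

def tent_step(x, n=N):
    """A common integer tent map variant on [0, n)."""
    return 2 * x if x < n // 2 else 2 * (n - x) - 1

def dynamic_tent(x0, steps, n=N):
    x, out = x0, []
    for i in range(steps):
        x = (tent_step(x, n) + i) % n   # dynamic parameter breaks short cycles
        out.append(x)
    return out

print(dynamic_tent(12345, 8))
```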
Highest Nonlinearity Rotation Symmetric Boolean Functions and Optimal Algebraic Immunity Function
HUANG Jing-lian and WANG Zhuo
Computer Science. 2016, 43 (11): 230-233, 241.  doi:10.11896/j.issn.1002-137X.2016.11.045
Abstract PDF(348KB) ( 55 )   
In this paper, we studied the cryptographic properties of rotation symmetric Boolean functions (RSBFs), including the highest propagation degree, the highest nonlinearity and algebraic immunity, and also studied the existence and construction of optimal algebraic immunity functions. Using the derivative and the e-derivative of Boolean functions, we proved that there exist RSBFs with the highest nonlinearity, and verified the existence of a class of RSBFs derived from Bent functions whose propagation reaches degree n. Moreover, we proved the existence of RSBFs with algebraic immunity of order one or higher than order two. We constructed inhomogeneous complete RSBFs with optimal algebraic immunity and a large number of optimal algebraic immunity Boolean functions from rotation symmetric Bent functions, and proved the existence of these two types of functions. Meanwhile, we also obtained inhomogeneous complete RSBFs with correlation immunity.
Consistency Analysis of Software Dynamic Evolution Based on Petri Net
XIE Zhong-wen, MING Li, LIN Ying, QIN Jiang-long, MO Qi and LI Tong
Computer Science. 2016, 43 (11): 234-241.  doi:10.11896/j.issn.1002-137X.2016.11.046
Abstract PDF(705KB) ( 46 )   
To address the challenges of analyzing software dynamic evolution, taking extended Petri nets as the main formalism and the dynamic-evolution-oriented software architecture meta-model DEAM as the setting, the problem analyzed is how to ensure consistency during dynamic evolution. Firstly, the main strategies of consistency analysis were discussed, and components were selected as the basic analysis objects of dynamic evolution. Secondly, from the perspective of component structure evolution, the sub-net types of components were analyzed and methods were presented to assure structural consistency. Thirdly, from the perspective of component behavior evolution, by setting up a simulation relation, it was analyzed whether component behavior consistency is preserved after evolution. Finally, a case study verifies the feasibility of the proposed method.
Consistency Analysis Method of Models Based on Net Process
ZHAO Pei-hai, WANG Mi-mi and FANG Xian-wen
Computer Science. 2016, 43 (11): 242-245, 251.  doi:10.11896/j.issn.1002-137X.2016.11.047
Abstract PDF(396KB) ( 38 )   
In the similarity analysis of business process models, there may be loop structures in a model, which leads to situations where the internal behavior relations are consistent but the external net processes are not. Most existing methods do not consider loop structures and ignore their influence on consistency analysis. On the basis of the sequence relations of behavior profiles, by analyzing the relations between the internal behaviors of each process and depicting the relations between Petri net process sections, a concept of the Petri net process view was proposed. Then, through research on the external relations of Petri net process sections, a measure of behavior consistency based on process views was proposed, which takes the ratio of consistent process views to the total number of processes as the consistency degree of the two models. Theoretical analysis and a specific example show that the method is effective.
Software Defect Assignment Method Based on Defect Similarity and Tossing Graph
SHI Gao-xiang and ZHAO Feng-yu
Computer Science. 2016, 43 (11): 246-251.  doi:10.11896/j.issn.1002-137X.2016.11.048
Abstract PDF(522KB) ( 50 )   
Assigning a bug report to the most suitable developer for repair is of great significance in large software projects. At present, methods for automatic assignment of bug reports use the description information of historical bug reports, associations among bug reports, or the historical information of bug report assignment. However, these methods do not fully exploit the information in defect reports. This paper proposed to combine historical repair information with historical assignment information. Firstly, a tossing graph is built from the historical information of bug report assignment. Secondly, the similarity between a new bug report and the historical bug reports is calculated, and the K final solvers corresponding to the K most similar historical reports are selected. Finally, according to the dependency relations of these solvers in the tossing graph, a predicted assignment path is generated. To verify the validity of this method, we performed experiments with the Eclipse and Mozilla defect report sets. Experiments show that the proposed method is superior to other methods in prediction accuracy.
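The two signals the method combines can be sketched together: textual similarity between the new report and historical ones (here plain bag-of-words cosine similarity, with K=1 for brevity) and a tossing graph that redirects to the likely final fixer. The reports and graph below are hypothetical:

```python
# Bug triage sketch: nearest historical report by cosine similarity, then
# follow the tossing graph from its original assignee to the final fixer.
import math
from collections import Counter

def cosine(a, b):
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

history = [("crash when saving file", "alice"),
           ("ui freeze on resize", "bob"),
           ("crash on file open", "carol")]
tossing = {"alice": "carol"}        # historically, alice tosses crashes to carol

new_report = "crash on opening a file"
best_text, fixer = max(history, key=lambda h: cosine(h[0], new_report))
fixer = tossing.get(fixer, fixer)   # follow the tossing edge to the final fixer
print(fixer)                        # carol
```

A real implementation would use TF-IDF weighting, K > 1 candidates, and edge probabilities on the tossing graph rather than a single deterministic edge.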
String-type Test Data Generation Based on Mutation Particle Swarm Optimization
LI Gang, YU Lei, SUN Hui-hui, ZHANG Xing-long and HOU Shao-fan
Computer Science. 2016, 43 (11): 252-256, 279.  doi:10.11896/j.issn.1002-137X.2016.11.049
Abstract PDF(536KB) ( 46 )   
Search-based algorithms are widely used in test data generation for path coverage. However, current methods are not efficient enough when dealing with string-type test data. In order to generate string-type test data efficiently, a novel approach based on a mutation particle swarm optimization algorithm (MPSO) was presented. In this approach, once a swarm is randomly generated, PSO evolves the swarm by drawing the whole swarm towards the optimal particle, and a mutation operation is carried out with a certain probability to avoid falling into local optima. To provide effective guidance for the search process, this paper improved the classical fitness function by modifying the calculation of branch distance to make it applicable to string-type test data. The experimental results show that the proposed method not only has a high success rate and stability, but also improves the efficiency of test data generation.
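A string-type branch distance of the kind the fitness function needs can be sketched as a character-wise measure: zero when the branch predicate (here, equality with a target string) is satisfied, and smoothly decreasing as candidates get closer. The formula below is a common variant, not necessarily the paper's exact definition:

```python
# Character-wise branch distance for string equality predicates, used as the
# fitness signal guiding a search-based test data generator.

def string_branch_distance(actual, target):
    dist = abs(len(actual) - len(target)) * 128   # penalty per missing character
    for a, t in zip(actual, target):
        dist += abs(ord(a) - ord(t))              # per-character code distance
    return dist

# the closer a candidate gets to satisfying the branch, the smaller the fitness
print(string_branch_distance("hallo", "hello"))   # 4
print(string_branch_distance("hello", "hello"))   # 0 -- branch covered
```

Unlike a boolean equality check, this distance gives the swarm a gradient to follow, which is what makes PSO effective on string inputs.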
Software Reliability Prediction Model Based on Hybrid Kernels RVM
LIANG Hong-tao, XU Jian-liang and XU Ke
Computer Science. 2016, 43 (11): 257-259.  doi:10.11896/j.issn.1002-137X.2016.11.050
Abstract PDF(256KB) ( 55 )   
Reliability, as an important measure of software quality, is of great significance to software management. A software reliability prediction model using a hybrid-kernel relevance vector machine was proposed in this paper to overcome the defects of single kernel functions. Firstly, the research status of software reliability is analyzed. Secondly, training samples are studied and modeled using the hybrid-kernel relevance vector machine. Finally, the prediction performance is analyzed on specific examples. The results show that the proposed model obtains ideal software reliability prediction results and performs better than single-kernel models, so it has important practical application value in software reliability prediction.
Real-time Cloud Environment Aware Web Service Selection and Evaluation
XIAO Gang, WU Fei-fei, XU Jun, LU Jia-wei and ZHANG Yuan-ming
Computer Science. 2016, 43 (11): 260-264.  doi:10.11896/j.issn.1002-137X.2016.11.051
Abstract PDF(463KB) ( 66 )   
Currently, the quality of service (QoS) of Web services cannot be guaranteed, because the cloud environment lacks the ability to be aware of load capacity changes in real time. Non-trusted and trusted Web service providers may give different service descriptions of the same meta-service, where deceit exists. Multiple consumers may simultaneously invoke the same high-quality service, and it is thus very likely that users' accesses exceed the load capacity of the service, leading to a decline in service capability. In this paper, we proposed a service selection and evaluation method based on real-time environment awareness. The service dispatch center proposed in this paper selects meta-service clusters by way of bidding and tendering to guarantee the QoS of the selected service. Meanwhile, the QoS-aware module of each meta-service perceives QoS in real time to ensure that the selected Web service offers high real-time performance and high reliability. Furthermore, a reputation model was proposed to evaluate the promised QoS of service providers, establishing a highly reliable service environment. Extensive simulation results show that the method can achieve efficient evaluation and forecasting of QoS, and provide an optimal Web service for users.
Column-stores Join Optimization on Coupled CPU-GPU Architecture
DING Xiang-wu and LI Zi-tong
Computer Science. 2016, 43 (11): 265-271, 308.  doi:10.11896/j.issn.1002-137X.2016.11.052
Abstract PDF(693KB) ( 63 )   
References | Related Articles | Metrics
Heterogeneous architecture is the new trend in the development of computer central processing units (CPUs). Taking advantage of its powerful computing power has become a new research hotspot in the database system field. First, in order to enhance the query performance of column-oriented databases, we proposed a data partition model which is environmentally sensitive. The data partition model dynamically provides an optimal data division for every processing unit by monitoring the CPU occupancy rate. Then, for GPU memory access optimization, we proposed a DFAT estimation model for prefetching. At the same time, we optimized GPU memory access based on coalesced access. We implemented a sort-merge join algorithm on a PC with an integrated CPU-GPU chip, which adopts our data partition model and our cost model for prefetching. Our strategy is able to distribute data to different processing units automatically, and makes the sort-merge join achieve a performance improvement of 33% on the coupled CPU-GPU architecture.
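The abstract does not specify the partition rule itself; purely as an illustration of an occupancy-driven splitter, one might route a larger share of the join input to the GPU as CPU occupancy rises. The idle-time share of 0.6 and the linear interpolation are made-up parameters:

```python
def partition_sizes(total_rows, cpu_occupancy, gpu_share_idle=0.6):
    """Split the join input between CPU and GPU slices.

    cpu_occupancy is in [0, 1]; the busier the CPU, the larger the
    slice routed to the GPU (linear interpolation up to 100%).
    """
    gpu_share = gpu_share_idle + (1.0 - gpu_share_idle) * cpu_occupancy
    gpu_rows = int(total_rows * gpu_share)
    return total_rows - gpu_rows, gpu_rows  # (cpu_rows, gpu_rows)
```

A production model would also weigh transfer cost and the relative throughput of the two units, which is what the paper's cost model addresses.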
Keyword Expansion Query Approach over RDF Data Based on Bipartite Graph
ZHENG Zhi-yun, WANG Zhen-tao, ZHANG Xing-jin and WANG Zhen-fei
Computer Science. 2016, 43 (11): 272-279.  doi:10.11896/j.issn.1002-137X.2016.11.053
Abstract PDF(684KB) ( 69 )   
References | Related Articles | Metrics
Using a graph to express RDF data retains both data correlation information and semantic information. To date, more and more keyword query methods based on graph structures have been applied to RDF data query processing. In this paper, an RDF keyword expansion query approach based on a bipartite graph (KERBG) was proposed. This approach enables keyword-based queries over RDF data. RDF data is modeled as an RDF bipartite graph, in which all text information is encapsulated in node labels. Based on keyword synonym expansion, the approach realizes the semantic extension of query keywords, effectively solves the problem of different words describing the same object and improves query precision. Through the antisymmetric adjacency matrix of the RDF bipartite graph and its power matrices, the approach constructs result subgraphs consisting of key vertices, realizes keyword query processing and reduces query response time. The experimental results show that, in both query precision and query response time, the KERBG method proposed in this paper is better than the current mainstream methods.
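The role of the power matrices can be illustrated with a small sketch: entry (i, j) of A^k counts the walks of length k between vertices i and j, so nonzero entries identify vertices connectable within k hops when assembling result subgraphs. Dense lists-of-lists are used here purely for clarity:

```python
def mat_mult(A, B):
    """Dense matrix product, written for clarity rather than speed."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def walk_counts(adj, k):
    """adj^k: entry (i, j) counts the length-k walks from i to j."""
    power = adj
    for _ in range(k - 1):
        power = mat_mult(power, adj)
    return power
```

In the paper's setting the matrix is the adjacency matrix of the RDF bipartite graph, so walks alternate between the two vertex classes.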
Adaptive Concurrency Control Algorithm Based on Conflict-rate Prediction
FAN Bi-jian and ZHUANG Yi
Computer Science. 2016, 43 (11): 280-283, 290.  doi:10.11896/j.issn.1002-137X.2016.11.054
Abstract PDF(435KB) ( 142 )   
References | Related Articles | Metrics
Concurrency control algorithms guarantee the correctness and consistency of database transactions. In order to improve the efficiency of concurrent transactions, an adaptive concurrency control algorithm based on conflict-rate prediction (ACC-PRC) was proposed. The algorithm is divided into two stages: information collection and strategy selection. The information collection stage uses a prior-transaction queue (PTQ) to guarantee serializable execution of transactions, and a cyclic conflict queue (CQR) to collect the transaction execution state of the system. The strategy selection stage uses an improved weighted moving average over the cyclic conflict queue to predict the conflict rate of the next stage, and then chooses the appropriate concurrency strategy by bidirectional thresholds. The algorithm maintains good transaction efficiency even when the transaction arrival rate is relatively high. The results show that the overall performance of the ACC-PRC algorithm is better than that of the HCC and ADCC algorithms.
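The paper's exact weights and thresholds are not stated in the abstract; a minimal sketch of the two-step strategy selection (weighted moving average forecast, then hysteresis via two thresholds) might look like this, with the weights and cut-offs as illustrative assumptions:

```python
def predict_conflict_rate(history, weights=(0.5, 0.3, 0.2)):
    """Weighted moving average over the most recent samples (newest weighted highest)."""
    recent = history[-len(weights):][::-1]   # newest first
    w = weights[:len(recent)]
    return sum(r * wi for r, wi in zip(recent, w)) / sum(w)

def choose_strategy(predicted, low=0.2, high=0.6, current="optimistic"):
    """Bidirectional thresholds avoid oscillating between strategies."""
    if predicted >= high:
        return "pessimistic"   # lock-based control under heavy conflict
    if predicted <= low:
        return "optimistic"    # validation-based control under light conflict
    return current             # hysteresis band: keep the current strategy
```

The hysteresis band between `low` and `high` is what makes the thresholds "bidirectional": a strategy switch requires the predicted conflict rate to cross the far threshold, not merely drift back over the near one.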
Pattern Matching Algorithms for Graph Structured Fuzzy XML Documents
MIAO Feng-yu and WANG Hong-zhi
Computer Science. 2016, 43 (11): 284-290.  doi:10.11896/j.issn.1002-137X.2016.11.055
Abstract PDF(1316KB) ( 69 )   
References | Related Articles | Metrics
Fuzzy XML documents are XML documents which contain uncertain information. There are few research achievements on fuzzy XML documents, and all of them are based on tree structures. According to the characteristics of graph-structured fuzzy XML documents, a group of efficient algorithms was proposed in this paper. These algorithms are based on an indexing scheme suited to graph-structured documents, and use bottom-up search for node matching to greatly reduce repeated judgements. Such approaches neither need to merge partial matching results nor need to design a filter function for parent-child (PC) relations. The theoretical analysis and experimental results show that the pattern matching algorithms presented in this paper outperform the relevant algorithms in twig query performance, and accomplish DAG pattern matching queries effectively.
Fusion Method of Multi-model Brain Images Based on Adaptive Cloud Model
ZHAO Jia, XIAO Bin, LI Wei-sheng and WANG Guo-yin
Computer Science. 2016, 43 (11): 291-296.  doi:10.11896/j.issn.1002-137X.2016.11.056
Abstract PDF(2427KB) ( 98 )   
References | Related Articles | Metrics
By extracting and combining medical image information from different imaging modalities, multi-modal medical image fusion can obtain a clearer, more comprehensive, more accurate and more reliable image description of the lesion site, and provide a reliable basis for doctors to diagnose the disease and formulate a reasonable treatment plan. The cloud model is a recently proposed theory in cognitive science. It has the advantage of taking both randomness and fuzziness into account, but has so far seen little application in image fusion. This paper introduced a method based on cloud model theory to fuse multi-modal brain images, such as two types of MRI (Magnetic Resonance Imaging) brain images, MRI and PET (Positron Emission Tomography), and MRI and SPECT (Single-Photon Emission Computed Tomography). The proposed fusion method includes three steps. First, the histogram of the input brain image is fitted with a smooth curve using a high-order spline function. Then, intervals are divided in line with the valley points of the fitted curve, and the cloud model is generated adaptively through a reverse cloud generator. Finally, cloud reasoning rules are designed and the fused image is obtained. The experimental results show that the fused brain images obtained by the proposed method have clearer characteristics and more significant active regions than those of existing methods, and the proposed method shows great improvement in both subjective effect and objective evaluation.
Research of Video Abstraction Based on Object Detection and Tracking
TIAN Helei, DING Sheng, YU Changwei and ZHOU Li
Computer Science. 2016, 43 (11): 297-299, 312.  doi:10.11896/j.issn.1002-137X.2016.11.057
Abstract PDF(1034KB) ( 56 )   
References | Related Articles | Metrics
(School of Instrument Science and Opto-electronics Engineering, Hefei University of Technology, Hefei 230009, China; School of Computer and Information, Hefei University of Technology, Hefei 230009, China)
In order to summarize massive surveillance video without losing useful information, a video summarization technique based on object detection and tracking was proposed. Moving targets are detected in the video by background subtraction using a Gaussian mixture model, and the detected targets are tracked using the idea of hierarchical correlation to obtain complete information about the moving targets. Finally, the moving objects and the video background are reassembled into the abstract video. The experimental results show that the method proposed in this paper can be used to condense surveillance video. The video summary completely preserves the useful content of the original video while reducing storage space and cost. It is also convenient for relevant personnel to obtain useful information in time and improve work efficiency.
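The detector described here uses a Gaussian mixture background model; as a much-simplified stand-in, a running-average background with a fixed threshold conveys the idea of background subtraction (`alpha` and `thresh` are illustrative, and a real mixture model keeps several Gaussians per pixel):

```python
def update_background(bg, frame, alpha=0.05):
    """Exponential running-average background model (simplified
    stand-in for the Gaussian mixture model used in the paper)."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30):
    """Pixels far from the background model are flagged as foreground."""
    return [[1 if abs(f - b) > thresh else 0 for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]
```

Each incoming frame first yields a foreground mask against the current background, then updates the background so that slow scene changes are absorbed.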
Medical CT Image Enhancement Algorithm Based on Laplacian Pyramid and Wavelet Transform
LV Li-zhi and QIANG Yan
Computer Science. 2016, 43 (11): 300-303.  doi:10.11896/j.issn.1002-137X.2016.11.058
Abstract PDF(1465KB) ( 79 )   
References | Related Articles | Metrics
The enhancement of medical images can improve the utilization of their information. Traditional image enhancement methods applied to medical images have many problems, such as the loss of image details, the weakening of target edge information and the decrease of image contrast. To solve these problems, a new image enhancement algorithm based on wavelet transform and Laplacian pyramid decomposition was proposed in this paper. First, the original medical image is decomposed by wavelet transform. Then, the high-frequency information of the medical image is decomposed by a Laplacian pyramid. Finally, the results of the wavelet transform and the Laplacian pyramid decomposition are used to reconstruct the image. Experimental results show that the proposed method is better than traditional image enhancement algorithms, enhancing medical images and resisting noise more effectively.
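One Laplacian pyramid level can be illustrated in one dimension: the detail band is what is lost when the signal is smoothed and downsampled, and adding it back to the upsampled low band reconstructs the signal exactly. This is a toy sketch with crude pair-averaging filters; the paper works on 2-D images with proper smoothing kernels:

```python
def downsample(sig):
    """Average adjacent pairs (crude low-pass filter plus decimation)."""
    return [(sig[i] + sig[i + 1]) / 2.0 for i in range(0, len(sig) - 1, 2)]

def upsample(sig, n):
    """Repeat each sample, truncated to length n (crude interpolation)."""
    up = [v for v in sig for _ in (0, 1)]
    return up[:n]

def laplacian_level(sig):
    """Return (low band, detail band) for one pyramid level."""
    low = downsample(sig)
    detail = [s - u for s, u in zip(sig, upsample(low, len(sig)))]
    return low, detail
```

Enhancement methods of this family amplify the detail bands before reconstruction, which sharpens edges without re-deriving the low-frequency content.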
Image Compression Based on Discrete Tchebichef Moments and Soft Decision Quantization
LU Gang, XIAO Bin and WANG Guo-yin
Computer Science. 2016, 43 (11): 304-308.  doi:10.11896/j.issn.1002-137X.2016.11.059
Abstract PDF(1665KB) ( 43 )   
References | Related Articles | Metrics
Image compression coding can not only effectively reduce the information redundancy among an image's pixels, but also ensure its reconstruction quality with low computational complexity. Transform-domain image compression coding is one of the most commonly used and best-performing compression technologies, but image compression based on discrete orthogonal moments has not yet been deeply studied. This paper studied the basic encoding and decoding procedures of JPEG, and proposed an image compression algorithm based on discrete Tchebichef moments. We studied the distribution of the transformed coefficients with the Kolmogorov-Smirnov (KS) test statistic, and designed an optimized quantization table using soft decision quantization to jointly approximate the rate and the distortion, improving reconstruction quality. Then, we encoded the quantization results using Huffman entropy coding. Finally, we realized the whole process of image compression and reconstruction based on discrete Tchebichef moments. Within the framework of the JPEG baseline system, comparisons with the mainstream DCT-based image compression method show that the algorithm achieves a higher compression ratio when the bit rate exceeds 0.5 bpp, and that the compression performance of the DTT is clearly superior to that of the DCT at PSNR values of 35 dB, 40 dB and 45 dB, while the elapsed times in encoding and decoding are similar.
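The transform kernel behind Tchebichef moments can be generated with the standard three-term recurrence for the (unnormalized) discrete Tchebichef polynomials on x = 0..N-1; a short sketch, with discrete orthogonality as a sanity check:

```python
def tchebichef_basis(N, max_order):
    """Discrete Tchebichef polynomials t_n(x) on x = 0..N-1
    via the three-term recurrence (unnormalized form)."""
    t = [[1.0] * N]                                   # t_0(x) = 1
    if max_order >= 1:
        t.append([2.0 * x + 1.0 - N for x in range(N)])  # t_1(x) = 2x + 1 - N
    for n in range(2, max_order + 1):
        t.append([((2 * n - 1) * t[1][x] * t[n - 1][x]
                   - (n - 1) * (N * N - (n - 1) ** 2) * t[n - 2][x]) / n
                  for x in range(N)])
    return t
```

Moment computation then projects each image block onto these polynomials, exactly as the DCT projects onto cosines; practical coders use the normalized form to keep coefficient magnitudes bounded.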
Image Co-segmentation by Constraints of Shape
PAN Xiang, YU Hui-bin, ZHENG He-rong and LIU Zhi
Computer Science. 2016, 43 (11): 309-312.  doi:10.11896/j.issn.1002-137X.2016.11.060
Abstract PDF(1647KB) ( 61 )   
References | Related Articles | Metrics
Existing image co-segmentation methods fail to consider the similarity of shape between images, so the segmentation results are inconsistent. We introduced a method of interactive image co-segmentation under shape constraints. First, the method preprocesses an input image and finds the shape template. Secondly, a shape context matching algorithm is used to produce foreground and background masks for graph cut. Finally, minimum cut theory is used to optimize the segmentation boundary. The experimental results show that, compared with existing image co-segmentation methods, this algorithm can significantly improve the quality of segmentation and make the segmentation results more semantically consistent.
Image Registration Algorithm for Infrared and Visible Light Based on Non-subsampled Contourlet Transform
LIU Gang, ZHOU Heng, LIANG Xiao-geng and WANG Ming-jing
Computer Science. 2016, 43 (11): 313-316.  doi:10.11896/j.issn.1002-137X.2016.11.061
Abstract PDF(1523KB) ( 50 )   
References | Related Articles | Metrics
For the problem of registering an infrared image (the sensed data) against a visible light image (the reference data), a multiresolution registration algorithm based on the non-subsampled contourlet transform (NSCT) was presented. The infrared image and the visible image are first transformed into the non-subsampled contourlet domain. The proposed method then takes the gradient normalized mutual information as the similarity measure for the low-frequency image at the coarse scale, and searches for the registration parameters with an improved genetic algorithm (GA). The improved GA adopts adaptive crossover and mutation operators and overcomes premature convergence based on a description of the population's degree of maturity, yielding a coarse registration result. Subsequently, the fine-scale registration procedure is carried out under the guidance of the coarse result, and the final registration result is obtained. The experimental results show that the proposed method has high registration accuracy and fast speed.
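The paper's measure is gradient normalized mutual information; plain normalized mutual information over a joint gray-level histogram shows the core computation. The 8-bin quantization is an arbitrary choice here, and identical images give the maximum value of 2:

```python
import math
from collections import Counter

def entropy(counts, total):
    """Shannon entropy in bits from raw bin counts."""
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def normalized_mutual_information(img_a, img_b, bins=8):
    """NMI = (H(A) + H(B)) / H(A, B) over a joint gray-level histogram.
    Images are flat lists of 8-bit intensities of equal length."""
    qa = [v * bins // 256 for v in img_a]   # quantize to coarse bins
    qb = [v * bins // 256 for v in img_b]
    n = len(qa)
    ha = entropy(Counter(qa).values(), n)
    hb = entropy(Counter(qb).values(), n)
    hab = entropy(Counter(zip(qa, qb)).values(), n)
    return (ha + hb) / hab if hab else 2.0
```

The registration search maximizes this measure over the transformation parameters; the gradient-based variant weights the histogram by edge information so that structure, not raw intensity, drives the alignment.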
Fractal Dimension Based Wavelet Packet EBCOT Core Image Compression
TANG Guo-wei, ZHANG Yan, LI Jing-hui and MU Lin-huan
Computer Science. 2016, 43 (11): 317-321.  doi:10.11896/j.issn.1002-137X.2016.11.062
Abstract PDF(1538KB) ( 59 )   
References | Related Articles | Metrics
As an important information source for petroleum geological analysis and geological prospecting, core images have the characteristics of huge data volume, rich texture information and weak contrast. A fractal dimension based wavelet packet EBCOT core image compression method was proposed. The original core image is completely decomposed by wavelet packets. The fractal dimension is taken as the cost function, and its value for each sub-band node is calculated with the differential box-counting method. Pruning is achieved by comparing the fractal dimension of each parent node with those of its child nodes, yielding the optimal wavelet packet decomposition. Finally, the EBCOT algorithm is applied to compress the core image. The experimental results show that the PSNR and SSIM of the reconstructed images are both higher than those of the entropy-based wavelet packet compression algorithm and the JPEG2000 algorithm.
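The differential box-counting step can be sketched directly: for each s x s block, count how many intensity boxes of height G·s/M the block's gray-level range spans, then fit the slope of log N(s) against log(1/s). Toy parameters are used here, and real implementations vary in the box-height convention:

```python
import math

def differential_box_count(img, s):
    """Boxes of base s x s needed to cover the intensity surface of a
    square M x M grayscale image with G = 256 gray levels."""
    M = len(img)
    h = 256.0 * s / M                 # box height in intensity units
    n = 0
    for i in range(0, M, s):
        for j in range(0, M, s):
            block = [img[x][y] for x in range(i, min(i + s, M))
                               for y in range(j, min(j + s, M))]
            n += int(max(block) / h) - int(min(block) / h) + 1
    return n

def fractal_dimension(img, sizes=(2, 4, 8)):
    """Least-squares slope of log N(s) versus log(1/s)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(differential_box_count(img, s)) for s in sizes]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

A perfectly flat sub-band gives dimension 2, and richer texture pushes the value toward 3, which is why the dimension serves as a texture-sensitive cost function for pruning the wavelet packet tree.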